This paper analyzes the password tokenization algorithm introduced by R. Veras et al. [1]. We identify the main limitations of this approach and propose a new tokenization algorithm, RGramToken, based on frequency dictionaries of English words, bigrams, and trigrams. Our approach makes better use of information about the probability distribution of words and word combinations in natural language. The results of a comparative analysis of the two algorithms, on specially prepared tests with warped phrases, demonstrate the higher efficiency of RGramToken and its robustness to low-quality dictionaries.
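The abstract does not spell out RGramToken's internals, but dictionary-based password tokenization is commonly framed as a segmentation problem: split the string into tokens so that the combined corpus probability of the tokens is maximized. The sketch below is a minimal, hypothetical illustration of that general idea, not the paper's actual algorithm; the toy frequency tables, the smoothing penalty for unseen tokens, and the flat bigram bonus are all assumptions made for the example (trigrams are omitted for brevity).

```python
import math

# Hypothetical toy frequency data; a real implementation would load
# large frequency dictionaries of words, bigrams, and trigrams.
WORD_FREQ = {"pass": 500, "word": 800, "password": 300, "i": 2000,
             "love": 400, "you": 1500, "lo": 10, "ve": 5}
BIGRAM_FREQ = {("i", "love"): 120, ("love", "you"): 200}
TOTAL = sum(WORD_FREQ.values())

def word_logp(token):
    # Unseen tokens get a smoothed probability plus a length-scaled
    # penalty, so the segmenter prefers known dictionary words.
    logp = math.log(WORD_FREQ.get(token, 0.01) / TOTAL)
    return logp if token in WORD_FREQ else logp - len(token)

def segment(password, max_token_len=12):
    """Dynamic-programming segmentation that maximizes the total token
    log-probability, with a small bonus for attested bigrams."""
    n = len(password)
    best = [(-math.inf, [])] * (n + 1)  # (score, tokens) per prefix
    best[0] = (0.0, [])
    for i in range(1, n + 1):
        for j in range(max(0, i - max_token_len), i):
            token = password[j:i]
            score, prev = best[j]
            score += word_logp(token)
            if prev and (prev[-1], token) in BIGRAM_FREQ:
                score += 1.0  # reward word pairs seen together in the corpus
            if score > best[i][0]:
                best[i] = (score, prev + [token])
    return best[n][1]

print(segment("iloveyou"))  # -> ['i', 'love', 'you']
print(segment("password"))  # -> ['password'] (whole word outscores 'pass'+'word')
```

With these toy tables, the bigram bonus is what ties "i", "love", "you" together into the highest-scoring segmentation, which matches the abstract's point that exploiting word-combination statistics, not just single-word frequencies, improves tokenization of phrase-based passwords.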