A Fast Average Case Algorithm for Lyndon Decomposition

Authors
Abstract


Similar articles

A Fast Average Case Algorithm for Lyndon Decomposition

A simple algorithm, called LD, is described for computing the Lyndon decomposition of a word of length n. Although LD requires time O(n log n) in the worst case, it is shown to require only Θ(n) worst-case time for words which are "1-decomposable", and Θ(n) average-case time for words whose length is small with respect to alphabet size. The main interest in LD resides in its application to the prob...
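The notion of Lyndon decomposition referred to above can be illustrated with the classical linear-time factorization algorithm due to Duval (a minimal Python sketch of that standard algorithm, not of the paper's LD algorithm):

```python
def lyndon_factorization(s):
    """Factor s into a nonincreasing sequence of Lyndon words
    (Duval's classical O(n)-time algorithm)."""
    factors = []
    i, n = 0, len(s)
    while i < n:
        j, k = i + 1, i
        # Scan forward while the suffix starting at i stays
        # a prefix of a (possibly periodic) Lyndon power.
        while j < n and s[k] <= s[j]:
            if s[k] < s[j]:
                k = i      # strictly larger letter: restart comparison
            else:
                k += 1     # equal letter: continue the period
            j += 1
        # Emit complete Lyndon factors of length j - k.
        while i <= k:
            factors.append(s[i:i + j - k])
            i += j - k
    return factors
```

Concatenating the factors recovers the input, and each factor is lexicographically no smaller than the next, e.g. `lyndon_factorization("banana")` yields `['b', 'an', 'an', 'a']`.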


Average Cost of Duval's Algorithm for Generating Lyndon Words

The average cost of Duval's algorithm for generating all Lyndon words up to a given length in lexicographic order is proved to be asymptotically equal to (q + 1)/(q − 1), where q is the size of the underlying alphabet. In particular, the average cost is independent of the length of the words generated. A precise evaluation of the constants is also given.
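The generation procedure analyzed above can be sketched as follows (a minimal Python sketch of Duval's standard algorithm over the integer alphabet {0, ..., q−1}):

```python
def lyndon_words(q, n):
    """Generate all Lyndon words of length <= n over the alphabet
    {0, 1, ..., q-1}, in lexicographic order (Duval's algorithm)."""
    w = [-1]
    while w:
        w[-1] += 1                   # next candidate: bump the last letter
        yield tuple(w)               # every word produced here is Lyndon
        m = len(w)
        while len(w) < n:            # extend w periodically to length n
            w.append(w[len(w) - m])
        while w and w[-1] == q - 1:  # drop trailing maximal letters
            w.pop()
```

For example, `list(lyndon_words(2, 3))` produces the binary Lyndon words of length at most 3 in lexicographic order: `(0,), (0,0,1), (0,1), (0,1,1), (1,)`. The cost result quoted above says the amortized work per word output tends to (q + 1)/(q − 1) as n grows.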


Average Cost of Duval's Algorithm for Generating Lyndon Words

Berstel, J. and M. Pocchiola, Average cost of Duval's algorithm for generating Lyndon words, Theoretical Computer Science 132 (1994) 415-425. The average cost of Duval's algorithm for generating all Lyndon words up to a given length in lexicographic order is proved to be asymptotically equal to (q + 1)/(q − 1), where q is the size of the underlying alphabet. In particular, the average cost is indep...


A Fast T-decomposition Algorithm

T-decomposition was first proposed and implemented as an algorithm by Mark Titchener. It has applications in communication of code sets and in the fields of entropy and similarity measurement. The first implementation of a T-decomposition algorithm by Titchener was subsequently followed by a faster version named tcalc, developed in conjunction with Scott Wackrow. An improved T-decomposition alg...


A New Algorithm for High Average-utility Itemset Mining

High utility itemset mining (HUIM) is a new emerging field in data mining which has gained growing interest due to its various applications. The goal of this problem is to discover all itemsets whose utility exceeds a given minimum threshold. The basic HUIM problem does not consider the length of itemsets in its utility measurement, and utility values tend to become higher for itemsets containing more items...



Journal

Journal title: International Journal of Computer Mathematics

Year: 1995

ISSN: 0020-7160,1029-0265

DOI: 10.1080/00207169508804408