Lattice Reduction Algorithms: Theory and Practice

Author

  • Phong Q. Nguyen
Abstract

Lattice reduction algorithms have surprisingly many applications in mathematics and computer science, notably in cryptology. On the one hand, lattice reduction algorithms are widely used in public-key cryptanalysis, for instance to attack special settings of RSA and DSA/ECDSA. On the other hand, there are more and more cryptographic schemes whose security requires that certain lattice problems be hard. In this talk, we survey lattice reduction algorithms, present their performance, and discuss the differences between theory and practice.

Intuitively, a lattice is an infinite arrangement of points in R^n spaced with sufficient regularity that one can shift any point onto any other point by some symmetry of the arrangement. The simplest non-trivial lattice is the hypercubic lattice Z^n formed by all points with integral coordinates. The branch of number theory dealing with lattices (and especially their connection with convex sets) is known as the geometry of numbers [24,41,12,5], and its origins go back to two historical problems: higher-dimensional generalizations of Euclid's gcd algorithm and sphere packings. More formally, a lattice L is a discrete subgroup of R^n or, equivalently, the set of all integer combinations of n linearly independent vectors b_1, ..., b_n in R^n: L = {a_1 b_1 + ... + a_n b_n : a_i ∈ Z}. Such a set (b_1, ..., b_n) is called a basis of the lattice. The goal of lattice reduction is to find reduced bases, that is, bases consisting of reasonably short and nearly orthogonal vectors. This is related to the reduction theory of quadratic forms developed by Lagrange [19], Gauss [11] and Hermite [14].

Lattice reduction algorithms have proved invaluable in many fields of computer science and mathematics (see the book [30]), notably public-key cryptanalysis, where they have been used to break knapsack cryptosystems [32] and special cases of RSA and DSA, among others (see [26,21] and references therein). Reduced bases allow one to solve the following important lattice problems, either exactly or approximately:

– The most basic computational problem involving lattices is the shortest vector problem (SVP), which asks to find a nonzero lattice vector of smallest norm, given a lattice basis as input. SVP can be viewed as a geometric generalization of gcd computations: Euclid's algorithm actually computes the smallest (in absolute value) non-zero linear combination of two integers, since gcd(a, b)Z = aZ + bZ; SVP replaces the two integers a and b by an arbitrary number of vectors b_1, ..., b_n with integer coordinates. Since SVP is NP-hard under randomized reductions [3] (see [17,34] for surveys on the hardness of lattice problems), one is also interested in approximating SVP, i.e. outputting a nonzero lattice vector of norm not much larger than the smallest norm.

– The inhomogeneous version of SVP is called the closest vector problem (CVP): here we are given an arbitrary target vector in addition to the lattice basis, and are asked to find the lattice point closest to that vector. A popular particular case of CVP is Bounded Distance Decoding (BDD), where the target vector is known to be somewhat close to the lattice.

The first SVP algorithm was Lagrange's reduction algorithm [19], which solves SVP exactly in dimension two, in quadratic time; a minimal sketch of this two-dimensional reduction appears below.
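The following Python sketch renders the classical Lagrange(-Gauss) loop, in which a nearest-integer "division step" plays the role of Euclid's division; the function name and the sample basis are ours, chosen for illustration only.

```python
from fractions import Fraction

def lagrange_reduce(u, v):
    """Lagrange's reduction: given a basis (u, v) of a two-dimensional
    lattice, return a reduced basis whose first vector is a shortest
    nonzero vector of the lattice."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]

    if dot(u, u) > dot(v, v):
        u, v = v, u                  # keep u the shorter vector
    while True:
        # "Division step": subtract the nearest-integer multiple of u
        # from v -- the 2D analogue of one step of Euclid's algorithm.
        q = round(Fraction(dot(u, v), dot(u, u)))
        v = (v[0] - q * u[0], v[1] - q * u[1])
        if dot(v, v) >= dot(u, u):   # no further progress: reduced
            return u, v              # u solves SVP for this lattice
        u, v = v, u

# Example: the skewed basis ((31, 59), (37, 70)) of a determinant-13
# lattice reduces to ((3, -1), (1, 4)).
print(lagrange_reduce((31, 59), (37, 70)))
```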
In arbitrary dimension, there are two types of SVP algorithms:

1. Exact algorithms. These algorithms provably find a shortest vector, but they are expensive, with a running time at least exponential in the dimension. Intuitively, they perform an exhaustive search of all extremely short lattice vectors, whose number is exponential in the dimension (in the worst case): in fact, there are lattices for which the number of shortest lattice vectors is already exponential. Exact algorithms can be split into two categories:

(a) Polynomial-space exact algorithms. They are based on enumeration, which dates back to the early 1980s with work by Pohst [33], Kannan [16], and Fincke-Pohst [6]. In its simplest form, enumeration is simply an exhaustive search for the best integer combination of the basis vectors (a sketch appears after this list). The best deterministic enumeration algorithm is Kannan's algorithm [16], with super-exponential worst-case complexity, namely n^{n/(2e)+o(n)} polynomial-time operations (see [13]), where n denotes the lattice dimension. The enumeration algorithms used in practice (such as that of Schnorr-Euchner [37]) have a weaker preprocessing than Kannan's algorithm [16], and their worst-case complexity is 2^{O(n^2)} polynomial-time operations. But it is possible to obtain substantial speedups using pruning techniques: pruning was introduced by Schnorr-Euchner [37] and Schnorr-Hörner [38] in the 90s, and was recently revisited by Gama, Nguyen and Regev [10], who showed that one can reach a 2^{n/2} heuristic speedup over basic enumeration.

(b) Exponential-space exact algorithms. These algorithms have a better asymptotic running time, but they all require exponential space 2^{Θ(n)}. The first algorithm of this kind is the randomized sieve algorithm of Ajtai, Kumar and Sivakumar (AKS) [4], with exponential worst-case complexity of 2^{O(n)} polynomial-time operations. Micciancio and Voulgaris [22] recently presented an alternative deterministic algorithm, which solves both CVP and SVP within 2^{2n+o(n)} polynomial-time operations. Interestingly, there are several heuristic variants [31,23,43] of AKS with running time 2^{O(n)}, where the O() constant is much smaller than that of the best provable algorithms known. For instance, the recent algorithm of Wang et al. [43] has time complexity 2^{0.3836n} polynomial-time operations.

2. Approximation algorithms. These algorithms are much faster than exact algorithms, but they only output short lattice vectors, not necessarily the shortest one: they typically output a whole reduced basis, and are therefore lattice reduction algorithms. The first algorithm of this kind is the celebrated algorithm of Lenstra, Lenstra and Lovász (LLL) [20,30], which can approximate SVP to within a factor O((2/√3)^n) in polynomial time: it can be viewed as an algorithmic version of Hermite's inequality (a compact sketch appears at the end of this abstract). Since the appearance of LLL, research in this area has focused on two topics:

(a) Faster LLL. Here, one is interested in obtaining reduced bases of quality similar to LLL's, possibly slightly worse, but with a smaller running time. This is achieved by a divide-and-conquer strategy (such as in [39,18]) or by using floating-point arithmetic (such as in [36,29,25]). The most popular implementations of LLL are typically heuristic floating-point variants, such as that of Schnorr-Euchner [37]: see the survey [42] on floating-point LLL.

(b) Stronger LLL. Here, one is interested in obtaining better approximation factors than LLL, at the expense of the running time. Intuitively, LLL repeatedly uses two-dimensional reduction to find short lattice vectors in dimension n. Blockwise reduction algorithms [35,7,8] obtain better approximation factors by replacing this two-dimensional reduction subroutine with a higher-dimensional one, using exact SVP algorithms in low dimension. The best polynomial-time blockwise algorithm known [8] achieves a subexponential approximation factor 2^{O(n log log n / log n)}: it is an algorithmic version of Mordell's inequality. In practice, a popular choice is the BKZ algorithm of Schnorr-Euchner [37] implemented in the NTL library [40], which is a heuristic variant of Schnorr's blockwise algorithm [35]. The article [9] provides an experimental assessment of BKZ.
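To make the enumeration of item 1(a) concrete, here is a minimal Fincke-Pohst-style search in Python. It is a sketch under simplifying assumptions, not any of the cited algorithms: it uses exact rational arithmetic for the Gram-Schmidt data, floating-point square roots (with a little slack) only to bound each coefficient interval, and no preprocessing or pruning; the function names are ours.

```python
import math
from fractions import Fraction

def gram_schmidt(b):
    """Exact Gram-Schmidt data: mu[i][j] = <b_i, b*_j>/||b*_j||^2 and
    squared norms B[i] = ||b*_i||^2."""
    n = len(b)
    mu = [[Fraction(0)] * n for _ in range(n)]
    B, bstar = [], []
    for i in range(n):
        v = [Fraction(x) for x in b[i]]
        for j in range(i):
            mu[i][j] = sum(Fraction(b[i][k]) * bstar[j][k]
                           for k in range(len(v))) / B[j]
            v = [v[k] - mu[i][j] * bstar[j][k] for k in range(len(v))]
        bstar.append(v)
        B.append(sum(x * x for x in v))
    return mu, B

def enumerate_shortest(b, radius2):
    """Exhaustive search for a nonzero integer combination x of the basis
    with squared length <= radius2, using the identity
    ||sum_i x_i b_i||^2 = sum_k (x_k + sum_{j>k} mu[j][k] x_j)^2 * B[k]."""
    n = len(b)
    mu, B = gram_schmidt(b)
    best = {"x": None, "r2": Fraction(radius2)}

    def search(k, x, partial):
        # partial = squared length contributed by levels k+1, ..., n-1
        if k < 0:
            if any(x):               # ignore the zero vector
                best["x"], best["r2"] = x[:], partial
            return
        c = sum(mu[j][k] * x[j] for j in range(k + 1, n))
        budget = (best["r2"] - partial) / B[k]
        # the integer x_k must satisfy (x_k + c)^2 <= budget
        half = math.sqrt(float(budget)) + 1e-9
        for t in range(math.ceil(float(-c) - half),
                       math.floor(float(-c) + half) + 1):
            contrib = (t + c) ** 2 * B[k]
            if partial + contrib <= best["r2"]:   # exact pruning test
                x[k] = t
                search(k - 1, x, partial + contrib)

    search(n - 1, [0] * n, Fraction(0))
    return best["x"], best["r2"]

# Example: the squared length of the first basis vector is a valid
# starting radius; the search returns coefficients of a shortest vector.
print(enumerate_shortest([[31, 59], [37, 70]], 31 * 31 + 59 * 59))
```

On this basis the search reports a coefficient vector of squared length 10, i.e. the vector ±(3, -1) already found by the two-dimensional reduction sketch.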
Both categories are in fact complementary: all known exact algorithms first apply an approximation algorithm (typically at least LLL) as a preprocessing, while all blockwise algorithms call an exact algorithm in low dimension many times as a subroutine. Most of the SVP algorithms we mentioned can be adapted to CVP (see for instance [1]). The provable SVP algorithms are surveyed in [27]. The heuristic algorithms we mentioned are such that their running time can no longer be proved, and/or there may be no guarantee on the output (should the algorithm ever terminate). Heuristic algorithms typically outperform provable algorithms in practice, for reasons that are still not well understood.

Finally, it is folklore that lattice reduction algorithms behave better than their proved worst-case theoretical bounds. In the 80s, the early success of lattice reduction algorithms in cryptanalysis led to the belief that the strongest lattice reduction algorithms behaved as perfect oracles, at least in small dimension. But this belief showed its limits in the 90s, with NP-hardness results and the development of lattice-based cryptography, following Ajtai's worst-case/average-case reduction [2] and the NTRU cryptosystem [15]. The articles [28,9] clarify what can be expected in practice, based on experimental results. Such assessments are important to better understand the gap between theory and practice, but also to evaluate the concrete security of lattice-based cryptography.
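As promised above, here is a compact, textbook-style LLL sketch in Python with the usual parameter δ = 3/4. It is an illustration under simplifying assumptions, not one of the floating-point variants used in practice: it works in exact rational arithmetic and naively recomputes the Gram-Schmidt data after every change, which is slow but keeps the code short.

```python
from fractions import Fraction

def lll_reduce(basis, delta=Fraction(3, 4)):
    """Return an LLL-reduced basis of the integer lattice spanned by
    `basis`: vectors are size-reduced and satisfy the Lovasz condition."""
    b = [list(map(int, v)) for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(Fraction(x) * y for x, y in zip(u, v))

    def gso():
        # Gram-Schmidt: mu[i][j] = <b_i, b*_j>/||b*_j||^2, B[i] = ||b*_i||^2
        bstar, B = [], []
        mu = [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = [Fraction(x) for x in b[i]]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / B[j]
                v = [v[k] - mu[i][j] * bstar[j][k] for k in range(len(v))]
            bstar.append(v)
            B.append(dot(v, v))
        return mu, B

    k = 1
    while k < n:
        mu, B = gso()
        # Size-reduction: make |mu[k][j]| <= 1/2 for all j < k.
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q:
                b[k] = [b[k][i] - q * b[j][i] for i in range(len(b[k]))]
                mu, B = gso()        # naive but exact refresh
        # Lovasz condition: advance if satisfied, otherwise swap and back up.
        if B[k] >= (delta - mu[k][k - 1] ** 2) * B[k - 1]:
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)
    return b

# Example: on the same skewed basis as before, LLL recovers the
# Lagrange-reduced basis.
print(lll_reduce([[31, 59], [37, 70]]))   # [[3, -1], [1, 4]]
```

In dimension two, LLL essentially coincides with Lagrange's algorithm. With δ = 3/4 the provable approximation factor is 2^{(n-1)/2}; taking δ close to 1 yields the O((2/√3)^n) factor quoted above.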

Similar Articles

Lattice Enumeration Using Extreme Pruning

Lattice enumeration algorithms are the most basic algorithms for solving hard lattice problems such as the shortest vector problem and the closest vector problem, and are often used in public-key cryptanalysis either as standalone algorithms, or as subroutines in lattice reduction algorithms. Here we revisit these fundamental algorithms and show that surprising exponential speedups can be achie...

Factoring Polynomials over Number Fields

The purpose of these notes is to give a substantially self-contained introduction to the factorization of polynomials over number fields. In particular, we present Zassenhaus’ algorithm and a factoring algorithm using lattice reduction, which were, respectively, the best in practice and in theory, before 2002. We give references for the van Hoeij-Novocin algorithm, currently the best both in pr...

Practical HKZ and Minkowski Lattice Reduction Algorithms

Recently, lattice reduction has been widely used for signal detection in multi-input multi-output (MIMO) communications. In this paper, we present three novel lattice reduction algorithms. First, using a unimodular transformation, a significant improvement on an existing Hermite-Korkine-Zolotareff (HKZ) reduction algorithm is proposed. Then, we present two practical algorithms for constructing Minkowsk...

Probabilistic Analysis of LLL Reduced Bases

Lattice reduction algorithms behave much better in practice than their theoretical analysis predicts, with respect to both output quality and runtime. In this paper we present a probabilistic analysis that proves an average-case bound for the length of the first basis vector of an LLL reduced basis which reflects LLL experiments much better. Additionally, we use the same method to generate aver...

Practical, Predictable Lattice Basis Reduction

Lattice reduction algorithms are notoriously hard to predict, both in terms of running time and output quality, which poses a major problem for cryptanalysis. While easy-to-analyze algorithms with good worst-case behavior exist, previous experimental evidence suggests that they are outperformed in practice by algorithms whose behavior is still not well understood, despite more than 30 years of ...

Fast Lattice Point Enumeration with Minimal Overhead

Enumeration algorithms are the best currently known methods to solve lattice problems, both in theory (within the class of polynomial-space algorithms) and in practice (where they are routinely used to evaluate the concrete security of lattice cryptography). However, there is an uncomfortable gap between our theoretical understanding and the practical performance of lattice point enumeration algor...

Publication date: 2011