Reading notes: An Introduction to the Lenstra-Lenstra-Lovász Lattice Basis Reduction Algorithm

Introduction

Definition 1.1 (Lattice). A lattice $\mathcal{L}$ is a discrete subgroup of $\mathbb{R}^n$ generated by all the integer combinations of the vectors of some basis $B = \{b_1, \cdots, b_n\}$:
$$ \mathcal{L} = \left\{ \sum_{i=1}^{n} z_i b_i : z_i \in \mathbb{Z} \right\} $$

Theorem 1.1 (Minkowski's Theorem). Any convex, centrally symmetric body $S$ of volume $\mathrm{vol}(S) > 2^n \det(\mathcal{L})$ contains a non-zero lattice point. $\det(\mathcal{L})$ is the determinant of the lattice $\mathcal{L}$.

Definition 1.2 (Fundamental Region). A set $F \subseteq \mathbb{R}^n$ is a fundamental region of a lattice $\mathcal{L}$ if its translates $x + F$, taken over all $x \in \mathcal{L}$, form a partition of $\mathbb{R}^n$.

Corollary 1.1.1. For any $n$-dimensional lattice $\mathcal{L}$, the length $\lambda(\mathcal{L})$ of the shortest non-zero vector satisfies $\lambda(\mathcal{L}) \le \sqrt{n} \cdot \det(\mathcal{L})^{1/n}$.
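The corollary follows from Minkowski's theorem by a standard argument (not spelled out in these notes): take the cube
$$ C = \left\{ x \in \mathbb{R}^n : |x_i| \le \det(\mathcal{L})^{1/n} \right\}, \qquad \mathrm{vol}(C) = 2^n \det(\mathcal{L}). $$
$C$ is convex and centrally symmetric, and its volume reaches the bound $2^n \det(\mathcal{L})$ (the compact case of the theorem), so it contains a non-zero lattice point $x$. Every point of $C$ has length at most $\sqrt{n} \cdot \det(\mathcal{L})^{1/n}$, hence so does $x$.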

Basis Reduction

Basis reduction is the process of reducing the basis of a lattice to a shorter basis while keeping the lattice the same.

There are some ways to change the basis $B$ but keep the lattice $\mathcal{L}$ the same:

      1. Swap two vectors in the basis.
      2. For a vector $b_i \in B $, use $-b_i$ instead.
      3. For a vector $b_i \in B$, add a linear combination of other basis vectors to it.

Basis reduction can help solve SVP (the Shortest Vector Problem), because if we cannot reduce a basis any further, the shortest basis vector should be the shortest vector of the lattice.
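To convince myself that the three basis-change operations really keep the lattice unchanged, here is a small plain-Python sketch (the function names are mine, not from the paper): each operation multiplies the basis matrix by a unimodular matrix, so the absolute determinant, and with it the lattice, is preserved.

```python
# Sketch (plain Python, not SageMath): the three operations that keep the
# lattice unchanged, applied to a 2D integer basis.

def det2(b):
    """Determinant of a 2x2 basis given as [[x1, y1], [x2, y2]]."""
    return b[0][0] * b[1][1] - b[0][1] * b[1][0]

def swap(b, i, j):                      # operation 1: swap two vectors
    b = [row[:] for row in b]
    b[i], b[j] = b[j], b[i]
    return b

def negate(b, i):                       # operation 2: replace b_i by -b_i
    b = [row[:] for row in b]
    b[i] = [-x for x in b[i]]
    return b

def add_multiple(b, i, j, m):           # operation 3: b_i += m * b_j (i != j)
    b = [row[:] for row in b]
    b[i] = [x + m * y for x, y in zip(b[i], b[j])]
    return b

basis = [[1, 2], [3, 4]]                # det = 1*4 - 2*3 = -2
d = abs(det2(basis))
for new in (swap(basis, 0, 1), negate(basis, 0), add_multiple(basis, 1, 0, 5)):
    assert abs(det2(new)) == d          # lattice determinant is unchanged
```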

Now we start by solving the SVP in the 2-dimensional case.

Definition 2.1 (Two-dimensional reduced basis). A basis $\{b_1, b_2\}$ is said to be reduced if it satisfies the following condition:

$$ \|b_1\| \le \|b_2\| \quad \text{and} \quad |\mu| \le \frac{1}{2}, $$

with $\mu = \frac{b_2 \cdot b_1}{b_1 \cdot b_1}$; $\mu$ is called the orthogonal projection coefficient.

Theorem 2.1. Given a two-dimensional lattice $\mathcal{L}$ with a basis of rank 2, if $\lambda$ is the length of the shortest vector in $\mathcal{L}$, then
$$ \lambda^2 \le \frac{2}{\sqrt{3}} \det(\mathcal{L}). $$

Theorem 2.2. If a basis $\{b_1, b_2\}$ is reduced, then $b_1$ is the shortest vector.

About Definition 2.1 and Theorems 2.1, 2.2: I think when we can say we have found the shortest vector $b_1$, the basis is said to "already be reduced", which means this basis satisfies the condition in Definition 2.1.
So, if $|\mu| > \frac{1}{2}$, the basis can be reduced further.

The following is a description of Gauss's algorithm, which solves SVP in 2 dimensions.

  1. Start with basis $\{b_1, b_2\}$; if $\|b_1\| > \|b_2\|$, swap $b_1$ and $b_2$.
  2. Compute $\mu = \frac{b_2 \cdot b_1}{b_1 \cdot b_1}$. If $|\mu| > \frac{1}{2}$, let $m$ be the integer nearest to $\mu$, and let $b_2 = b_2 - m b_1$.
  3. If $\|b_2\| < \|b_1\|$, then swap $b_1$ and $b_2$, and repeat step 2. Otherwise, output $\{b_1, b_2\}$.

Following the algorithm above, I wrote the script below in SageMath.

```python
def Gauss_2d_reduce(basis):
    # put the shorter vector first
    (b1, b2) = (basis[0], basis[1]) if basis[0].norm() < basis[1].norm() else (basis[1], basis[0])
    while True:
        u = (b1[0]*b2[0] + b1[1]*b2[1]) / b1.norm()^2
        if abs(u) > 0.5:
            m = round(u)    # nearest integer to the projection coefficient
            b2 = b2 - m*b1
        if b1.norm() > b2.norm():
            b1, b2 = b2, b1
        else:
            return [b1, b2]
```

The basis we find with Gauss's algorithm is not exactly orthogonal, but it is the closest to orthogonal that we can get.
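For testing outside of SageMath, the same algorithm can be written in plain Python (my translation, not from the paper); `Fraction` keeps the projection coefficient exact, and `round` picks the nearest integer as in step 2.

```python
from fractions import Fraction

def gauss_2d_reduce(b1, b2):
    """Gauss's 2D reduction: vectors are tuples of ints.
    Exact arithmetic via Fraction avoids floating-point issues."""
    def dot(u, v):
        return u[0] * v[0] + u[1] * v[1]
    if dot(b1, b1) > dot(b2, b2):        # step 1: put the shorter vector first
        b1, b2 = b2, b1
    while True:
        u = Fraction(dot(b1, b2), dot(b1, b1))   # projection coefficient
        m = round(u)                     # nearest integer (handles negative u too)
        if m != 0:
            b2 = (b2[0] - m * b1[0], b2[1] - m * b1[1])  # step 2: size-reduce
        if dot(b1, b1) > dot(b2, b2):    # step 3: swap and repeat if b2 got shorter
            b1, b2 = b2, b1
        else:
            return b1, b2
```

For example, `gauss_2d_reduce((1, 1), (3, 2))` reduces a basis of $\mathbb{Z}^2$ back to two vectors of length 1.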

Gram-Schmidt Orthogonalization

To generalize the algorithm to $n$ dimensions, we need a way to construct an $n$-dimensional orthogonal basis from the given basis, which leads us to Gram-Schmidt Orthogonalization.

Theorem 3.1 (Gram-Schmidt Orthogonalization method). Given a basis $\{b_1, \cdots, b_m\}$ of a subspace $H_m$ of $\mathbb{R}^n$, we define
$$
b_1^* = b_1 \\
b_2^* = b_2 - \mu_{1,2}b_1^*, \qquad \mu_{1,2} = \frac{b_2 \cdot b_1^*}{b_1^* \cdot b_1^*} \\
\vdots \\
b_m^* = b_m - \sum_{i<m}\mu_{i,m}b_i^*, \qquad \mu_{i,m} = \frac{b_m \cdot b_i^*}{b_i^* \cdot b_i^*}
$$
Then, $\{b_1^*, b_2^*, \cdots, b_m^*\}$ is an orthogonal basis of $H_m$.

Based on this Theorem, if we set $\mu_{m,m} = 1$, then we have
$$ b_m = \sum_{i \le m} \mu_{i,m} b_i^* $$
Therefore, we can write the above formula in matrix form, $B = B^*U$,

where the basis vectors are the columns of $B$ and $B^*$. Thus, we have $\det(B) = \det(B^*)\det(U)$.

Obviously, we can know that $\det(U) = 1$, since $U$ is upper triangular with ones on the diagonal.

For a better understanding of the matrix $U$, let's write it out at length:
$$
(b_1, b_2, \cdots, b_n)
= (b_1^*, b_2^*, \cdots, b_n^*)
\begin{pmatrix}
1 & \mu_{1,2} & \cdots & \mu_{1,n} \\
0 & 1 & \cdots & \mu_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1
\end{pmatrix}
$$
At first I couldn't understand why $B = B^*U$, but rearranging the formula in Theorem 3.1 gives $b_m = b_m^* + \sum_{i<m}\mu_{i,m}b_i^*$, which is exactly the $m$-th column of $B^*U$.
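The identity $B = B^*U$ can also be checked numerically. Below is a plain-Python sketch of Theorem 3.1 (my own code, exact arithmetic via `Fraction`) that rebuilds each $b_m$ as $\sum_{i \le m} \mu_{i,m} b_i^*$.

```python
from fractions import Fraction

def gram_schmidt(B):
    """Gram-Schmidt as in Theorem 3.1.  Returns the orthogonal vectors b_i^*
    and the upper-triangular coefficient matrix U with U[i][i] = 1 and
    U[i][m] = mu_{i,m} for i < m."""
    n = len(B)
    def dot(u, v):
        return sum(Fraction(x) * y for x, y in zip(u, v))
    Bs = []                              # the b_i^*
    U = [[Fraction(1) if i == m else Fraction(0) for m in range(n)]
         for i in range(n)]
    for m in range(n):
        v = [Fraction(x) for x in B[m]]
        for i in range(m):
            U[i][m] = dot(B[m], Bs[i]) / dot(Bs[i], Bs[i])   # mu_{i,m}
            v = [x - U[i][m] * y for x, y in zip(v, Bs[i])]  # subtract projection
        Bs.append(v)
    return Bs, U

# Check B = B^* U column by column: b_m = sum_i mu_{i,m} b_i^*
B = [(3, 1, 0), (1, 2, 1), (0, 1, 4)]
Bs, U = gram_schmidt(B)
for m in range(3):
    col = [sum(U[i][m] * Bs[i][k] for i in range(3)) for k in range(3)]
    assert col == [Fraction(x) for x in B[m]]
# the b_i^* are pairwise orthogonal
assert sum(x * y for x, y in zip(Bs[0], Bs[1])) == 0
```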

LLL Basis Reduction

Definition 4.1 (LLL reduced basis). Let $B = \{b_1, \cdots, b_n\}$ be a basis for an $n$-dimensional lattice $\mathcal{L}$, and $B^* = \{b_1^*, \cdots, b_n^*\}$ be the orthogonal basis generated in Theorem 3.1, and we have $\mu_{i,m} = \frac{b_m \cdot b_i^*}{b_i^* \cdot b_i^*}$. We say $B$ is an LLL reduced basis if it satisfies two conditions:

(1) $|\mu_{i,m}| \le \frac{1}{2}$ for all $1 \le i < m \le n$.
(2) For each $i$, $\|b_{i+1}^* + \mu_{i,i+1}b_i^*\|^2 \ge \frac{3}{4}\|b_i^*\|^2$.

Remark. The constant $\frac{3}{4}$ is chosen for the simplicity of the paper. Any constant between $\frac{1}{4}$ and $1$ can guarantee that the algorithm terminates in polynomial time.

Remark. Condition 2 enforces an (approximate) ordering of the basis vectors by length, like what we did in the two-dimensional case.

Given a basis in $n$ dimensions, the LLL algorithm works as below to get an LLL reduced basis.

![LLL Algorithm](<LLL Algorithm-e7f3a7c1aa0d4556bc0474fc55d46791.jpg>)

  • Step 1: Compute the most orthogonal basis based on Gram-Schmidt orthogonalization.

  • Step 2: Check the second condition.

    Because I don’t know how to choose the constant, I run the LLL Algorithm four times with different constants for one output.

```python
def LLL_Algorithm(Lattice, constant=1/4):
    n = len(Lattice)                     # number of basis vectors (square basis assumed)
    quickSort(Lattice, 0, n-1)           # sort the vectors by norm first
    while True:
        finish = True
        # size-reduction step
        for i in range(1, n+1):
            for k in range(1, i):
                m = round((Lattice[i-1]*Lattice[k-1]) / (Lattice[k-1]*Lattice[k-1]))
                Lattice[i-1] = Lattice[i-1] - m*Lattice[k-1]
        # check the second condition; swap and restart if it fails
        for i in range(n-1):
            u = (Lattice[i+1]*Lattice[i]) / (Lattice[i]*Lattice[i])
            if ((Lattice[i+1] + u*Lattice[i]).norm())^2 < constant * (Lattice[i].norm())^2:
                Lattice[i], Lattice[i+1] = Lattice[i+1], Lattice[i]
                finish = False
                break
        if finish:
            if constant == 1:
                return Lattice
            else:
                return LLL_Algorithm(Lattice, constant + 1/4)

def partition(Lattice, low, high):
    i = low - 1
    pivot = Lattice[high].norm()
    for j in range(low, high):
        if Lattice[j].norm() <= pivot:
            i = i + 1
            Lattice[i], Lattice[j] = Lattice[j], Lattice[i]
    Lattice[i+1], Lattice[high] = Lattice[high], Lattice[i+1]
    return i + 1

def quickSort(Lattice, low, high):
    if low < high:
        pi = partition(Lattice, low, high)
        quickSort(Lattice, low, pi - 1)
        quickSort(Lattice, pi + 1, high)
```

Claim 4.1. If $\{b_1, \cdots, b_n\}$ is an $n$-dimensional LLL reduced basis of lattice $\mathcal{L}$, then $\|b_1\| \le 2^{(n-1)/2}\lambda(\mathcal{L})$, where $\lambda(\mathcal{L})$ is the length of the shortest vector of $\mathcal{L}$.
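A quick sketch of why the claim holds (the standard argument, with the constant $\frac{3}{4}$): combining the two conditions of Definition 4.1,
$$
\|b_{i+1}^*\|^2 \;\ge\; \left(\frac{3}{4} - \mu_{i,i+1}^2\right)\|b_i^*\|^2 \;\ge\; \frac{1}{2}\|b_i^*\|^2,
$$
so by induction $\|b_i^*\|^2 \ge 2^{1-i}\|b_1^*\|^2 = 2^{1-i}\|b_1\|^2$. Since every non-zero lattice vector has length at least $\min_i \|b_i^*\|$, we get $\lambda(\mathcal{L}) \ge 2^{(1-n)/2}\|b_1\|$, which is exactly $\|b_1\| \le 2^{(n-1)/2}\lambda(\mathcal{L})$.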

In the next part of the paper, the author proves that the LLL algorithm terminates in time polynomial in $n$. If you want to know how to prove that, you can read the original paper.

Conclusion

I think this paper made the LLL Algorithm easy to understand, but I want to know more details about it. If I have time, I would like to read the original LLL paper! -> https://link.springer.com/content/pdf/10.1007/BF01457454.pdf