Introduction

Definition 1.1 (Lattice). A lattice $\mathcal L$ is a discrete subgroup of $\mathbb R^n$ generated by all the integer combinations of the vectors of some basis $B$:
$$
\mathcal L = \sum^m_{i=1}\mathbb Z b_i = \left\lbrace \sum^m_{i=1}z_ib_i : z_i \in \mathbb Z, b_i \in B \right\rbrace .
$$
Theorem 1.1 (Minkowski's Theorem). Any convex, centrally symmetric body $S$ of volume $vol(S) > 2^n\det(\mathcal L)$ contains a non-zero lattice point, where $\det(\mathcal L)$ is the determinant of the lattice $\mathcal L$.

Definition 1.2 (Fundamental Region). A set $\mathcal F \subseteq \mathbb R^n$ is a fundamental region of a lattice $\mathcal L$ if its translates $x+\mathcal F = \{x+y : y \in \mathcal F\}$, taken over all $x \in \mathcal L$, form a partition of $\mathbb R^n$.

Corollary 1.1.1. For any $n$-dimensional lattice $\mathcal L$, the length of the shortest vector satisfies $\lambda(\mathcal L) \le \sqrt n \cdot \det(\mathcal L)^{1 \over n}$.

Basis Reduction

Basis reduction is the process of reducing a basis $B$ of a lattice $\mathcal L$ to a shorter basis $B'$ while keeping $\mathcal L$ the same.

There are several ways to change the basis while keeping the lattice the same:

      1. Swap two vectors in the basis.
      2. For a vector $b_i \in B $, use $-b_i$ instead.
      3. For a vector $b_i \in B$, add an integer linear combination of the other basis vectors to it.
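Each of these operations multiplies the basis matrix by a unimodular matrix (an integer matrix with determinant $\pm 1$), so the absolute value of the determinant, and hence the lattice itself, is unchanged. A quick sanity check in plain Python (the helper `det2` is mine, just for illustration):

```python
def det2(b1, b2):
    """Determinant of the 2x2 matrix whose rows are b1 and b2."""
    return b1[0] * b2[1] - b1[1] * b2[0]

b1, b2 = (2, 1), (3, 4)
d = abs(det2(b1, b2))          # |det| = 5 for this basis

# 1. swapping two basis vectors only flips the sign of the determinant
assert abs(det2(b2, b1)) == d
# 2. negating a basis vector only flips the sign of the determinant
assert abs(det2((-b1[0], -b1[1]), b2)) == d
# 3. adding an integer multiple of one basis vector to another keeps it unchanged
b2_new = (b2[0] - 2 * b1[0], b2[1] - 2 * b1[1])
assert abs(det2(b1, b2_new)) == d
```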

Basis reduction can help solve SVP: if a basis cannot be reduced any further, its shortest basis vector should be the shortest vector of the lattice.

Now we start by solving SVP in the two-dimensional case.

Definition 2.1 (Two-dimensional reduced basis). A basis $(b_1,b_2)$ is said to be reduced if it satisfies the following conditions:
$$
|| b_1 || \leq || b_2 || \\
|u| = {|b_1 \cdot b_2| \over ||b_1||^2} \leq {1\over 2}
$$
where $u$ is called the orthogonal projection coefficient.

Theorem 2.1. Given a two-dimensional lattice $\mathcal L$ with a rank-2 basis, if $\lambda$ is the length of the shortest vector in $\mathcal L$, then
$$
\lambda \leq \sqrt { {2 \over \sqrt3}det(\mathcal L)}.
$$
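For example, the bound of Theorem 2.1 is attained by the hexagonal lattice with basis $b_1 = (1,0)$, $b_2 = ({1\over 2},{\sqrt 3 \over 2})$:
$$
\det(\mathcal L) = {\sqrt 3 \over 2}, \qquad \sqrt{ {2 \over \sqrt 3}\det(\mathcal L)} = \sqrt{ {2 \over \sqrt 3}\cdot{\sqrt 3 \over 2}} = 1 = ||b_1|| = \lambda .
$$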
Theorem 2.2. If a basis $\{b_1,b_2\}$ is reduced, then $b_1$ is the shortest vector.

Regarding Definition 2.1 and Theorems 2.1 and 2.2: once we have found the shortest vector $b_1$, the basis $(b_1,b_2)$ is said to be "already reduced", meaning that it satisfies the conditions in Definition 2.1.
Conversely, if $|u| > {1\over 2}$, the basis can still be reduced further.

Gauss's algorithm solves SVP in two dimensions as follows.

  1. Start with basis $\{b_1,b_2\}$. If $||b_1|| > ||b_2||$, swap $b_1$ and $b_2$.
  2. Compute $u = {b_1 \cdot b_2 \over ||b_1||^2}$. If $|u| > {1\over 2}$, let $m = \lfloor u \rceil$ be the integer nearest to $u$, and let $b_2 = b_2 - mb_1$.
  3. If $||b_1|| > ||b_2||$, then swap $b_1$ and $b_2$, and repeat step 2. Otherwise, output $b_1$.

Following the algorithm above, I wrote the script below in SageMath.

def Gauss_2d_reduce(basis):
    # make b1 the shorter of the two input vectors
    (b1, b2) = (basis[0], basis[1]) if basis[0].norm() < basis[1].norm() else (basis[1], basis[0])
    while True:
        u = (b1[0]*b2[0] + b1[1]*b2[1]) / b1.norm()^2
        if abs(u) > 0.5:
            m = round(u)          # nearest integer to u
            b2 = b2 - m*b1
        if b1.norm() > b2.norm():
            b1, b2 = b2, b1
        else:
            return [b1, b2]

The basis found by Gauss's algorithm is not exactly orthogonal, but it is the closest to orthogonal that we can get.
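For readers without Sage, the same two-dimensional reduction can be sketched in plain Python (the name `gauss_reduce` and the tuple representation are mine):

```python
def gauss_reduce(b1, b2):
    """Gauss (Lagrange) reduction of a 2D lattice basis given as int tuples."""
    def norm2(v):
        return v[0] * v[0] + v[1] * v[1]
    def dot(u, v):
        return u[0] * v[0] + u[1] * v[1]
    if norm2(b1) > norm2(b2):                 # make b1 the shorter vector
        b1, b2 = b2, b1
    while True:
        m = round(dot(b1, b2) / norm2(b1))    # nearest integer to u
        b2 = (b2[0] - m * b1[0], b2[1] - m * b1[1])
        if norm2(b2) < norm2(b1):             # still shrinking: swap and repeat
            b1, b2 = b2, b1
        else:
            return b1, b2
```

For example, `gauss_reduce((1, 1), (3, 2))` recovers the basis $(1,0),(0,1)$ of $\mathbb Z^2$.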

Gram-Schmidt Orthogonalization

To generalize the algorithm to $n$ dimensions, we need a way to construct an $n$-dimensional orthogonal basis from the given basis, which leads us to Gram-Schmidt orthogonalization.

Theorem 3.1 (Gram-Schmidt Orthogonalization method). Given a basis $\{b_1,b_2,\cdots,b_m\}$ of a subspace $H_m$ of $\mathbb R^n$, we define
$$
b_1^* = b_1 \\
b_2^*=b_2 - u_{1,2}b_1^*, \ \ \ \ \ \ u_{1,2} = {b_2 \cdot b_1^* \over b_1^* \cdot b_1^*} \\
\vdots \\
b_m^* = b_m - \sum_{i<m}u_{i,m}b_i^*, \ \ \ \ \ \ u_{i,m} = {b_m \cdot b_i^* \over b_i^* \cdot b_i^*} \\
$$
Then, $ \{b_1^*,b_2^*,\cdots,b_m^*\} $ is an orthogonal basis of $H_m$.
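The construction of Theorem 3.1 can be sketched in plain Python with exact rational arithmetic (the names `gram_schmidt`, `b_star`, `U` are mine):

```python
from fractions import Fraction

def gram_schmidt(basis):
    """Return (b_star, U): the orthogonal basis b* and the upper-triangular
    coefficient matrix with U[i][k] = u_{i,k} and U[k][k] = 1."""
    def dot(u, v):
        return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))
    m = len(basis)
    b_star = []
    U = [[Fraction(int(i == k)) for k in range(m)] for i in range(m)]
    for k in range(m):
        v = [Fraction(x) for x in basis[k]]
        for i in range(k):
            # u_{i,k} = (b_k . b_i*) / (b_i* . b_i*)
            U[i][k] = dot(basis[k], b_star[i]) / dot(b_star[i], b_star[i])
            v = [a - U[i][k] * b for a, b in zip(v, b_star[i])]
        b_star.append(v)
    return b_star, U
```

For the basis $\{(1,1,0),(1,0,1)\}$ this gives $b_2^* = ({1\over2},-{1\over2},1)$, which is orthogonal to $b_1^* = (1,1,0)$.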

Based on this Theorem, if we set $u_{m,m} = 1$, then we have
$$
b_m = \sum ^m_{i=1}u_{i,m}b_i^*
$$
Therefore, we can write the above formula in matrix form, $ B = B^*U $.

where basis vectors are columns in $B$ and $B^*$. Thus, we have
$$
U = \begin{pmatrix}
u_{1,1}&u_{1,2}&\cdots&u_{1,n} \\
0&u_{2,2}&\cdots&u_{2,n} \\
\vdots&\vdots&\ddots&\vdots \\
0&0&\cdots&u_{n,n} \\
\end{pmatrix}
=
\begin{pmatrix}
1&u_{1,2}&\cdots&u_{1,n} \\
0&1&\cdots&u_{2,n} \\
\vdots&\vdots&\ddots&\vdots \\
0&0&\cdots&1 \\
\end{pmatrix}
$$
Since $U$ is upper triangular with unit diagonal, obviously $\det(U) = 1$.

To understand the matrix better, let us write it out at length, translating the formulas of Theorem 3.1 into matrix form:

$$
(b_1^*,b_2^*,\cdots,b_n^*)
\begin{pmatrix}
1&u_{1,2}&\cdots&u_{1,n} \\
0&1&\cdots&u_{2,n} \\
\vdots&\vdots&\ddots&\vdots \\
0&0&\cdots&1 \\
\end{pmatrix} = (b_1,b_2,\cdots,b_n)
$$

At first I couldn't understand why $B = B^*U$, but reading the product column by column makes it clear: the $k$-th column of $B^*U$ is $\sum_{i\le k}u_{i,k}b_i^*$, which by Theorem 3.1 (with $u_{k,k}=1$) is exactly $b_k$.

LLL Basis Reduction

Definition 4.1 (LLL reduced basis). Let $\{b_1,b_2,\cdots,b_n\}$ be a basis for an $n$-dimensional lattice $\mathcal L$, let $\{b_1^{∗},b_2^{∗},\cdots,b_n^{∗}\}$ be the orthogonal basis generated in Theorem 3.1, and let $u_{i,k}={b_k \cdot b_i^{∗} \over b_i^{∗} \cdot b_i^{∗}}$. We say $\{b_1,b_2,\cdots,b_n\}$ is an LLL reduced basis if it satisfies two conditions:

(1) $\forall i < k,\ |u_{i,k}| \le {1\over 2},$
(2) For each $i$, $||b_{i+1}^{∗} + u_{i,i+1}b_i^{∗}||^2 \ge {3\over 4}||b_i^{∗}||^2$.

Remark. The constant ${3\over 4}$ is chosen for the simplicity of the paper. Any constant strictly between ${1\over 4}$ and 1 guarantees that the algorithm terminates in polynomial time.

Remark. Condition (2) enforces an ordering of the basis vectors, like what we did in the two-dimensional case.
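Both conditions of Definition 4.1 can be checked directly with exact arithmetic; below is a plain-Python sketch (the function `is_lll_reduced` is my own name, not from the paper):

```python
from fractions import Fraction

def is_lll_reduced(basis, delta=Fraction(3, 4)):
    """Check both conditions of an LLL reduced basis (Definition 4.1)."""
    def dot(u, v):
        return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))
    n = len(basis)
    # Gram-Schmidt: b_star[k] = b_k - sum_{i<k} U[i][k] * b_star[i]
    b_star = []
    U = [[Fraction(0)] * n for _ in range(n)]
    for k in range(n):
        v = [Fraction(x) for x in basis[k]]
        for i in range(k):
            U[i][k] = dot(basis[k], b_star[i]) / dot(b_star[i], b_star[i])
            v = [a - U[i][k] * b for a, b in zip(v, b_star[i])]
        b_star.append(v)
    # condition (1): |u_{i,k}| <= 1/2 for all i < k
    if any(abs(U[i][k]) > Fraction(1, 2) for k in range(n) for i in range(k)):
        return False
    # condition (2): ||b*_{i+1} + u_{i,i+1} b*_i||^2 >= delta * ||b*_i||^2
    for i in range(n - 1):
        w = [a + U[i][i + 1] * b for a, b in zip(b_star[i + 1], b_star[i])]
        if dot(w, w) < delta * dot(b_star[i], b_star[i]):
            return False
    return True
```

For instance, the standard basis of $\mathbb Z^2$ passes both conditions, while $\{(1,1),(2,0)\}$ fails condition (1) because $u_{1,2} = 1$.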

Given a basis $\{b_1,b_2,\cdots,b_n\}$ in $n$ dimensions, the LLL algorithm obtains an LLL reduced basis as follows.

LLL Algorithm

  • Step 1: Compute the most orthogonal basis we can, based on Gram-Schmidt orthogonalization: subtract from each basis vector the nearest-integer multiples of the earlier vectors.

  • Step 2: Check the second condition; if it fails for some $i$, swap $b_i$ and $b_{i+1}$ and go back to Step 1.

    Because I don't know how to choose the constant, I run the LLL algorithm four times with different constants for one output.

    def LLL_Algorithm(Lattice, constant=1/4):
        n = len(Lattice)                      # number of basis vectors
        quickSort(Lattice, 0, n - 1)          # sort by norm, shortest first
        while True:
            finish = True
            # size-reduction: subtract the nearest-integer multiple of every
            # earlier vector from each basis vector
            for i in range(1, n + 1):
                for k in range(1, i):
                    m = round((Lattice[i-1]*Lattice[k-1]) / (Lattice[k-1]*Lattice[k-1]))
                    Lattice[i-1] = Lattice[i-1] - m*Lattice[k-1]
            # check the second condition (here on the basis vectors themselves,
            # as an approximation of the Gram-Schmidt vectors)
            for i in range(n - 1):
                u = (Lattice[i+1]*Lattice[i]) / (Lattice[i]*Lattice[i])
                if ((Lattice[i+1] + u*Lattice[i]).norm())^2 < constant * (Lattice[i].norm())^2:
                    Lattice[i], Lattice[i+1] = Lattice[i+1], Lattice[i]
                    finish = False
                    break
            if finish:
                if constant == 1:
                    return Lattice
                else:
                    return LLL_Algorithm(Lattice, constant + 1/4)

    def partition(Lattice, low, high):
        i = low - 1
        pivot = Lattice[high].norm()
        for j in range(low, high):
            if Lattice[j].norm() <= pivot:
                i = i + 1
                Lattice[i], Lattice[j] = Lattice[j], Lattice[i]
        Lattice[i + 1], Lattice[high] = Lattice[high], Lattice[i + 1]
        return i + 1

    def quickSort(Lattice, low, high):
        if low < high:
            pi = partition(Lattice, low, high)
            quickSort(Lattice, low, pi - 1)
            quickSort(Lattice, pi + 1, high)

Claim 4.1. If $\{b_1,b_2,\cdots,b_n\}$ is an $n$-dimensional LLL reduced basis of a lattice $\mathcal L$, then $||b_1|| \le 2^{n-1 \over 2 }\lambda(\mathcal L)$, where $\lambda(\mathcal L)$ is the length of the shortest vector of $\mathcal L$.
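As a sanity check of Claim 4.1 on a tiny example, $\lambda(\mathcal L)$ can be brute-forced over a small coefficient range (the helper `shortest_norm` and its bound are mine, and are only adequate for a small, well-reduced basis like this one):

```python
from itertools import product
from math import sqrt

def shortest_norm(basis, bound=5):
    """Brute-force the shortest nonzero vector length, trying all integer
    coefficient vectors with entries in [-bound, bound]."""
    dim = len(basis[0])
    best = None
    for coeffs in product(range(-bound, bound + 1), repeat=len(basis)):
        if all(c == 0 for c in coeffs):
            continue
        v = [sum(c * b[j] for c, b in zip(coeffs, basis)) for j in range(dim)]
        length = sqrt(sum(x * x for x in v))
        if best is None or length < best:
            best = length
    return best

basis = [[1, 0], [0, 2]]      # an LLL reduced basis; lambda(L) = 1
lam = shortest_norm(basis)
b1_norm = sqrt(sum(x * x for x in basis[0]))
# Claim 4.1: ||b1|| <= 2^((n-1)/2) * lambda(L)
assert b1_norm <= 2 ** ((len(basis) - 1) / 2) * lam
```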

In this paper, the author proves in the next part that the LLL algorithm terminates in time polynomial in $n$. If you want to know how to prove that, you can read the original paper.

Conclusion

I think this paper made the LLL algorithm easy to understand, but I want to know more details about it. If I have time, I would like to read the original LLL paper! -> https://link.springer.com/content/pdf/10.1007/BF01457454.pdf