We will study an iterative method for solving linear systems: the Gauss-Seidel method. The aim is to build a sequence of approximations that converges to the true solution.

Iterative method

The Gauss-Seidel method is an iterative method for solving linear systems of the form

\[Ax=b\]

To do this, we construct a sequence $x^{(k)}$ that converges to the fixed point (the solution) $x$.

Given $x^{(0)}$, we build a sequence $x^{(k)}$ such that $x^{(k+1)}=F(x^{(k)})$ for $k \in \mathbf{N}$.

We write $A=M-N$, where $M$ is an invertible matrix.

\[Ax=b \Leftrightarrow Mx=Nx+b \Leftrightarrow x = M^{-1}Nx+M^{-1}b = F(x)\]

where $F$ is an affine function.

Algorithm

\[\left\{ \begin{array}{ll} x^{(0)} \textrm{ given}, & \\ x^{(k+1)} = M^{-1}Nx^{(k)}+M^{-1}b & \textrm{for } k \geq 0. \end{array} \right.\]

If $x$ is a solution of $Ax=b$, then $x = M^{-1}Nx+M^{-1}b$, i.e. $x$ is a fixed point of $F$.
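As a concrete illustration, here is a minimal NumPy sketch of this general splitting iteration; the function name, the fixed iteration count, and the fact that $M$ and $N$ are passed explicitly are assumptions made for the example.

```python
import numpy as np

def splitting_iteration(M, N, b, x0, num_iters=50):
    """Iterate x^(k+1) = M^{-1} (N x^(k) + b) for a splitting A = M - N."""
    x = np.asarray(x0, dtype=float)
    for _ in range(num_iters):
        # Solve M x_new = N x + b instead of forming M^{-1} explicitly.
        x = np.linalg.solve(M, N @ x + b)
    return x
```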

Error

Let $e^{(k)} = x^{(k)} - x$ be the error vector, where $x$ is the exact solution. Since $x = M^{-1}Nx+M^{-1}b$, subtracting this identity from the iteration gives

\[e^{(k+1)}=x^{(k+1)}-x=M^{-1}N(x^{(k)}-x)=M^{-1}Ne^{(k)}\]

We set $B = M^{-1}N$, which gives

\[e^{(k+1)}=Be^{(k)}=B^{k+1}e^{(0)}.\]

Convergence

The algorithm converges if $\lim_{k \to \infty} \| e^{(k)} \| = 0$ for every initial error $e^{(0)}$, which is equivalent to $\lim_{k \to \infty} B^k = 0$ (the null matrix).

Theorem: $\lim_{k \to \infty} B^k = 0$ if and only if the spectral radius of the matrix $B$ satisfies \(\rho(B)<1.\) Recall that $\rho(B) = \max_{i = 1,\ldots,n} |\lambda_i|$, where $\lambda_1,\ldots,\lambda_n$ are the eigenvalues of $B$.
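In practice, this condition can be checked numerically by computing the eigenvalues of $B = M^{-1}N$. A minimal sketch, assuming NumPy and a splitting given by the matrices $M$ and $N$ (the function name is illustrative):

```python
import numpy as np

def spectral_radius(M, N):
    """Spectral radius of the iteration matrix B = M^{-1} N."""
    B = np.linalg.solve(M, N)               # B = M^{-1} N, without forming the inverse
    return max(abs(np.linalg.eigvals(B)))   # largest eigenvalue modulus

# The iteration converges for every x^(0) iff spectral_radius(M, N) < 1.
```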

Theorem: If $A$ is strictly diagonally dominant, i.e. \(\left | a_{ii} \right | > \sum_{j \ne i} {\left | a_{ij} \right |},\quad \forall i=1,\ldots,n,\) then for every initial guess $x^{(0)}$ the Gauss-Seidel algorithm converges to the solution $x$ of the system $Ax=b.$
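This hypothesis is easy to verify row by row. A minimal sketch, assuming NumPy and an arbitrary square matrix $A$ (the function name is illustrative):

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Check |a_ii| > sum_{j != i} |a_ij| for every row i."""
    A = np.asarray(A, dtype=float)
    diag = np.abs(np.diag(A))
    off_diag = np.abs(A).sum(axis=1) - diag
    return bool(np.all(diag > off_diag))
```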

Gauss-Seidel Method

We decompose $A$ as \(A=D-E-F\) with

  • $D$ the diagonal part of $A$
  • $-E$ the strictly lower triangular part of $A$
  • $-F$ the strictly upper triangular part of $A$.

In the Gauss-Seidel method we choose $M = D-E$ and $N = F$ (in the Jacobi method, $M = D$ and $N = E+F$).

\[x^{(k+1)}=(D-E)^{-1}Fx^{(k)}+(D-E)^{-1}b\]
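The decomposition and this matrix-form iteration could be sketched in NumPy as follows; the function name and the fixed iteration count are illustrative choices, and the lower triangular system is solved with a generic solver for simplicity.

```python
import numpy as np

def gauss_seidel_matrix_form(A, b, x0, num_iters=50):
    """Gauss-Seidel iteration written with the splitting M = D - E, N = F."""
    D = np.diag(np.diag(A))
    E = -np.tril(A, k=-1)   # A = D - E - F: -E is the strictly lower part
    F = -np.triu(A, k=1)    #                -F is the strictly upper part
    M = D - E               # lower triangle of A, diagonal included
    N = F
    x = np.asarray(x0, dtype=float)
    for _ in range(num_iters):
        # Solve the lower triangular system (D - E) x_new = F x + b.
        x = np.linalg.solve(M, N @ x + b)
    return x
```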

Component by component, we obtain

\[x^{(k+1)}_i = \displaystyle\frac{b_i - \displaystyle\sum_{j=1}^{i-1} a_{ij} x^{(k+1)}_j - \displaystyle\sum_{j=i+1}^{n} a_{ij} x^{(k)}_j }{a_{ii}}\]
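One sweep of this componentwise update can be written in place, since the already computed components $x^{(k+1)}_j$, $j<i$, are used as soon as they are available. A minimal sketch (illustrative names, assuming NumPy and a float array x):

```python
import numpy as np

def gauss_seidel_sweep(A, b, x):
    """One in-place Gauss-Seidel sweep: overwrite x^(k) with x^(k+1)."""
    n = len(b)
    for i in range(n):
        # x[:i] already holds the new values x_j^(k+1); x[i+1:] still holds x_j^(k).
        s_lower = A[i, :i] @ x[:i]
        s_upper = A[i, i+1:] @ x[i+1:]
        x[i] = (b[i] - s_lower - s_upper) / A[i, i]
    return x
```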

Stopping criterion

As a stopping criterion, we can use the residual vector, which gives, for a given precision $\epsilon$:

\[\frac{\|r^{(k)} \|}{\|b\|}=\frac{\|b-Ax^{(k)} \|}{\|b\|} < \epsilon\]
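Combining the sweep above with this criterion gives a complete solver sketch; the default tolerance and the maximum number of iterations are arbitrary illustrative values.

```python
import numpy as np

def gauss_seidel_solve(A, b, x0, eps=1e-8, max_iters=1000):
    """Gauss-Seidel iteration stopped on the relative residual ||b - A x|| / ||b||."""
    x = np.asarray(x0, dtype=float)
    b_norm = np.linalg.norm(b)
    for _ in range(max_iters):
        x = gauss_seidel_sweep(A, b, x)            # one sweep, as defined above
        if np.linalg.norm(b - A @ x) / b_norm < eps:
            break
    return x
```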