Math-Linux.com

Knowledge base dedicated to Linux and applied mathematics.


Preconditioned Conjugate Gradient Method


Solving linear systems arising from the finite difference or finite element method exposes the limits of the conjugate gradient method: the spectral condition number of such matrices is too high. The preconditioned conjugate gradient method consists in introducing an auxiliary matrix C.

Problem

We want to solve the following system:

Ax=b,


where A is an n\times n symmetric positive definite matrix (A^\top =A and x^\top Ax>0 for all nonzero x\in \mathbb{R}^n).
Let x_\star be the exact solution of this system.

Spectral condition number

It sometimes happens that the spectral condition number \kappa(A) is too high (the eigenvalues are poorly distributed). Preconditioning
consists in introducing a nonsingular matrix C\in\mathcal{M}_n(\mathbb{R}) and solving the equivalent system:

C^{-1}(Ax)=C^{-1}b\Leftrightarrow Ax=b


so that, for a judicious choice of the matrix C, the new spectral condition number is smaller.
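As an illustration (the matrix below is hypothetical data, not from the article), one can compare the condition number of A with that of the Jacobi-preconditioned matrix; the symmetrically scaled matrix D^{-1/2} A D^{-1/2} has the same eigenvalues as C^{-1}A with C = diag(A):

```python
import numpy as np

# A small SPD matrix with badly scaled diagonal entries (hypothetical example)
A = np.array([[100.0, 1.0, 0.0],
              [1.0, 10.0, 0.5],
              [0.0, 0.5, 1.0]])

# Symmetric Jacobi scaling: M = D^{-1/2} A D^{-1/2} is similar to C^{-1}A
d_inv_sqrt = 1.0 / np.sqrt(np.diag(A))
M = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

print(np.linalg.cond(A))   # large (eigenvalues spread over two orders of magnitude)
print(np.linalg.cond(M))   # much closer to 1
```

The diagonal of M is all ones and its off-diagonal entries are small, so its eigenvalues cluster around 1.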

Preconditioned Conjugate Gradient Method

Let x_0\in \mathbb{R}^{n} be an initial vector; the preconditioned conjugate gradient algorithm
is the following:

r_{0}=b-Ax_{0}


  z_{0}={C}^{-1}r_{0}


d_{0}=z_{0}


For k=0,1,2,\ldots

\alpha_k=\frac{z_{k}^{\top}r_{k}}{d_{k}^{\top}Ad_{k}}


 x_{k+1}=x_{k}+\alpha_kd_k


r_{k+1}=r_{k}-\alpha_kAd_k


z_{k+1}={C}^{-1}r_{k+1}


\beta_{k+1}=\frac{z_{k+1}^{\top}r_{k+1}}{z_{k}^{\top}r_{k}}


d_{k+1}=z_{k+1}+\beta_{k+1}d_{k}


EndFor
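The algorithm above can be transcribed directly into Python/NumPy. This is a minimal sketch: the function name `pcg` and the convention of passing a function that applies C^{-1} are choices made here, not part of the article.

```python
import numpy as np

def pcg(A, b, apply_Cinv, x0=None, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient for a symmetric positive definite A.
    apply_Cinv is a function computing C^{-1} r for a vector r."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x                          # r_0 = b - A x_0
    z = apply_Cinv(r)                      # z_0 = C^{-1} r_0
    d = z.copy()                           # d_0 = z_0
    for _ in range(max_iter):
        Ad = A @ d
        alpha = (z @ r) / (d @ Ad)         # alpha_k = z_k^T r_k / d_k^T A d_k
        x = x + alpha * d                  # x_{k+1} = x_k + alpha_k d_k
        r_new = r - alpha * Ad             # r_{k+1} = r_k - alpha_k A d_k
        if np.linalg.norm(r_new) < tol:
            return x
        z_new = apply_Cinv(r_new)          # z_{k+1} = C^{-1} r_{k+1}
        beta = (z_new @ r_new) / (z @ r)   # beta_{k+1} = z_{k+1}^T r_{k+1} / z_k^T r_k
        d = z_new + beta * d               # d_{k+1} = z_{k+1} + beta_{k+1} d_k
        r, z = r_new, z_new
    return x
```

Passing `lambda r: r` for `apply_Cinv` (i.e. C = I) recovers the unpreconditioned conjugate gradient method.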

Jacobi Preconditioner

The Jacobi preconditioner consists in taking the diagonal of A as the matrix C, i.e.


C_{ij}=
\left\{
\begin{array}{cc}
A_{ii}  & \textrm{if }i=j ,\\
0 &\textrm{otherwise}.
  \end{array}
\right.

The advantages of this preconditioner are its ease of implementation and its low memory requirements.
However, other preconditioners can make the solution of the linear system faster; this is the case of the
SSOR preconditioner.
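Since C is diagonal, applying C^{-1} reduces to an elementwise division, which is what makes the Jacobi preconditioner so cheap. A minimal sketch (the function name is a choice made here):

```python
import numpy as np

def jacobi_preconditioner(A):
    """Return a function applying C^{-1} = diag(A)^{-1} to a vector."""
    d = np.diag(A).copy()           # only the diagonal needs to be stored: O(n) memory
    return lambda r: r / d          # elementwise division: O(n) work per application

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
apply_Cinv = jacobi_preconditioner(A)
print(apply_Cinv(np.array([8.0, 9.0])))   # [2. 3.]
```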

SSOR Preconditioner (Symmetric Successive Over-Relaxation)

We decompose the symmetric matrix A as follows:

A=L+D+L^{\top}


where L is the strictly lower triangular part of A and D is the diagonal of A. The SSOR preconditioner consists
in taking

C=(\frac{D}{\omega}+L)\frac{\omega}{2-\omega}D^{-1}(\frac{D}{\omega}+L^{\top})


where \omega is a relaxation parameter. A necessary and sufficient condition for the convergence of the preconditioned
conjugate gradient algorithm is to choose the parameter \omega in the interval ]0,2[.
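Since C factors as a product of a lower triangular matrix, a diagonal scaling, and an upper triangular matrix, C^{-1}r can be applied with two triangular solves instead of forming C explicitly. A dense sketch (the function name and the use of `np.linalg.solve` for the triangular systems are choices made here; a production code would use dedicated triangular solvers):

```python
import numpy as np

def ssor_preconditioner(A, omega=1.0):
    """Return a function applying C^{-1} for the SSOR preconditioner
    C = (D/omega + L) * (omega/(2-omega)) * D^{-1} * (D/omega + L^T).
    omega must lie in the open interval (0, 2)."""
    d = np.diag(A)
    L = np.tril(A, k=-1)                     # strictly lower triangular part of A
    lower = np.diag(d) / omega + L           # D/omega + L   (lower triangular)
    upper = lower.T                          # D/omega + L^T (upper triangular)
    def apply_Cinv(r):
        y = np.linalg.solve(lower, r)        # solve (D/omega + L) y = r
        y = d * y * (2.0 - omega) / omega    # y <- ((2-omega)/omega) D y
        return np.linalg.solve(upper, y)     # solve (D/omega + L^T) x = y
    return apply_Cinv
```

With omega = 1 this reduces to the symmetric Gauss-Seidel preconditioner C = (D+L) D^{-1} (D+L^T).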
