Important: Please read the installation page for details about how to install the toolboxes. $\newcommand{\dotp}[2]{\langle #1, #2 \rangle}$ $\newcommand{\enscond}[2]{\lbrace #1, #2 \rbrace}$ $\newcommand{\pd}[2]{ \frac{ \partial #1}{\partial #2} }$ $\newcommand{\umin}[1]{\underset{#1}{\min}\;}$ $\newcommand{\umax}[1]{\underset{#1}{\max}\;}$ $\newcommand{\umin}[1]{\underset{#1}{\min}\;}$ $\newcommand{\uargmin}[1]{\underset{#1}{argmin}\;}$ $\newcommand{\norm}[1]{\|#1\|}$ $\newcommand{\abs}[1]{\left|#1\right|}$ $\newcommand{\choice}[1]{ \left\{ \begin{array}{l} #1 \end{array} \right. }$ $\newcommand{\pa}[1]{\left(#1\right)}$ $\newcommand{\diag}[1]{{diag}\left( #1 \right)}$ $\newcommand{\qandq}{\quad\text{and}\quad}$ $\newcommand{\qwhereq}{\quad\text{where}\quad}$ $\newcommand{\qifq}{ \quad \text{if} \quad }$ $\newcommand{\qarrq}{ \quad \Longrightarrow \quad }$ $\newcommand{\ZZ}{\mathbb{Z}}$ $\newcommand{\CC}{\mathbb{C}}$ $\newcommand{\RR}{\mathbb{R}}$ $\newcommand{\EE}{\mathbb{E}}$ $\newcommand{\Zz}{\mathcal{Z}}$ $\newcommand{\Ww}{\mathcal{W}}$ $\newcommand{\Vv}{\mathcal{V}}$ $\newcommand{\Nn}{\mathcal{N}}$ $\newcommand{\NN}{\mathcal{N}}$ $\newcommand{\Hh}{\mathcal{H}}$ $\newcommand{\Bb}{\mathcal{B}}$ $\newcommand{\Ee}{\mathcal{E}}$ $\newcommand{\Cc}{\mathcal{C}}$ $\newcommand{\Gg}{\mathcal{G}}$ $\newcommand{\Ss}{\mathcal{S}}$ $\newcommand{\Pp}{\mathcal{P}}$ $\newcommand{\Ff}{\mathcal{F}}$ $\newcommand{\Xx}{\mathcal{X}}$ $\newcommand{\Mm}{\mathcal{M}}$ $\newcommand{\Ii}{\mathcal{I}}$ $\newcommand{\Dd}{\mathcal{D}}$ $\newcommand{\Ll}{\mathcal{L}}$ $\newcommand{\Tt}{\mathcal{T}}$ $\newcommand{\si}{\sigma}$ $\newcommand{\al}{\alpha}$ $\newcommand{\la}{\lambda}$ $\newcommand{\ga}{\gamma}$ $\newcommand{\Ga}{\Gamma}$ $\newcommand{\La}{\Lambda}$ $\newcommand{\si}{\sigma}$ $\newcommand{\Si}{\Sigma}$ $\newcommand{\be}{\beta}$ $\newcommand{\de}{\delta}$ $\newcommand{\De}{\Delta}$ $\newcommand{\phi}{\varphi}$ $\newcommand{\th}{\theta}$ $\newcommand{\om}{\omega}$ 
$\newcommand{\Om}{\Omega}$
This tour explores the use of the conjugate gradient method for the solution of large scale symmetric linear systems.
addpath('toolbox_signal')
addpath('toolbox_general')
addpath('solutions/optim_3_cgs')
The conjugate gradient (CG) method is an iterative method tailored to the solution of large symmetric positive definite linear systems $Ax=b$.
We first give an example using a full explicit matrix $A$, but one should keep in mind that this method is especially efficient when the matrix $A$ is sparse, or more generally when applying $A$ to a vector is fast. This is usually the case in image processing, where $A$ is often composed of convolutions, fast transforms (wavelet, Fourier) or diagonal operators (e.g. for inpainting).
One initializes the CG method as $$ x_0 \in \RR^N, \quad r_0 = b - A x_0, \quad p_0 = r_0. $$ The iterations of the method read $$ \choice{ \alpha_k = \frac{ \dotp{r_k}{r_k} }{ \dotp{p_k}{A p_k} } \\ x_{k+1} = x_k + \alpha_k p_k \\ r_{k+1} = r_k - \alpha_k A p_k \\ \beta_k = \frac{ \dotp{r_{k+1}}{r_{k+1}} }{ \dotp{r_k}{r_k} } \\ p_{k+1} = r_{k+1} + \beta_k p_k } $$
Note that one has $r_k = b - Ax_k$ which is the residual at iteration $k$. One can thus stop the method when $\norm{r_k}$ is smaller than some user-defined threshold.
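As an aside for readers following along outside Matlab, the iteration above can be sketched in NumPy. This is an illustrative sketch, not the tour's code; the test matrix, right-hand side, tolerance and iteration cap below are assumptions chosen for the example.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=500):
    # Initialization: x_0 = 0, r_0 = b - A x_0, p_0 = r_0.
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # step length alpha_k
        x = x + alpha * p            # x_{k+1} = x_k + alpha_k p_k
        r = r - alpha * Ap           # r_{k+1} = r_k - alpha_k A p_k
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:    # stop when the residual norm is small
            break
        beta = rs_new / rs           # beta_k
        p = r + beta * p             # p_{k+1} = r_{k+1} + beta_k p_k
        rs = rs_new
    return x

# Small symmetric positive definite test system.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 0.1 * np.eye(50)
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
```

Note that the residual is updated incrementally rather than recomputed as $b - Ax_k$, so each iteration costs a single matrix-vector product.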
Dimension of the problem.
n = 500;
Matrix $A$ of the linear system. We use here a random symmetric positive definite matrix, and shift its diagonal to make it well conditioned.
A = randn(n);
A = A*A' + .1*eye(n);
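As a sanity check (a NumPy sketch, not part of the tour), one can verify that a matrix of the form $M M^\top + 0.1 \, \text{Id}$ is symmetric with eigenvalues bounded below by $0.1$, hence positive definite; the size and random seed here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
M = rng.standard_normal((n, n))
A = M @ M.T + 0.1 * np.eye(n)   # M M^T is PSD; the diagonal shift makes it PD

# Symmetry error and smallest eigenvalue (>= 0.1 up to round-off).
sym_err = np.linalg.norm(A - A.T)
lmin = np.linalg.eigvalsh(A).min()
```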
Right hand side of the linear system. We use here a random vector.
b = randn(n,1);
Canonical inner product in $\RR^N$.
dotp = @(a,b)sum(a(:).*b(:));
Exercise 1
Implement the conjugate gradient method, and monitor the decay of the energy $\norm{r_k}=\norm{Ax_k-b}$.
exo1()
%% Insert your code here.
Local differential operators like gradient, divergence and laplacian are the building blocks for variational image processing.
Load an image $g \in \RR^N$ of $N=n \times n$ pixels.
n = 256;
g = rescale( load_image('lena',n) );
Display it.
clf;
imageplot(g);
For continuous functions, the gradient reads $$ \nabla g(x) = \pa{ \pd{g(x)}{x_1}, \pd{g(x)}{x_2} } \in \RR^2. $$ (note that here, the variable $x$ denotes the 2-D spatial position).
We discretize this differential operator using first order finite differences. $$ (\nabla g)_i = ( g_{i_1,i_2}-g_{i_1-1,i_2}, g_{i_1,i_2}-g_{i_1,i_2-1} ) \in \RR^2. $$ Note that for simplicity we use periodic boundary conditions.
Compute its gradient, using finite differences.
s = [n 1:n-1];
grad = @(f)cat(3, f-f(s,:), f-f(:,s));
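For reference, the same periodic finite-difference gradient can be sketched in NumPy (an illustrative equivalent, not the tour's code): `np.roll` plays the role of the circular index shift `s` above.

```python
import numpy as np

def grad(f):
    # Backward differences with periodic boundary conditions:
    # (grad f)_i = (f_{i1,i2} - f_{i1-1,i2}, f_{i1,i2} - f_{i1,i2-1}).
    dx = f - np.roll(f, 1, axis=0)
    dy = f - np.roll(f, 1, axis=1)
    return np.stack((dx, dy), axis=-1)

g = np.arange(16.0).reshape(4, 4)   # small test image (an assumption)
v = grad(g)                         # shape (4, 4, 2)
```

Because the boundary conditions are periodic, the differences along each axis sum to zero exactly.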
One thus has $ \nabla : \RR^N \rightarrow \RR^{N \times 2}. $
v = grad(g);
One can display each of its components.
clf;
imageplot(v(:,:,1), 'd/dx', 1,2,1);
imageplot(v(:,:,2), 'd/dy', 1,2,2);
One can also display it using a color image.
clf;
imageplot(v);