Consider $Ax=b$:
$$\left[\begin{array}{cccc} a_{1,1} & a_{1,2} & a_{1,3} & a_{1,4} \\ a_{2,1} & a_{2,2} & a_{2,3} & a_{2,4} \\ a_{3,1} & a_{3,2} & a_{3,3} & a_{3,4} \\ a_{4,1} & a_{4,2} & a_{4,3} & a_{4,4} \end{array}\right] \left[\begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \\ x_{4} \end{array}\right]= \left[\begin{array}{c} b_{1} \\ b_{2} \\ b_{3} \\ b_{4} \end{array}\right]$$
$$\rightarrow \sum_j a_{i,j}x_j = b_i,\,\,i=1\ldots n.$$
$$a_{i,1}x_1 + a_{i,2}x_2 + a_{i,3}x_3 + a_{i,4}x_4 = b_i,\,\, i=1,2,3,4.$$
Solving row $i$ for $x_i$ and iterating gives the update
$$ x_i^{k+1} = x_i^k + \frac{1}{a_{i,i}}\left(b_i - \sum_{j=1}^n a_{i,j}x_j^k\right).$$
Define the residual
$$R_i^k = b_i - \sum_{j=1}^n a_{i,j}x_j^k,$$
so that we can write
$$x_i^{k+1} = x_i^k + \frac{1}{a_{i,i}}R_{i}^k.$$
* $Ax=b$, so $b-Ax=0$, but $b-Ax^k\ne0$ (since we are not converged yet); rather $b-Ax^k = R^k$. At convergence, $R^k\rightarrow 0$.
* That is, the iterations drive the residual (error) to zero.
* This happens faster if we start with a better guess for $x$.
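As a concrete illustration (my own sketch, not the notebook's code), one vectorized Jacobi sweep in this residual form can be written in NumPy as follows; `A`, `b`, and `x` are assumed to already exist.

```python
import numpy as np

def jacobi_sweep(A, b, x):
    """One Jacobi update: x^{k+1} = x^k + R^k / diag(A), with R^k = b - A x^k."""
    R = b - A @ x              # residual of the current iterate
    return x + R / np.diag(A)  # every component is updated from the *old* iterate
```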
In matrix form, the Jacobi iteration solves $$Dx^{k+1} = -(L+U)x^k + b,$$ where $A = L+D+U$, $L$ is the (strictly) lower triangular part of $A$, $D$ is the diagonal of $A$, and $U$ is the (strictly) upper triangular part of $A$.
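This splitting is easy to form with NumPy; the small check below is my own illustration (the matrix is arbitrary), not part of the notebook.

```python
import numpy as np

A = np.array([[ 4., -1.,  2.],
              [-1.,  4., -1.],
              [ 2., -1.,  4.]])

L = np.tril(A, k=-1)       # strictly lower triangular part
D = np.diag(np.diag(A))    # diagonal part
U = np.triu(A, k=1)        # strictly upper triangular part

assert np.allclose(L + D + U, A)
```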
In implementing the Jacobi method, we keep the $x^k$ values in one array (called `xold`, say), and fill in a separate $x^{k+1}$ array (called `xnew`, say). Then we set `xold = xnew` before the start of the next iteration.

The Gauss-Seidel method instead uses each updated $x_i^{k+1}$ as soon as it is available. In matrix form we solve $$(L+D)x^{k+1} = -Ux^k + b,$$ or $$Bx = c,$$ where $B=L+D$ and $c=-Ux^k+b$ for each iteration.
* $L$, $D$, and $U$ are as for Jacobi above.
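A minimal in-place Gauss-Seidel sweep might look like the sketch below (not the notebook's code). Because `x` is overwritten element by element, each row update already sees the new values $x_j^{k+1}$ for $j<i$.

```python
import numpy as np

def gauss_seidel_sweep(A, b, x):
    """One Gauss-Seidel sweep, updating x in place."""
    n = len(b)
    for i in range(n):
        R_i = b[i] - A[i, :] @ x   # row-i residual using the latest values of x
        x[i] += R_i / A[i, i]      # overwrite x[i] immediately
    return x
```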
Gauss-Seidel takes a step of size $x_i^{k+1}-x_i^k = \Delta x_i^k = R_i^k/a_{i,i}$. We can scale that step by a relaxation factor $\omega$:
$$x_i^{k+1} = x_i^k + \omega\frac{R_i^k}{a_{i,i}}.$$
* Using the G.S. equation for $R_i^k$ above.
* For $\omega < 1.0$ we have under-relaxation (take a smaller step). Under-relaxation is used, e.g., in nonlinear solvers to help stability.
* For $\omega = 1.0$ we have Gauss-Seidel.
* For $\omega > 1.0$ we have over-relaxation (take a bigger step), which can accelerate convergence.
* For $\omega \ge 2.0$ the method diverges (see Morton and Mayers for a proof of this).
* The optimal relaxation factor is $$\omega_{opt} = \frac{2}{1+\sqrt{1-\rho_{Jacobi}^2}},$$ where $\rho_{Jacobi}$ is the spectral radius (magnitude of the largest eigenvalue) of the Jacobi iteration matrix.
* In some cases $\omega_{opt}$ can be found analytically.
* See N.R., 3rd edition, section 20.5.
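For concreteness, a relaxed Gauss-Seidel sweep differs from the plain Gauss-Seidel sweep only by the factor $\omega$. The sketch below is my own illustration, not the notebook's code; the default value of `omega` is arbitrary.

```python
import numpy as np

def sor_sweep(A, b, x, omega=1.2):
    """One relaxed Gauss-Seidel sweep; omega=1.0 recovers Gauss-Seidel."""
    n = len(b)
    for i in range(n):
        R_i = b[i] - A[i, :] @ x       # Gauss-Seidel residual (uses the latest x)
        x[i] += omega * R_i / A[i, i]  # relaxed step
    return x
```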
Now, we can measure convergence with
$$\frac{\lVert x^{k+1}-x^k\rVert_2}{\lVert x^{k+1}\rVert_2} \le \text{tol}.$$
* This is the length of the error vector relative to the length of the answer, and has a nice physical interpretation.
* This is a relative error treating the system of unknowns as a whole, rather than considering each individual variable.
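In NumPy this check is a one-liner. The sketch below assumes the criterion written above; the tolerance default is just a placeholder.

```python
import numpy as np

def converged(xnew, xold, tol=1e-10):
    """Relative change of the whole solution vector versus a tolerance."""
    return np.linalg.norm(xnew - xold) / np.linalg.norm(xnew) <= tol
```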
import numpy as np
Take $A$ as a penta-diagonal matrix with the following form for $n=7$. We'll solve the problem for $n=60$.
\begin{equation*} A = \left[\begin{array}{c c c c c c c} 4 & -1 & 0 & 1 & 0 & 0 & 0 \\ -1 & 4 & -1 & 0 & 1 & 0 & 0 \\ 0 & -1 & 4 & -1 & 0 & 1 & 0 \\ 1 & 0 & -1 & 4 & -1 & 0 & 1 \\ 0 & 1 & 0 & -1 & 4 & -1 & 0 \\ 0 & 0 & 1 & 0 & -1 & 4 & -1 \\ 0 & 0 & 0 & 1 & 0 & -1 & 4 \\ \end{array}\right] \end{equation*}We'll let $b$ be a vector of ones, and use a vector of zeros for the initial guess for $x$.
n = 60

# Penta-diagonal matrix: 4 on the main diagonal, -1 on the first
# sub/super-diagonals, and +1 on the third sub/super-diagonals.
A = np.diag(np.ones(n-3),-3) + np.diag(-1*np.ones(n-1),-1) + \
    np.diag(4*np.ones(n),0) + np.diag(-1.*np.ones(n-1),1) + \
    np.diag(np.ones(n-3),3)

b = np.ones(n)     # right-hand side: a vector of ones
x0 = np.zeros(n)   # initial guess: a vector of zeros
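The notebook's `jacobi` function is not shown in this section. The following is a minimal sketch consistent with how it is called below and with the residual-form update derived above; the stopping criterion and the `tol`/`maxit` defaults are my assumptions. It relies on the `numpy` import above.

```python
def jacobi(A, b, x0, tol=1e-10, maxit=10000):
    """Jacobi iteration: return the solution estimate and the number of iterations."""
    d = np.diag(A)
    xold = x0.copy()
    for k in range(1, maxit + 1):
        xnew = xold + (b - A @ xold) / d   # x^{k+1} = x^k + R^k / a_{i,i}
        if np.linalg.norm(xnew - xold) / np.linalg.norm(xnew) <= tol:
            return xnew, k
        xold = xnew
    return xold, maxit
```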
x, k_iter = jacobi(A,b,x0)
print(f"The problem required {k_iter} iterations")
x_py = np.linalg.solve(A,b)               # direct solution for comparison
reAvg = np.mean(np.abs((x_py-x)/x_py))    # average element-wise relative error
print(f"\nThe average relative error between the Jacobi function and np.linalg.solve is {reAvg:.3e}")
print(f"\nx =\n{x}")