$Ax=b$
$$\left[\begin{array}{cccc} a_{1,1} & a_{1,2} & a_{1,3} & a_{1,4} \\ a_{2,1} & a_{2,2} & a_{2,3} & a_{2,4} \\ a_{3,1} & a_{3,2} & a_{3,3} & a_{3,4} \\ a_{4,1} & a_{4,2} & a_{4,3} & a_{4,4} \end{array}\right] \left[\begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \\ x_{4} \end{array}\right]= \left[\begin{array}{c} b_{1} \\ b_{2} \\ b_{3} \\ b_{4} \end{array}\right]$$

$$\rightarrow \sum_j a_{i,j}x_j = b_i,\,\,i=1\ldots n.$$

$$a_{i,1}x_1 + a_{i,2}x_2 + a_{i,3}x_3 + a_{i,4}x_4 = b_i,\,\, i=1,2,3,4.$$

Solve for $x_i$:
$$ x_i^{k+1} = x_i^k + \frac{1}{a_{i,i}}\left(b_i - \sum_{j=1}^na_{i,j}x_j^k\right).$$
Define the residual

$$R_i^k = b_i - \sum_{j=1}^na_{i,j}x_j^k,$$

and we can write

$$x_i^{k+1} = x_i^k + \frac{1}{a_{i,i}}R_{i}^k.$$

* $Ax=b$, so $b-Ax=0$, but $b-Ax^k\ne0$ (since we are not yet converged); rather, $b-Ax^k = R^k$. At convergence, $R^k\rightarrow 0$.
* That is, the iterations drive the residual (error) to zero.
* This happens faster if we start with a better guess for $x$.
In matrix form, the Jacobi iteration is $$Dx^{k+1} = b - (L+U)x^k,$$ where $A = L+D+U$: $L$ is the strictly lower triangular part of $A$, $D$ is the diagonal of $A$, and $U$ is the strictly upper triangular part of $A$.
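The $A = L+D+U$ splitting is easy to verify numerically. A minimal sketch, assuming a small hypothetical $3\times3$ test matrix (`tril`, `triu`, `diag`, and `Diagonal` are from Julia's standard `LinearAlgebra` library):

```julia
using LinearAlgebra

# A small (hypothetical) matrix to demonstrate the A = L + D + U splitting.
A = [ 4.0 -1.0  1.0;
     -1.0  4.0 -1.0;
      1.0 -1.0  4.0]

L = tril(A, -1)          # strictly lower triangular part
D = Diagonal(diag(A))    # diagonal part
U = triu(A, 1)           # strictly upper triangular part

A_rebuilt = L + D + U    # recovers the original matrix exactly
```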
* In the code, we keep the $x^k$ array (called `xold`, say), and fill in a separate $x^{k+1}$ array (called `xnew`, say). Then we set `xold=xnew` before the start of the next iteration.

The Gauss-Seidel iteration can be written as $$Bx = c,$$ where $B=L+D$ and $c=-Ux^k+b$ for each iteration.

* $L$, $D$, and $U$ are as for Jacobi above.
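One way to sketch Gauss-Seidel in Julia (the function name `gauss_seidel` and the convergence test are assumptions, not prescribed above): sweeping through the rows and overwriting `x[i]` in place means each row equation uses the newest available values of $x$, so no separate `xold`/`xnew` arrays are needed.

```julia
using LinearAlgebra

# Gauss-Seidel sketch: update x[i] in place so each row equation
# uses the newest available values of x.
function gauss_seidel(A, b, x; tol=1e-5, maxit=10_000)
    n = length(b)
    for k in 1:maxit
        for i in 1:n
            Ri = b[i] - sum(A[i,j]*x[j] for j in 1:n)  # residual of row i
            x[i] += Ri / A[i,i]                        # immediate update
        end
        if norm(b - A*x) / norm(x) <= tol              # relative error check
            return x, k
        end
    end
    return x, maxit
end
```

The in-place update of `x[i]` is exactly what distinguishes Gauss-Seidel from Jacobi, and it typically roughly halves the number of iterations on problems where both converge.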
The update can be written as $x_i^{k+1} = x_i^k + \Delta x_i^k$, with $\Delta x_i^k = R_i^k/a_{i,i}$.
In successive over-relaxation (SOR), we scale the step by a relaxation factor $\omega$:

$$x_i^{k+1} = x_i^k + \omega\frac{R_i^k}{a_{i,i}}.$$

* Using the G.S. equation for $R_i^k$ above.
* For $\omega < 1.0$ we have under-relaxation (take a smaller step). Under-relaxation is used in, e.g., nonlinear solvers to help stability.
* For $\omega = 1.0$ we recover Gauss-Seidel.
* For $1.0 < \omega < 2.0$ we have over-relaxation (take a larger step).
* For $\omega \ge 2.0$ the method diverges (see Morton and Mayers for a proof of this).

The optimal relaxation factor is $$\omega_{opt} = \frac{2}{1+\sqrt{1-\rho_{Jacobi}^2}},$$ where $\rho_{Jacobi}$ is the spectral radius (magnitude of the largest eigenvalue) of the Jacobi iteration matrix.

* In some cases $\omega_{opt}$ can be found analytically.
* See Numerical Recipes, 3rd edition, section 20.5.
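The relaxed update can be sketched by inserting $\omega$ into a Gauss-Seidel sweep. A minimal sketch (the function name `sor` and the stopping criterion are assumptions):

```julia
using LinearAlgebra

# SOR sketch: a Gauss-Seidel sweep with each update scaled by ω.
function sor(A, b, x, ω; tol=1e-5, maxit=10_000)
    n = length(b)
    for k in 1:maxit
        for i in 1:n
            Ri = b[i] - sum(A[i,j]*x[j] for j in 1:n)  # G.S. residual of row i
            x[i] += ω * Ri / A[i,i]                    # relaxed update
        end
        if norm(b - A*x) / norm(x) <= tol
            return x, k
        end
    end
    return x, maxit
end
```

With `ω = 1.0` this reduces to plain Gauss-Seidel.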
Now, define the relative error

$$RE = \frac{\Vert R^k\Vert}{\Vert x^k\Vert}.$$

* This is the length of the error vector relative to the length of the answer, and has a nice physical interpretation.
* This is a relative error treating the system of unknowns as a whole, rather than considering each individual variable.
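For example, this measure can be computed from the residual vector as follows (a small sketch with hypothetical $2\times2$ values):

```julia
using LinearAlgebra

# Relative error treating the system of unknowns as a whole.
A = [4.0 -1.0; -1.0 4.0]
b = [1.0, 1.0]
x = [0.3, 0.3]             # some current iterate
R = b - A*x                # residual vector
RE = norm(R) / norm(x)     # relative error for the whole system
```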
using LinearAlgebra

"""
Solve the linear system Ax=b for x using the Jacobi iteration method.

A is a 2-D, square input matrix (n x n).
b is a 1-D input array (length n).
x is a 1-D input initial guess for the solution (length n).

Returns the solution x and the number of iterations required.
"""
function jacobi(A, b, x, tol=1E-5, maxit=10000)
    for k in 1:maxit                  # iteration loop
        xold = copy(x)                # reset xold = xnew each time
        R = b - A*xold                # compute the residual (error)
        x = xold + R ./ diag(A)       # update the solution
        RE = norm(R) / norm(x)        # relative error
        if RE <= tol return x, k end  # check for convergence
    end
    println("Warning: reached $maxit iterations without converging to tolerance = $tol")
    return x, maxit
end;
Take $A$ as a penta-diagonal matrix with the following form for $n=7$. We'll solve the problem for $n=60$.
\begin{equation*} A = \left[\begin{array}{c c c c c c c} 4 & -1 & 0 & 1 & 0 & 0 & 0 \\ -1 & 4 & -1 & 0 & 1 & 0 & 0 \\ 0 & -1 & 4 & -1 & 0 & 1 & 0 \\ 1 & 0 & -1 & 4 & -1 & 0 & 1 \\ 0 & 1 & 0 & -1 & 4 & -1 & 0 \\ 0 & 0 & 1 & 0 & -1 & 4 & -1 \\ 0 & 0 & 0 & 1 & 0 & -1 & 4 \\ \end{array}\right] \end{equation*}

We'll let $b$ be a vector of ones, and use a vector of zeros for the initial guess for $x$.
using Statistics
using PyFormattedStrings
n = 60
A = diagm(-3=>fill(1,n-3), -1=>fill(-1,n-1), 0=>fill(4,n), 1=>fill(-1,n-1), 3=>fill(1,n-3))
x0 = zeros(n)
b = ones(n)
x, k_iter = jacobi(A,b,x0)
println("The problem required $k_iter iterations")
x_ju = A\b
reAvg = mean(abs.((x_ju-x)./x_ju))
println(f"The average relative error between the Jacobi function and direct solver is {reAvg:.3e}")
println("\nx =$x")
The problem required 31 iterations
The average relative error between the Jacobi function and direct solver is 4.824e-06

x =[0.27184420979927554, 0.3522001406811041, 0.35220013953986357, 0.26482330108125796, 0.21524252629668855, 0.2082216232848737, 0.236306153107122, 0.25987557994903543, 0.2664638440872434, 0.2580510870569739, 0.2480225310657565, 0.24384704733230794, 0.2459754054188209, 0.24993196178375526, 0.25214914159015, 0.25182953890451976, 0.25037257967415893, 0.24931629786091253, 0.2492261412070742, 0.2497215372649615, 0.25018694538358754, 0.2503062889291779, 0.2501544574747556, 0.2499638464169653, 0.24988824140998916, 0.24992890365759518, 0.25000129311192637, 0.2500344207840551, 0.25001995872028615, 0.24999480188048692, 0.2499948018804869, 0.2500199587202861, 0.25003442078405513, 0.2500012931119263, 0.24992890365759518, 0.24988824140998922, 0.24996384641696529, 0.2501544574747556, 0.25030628892917794, 0.25018694538358743, 0.2497215372649615, 0.2492261412070742, 0.24931629786091253, 0.250372579674159, 0.25182953890451976, 0.25214914159015, 0.2499319617837553, 0.24597540541882096, 0.2438470473323079, 0.24802253106575647, 0.2580510870569739, 0.2664638440872434, 0.2598755799490355, 0.23630615310712194, 0.20822162328487365, 0.21524252629668855, 0.2648233010812579, 0.3522001395398636, 0.3522001406811041, 0.2718442097992755]