$Ax=b$
$$\left[\begin{array}{cccc} a_{1,1} & a_{1,2} & a_{1,3} & a_{1,4} \\ a_{2,1} & a_{2,2} & a_{2,3} & a_{2,4} \\ a_{3,1} & a_{3,2} & a_{3,3} & a_{3,4} \\ a_{4,1} & a_{4,2} & a_{4,3} & a_{4,4} \end{array}\right] \left[\begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \\ x_{4} \end{array}\right]= \left[\begin{array}{c} b_{1} \\ b_{2} \\ b_{3} \\ b_{4} \end{array}\right]$$

$$\rightarrow \sum_j a_{i,j}x_j = b_i,\,\,i=1\ldots n.$$

$$a_{i,1}x_1 + a_{i,2}x_2 + a_{i,3}x_3 + a_{i,4}x_4 = b_i,\,\, i=1,2,3,4.$$

Solve for $x_i$:
$$ x_i^{k+1} = x_i^k + \frac{1}{a_{i,i}}\left(b_i - \sum_{j=1}^na_{i,j}x_j^k\right).$$
$$R_i^k = \left(b_i - \sum_{j=1}^na_{i,j}x_j^k\right).$$
and we can write $$x_i^{k+1} = x_i^k + \frac{1}{a_{i,i}}R_{i}^k.$$

* $Ax=b$, so $b-Ax=0$, but $b-Ax^k\ne0$ (since we are not converged yet); rather, $b-Ax^k = R^k$. At convergence, $R^k\rightarrow 0$.
* That is, the iterations drive the residual (error) to zero.
* This happens faster if we start with a better guess for $x$.
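As a quick numerical illustration (using a small, made-up $2\times2$ system, not one from the examples below), the residual is large for a poor guess and essentially zero at the true solution:

```python
import numpy as np

# a made-up 2x2 system, purely for illustration
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x_exact = np.linalg.solve(A, b)   # the true solution
x_guess = np.zeros(2)             # a (poor) initial guess

R_guess = b - A @ x_guess         # residual of the guess: nonzero
R_exact = b - A @ x_exact         # residual at the solution: ~machine zero

print(np.linalg.norm(R_guess))
print(np.linalg.norm(R_exact))
```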
where $A = L+D+U$: $L$ is the strictly lower-triangular part of $A$, $D$ is the diagonal of $A$, and $U$ is the strictly upper-triangular part of $A$.
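This splitting is easy to check directly in NumPy (using a small illustrative matrix, not the one solved later):

```python
import numpy as np

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])

L = np.tril(A, -1)       # strictly lower-triangular part
D = np.diag(np.diag(A))  # diagonal part (as a matrix)
U = np.triu(A,  1)       # strictly upper-triangular part

# the three pieces reassemble A exactly
print(np.allclose(L + D + U, A))
```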
In code, we keep the current iterate $x^k$ in one array (called `xold`, say), and fill in a separate $x^{k+1}$ array (called `xnew`, say). Then we set `xold = xnew` before the start of the next iteration.

The Gauss-Seidel method instead solves $$Bx = c,$$ where $B=L+D$ and $c=-Ux^k+b$ for each iteration.

* $L$, $D$, and $U$ are as for Jacobi above.
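A minimal Gauss-Seidel sketch (an illustrative implementation, mirroring the Jacobi function below): unlike Jacobi, each updated component $x_i$ is used immediately within the same sweep, which amounts to solving the triangular system $Bx=c$ by forward substitution:

```python
import numpy as np

def gauss_seidel(A, b, x, tol=1e-5, maxit=10000):
    '''Solve Ax=b by Gauss-Seidel iteration (in-place component updates).'''
    n = len(b)
    for k in range(1, maxit+1):
        for i in range(n):
            # newest values of x are used as soon as they are available
            x[i] = x[i] + (b[i] - np.dot(A[i, :], x))/A[i, i]
        R = b - np.dot(A, x)                         # residual after the sweep
        if np.linalg.norm(R)/np.linalg.norm(x) <= tol:
            return x, k
    return x, maxit
```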
Since solutions converge smoothly from our guess for $x$ to the actual $x$, we can take a bigger step towards the solution than a given Jacobi or G.S. step provides:
$$x_i^{k+1} = x_i^k + \frac{R_i^k}{a_{i,i}} \rightarrow x_i^{k+1} = x_i^k + \Delta x_i^k,$$with $x_i^{k+1}-x_i^k = \Delta x_i^k = R_i^k/a_{i,i}.$
The SOR method multiplies the $\Delta x_i^k$ by a factor $\omega > 1$ to take a larger step towards $x$: $$x_i^{k+1} = x_i^k + \omega\frac{R_i^k}{a_{i,i}}.$$
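A minimal SOR sketch (illustrative, not the notebook's code): it is a Gauss-Seidel sweep with each component update scaled by $\omega$:

```python
import numpy as np

def sor(A, b, x, omega=1.2, tol=1e-5, maxit=10000):
    '''Solve Ax=b by SOR: a Gauss-Seidel sweep with updates scaled by omega.'''
    n = len(b)
    for k in range(1, maxit+1):
        for i in range(n):
            # over-relaxed update: omega > 1 takes a larger step
            x[i] = x[i] + omega*(b[i] - np.dot(A[i, :], x))/A[i, i]
        R = b - np.dot(A, x)
        if np.linalg.norm(R)/np.linalg.norm(x) <= tol:
            return x, k
    return x, maxit
```

Note that $\omega=1$ recovers plain Gauss-Seidel, and convergence requires $0<\omega<2$.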
$$\omega_{opt} = \frac{2}{1+\sqrt{1-\rho_{Jacobi}^2}},$$ where $\rho_{Jacobi}$ is the spectral radius (magnitude of the largest eigenvalue) of the Jacobi iteration matrix.

* In some cases $\omega_{opt}$ can be found analytically.
* See N.R. 3rd edition, section 20.5.
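For a given $A$, $\rho_{Jacobi}$ can also be computed numerically as the largest eigenvalue magnitude of the Jacobi iteration matrix $-D^{-1}(L+U)$ (a sketch on a small illustrative matrix; for large systems a full eigendecomposition would be overkill):

```python
import numpy as np

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])

L = np.tril(A, -1)                    # strictly lower-triangular part
U = np.triu(A,  1)                    # strictly upper-triangular part
Dinv = np.diag(1.0/np.diag(A))        # inverse of the diagonal part

M = -Dinv @ (L + U)                   # Jacobi iteration matrix
rho = max(abs(np.linalg.eigvals(M)))  # spectral radius

# standard SOR result: omega_opt = 2/(1 + sqrt(1 - rho^2))
omega_opt = 2.0/(1.0 + np.sqrt(1.0 - rho**2))
print(rho, omega_opt)
```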
Now, let's implement the Jacobi method in Python:
import numpy as np

def jacobi(A, b, x, tol=1E-5, maxit=10000):
    '''
    Solves the linear system Ax=b for x using the Jacobi iteration method.
    A is a 2-D, square input matrix (n x n).
    b is a 1-D input array (length n).
    x is a 1-D input initial guess of the solution (length n).
    Returns the solution x and the number of iterations required.
    '''
    for k in range(1, maxit+1):                      # iteration loop
        xold = x.copy()                              # reset xold = xnew each time
        R = b - np.dot(A, xold)                      # compute the residual (error)
        x = xold + R/np.diag(A)                      # update the solution
        #RE = np.sqrt(np.sum(R**2))/np.sqrt(np.sum(x**2)) # relative error
        RE = np.linalg.norm(R)/np.linalg.norm(x)     # relative error
        if RE <= tol:                                # check for convergence
            return x, k                              # return solution and number of iterations
    print('Warning: reached', maxit, 'iterations without converging to tolerance =', tol)
    return x, maxit
Take $A$ as a penta-diagonal matrix with the following form for $n=7$. We'll solve the problem for $n=60$.
\begin{equation*} A = \left[\begin{array}{c c c c c c c} 4 & -1 & 0 & 1 & 0 & 0 & 0 \\ -1 & 4 & -1 & 0 & 1 & 0 & 0 \\ 0 & -1 & 4 & -1 & 0 & 1 & 0 \\ 1 & 0 & -1 & 4 & -1 & 0 & 1 \\ 0 & 1 & 0 & -1 & 4 & -1 & 0 \\ 0 & 0 & 1 & 0 & -1 & 4 & -1 \\ 0 & 0 & 0 & 1 & 0 & -1 & 4 \\ \end{array}\right] \end{equation*}We'll let $b$ be a vector of ones, and use a vector of zeros for the initial guess for $x$.
n = 60
A = np.diag(np.ones(n-3),-3) + np.diag(-1*np.ones(n-1),-1) + \
np.diag(4*np.ones(n),0) + np.diag(-1.*np.ones(n-1),1) + \
np.diag(np.ones(n-3),3)
b = np.ones(n)
x0 = np.zeros(n)
x, k_iter = jacobi(A,b,x0)
print(f"The problem required {k_iter} iterations")
x_py = np.linalg.solve(A,b)
reAvg = np.mean(np.abs((x_py-x)/x_py))
print(f"\nThe average relative error between the Jacobi function and np.linalg.solve is {reAvg:.3e}")
print(f"\nx =\n{x}")
The problem required 31 iterations

The average relative error between the Jacobi function and np.linalg.solve is 4.824e-06

x =
[0.27184421 0.35220014 0.35220014 0.2648233 0.21524253 0.20822162 0.23630615 0.25987558 0.26646384 0.25805109 0.24802253 0.24384705 0.24597541 0.24993196 0.25214914 0.25182954 0.25037258 0.2493163 0.24922614 0.24972154 0.25018695 0.25030629 0.25015446 0.24996385 0.24988824 0.2499289 0.25000129 0.25003442 0.25001996 0.2499948 0.2499948 0.25001996 0.25003442 0.25000129 0.2499289 0.24988824 0.24996385 0.25015446 0.25030629 0.25018695 0.24972154 0.24922614 0.2493163 0.25037258 0.25182954 0.25214914 0.24993196 0.24597541 0.24384705 0.24802253 0.25805109 0.26646384 0.25987558 0.23630615 0.20822162 0.21524253 0.2648233 0.35220014 0.35220014 0.27184421]