Numerical Analysis


Exercise Session 1: Introduction to Julia, Solutions of linear systems

Exercise 1.

Consider the function below. Compute the value $f(2, 3)$ by hand. Then write a Julia function that returns $f(x, n)$ for every value of $x$ and $n$. Check that your function returns the correct value of $f(2, 3)$. $$f(x, n) = \sum_{i=1}^n \prod_{j=1}^i x^{n-j+1}$$

In [ ]:
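A possible implementation using nested generator expressions (the name `f` is taken from the exercise statement):

```julia
# f(x, n) = Σ_{i=1}^{n} Π_{j=1}^{i} x^(n - j + 1)
f(x, n) = sum(prod(x^(n - j + 1) for j in 1:i) for i in 1:n)

f(2, 3)  # = 2^3 + 2^3·2^2 + 2^3·2^2·2^1 = 8 + 32 + 64 = 104
```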

Exercise 2 (Boyd and Vandenberghe)

We consider a time-invariant linear dynamical system with $n$-vector state $x_t$ and $m$-vector input $u_t$, with dynamics $$x_{t+1} = Ax_t + Bu_t, \quad t=1, 2,\ldots $$

The entries of the state often represent deviations of quantities from their desired values, so $x_t\approx 0$ is a goal in operation of the system. The entries of the input $u_t$ are deviations from the standard or nominal values. For example, in an aircraft model, the states might be the deviation from the desired altitude, climb rate, speed, and angle of attack; the input $u_t$ represents changes in the control surface angles or engine thrust from their normal values.

In state feedback control, the states are measured and the input is a linear function of the state, $u_t = Kx_t$. The $m\times n$ matrix $K$ is called the state feedback gain matrix. It is designed very carefully, using one of several methods. State feedback control is widely used in many application areas (including, for example, the control of airplanes).

  • Open and closed-loop dynamical system. With $u_t = 0$, the system satisfies $x_{t+1} = Ax_t$ for $t=1,2,\ldots$ which is called the open-loop dynamics. When $u_t = Kx_t$, the system dynamics can be expressed as $x_{t+1} = \tilde{A}x_t$, for $t=1,2, \ldots$ where the $n\times n$ matrix $\tilde{A}$ is the closed-loop dynamics matrix. Find an expression for $\tilde{A}$ in terms of $A$, $B$ and $K$.
  • Aircraft control The longitudinal dynamics of a 747 flying at $40000$ ft at Mach $0.81$ is given by \begin{align*} A = \left[\begin{array}{cccc} .99 & .03 & -.02 & -.32\\ .01 & .47 & 4.7 & .00\\ .02 & -.06 & .40 & -.00\\ .01 & -.04 & .72 & .99 \end{array}\right], \quad B = \left[\begin{array}{cc} 0.01 & 0.99\\ -3.44 & 1.66\\ -0.83 & 0.44\\ -0.47 & 0.25 \end{array}\right] \end{align*}

where the sampling time is one second. We further use the state feedback matrix \begin{align*} K = \left[\begin{array}{cccc} -.038 & .021 & .319 & -.270\\ -.061 & -.004 & -.120 & .007 \end{array}\right] \end{align*}

Plot the open-loop and closed-loop state trajectories from several nonzero initial states, such as $x_1 = (1,0,0,0)$ or ones that are randomly generated, from $t = 1$ to $t=100$. In other words, plot $(x_t)_i$ versus $t$. Would you rather be in the plane with the state feedback control turned off (i.e. open-loop) or on (closed-loop)?

In [ ]:
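A sketch of the simulation. The helper `trajectory` and the horizon of 100 steps are our choices; plotting itself is left to a package such as Plots.jl:

```julia
A = [0.99  0.03 -0.02 -0.32;
     0.01  0.47  4.70  0.00;
     0.02 -0.06  0.40  0.00;
     0.01 -0.04  0.72  0.99]
B = [ 0.01  0.99;
     -3.44  1.66;
     -0.83  0.44;
     -0.47  0.25]
K = [-0.038  0.021  0.319 -0.270;
     -0.061 -0.004 -0.120  0.007]

# simulate x_{t+1} = M x_t for t = 1, …, T-1, storing x_t in column t
function trajectory(M, x1, T)
    X = zeros(length(x1), T)
    X[:, 1] = x1
    for t in 1:T-1
        X[:, t+1] = M * X[:, t]
    end
    return X
end

x1 = [1.0, 0.0, 0.0, 0.0]
Xopen   = trajectory(A, x1, 100)          # open loop:   u_t = 0
Xclosed = trajectory(A + B * K, x1, 100)  # closed loop: u_t = K x_t
```

With Plots.jl installed, `plot(1:100, Xopen')` draws each state component $(x_t)_i$ against $t$; repeating with `Xclosed'` shows the effect of the feedback.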

Exercise 3

We let $\text{proj}_u(v)$ denote the orthogonal projection of the vector $v$ onto $u$. Such a projection can be computed as \begin{align*} \text{proj}_{\mathbf{u}}(\mathbf{v}) = \frac{(\mathbf{u}^T \mathbf{v})}{(\mathbf{u}^T\mathbf{u})}\mathbf{u} \end{align*}

For an original set of vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$, the Gram-Schmidt process returns the corresponding collection of orthogonal vectors $\mathbf{u}_1, \ldots, \mathbf{u}_n$ by means of the following steps

\begin{align*} \mathbf{u}_1& = \mathbf{v}_1\\ \mathbf{u}_2& = \mathbf{v}_2 - \text{proj}_{\mathbf{u}_1}(\mathbf{v}_2)\\ \mathbf{u}_3& = \mathbf{v}_3 - \text{proj}_{\mathbf{u}_1}(\mathbf{v}_3) - \text{proj}_{\mathbf{u}_2}(\mathbf{v}_3)\\ &\; \vdots\\ \mathbf{u}_k& = \mathbf{v}_k - \sum_{j=1}^{k-1} \text{proj}_{\mathbf{u}_j} (\mathbf{v}_k) \end{align*}

Code this procedure in Julia.

In [ ]:
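A direct transcription of the steps above. The names `proj` and `gram_schmidt` are our own, and we store the vectors $\mathbf{v}_k$ as the columns of a matrix:

```julia
using LinearAlgebra  # for dot

# proj_u(v) = (uᵀv / uᵀu) u
proj(u, v) = (dot(u, v) / dot(u, u)) * u

function gram_schmidt(V::AbstractMatrix)
    # columns of V are v₁, …, vₙ; returns U whose columns u₁, …, uₙ are orthogonal
    U = zeros(float(eltype(V)), size(V))
    for k in 1:size(V, 2)
        u = float.(V[:, k])
        for j in 1:k-1
            u -= proj(U[:, j], V[:, k])  # subtract the projection onto each previous uⱼ
        end
        U[:, k] = u
    end
    return U
end
```

Normalizing each column of the result (`U[:, k] / norm(U[:, k])`) then yields an orthonormal set.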

Exercise 4

A Toeplitz matrix is a matrix of the form

\begin{align*} T = \left[\begin{array}{cccccc} a_0 & a_{-1}& a_{-2} &\ldots & \ldots & a_{-(n-1)}\\ a_1 & a_0 & a_{-1} & \ddots & & \vdots\\ a_2 & a_1 & \ddots & \ddots & \ddots & \vdots\\ \vdots & \ddots & \ddots & \ddots & a_{-1} & a_{-2}\\ \vdots & & \ddots & a_1 & a_0 & a_{-1}\\ a_{n-1}& \ldots & \ldots & a_2 & a_1 & a_0 \end{array}\right] \end{align*}

That is, the diagonals of $T$ are constant. Write a function Toeplitz that takes as input a vector $v$ of size $2n-1$ and returns the corresponding $n\times n$ Toeplitz matrix $T(v)$.
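A sketch, assuming the input vector stores the diagonal values in the order $v = (a_{-(n-1)}, \ldots, a_{-1}, a_0, a_1, \ldots, a_{n-1})$, so that entry $(i, j)$ of the matrix is $a_{i-j} = v[n + i - j]$:

```julia
function Toeplitz(v::AbstractVector)
    # v = (a_{-(n-1)}, …, a_0, …, a_{n-1}) has odd length 2n - 1
    n = (length(v) + 1) ÷ 2
    return [v[n + i - j] for i in 1:n, j in 1:n]
end

Toeplitz([1, 2, 3, 4, 5])  # 3×3 matrix [3 2 1; 4 3 2; 5 4 3]
```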

Exercise 5 Gaussian Elimination

5a As a warmup, we consider the simple linear system

\begin{align*}A = \left(\begin{array}{ccc} 1& 3& 1\\ 1& 1 & -1\\ 3 & 11 & 6 \end{array}\right), \quad b= \left(\begin{array}{c} 9\\ 1\\ 35 \end{array}\right) \end{align*}

In order to reduce the system to a triangular form, we consider the three simple steps

\begin{align*} \left[\begin{array}{ccc|c} 1 & 3 & 1 & 9\\ 1 & 1 &-1 & 1\\ 3 & 11 & 6 & 35\end{array}\right]\rightarrow \left[\begin{array}{ccc|c} 1 & 3 & 1 & 9\\ 0& -2 & -2 & -8\\ 0 &2 & 3 & 8\end{array} \right] \rightarrow \left[\begin{array}{ccc|c} 1 & 3 & 1 & 9\\ 0 & -2 & -2 & -8\\ 0&0& 1& 0 \end{array}\right] \end{align*}

Start by coding those steps below

In [ ]:
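The three row operations can be coded directly on the augmented matrix $[A\,|\,b]$, matching the elimination steps shown above:

```julia
M = Float64[1 3 1 9; 1 1 -1 1; 3 11 6 35]  # augmented matrix [A | b]

M[2, :] -= M[1, :]      # R2 ← R2 − R1
M[3, :] -= 3 * M[1, :]  # R3 ← R3 − 3·R1
M[3, :] += M[2, :]      # R3 ← R3 + R2

M  # upper triangular: [1 3 1 9; 0 -2 -2 -8; 0 0 1 0]
```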

5b. Forward Elimination Recall that for a linear system $Ax = b$, the Forward Elimination step can be coded as

\begin{align*} &\text{For $m = 1, \ldots , n-1$ do}\\ &\quad \text{for $j=m+1,\ldots, n$ do}\\ &\quad \quad \text{for $k=m+1,\ldots, n$ do } a_{jk} = a_{jk} - \frac{a_{jm}a_{mk}}{a_{mm}}\\ &\quad \quad b_{j} = b_j - \frac{a_{jm}b_m}{a_{mm}} \end{align*}

Code this forward step below and apply it to a general $n\times n$ matrix generated randomly.

In [ ]:
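A direct translation of the pseudocode above. The in-place function name `forward_elimination!` is our choice; note that no pivoting is performed, so a zero pivot $a_{mm}$ would break it:

```julia
function forward_elimination!(A, b)
    # reduces A to upper triangular form, updating b accordingly (no pivoting)
    n = size(A, 1)
    for m in 1:n-1
        for j in m+1:n
            factor = A[j, m] / A[m, m]
            for k in m+1:n
                A[j, k] -= factor * A[m, k]  # a_jk ← a_jk − (a_jm/a_mm)·a_mk
            end
            b[j] -= factor * b[m]            # b_j ← b_j − (a_jm/a_mm)·b_m
            A[j, m] = 0.0                    # the eliminated entry
        end
    end
    return A, b
end

A = randn(5, 5)
b = randn(5)
forward_elimination!(A, b)  # A is now upper triangular
```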

5c Backward Substitution We now want to code the Backward substitution step. Recall that this step is given by

\begin{align*} &\text{For $m=n, n-1, \ldots, 1$ do $x_m = b_m$}\\ &\quad \text{For $k=m+1, \ldots, n$ do $x_m = x_m - a_{mk}x_k$}\\ &\quad x_m = \frac{x_m}{a_{mm}} \end{align*}

Implement this second step in Julia.

In [ ]:
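A transcription of the backward substitution pseudocode. It assumes `A` is already upper triangular, e.g. the output of a forward elimination step:

```julia
function backward_substitution(A, b)
    # solves the upper triangular system Ax = b
    n = size(A, 1)
    x = zeros(n)
    for m in n:-1:1
        x[m] = b[m]
        for k in m+1:n
            x[m] -= A[m, k] * x[k]  # subtract the already-computed unknowns
        end
        x[m] /= A[m, m]
    end
    return x
end
```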

5d Gaussian Elimination from scratch Combine the Forward Elimination and Backward Substitution steps that you coded above into a single function 'GaussianElimination' that takes a matrix $A$ and a right-hand side $b$ and returns the solution of the linear system $Ax = b$. Apply your function to a randomly generated matrix $A$ and vector $b$.

In [ ]:
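One way to combine the two steps in a single function (copies of the inputs are taken so that `A` and `b` are left untouched; still no pivoting):

```julia
function GaussianElimination(A, b)
    U = float.(copy(A))          # work on copies so A and b survive
    c = float.(copy(b))
    n = size(U, 1)
    for m in 1:n-1, j in m+1:n   # forward elimination
        factor = U[j, m] / U[m, m]
        for k in m+1:n
            U[j, k] -= factor * U[m, k]
        end
        c[j] -= factor * c[m]
    end
    x = zeros(n)
    for m in n:-1:1              # backward substitution
        x[m] = c[m]
        for k in m+1:n
            x[m] -= U[m, k] * x[k]
        end
        x[m] /= U[m, m]
    end
    return x
end

A = randn(6, 6)
b = randn(6)
x = GaussianElimination(A, b)
maximum(abs.(A * x - b))  # residual, small for well-conditioned A
```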

Exercise 6 Iterative Solutions

6a Jacobi and Gauss-Seidel Recall that the Jacobi and Gauss-Seidel methods can be defined based on the decomposition of the matrix of the linear system $Ax = b$ as $A = D + A_L + A_R$, where $D = \text{diag}(a_{11}, \ldots, a_{nn})$,

\begin{align*} A_L = \left[\begin{array}{ccccc} 0 & & & & \\ a_{21} & 0 & & & \\ a_{31} & a_{32} & 0 & & \\ \vdots & \vdots & & \ddots & \\ a_{n1} & a_{n2} & \ldots & a_{n,n-1} & 0 \end{array}\right], \quad A_R = \left[\begin{array}{ccccc} 0 & a_{12} & a_{13} & \ldots & a_{1n}\\ & 0 & a_{23} & \ldots & a_{2n}\\ & & \ddots & \ddots & \vdots\\ & & & 0 & a_{n-1,n}\\ & & & & 0 \end{array}\right] \end{align*}

In the method attributed to Jacobi, the iterates are then defined as

\begin{align*} \mathbf{x}^{(k+1)} = -D^{-1}(A_L + A_R)\mathbf{x}^{(k)} + D^{-1}\mathbf{b} \end{align*}

While for the Gauss-Seidel method, the iterates are defined as

\begin{align*} \mathbf{x}^{(k+1)} = -(D + A_L)^{-1}A_R \mathbf{x}^{(k)} + (D+A_L)^{-1}\mathbf{b} \end{align*}

Complete the functions 'Jacobi' and 'Gauss_Seidel' below so that they return the solution of the linear system $Ax = b$.

In [ ]:
function Jacobi(A, b; tol = 1e-10, maxiter = 10_000)
    #= Returns the solution of the linear system Ax = b using
       Jacobi iterations: x⁽ᵏ⁺¹⁾ = D⁻¹(b - (A_L + A_R) x⁽ᵏ⁾) =#
    n = length(b)
    x = zeros(n)
    for _ in 1:maxiter
        xnew = [(b[i] - sum(A[i, j] * x[j] for j in 1:n if j != i)) / A[i, i] for i in 1:n]
        maximum(abs.(xnew - x)) < tol && return xnew
        x = xnew
    end
    return x
end
In [ ]:
function Gauss_Seidel(A, b; tol = 1e-10, maxiter = 10_000)
    #= Returns the solution of the linear system Ax = b using
       Gauss-Seidel iterations: x⁽ᵏ⁺¹⁾ = (D + A_L)⁻¹(b - A_R x⁽ᵏ⁾) =#
    n = length(b)
    x = zeros(n)
    for _ in 1:maxiter
        xold = copy(x)
        for i in 1:n  # each sweep reuses the entries already updated in this pass
            x[i] = (b[i] - sum(A[i, j] * x[j] for j in 1:n if j != i)) / A[i, i]
        end
        maximum(abs.(x - xold)) < tol && return x
    end
    return x
end

6b Direct Methods vs Iterative Methods

Complete the function 'checkRank' below, which takes as input a square matrix $A$ and returns true if the matrix is full rank. Generate a couple of square matrices with random (normally distributed) entries and make sure they are full rank. Then compare the output of your Gaussian Elimination, Jacobi and Gauss-Seidel algorithms on the systems obtained from these matrices and a random right-hand side.

In [ ]:
using LinearAlgebra  # for rank

function checkRank(A)
    #= Returns true if the square matrix A is full rank =#
    return rank(A) == size(A, 1)
end