using LinearAlgebra, PyPlot
If we have a simple scalar ODE:
$$ \frac{dx}{dt} = a x $$then the solution is
$$ x(t) = e^{at} x(0) $$where $x(0)$ is the initial condition.
If we have an $m\times m$ system of ODEs
$$ \frac{d\vec{x}}{dt} = A\vec{x} $$we know that if $A = X \Lambda X^{-1}$ is diagonalizable with eigensolutions $A\vec{x}_k = \lambda_k \vec{x}_k$ ($k=1,2,\ldots,m$), then we can write the solution as:
$$ \vec{x}(t) = c_1 e^{\lambda_1 t} \vec{x}_1 + c_2 e^{\lambda_2 t} \vec{x}_2 + \cdots $$where the $\vec{c}$ coefficients are determined from the initial conditions
$$ \vec{x}(0) = c_1 \vec{x}_1 + c_2 \vec{x}_2 + \cdots $$i.e. $\vec{c} = X^{-1} \vec{x}(0)$ where $X$ is the matrix whose columns are the eigenvectors and $\vec{c} = (c_1, c_2, \ldots, c_m)$.
It sure would be nice to have a formula as simple as $e^{at} x(0)$ from the scalar case. Can we define the exponential of a matrix so that
$$ \vec{x}(t) = \underbrace{e^{At}}_\mbox{???} \vec{x}(0) \, ? $$But what is the exponential of a matrix?
We can guess at least one case. For eigenvectors, the matrix A acts like a scalar λ, so we should have $e^{At} \vec{x}_k = e^{\lambda_k t} \vec{x}_k$!
This turns out to be exactly correct, but let's take it a bit more slowly.
Another way of saying this is that we'd like to write the solution $\vec{x}(t)$ as $\mbox{(some matrix)} \times \vec{x}(0)$. This will help us to understand the solution as a linear operation on the initial condition and manipulate it algebraically, in much the same way as writing the solution to $Ax=b$ as $x = A^{-1} b$ helps us work with matrix equations (even though we rarely compute matrix inverses explicitly in practice).
To do so, let's break down
$$ \vec{x}(t) = c_1 e^{\lambda_1 t} \vec{x}_1 + c_2 e^{\lambda_2 t} \vec{x}_2 + \cdots $$into steps.
1. Compute $\vec{c} = X^{-1} \vec{x}(0)$. That is, write the initial condition in the basis of eigenvectors. (In practice, we would solve $X \vec{c} = \vec{x}(0)$ by elimination, rather than computing $X^{-1}$ explicitly!)
2. Multiply each component $c_k$ of $\vec{c}$ by $e^{\lambda_k t}$.
3. Multiply by $X$: i.e. multiply each coefficient $c_k e^{\lambda_k t}$ by $\vec{x}_k$ and add them up.
In matrix form, this becomes:
$$ \vec{x}(t) = X \underbrace{\begin{pmatrix} e^{\lambda_1 t} & & & \\ & e^{\lambda_2 t} & & \\ & & \ddots & \\ & & & e^{\lambda_m t} \end{pmatrix}}_{e^{\Lambda t}} \underbrace{X^{-1} \vec{x}(0)}_\vec{c} = \boxed{ e^{At} \vec{x}(0) } $$where we have defined the "matrix exponential" of a diagonalizable matrix as:
$$ e^{At} = X e^{\Lambda t} X^{-1} $$Note that we have defined the exponential $e^{\Lambda t}$ of a diagonal matrix $\Lambda$ to be the diagonal matrix of the $e^{\lambda t}$ values.
Equivalently, $e^{At}$ is the matrix with the same eigenvectors as A but with eigenvalues λ replaced by $e^{\lambda t}$.
Equivalently, for eigenvectors, A acts like a number λ, so $e^{At} \vec{x}_k = e^{\lambda_k t} \vec{x}_k$.
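For example, here is a minimal sketch of this recipe in Julia, using the eigen function from the LinearAlgebra package loaded above (it assumes $A$ is diagonalizable, and the function name evolve is just for illustration):
# sketch of x(t) = e^{At} x(0) via the eigenvector expansion (assumes A is diagonalizable)
function evolve(A, x0, t)
    λ, X = eigen(A)                 # A = X Λ X⁻¹ (λ and X may be complex)
    c = X \ x0                      # step 1: c = X⁻¹ x(0), by elimination rather than inv(X)
    return X * (exp.(λ .* t) .* c)  # steps 2–3: scale each cₖ by e^{λₖt} and recombine the eigenvectors
end
evolve([0 1; 1 0], [1.0, 0.0], 1.0)  # ≈ [cosh(1), sinh(1)] for the 2×2 example below
(For a real matrix with complex eigenvalues, the result may pick up tiny imaginary parts from roundoff; we will see this below.)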
For example, the matrix
$$ A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} $$has two eigenvalues $\lambda_1 = +1$ and $\lambda_2 = -1$ (corresponding to exponentially growing and decaying solutions to $d\vec{x}/dt = A\vec{x}$, respectively). The corresponding eigenvectors are:
$$ \vec{x}_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix} , \; \vec{x}_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix} . $$Hence, the matrix exponential should be:
$$ e^{At} = \underbrace{\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}}_X \underbrace{\begin{pmatrix} e^t & \\ & e^{-t} \end{pmatrix}}_{e^{\Lambda t}} \underbrace{\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}^{-1}}_{X^{-1}} = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} e^t & \\ & e^{-t} \end{pmatrix} \left[ \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & -1\end{pmatrix} \right] = \frac{1}{2} \begin{pmatrix} e^t & e^{-t} \\ e^t & -e^{-t} \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \frac{1}{2} \begin{pmatrix} e^t + e^{-t} & e^t - e^{-t} \\ e^t - e^{-t} & e^t + e^{-t}\end{pmatrix} = \begin{pmatrix} \cosh(t) & \sinh(t) \\ \sinh(t) & \cosh(t) \end{pmatrix} $$In this example, $e^{At}$ turns out to have a very nice form! In general, no one ever, ever, calculates matrix exponentials analytically like this except for toy $2\times 2$ problems or very special matrices. (I will never ask you to go through this tedious algebra on an exam.)
The computer is pretty good at computing matrix exponentials, however, and in Julia this is calculated by the exp(A*t) function. (There is a famous paper, "Nineteen Dubious Ways to Compute the Exponential of a Matrix," on techniques for this tricky problem.) Let's try it:
t = 1
[cosh(t) sinh(t)
sinh(t) cosh(t)]
2×2 Matrix{Float64}:
 1.54308  1.1752
 1.1752   1.54308
exp([0 1; 1 0]*t)
2×2 Matrix{Float64}:
 1.54308  1.1752
 1.1752   1.54308
Yup, it matches for $t=1$.
What happens for larger $t$, say $t=20$?
t = 20
[cosh(t) sinh(t); sinh(t) cosh(t)]
2×2 Matrix{Float64}:
 2.42583e8  2.42583e8
 2.42583e8  2.42583e8
exp([0 1; 1 0]*20)
2×2 Matrix{Float64}:
 2.42583e8  2.42583e8
 2.42583e8  2.42583e8
For large $t$, the $e^t$ exponentially growing term takes over, and $\cosh(t) \approx \sinh(t) \approx e^t/2$:
$$ e^{At} = \begin{pmatrix} \cosh(t) & \sinh(t) \\ \sinh(t) & \cosh(t) \end{pmatrix} \approx \frac{e^t}{2} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} $$
exp(20)/2 * [1 1; 1 1]
2×2 Matrix{Float64}:
 2.42583e8  2.42583e8
 2.42583e8  2.42583e8
But we could have seen this from our eigenvector expansion too:
$$ \vec{x}(t) = c_1 e^t \begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2 e^{-t} \begin{pmatrix} 1 \\ -1 \end{pmatrix} \approx c_1 e^t \begin{pmatrix} 1 \\ 1 \end{pmatrix} $$where $c_1$ is the coefficient of the initial condition: (nearly) every initial condition should give $\vec{x}(t)$ proportional to $(1,1)$ for large $t$, except in the very special case where $c_1 = 0$.
In fact, since these two eigenvectors are an orthogonal basis (not by chance: we will see later that it happens because $A^T = A$), we can get $c_1$ just by a dot product:
$$ c_1 = \frac{\vec{x}_1 ^T \vec{x}(0)}{\vec{x}_1 ^T \vec{x}_1} = \frac{\vec{x}_1 ^T \vec{x}(0)}{2} $$and hence
$$ \vec{x}(t) \approx c_1 e^t \vec{x}_1 = \frac{e^t}{2} \vec{x}_1 \vec{x}_1^T \vec{x}(0) = \frac{e^t}{2} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \vec{x}(0) $$which is the same as our approximation for $e^{At}$ above.
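As a quick numerical check (the particular initial condition here is arbitrary, chosen just for illustration):
t = 20
x0 = [0.3, -0.7]                                     # an arbitrary initial condition (illustrative)
exp([0 1; 1 0]*t) * x0 ≈ exp(t)/2 * [1 1; 1 1] * x0  # true: the e⁻ᵗ term is negligible for large t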
Just plugging in $t=1$ above, we see that we have defined the matrix exponential by
$$ e^{A} = X e^{\Lambda} X^{-1} $$This works (for a diagonalizable matrix $A$, at least), but it is a bit odd. It doesn't look much like any definition of $e^x$ for scalar $x$, and it's not clear how you would extend it to non-diagonalizable (defective) matrices.
Instead, we can equivalently define matrix exponentials by starting with the Taylor series of $e^x$:
$$ e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots + \frac{x^n}{n!} + \cdots $$It is quite natural to define $e^A$ (for any square matrix $A$) by the same series:
$$ e^A = I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \cdots + \frac{A^n}{n!} + \cdots $$This involves only familiar matrix multiplication and addition, so it is completely unambiguous, and it converges because the $n!$ denominator grows faster than $A^n \sim \lambda^n$ for the biggest $|\lambda|$.
Let's try summing up 100 terms of this series for a random $A$ and comparing it to both Julia's exp function and to our formula in terms of eigenvectors:
A = randn(5,5)
5×5 Matrix{Float64}:
 -0.818911  -0.837843  -0.833094  -0.465218   -0.111137
  0.255629   1.16573   -1.20799    0.466481   -0.295185
 -1.69869   -0.807249  -1.03416   -1.17902     0.329974
  1.62197    0.379368   0.15949   -0.0619288  -0.998135
 -0.654186  -0.666233   1.09477   -0.164912    0.640717
exp(A)
5×5 Matrix{Float64}:
  0.181729  -1.11919    0.148658  -0.412992   0.246856
  3.02992    4.29519   -2.92774    1.88224   -2.13708
 -2.20443   -1.05848    1.66506   -1.03794    1.36019
  2.09946    0.722445  -1.26534    1.2118    -1.76688
 -2.77878   -2.14487    2.53331   -1.26725    3.12757
series = I + A # first two terms
term = A
for n = 2:100
    term = term*A / n # compute Aⁿ / n! from the previous term Aⁿ⁻¹/(n-1)!
    series = series + term
end
series
5×5 Matrix{Float64}:
  0.181729  -1.11919    0.148658  -0.412992   0.246856
  3.02992    4.29519   -2.92774    1.88224   -2.13708
 -2.20443   -1.05848    1.66506   -1.03794    1.36019
  2.09946    0.722445  -1.26534    1.2118    -1.76688
 -2.77878   -2.14487    2.53331   -1.26725    3.12757
λ, X = eigen(A)
X * Diagonal(exp.(λ)) / X
5×5 Matrix{Float64}:
  0.181729  -1.11919    0.148658  -0.412992   0.246856
  3.02992    4.29519   -2.92774    1.88224   -2.13708
 -2.20443   -1.05848    1.66506   -1.03794    1.36019
  2.09946    0.722445  -1.26534    1.2118    -1.76688
 -2.77878   -2.14487    2.53331   -1.26725    3.12757
real(X * Diagonal(exp.(λ)) / X) # get rid of tiny imaginary parts
5×5 Matrix{Float64}:
  0.181729  -1.11919    0.148658  -0.412992   0.246856
  3.02992    4.29519   -2.92774    1.88224   -2.13708
 -2.20443   -1.05848    1.66506   -1.03794    1.36019
  2.09946    0.722445  -1.26534    1.2118    -1.76688
 -2.77878   -2.14487    2.53331   -1.26725    3.12757
Hurray, they all match, up to roundoff errors! (Though the eigenvector method doesn't realize that the result is real, and we see tiny imaginary parts due to roundoff errors.)
But why does the eigenvector definition match the series definition? They look quite different, but they are not! The key fact is that the eigenvalues of $e^A$ are $e^\lambda$, and we can see this simply by looking at what the series does to an eigenvector:
If $A\vec{x} = \lambda \vec{x}$, then $$ e^A \vec{x} = \left(I + A + \frac{A^2}{2!} + \cdots\right)\vec{x} = \left(1 + \lambda + \frac{\lambda^2}{2!} + \cdots\right) \vec{x} = e^\lambda \vec{x} $$ from the series definition of $e^\lambda$.
It follows that $e^A$ has the same eigenvectors as $A$ and the eigenvalues become $e^\lambda$.
If $A$ is diagonalizable, this means $e^A = X e^\Lambda X^{-1}$: we get the same result as before!
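As a quick numerical check with the random $A$ from above (note that eigen may return complex eigenvalues and eigenvectors here, but the identity still holds up to roundoff):
λ, X = eigen(A)
exp(A) * X[:,1] ≈ exp(λ[1]) * X[:,1]   # true: e^A acts on an eigenvector of A with eigenvalue e^λ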
In first-year calculus, we learn that $\frac{d}{dt} e^{at} = a e^{at}$. The same thing works for matrices!
$$ \boxed{\frac{d}{dt} e^{At} = A e^{At}} $$You can derive this in various ways. For example, you can plug $e^{At}$ into the series definition and take the derivative term-by-term.
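For example, differentiating the series for $e^{At}$ term by term gives:
$$ \frac{d}{dt}\left(I + At + \frac{A^2t^2}{2!} + \frac{A^3t^3}{3!} + \cdots\right) = A + A^2 t + \frac{A^3t^2}{2!} + \cdots = A\left(I + At + \frac{A^2t^2}{2!} + \cdots\right) = A e^{At} . $$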
This is why $\vec{x}(t) = e^{At} \vec{x}(0)$ solves our ODE:
It satisfies $d\vec{x}/dt = A\vec{x}$, since $\frac{d}{dt} e^{At} \vec{x}(0) = A e^{At} \vec{x}(0)$
It satisfies the initial condition: $e^{A\times0} \vec{x}(0) = \vec{x}(0)$, since from the series definition we can see that $e^{A\times0}=I$.
In high school, you learn that $e^x e^y = e^{x+y}$. (In fact, exponentials $a^x$ are essentially the only functions that have this property.)
However, this is not in general true for matrices:
$$ \boxed{e^A e^B \ne e^{A + B} } $$unless $AB = BA$ (i.e. unless $A$ and $B$ commute).
This can be seen from the series definition: if you multiply together the series for $e^A$ and $e^B$, you can only re-arrange this into the series for $e^{A + B}$ if you are allowed to re-order products of $A$ and $B$. For example, the $(A+B)^2=(A+B)(A+B)$ term gives $A^2 +AB+BA +B^2$ (not $A^2 +2AB +B^2$!), which requires both orders $BA$ and $AB$.
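Concretely, comparing the two series through second order:
$$ e^A e^B = \left(I + A + \frac{A^2}{2!} + \cdots\right)\left(I + B + \frac{B^2}{2!} + \cdots\right) = I + (A+B) + \frac{A^2 + 2AB + B^2}{2} + \cdots $$whereas
$$ e^{A+B} = I + (A+B) + \frac{A^2 + AB + BA + B^2}{2} + \cdots $$and the second-order terms agree only if $2AB = AB + BA$, i.e. $AB = BA$.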
Let's try it:
B = randn(5,5)
exp(A) * exp(B)
5×5 Matrix{Float64}:
  0.323484  -1.42524   0.0588823  -0.336001   0.390326
 -0.881855   4.45288  -4.68822    -1.15012   -2.35137
  0.308648  -1.45824   4.01309     1.19521    1.27798
 -0.481631   1.64395  -4.93559    -1.08891   -1.70655
  1.03601   -2.15709   9.89571     3.02326    4.39306
exp(A + B)
5×5 Matrix{Float64}:
 -0.572969  -2.44162   1.64648  -0.782038   1.11299
  0.681504   3.88447  -1.7922    1.01024   -1.21317
 -0.830851  -2.25456   1.85245  -0.607258   0.658459
  1.08542    4.28899  -2.02442   1.72467   -0.937318
 -2.03497   -4.54824   5.62928  -1.55535    3.48544
They are not even close!
However, since $A$ and $2A$ commute ($A\times2A=2A^2 = 2A \times A$), we do have $e^{A}e^{2A}=e^{3A}$:
exp(A) * exp(2A)
5×5 Matrix{Float64}:
  -47.5951   -36.0198    39.0739  -23.0627    36.4263
  226.625    155.117   -183.007   105.17    -174.234
  -88.0121   -54.1272    70.0116  -39.1407    68.523
   85.1844    51.5045   -67.7549   37.6019   -67.0832
 -174.238   -113.934    140.131   -79.2429   135.861
exp(3A)
5×5 Matrix{Float64}:
  -47.5951   -36.0198    39.0739  -23.0627    36.4263
  226.625    155.117   -183.007   105.17    -174.234
  -88.0121   -54.1272    70.0116  -39.1407    68.523
   85.1844    51.5045   -67.7549   37.6019   -67.0832
 -174.238   -113.934    140.131   -79.2429   135.861
exp(2A) * exp(A)
5×5 Matrix{Float64}:
  -47.5951   -36.0198    39.0739  -23.0627    36.4263
  226.625    155.117   -183.007   105.17    -174.234
  -88.0121   -54.1272    70.0116  -39.1407    68.523
   85.1844    51.5045   -67.7549   37.6019   -67.0832
 -174.238   -113.934    140.131   -79.2429   135.861
As a special case of the above, since $A$ and $-A$ commute, we have $e^A e^{-A} = e^{A-A} = I$, so:
$$ \boxed{\left(e^A\right)^{-1} = e^{-A}} $$For example
inv(exp(A))
5×5 Matrix{Float64}:
  3.2008     1.52405     2.99      1.83183    0.523262
  1.20707    0.966905    1.95747   0.701775   0.110564
  3.10572    1.80607     4.97103   3.13859    0.60015
 -3.26614   -1.27767    -2.92433  -0.296963   0.488793
 -0.167361   0.0365872  -1.21242  -0.553745   0.572403
exp(-A)
5×5 Matrix{Float64}:
  3.2008     1.52405     2.99      1.83183    0.523262
  1.20707    0.966905    1.95747   0.701775   0.110564
  3.10572    1.80607     4.97103   3.13859    0.60015
 -3.26614   -1.27767    -2.92433  -0.296963   0.488793
 -0.167361   0.0365872  -1.21242  -0.553745   0.572403
From above, we had $\vec{x}(t) = e^{At} \vec{x}(0)$ solving $d\vec{x}/dt = A\vec{x}$ given the initial condition at $t=0$.
However, there is nothing that special about $t=0$. We could instead have given $\vec{x}(t)$ and asked for $\vec{x}(t+\Delta t)$ and the result would have been similar:
$$ \boxed{ \vec{x}(t+\Delta t) = e^{A\Delta t} \vec{x}(t) } = e^{A\Delta t} e^{A t} \vec{x}(0) = e^{A(t + \Delta t)} \vec{x}(0)\, . $$Viewed in this way, the matrix $T = e^{A\Delta t}$ can be thought of as a "propagator" matrix: it takes the solution at any time $t$ and "propagates" it forwards in time by $\Delta t$.
The inverse of this propagator matrix is simply $T^{-1} = e^{-A\Delta t}$, which propagates backwards in time by $\Delta t$.
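For example, with the random $A$ from before (the time step and state here are arbitrary, just for illustration):
y = randn(5)                          # an arbitrary state (illustrative)
exp(-A*0.5) * (exp(A*0.5) * y) ≈ y    # true: propagating forward then backward by Δt = 0.5 returns y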
If we multiply by this propagator matrix repeatedly, we can get $\vec{x}$ at a whole sequence of time points:
$$ \vec{x}(0), \vec{x}(\Delta t), \vec{x}(2\Delta t), \ldots = \vec{x}(0), T \vec{x}(0), T^2 \vec{x}(0), \ldots $$which is nice for plotting the solutions as a function of time! Let's try it for our two masses and springs example:
C = [ 0 0 1 0
0 0 0 1
-0.02 0.01 0 0
0.01 -0.02 0 0 ]
Δt = 1.0
T = exp(C*Δt) # propagator matrix
x₀ = [0.0,0,1,0] # initial condition
# loop over 300 timesteps and keep track of x₁(t)
x = x₀
x₁ = [ x₀[1] ]
for i = 1:300
    x = T*x             # repeatedly multiply by T
    push!(x₁, x[1])     # & store current x₁(t) in the array x₁
end
plot((0:300)*Δt, x₁, "r.-")
xlabel("time \$t\$")
ylabel("solution \$x_1(t)\$")
grid()
(This is not an approximate solution. It is the exact solution, up to the computer's roundoff errors, at the times $t=0,\Delta t, 2\Delta t, \ldots$. Don't confuse it with approximations like Euler's method.)
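For contrast, here is a minimal sketch of what a forward-Euler approximation to the same system would look like (the step size h is illustrative). Euler's update matrix $I + Ch$ only approximates the exact propagator $e^{Ch}$; for this oscillatory system ($C$ has purely imaginary eigenvalues $\lambda$) it slowly amplifies the solution, since $|1 + \lambda h| > 1$:
h = 1.0                      # Euler step size (illustrative; same as Δt above)
E = I + C*h                  # forward-Euler update: x(t+h) ≈ (I + C*h) x(t), versus the exact T = e^{Ch}
(E^300 * x₀)[1], x₁[end]     # Euler's x₁ after 300 steps vs. the exact value plotted above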
It is important to compare and contrast the two cases we have studied:
solving the ODE $d\vec{x}/dt = A\vec{x}$, where the solutions behave like $e^{\lambda t}$ (growing if $\operatorname{Re}\lambda > 0$, decaying if $\operatorname{Re}\lambda < 0$, oscillating if $\lambda$ is purely imaginary),
versus
multiplying repeatedly by a matrix, e.g. $\vec{x}(n\Delta t) = T^n \vec{x}(0)$, where the solutions behave like $\lambda^n$ (growing if $|\lambda| > 1$, decaying if $|\lambda| < 1$, oscillating if $|\lambda| = 1$).
These two cases are related by the propagator matrix $T = e^{A\Delta t}$! Solving the ODE for a long time, or multiplying by $e^{At}$ for large $t$, corresponds to repeatedly multiplying by $T$!
What are the eigenvalues of $T$ for a diagonalizable $A = X \Lambda X^{-1}$? Well, since
$$ T = e^{A \Delta t} = X e^{\Lambda \Delta t} X^{-1} = X \begin{pmatrix} e^{\lambda_1 \Delta t} & & & \\ & e^{\lambda_2 \Delta t} & & \\ & & \ddots & \\ & & & e^{\lambda_m \Delta t} \end{pmatrix} X^{-1} $$the eigenvalues of $T$ are just $e^{\lambda \Delta t}$ (the equation above is precisely the diagonalization of $T$).
Equivalently, for an eigenvector $\vec{x}_k$ of $A$, $T\vec{x}_k = e^{\lambda_k \Delta t} \vec{x}_k$, so $\vec{x}_k$ is also an eigenvector of $T$ with eigenvalue $e^{\lambda_k \Delta t}$. Let's check:
eigvals(exp(A*Δt))
5-element Vector{Float64}:
 0.23567369540695432
 0.2899190400271146
 0.9169902702871335
 2.0483367693538845
 6.99042092346493
λ = eigvals(A)
exp.(λ * Δt)
5-element Vector{Float64}:
 0.235673695406957
 0.28991904002711205
 0.916990270287135
 2.0483367693538863
 6.990420923464918
Yup, they match (although the order is different: Julia gives the eigenvalues in a somewhat "random" order).
What does this mean for stability of the solutions?
For example, if $A$ has a real eigenvalue $\lambda < 0$ (a decaying ODE solution), then $T$ has the corresponding eigenvalue $0 < e^{\lambda \Delta t} < 1$, which also decays when you multiply by $T$ repeatedly!
More generally, since $|e^{\lambda \Delta t}| = e^{\operatorname{Re}(\lambda)\, \Delta t}$, the map $\lambda \to e^{\lambda \Delta t}$ turns the conditions for growing/decaying $e^{At}$ (ODE) solutions, $\operatorname{Re}\lambda \gtrless 0$, into the rules for growing/decaying $T^n$ solutions, $|e^{\lambda \Delta t}| \gtrless 1$.
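As a small numerical check, the eigenvalues of the spring system $C$ above are purely imaginary (oscillating, neither growing nor decaying ODE solutions), so the eigenvalues of its propagator $T = e^{C\Delta t}$ should all have magnitude 1 (up to roundoff):
eigvals(C)                   # purely imaginary: Re(λ) ≈ 0, so the ODE solutions oscillate
abs.(exp.(eigvals(C) * Δt))  # all ≈ 1, since |e^{λΔt}| = e^{Re(λ)Δt}: multiplying by T neither grows nor decays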