#!/usr/bin/env python # coding: utf-8 # # Time-dependent variational forms #
#
# There are at least three different strategies for performing
# a discretization in time:
#
# 1. Use *finite differences* for time derivatives to arrive at
#    a recursive set of spatial problems that can be discretized by
#    the finite element method.
#
# 2. Discretize in space by finite elements first, and then solve
#    the resulting system of ordinary differential equations (ODEs) by
#    some *standard library* for ODEs.
#
# 3. Discretize in space and time simultaneously by space-time finite elements.
#
# With the first strategy, we discretize in time prior to the space
# discretization, while the second strategy consists of doing exactly
# the opposite. It should come as no surprise that in many situations
# these two strategies end up with exactly the same systems to be solved, but
# this is not always the case. Also, the third approach often reproduces standard
# finite difference schemes such as the Backward Euler and the Crank-Nicolson
# schemes for lower-order elements, but offers an interesting framework for deriving higher-order
# methods. In this chapter we shall be concerned with
# the first strategy,
# which is the most common one as it turns the time-dependent
# PDE problem into a sequence of stationary problems for which efficient
# finite element solution strategies are often available.
# The second strategy would
# naturally employ well-known ODE software,
# which is available as user-friendly routines
# in Python. However, these routines are presently not efficient enough
# for PDE problems in 2D and 3D. The first strategy gives complete hands-on
# control of the implementation and the computational efficiency
# in time and space.
#
# We shall use a simple diffusion problem to illustrate the basic
# principles of how a time-dependent PDE is solved by finite differences
# in time and finite elements in space. Of course, instead of finite elements,
# we may employ other types of basis functions, such as global polynomials.
# Our model problem reads
#
#
# # $$ # \begin{equation} # \frac{\partial u}{\partial t} = {\alpha}\nabla^2 u + f(\boldsymbol{x}, t),\quad # \boldsymbol{x}\in\Omega,\ t\in (0,T], # \label{fem:deq:diffu:eq} \tag{1} # \end{equation} # $$ # #
# # $$ # \begin{equation} # u(\boldsymbol{x}, 0) = I(\boldsymbol{x}),\quad \boldsymbol{x}\in\Omega, # \label{fem:deq:diffu:ic} \tag{2} # \end{equation} # $$ # #
#
# $$
# \begin{equation}
# \frac{\partial u}{\partial n} = 0,\quad \boldsymbol{x}\in\partial\Omega,\ t\in (0,T]
# \label{fem:deq:diffu:bcN} \tag{3}
# {\thinspace .}
# \end{equation}
# $$
# Here, $u(\boldsymbol{x},t)$ is the unknown function, ${\alpha}$ is a constant, and
# $f(\boldsymbol{x},t)$ and $I(\boldsymbol{x})$ are given functions. We have assigned the particular
# boundary condition ([3](#fem:deq:diffu:bcN)) to minimize
# the details on handling boundary conditions in the finite element method.
#
# **Remark.** For systems of PDEs, the strategy for discretization in time may have great impact on
# overall efficiency and accuracy. The Navier-Stokes equations for
# an incompressible Newtonian fluid are a prime example where many methods have been proposed
# and where there are notable differences between the different methods. Furthermore,
# the differences often depend significantly on the application.
# Discretization in time *before* discretization in space allows for manipulations
# of the equations and schemes that are very efficient compared to
# schemes based on discretizing in space first.
# The schemes are so-called operator-splitting or projection-based schemes. These schemes do, however,
# suffer from a loss of accuracy, particularly in terms of errors associated with the boundaries.
# The numerical error is caused by the splitting of the equations, which leads to a non-trivial splitting
# of the boundary conditions.
#
#
# # Discretization in time by a Forward Euler scheme
#
# # The discretization strategy is to first apply a simple finite difference # scheme in time and derive a recursive set of spatially continuous PDE # problems, one at each time level. For each spatial PDE problem we can # set up a variational formulation and employ the finite element method # for solution. # # ## Time discretization # # We can apply a finite difference method in time to ([1](#fem:deq:diffu:eq)). # First we need 'a mesh' in time, here taken as uniform with # mesh points $t_n = n\Delta t$, $n=0,1,\ldots,N_t$. # A Forward Euler scheme consists of sampling ([1](#fem:deq:diffu:eq)) # at $t_n$ and approximating the time derivative by a forward # difference $[D_t^+ u]^n\approx # (u^{n+1}-u^n)/\Delta t$. # This approximation turns ([1](#fem:deq:diffu:eq)) # into a differential equation that is discrete in time, but still # continuous in space. # With a finite difference operator notation we can write the # time-discrete problem as # #
#
# $$
# \begin{equation}
# [D_t^+ u = {\alpha}\nabla^2 u + f]^n,
# \label{fem:deq:diffu:FE:eq:FEop} \tag{4}
# \end{equation}
# $$
# for $n=0,1,\ldots,N_t-1$.
# Writing this equation out in detail and
# isolating the unknown $u^{n+1}$ on the left-hand side demonstrates that
# the time-discrete problem is a recursive set of problems that are
# continuous in space:
#
#
#
# $$
# \begin{equation}
# u^{n+1} = u^n + \Delta t \left( {\alpha}\nabla^2 u^n + f(\boldsymbol{x}, t_n)\right)
# \label{fem:deq:diffu:FE:eq:unp1} \tag{5}
# {\thinspace .}
# \end{equation}
# $$
# Given $u^0=I$, we can use ([5](#fem:deq:diffu:FE:eq:unp1)) to compute
# $u^1,u^2,\dots,u^{N_t}$.
#
# **More precise notation.**
#
# For absolute clarity in the various stages of the discretizations, we
# introduce ${u_{\small\mbox{e}}}(\boldsymbol{x},t)$ as the exact solution of the space- and time-continuous
# partial differential equation ([1](#fem:deq:diffu:eq)) and
# ${u_{\small\mbox{e}}}^n(\boldsymbol{x})$ as the time-discrete approximation arising from the finite
# difference method in time ([4](#fem:deq:diffu:FE:eq:FEop)).
# More precisely, ${u_{\small\mbox{e}}}$ fulfills
#
#
# # $$ # \begin{equation} # \frac{\partial {u_{\small\mbox{e}}}}{\partial t} = {\alpha}\nabla^2 {u_{\small\mbox{e}}} + f(\boldsymbol{x}, t) # \label{fem:deq:diffu:eq:uex} \tag{6}, # \end{equation} # $$ # while ${u_{\small\mbox{e}}}^{n+1}$, with a superscript, # is the solution of the time-discrete equations # #
# # $$ # \begin{equation} # {u_{\small\mbox{e}}}^{n+1} = {u_{\small\mbox{e}}}^n + \Delta t \left( {\alpha}\nabla^2 {u_{\small\mbox{e}}}^n + f(\boldsymbol{x}, t_n)\right) # \label{fem:deq:diffu:FE:eq:uex:n} \tag{7} # {\thinspace .} # \end{equation} # $$ # The ${u_{\small\mbox{e}}}^{n+1}$ quantity is then discretized in space and approximated # by $u^{n+1}$. # # # # # ## Space discretization # # We now introduce a finite element approximation to ${u_{\small\mbox{e}}}^n$ and ${u_{\small\mbox{e}}}^{n+1}$ # in ([7](#fem:deq:diffu:FE:eq:uex:n)), where the coefficients depend on the # time level: # #
# # $$ # \begin{equation} # {u_{\small\mbox{e}}}^n \approx u^n = \sum_{j=0}^{N} c_j^{n}{\psi}_j(\boldsymbol{x}), # \label{fem:deq:diffu:femapprox:n} \tag{8} # \end{equation} # $$ # #
#
# $$
# \begin{equation}
# {u_{\small\mbox{e}}}^{n+1} \approx u^{n+1} = \sum_{j=0}^{N} c_j^{n+1}{\psi}_j(\boldsymbol{x})
# \label{fem:deq:diffu:femapprox:np1} \tag{9}
# {\thinspace .}
# \end{equation}
# $$
# Note that, as before, $N$ denotes the number of degrees of freedom
# in the spatial domain. The number of time points is denoted by $N_t$.
# We define a space $V$ spanned by the basis functions $\left\{ {{\psi}}_i \right\}_{i\in{\mathcal{I}_s}}$.
#
# ## Variational forms
#
# A Galerkin method or a
# weighted residual method with weighting functions $w$ can
# now be formulated. We insert ([8](#fem:deq:diffu:femapprox:n)) and
# ([9](#fem:deq:diffu:femapprox:np1)) in
# ([7](#fem:deq:diffu:FE:eq:uex:n)) to obtain the residual
# $$
# R = u^{n+1} - u^n - \Delta t \left( {\alpha}\nabla^2 u^n + f(\boldsymbol{x}, t_n)\right)
# {\thinspace .}
# $$
# The weighted residual principle,
# $$
# \int_\Omega Rw{\, \mathrm{d}x} = 0,\quad \forall w\in W,
# $$
# results in
# $$
# \int_\Omega
# \left\lbrack
# u^{n+1} - u^n - \Delta t \left( {\alpha}\nabla^2 u^n + f(\boldsymbol{x}, t_n)\right)
# \right\rbrack w {\, \mathrm{d}x} =0, \quad\forall w \in W{\thinspace .}
# $$
# From now on we use the Galerkin method so $W=V$.
# Isolating the unknown $u^{n+1}$ on the left-hand side gives
# $$
# \int_{\Omega} u^{n+1}v{\, \mathrm{d}x} = \int_{\Omega}
# \left\lbrack u^n + \Delta t \left( {\alpha}\nabla^2 u^n + f(\boldsymbol{x}, t_n)\right)
# \right\rbrack v{\, \mathrm{d}x},\quad \forall v\in V
# {\thinspace .}
# $$
# As usual in spatial finite element problems involving second-order
# derivatives, we apply integration by parts on the term
# $\int (\nabla^2 u^n)v{\, \mathrm{d}x}$:
# $$
# \int_{\Omega}{\alpha}(\nabla^2 u^n)v {\, \mathrm{d}x} =
# -\int_{\Omega}{\alpha}\nabla u^n\cdot\nabla v{\, \mathrm{d}x} +
# \int_{\partial\Omega}{\alpha}\frac{\partial u^n}{\partial n}v {\, \mathrm{d}s}
# {\thinspace .}
# $$
# The last term vanishes because we have the Neumann condition
# $\partial u^n/\partial n=0$ for all $n$. Our discrete problem in
# space and time then reads
#
#
# # $$ # \begin{equation} # \int_{\Omega} u^{n+1}v{\, \mathrm{d}x} = # \int_{\Omega} u^n v{\, \mathrm{d}x} - # \Delta t \int_{\Omega}{\alpha}\nabla u^n\cdot\nabla v{\, \mathrm{d}x} + # \Delta t\int_{\Omega}f^n v{\, \mathrm{d}x},\quad \forall v\in V{\thinspace .} # \label{fem:deq:diffu:FE:vf:u:np1} \tag{10} # \end{equation} # $$ # This is the variational formulation of our recursive set of spatial # problems. # # # **Nonzero Dirichlet boundary conditions.** # # As in stationary problems, # we can introduce a boundary function $B(\boldsymbol{x},t)$ to take care # of nonzero Dirichlet conditions: # #
# # $$ # \begin{equation} # {u_{\small\mbox{e}}}^n \approx u^n = B(\boldsymbol{x},t_n) + \sum_{j=0}^{N} c_j^{n}{\psi}_j(\boldsymbol{x}), # \label{fem:deq:diffu:femapprox:n:B} \tag{11} # \end{equation} # $$ # #
#
# $$
# \begin{equation}
# {u_{\small\mbox{e}}}^{n+1} \approx u^{n+1} = B(\boldsymbol{x},t_{n+1}) +
# \sum_{j=0}^{N} c_j^{n+1}{\psi}_j(\boldsymbol{x})
# \label{fem:deq:diffu:femapprox:np1:B} \tag{12}
# {\thinspace .}
# \end{equation}
# $$
# ## Notation for the solution at recent time levels
#
# In a program it is only necessary to store the two most recent time levels,
# $u^{n+1}$ and $u^n$, at any given time step. It is therefore
# unnatural to use the index $n$ in computer code. Instead, a natural
# variable naming is `u` for $u^{n+1}$, the new unknown, and `u_n` for
# $u^n$, the solution at the previous time level. When we have several
# preceding (already computed) time levels, it is natural to number them
# like `u_nm1`, `u_nm2`, `u_nm3`, etc., backwards in time, corresponding to
# $u^{n-1}$, $u^{n-2}$, and $u^{n-3}$. Essentially, this means a one-to-one
# mapping of notation in mathematics and software, except for $u^{n+1}$.
# We shall therefore, to make the distance between mathematics and code
# as small as possible, often introduce just $u$ for $u^{n+1}$ in the
# mathematical notation. Equation
# ([10](#fem:deq:diffu:FE:vf:u:np1)) with this new naming convention is
# consequently expressed as
#
#
# # $$ # \begin{equation} # \int_{\Omega} u v{\, \mathrm{d}x} = # \int_{\Omega} u^{n} v{\, \mathrm{d}x} - # \Delta t \int_{\Omega}{\alpha}\nabla u^{n}\cdot\nabla v{\, \mathrm{d}x} + # \Delta t\int_{\Omega}f^n v{\, \mathrm{d}x} # {\thinspace .} # \label{fem:deq:diffu:FE:vf:u} \tag{13} # \end{equation} # $$ # This variational form can alternatively be expressed by the inner # product notation: # #
#
# $$
# \begin{equation}
# (u,v) = (u^{n},v) -
# \Delta t ({\alpha}\nabla u^{n},\nabla v) +
# \Delta t (f^n, v)
# {\thinspace .}
# \label{fem:deq:diffu:FE:vf:u:short} \tag{14}
# \end{equation}
# $$
# To simplify the notation for the solution at recent previous time steps
# and avoid names like `u_nm1`, `u_nm2`, `u_nm3`, etc., we will let $u_1$ denote the
# solution at the previous time step, $u_2$ the solution two time steps ago, and so on.
#
# ## Deriving the linear systems
#
# In the following, we adopt the previously introduced convention that
# the unknowns $c_j^{n+1}$ are written as $c_j$, while the known coefficients
# from the previous time level keep their superscript and are written as $c_{j}^n$. To
# derive the equations for the new unknown coefficients $c_j$, we insert
# $$
# u = \sum_{j=0}^{N}c_j{\psi}_j(\boldsymbol{x}),\quad
# u^{n} = \sum_{j=0}^{N} c_{j}^n{\psi}_j(\boldsymbol{x})
# $$
# in ([13](#fem:deq:diffu:FE:vf:u)) or ([14](#fem:deq:diffu:FE:vf:u:short)),
# let the equation hold for all $v={\psi}_i$, $i=0,\ldots,N$,
# and order the terms as matrix-vector products:
#
#
#
# $$
# \begin{equation}
# \sum_{j=0}^{N} ({\psi}_i,{\psi}_j) c_j =
# \sum_{j=0}^{N} ({\psi}_i,{\psi}_j) c_{j}^n
# -\Delta t \sum_{j=0}^{N} (\nabla{\psi}_i,{\alpha}\nabla{\psi}_j) c_{j}^n
# + \Delta t (f^n,{\psi}_i),\quad i=0,\ldots,N
# {\thinspace .}
# \label{_auto1} \tag{15}
# \end{equation}
# $$
# This is a linear system $\sum_j A_{i,j}c_j = b_i$ with
# $$
# A_{i,j} = ({\psi}_i,{\psi}_j)
# $$
# and
# $$
# b_i = \sum_{j=0}^{N} ({\psi}_i,{\psi}_j) c_{j}^n
# -\Delta t \sum_{j=0}^{N} (\nabla{\psi}_i,{\alpha}\nabla{\psi}_j) c_{j}^n
# + \Delta t (f^n,{\psi}_i){\thinspace .}
# $$
# It is instructive and convenient for implementations to write the linear
# system in the form
#
#
#
# $$
# \begin{equation}
# Mc = Mc_1 - \Delta t Kc_1 + \Delta t f,
# \label{_auto2} \tag{16}
# \end{equation}
# $$
# where
# $$
# \begin{align*}
# M &= \{M_{i,j}\},\quad M_{i,j}=({\psi}_i,{\psi}_j),\quad i,j\in{\mathcal{I}_s},\\
# K &= \{K_{i,j}\},\quad K_{i,j}=(\nabla{\psi}_i,{\alpha}\nabla{\psi}_j),
# \quad i,j\in{\mathcal{I}_s},\\
# f &= \{f_i\},\quad f_i=(f(\boldsymbol{x},t_n),{\psi}_i),\quad i\in{\mathcal{I}_s},\\
# c &= \{c_i\},\quad i\in{\mathcal{I}_s},\\
# c_1 &= \{c_{i}^n\},\quad i\in{\mathcal{I}_s}
# {\thinspace .}
# \end{align*}
# $$
# We realize that $M$ is the matrix arising from a term with the
# zero-th derivative of $u$; it is called the *mass matrix*. The matrix $K$
# arises from the Laplace term $\nabla^2 u$ and
# is often known as the *stiffness matrix*. (The terms mass and stiffness
# stem from the early days of finite elements when applications to
# vibrating structures dominated. The mass matrix arises from the
# mass times acceleration term in Newton's second law, while the stiffness
# matrix arises from the elastic forces (the "stiffness") in that law.
# The mass and stiffness
# matrices appearing in a diffusion problem have slightly different mathematical
# formulas compared to the classic structure problem.)
#
# **Remark.** The mathematical symbol $f$ has two meanings: either the
# function $f(\boldsymbol{x},t)$ in the PDE or the $f$ vector in the linear system
# to be solved at each time level.
#
# ## Computational algorithm
#
# We observe that $M$ and $K$ can be precomputed so that we can avoid
# computing the matrix entries at every time level. Instead, some
# matrix-vector multiplications will produce the linear system to be solved.
# The computational algorithm has the following steps:
#
# 1. Compute $M$ and $K$.
#
# 2. Initialize $u^0$ by interpolation or projection.
#
# 3. For $n=1,2,\ldots,N_t$:
#
#     a. compute $b = Mc_1 - \Delta t Kc_1 + \Delta t f$
#
#     b. solve $Mc = b$
#
#     c. set $c_1 = c$
#
#
# In case of finite element basis functions, interpolation of the
# initial condition at the nodes means $c_{j}^0 = I(\boldsymbol{x}_j)$. Otherwise
# one has to solve the linear system
# $$
# \sum_j{\psi}_j(\boldsymbol{x}_i)c_{j}^0 = I(\boldsymbol{x}_i),
# $$
# where $\boldsymbol{x}_i$ denotes an interpolation point. Projection
# (or Galerkin's method) implies solving a linear system with $M$ as
# coefficient matrix:
# $$
# \sum_j M_{i,j}c_{j}^0 = (I,{\psi}_i),\quad i\in{\mathcal{I}_s}{\thinspace .}
# $$
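# The algorithm above translates directly into code once $M$, $K$, and the
# load vector are available. Below is a minimal sketch of the Forward Euler
# time loop, assuming that $M$ and $K$ have already been assembled as SciPy
# sparse matrices and that a (hypothetical) user-supplied function `f_vec(t)`
# returns the vector with entries $(f(\cdot,t),{\psi}_i)$; it illustrates the
# structure of the loop rather than a complete solver.

# In[ ]:


import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def forward_euler_fem(M, K, f_vec, c0, dt, T):
    """Sketch of the Forward Euler algorithm: solve M c = M c_1 - dt*K c_1 + dt*f."""
    M = sp.csr_matrix(M)
    K = sp.csr_matrix(K)
    solve_M = spla.factorized(sp.csc_matrix(M))   # factorize M once, reuse every step
    c_1 = np.asarray(c0, dtype=float)             # coefficients at the previous level
    t, N_t = 0.0, int(round(T/dt))
    for n in range(N_t):
        b = M @ c_1 - dt*(K @ c_1) + dt*f_vec(t)  # right-hand side evaluated at t_n
        c = solve_M(b)                            # new coefficients
        c_1, t = c, t + dt                        # shift: c_1 <- c
    return c_1


# Since $M$ does not change in time, its factorization is computed once
# outside the loop; only matrix-vector products and a cheap back substitution
# remain inside the loop.
#
# ## Example using cosinusoidal basis functions
#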
# # Let us go through a computational example and demonstrate the # algorithm from the previous section. We consider a 1D problem # #
# # $$ # \begin{equation} # \frac{\partial u}{\partial t} = {\alpha}\frac{\partial^2 u}{\partial x^2},\quad # x\in (0,L),\ t\in (0,T], # \label{fem:deq:diffu:pde1D:eq} \tag{17} # \end{equation} # $$ # #
# # $$ # \begin{equation} # u(x, 0) = A\cos(\pi x/L) + B\cos(10\pi x/L),\quad x\in[0,L], # \label{fem:deq:diffu:pde1D:ic} \tag{18} # \end{equation} # $$ # #
#
# $$
# \begin{equation}
# \frac{\partial u}{\partial x} = 0,\quad x=0,L,\ t\in (0,T]
# \label{fem:deq:diffu:pde1D:bcN} \tag{19}
# {\thinspace .}
# \end{equation}
# $$
# We use a Galerkin method with basis functions
# $$
# {\psi}_i = \cos(i\pi x/L){\thinspace .}
# $$
# These basis functions fulfill ([19](#fem:deq:diffu:pde1D:bcN)), which is
# not a requirement (there are no Dirichlet conditions in this problem),
# but helps to make the approximation good.
#
# Since the initial condition ([18](#fem:deq:diffu:pde1D:ic)) lies in the
# space $V$ where we seek the approximation, we know that a Galerkin or
# least squares approximation of the initial condition becomes exact.
# Therefore, the initial condition can be expressed as
# $$
# c_{1}^0=A,\quad c_{10}^0=B,
# $$
# while $c_{i}^0=0$ for $i\neq 1,10$.
#
# The $M$ and $K$ matrices are easy to compute since the basis functions
# are orthogonal on $[0,L]$. Hence, we
# only need to compute the diagonal entries. We get
# $$
# M_{i,i} = \int_0^L \cos^2(i\pi x/L) {\, \mathrm{d}x},
# $$
# which is computed as

# In[1]:


import sympy as sym
x, L = sym.symbols('x L')
i = sym.symbols('i', integer=True)
sym.integrate(sym.cos(i*x*sym.pi/L)**2, (x,0,L))


# which means $L$ if $i=0$ and $L/2$ otherwise. Similarly,
# the integral $\int_0^L ({\psi}_i'(x))^2{\, \mathrm{d}x}$ is computed as

# In[2]:


sym.integrate(sym.diff(sym.cos(i*x*sym.pi/L),x)**2, (x,0,L))


# which, multiplied by the constant ${\alpha}$ from the definition of $K$, gives
# $$
# M_{0,0}=L,\quad M_{i,i}=L/2,\ i>0,\quad K_{0,0}=0,\quad K_{i,i}=\frac{{\alpha}\pi^2 i^2}{2L},\ i>0{\thinspace .}
# $$
# The equation system becomes
# $$
# \begin{align*}
# Lc_0 &= Lc_{0}^n - \Delta t \cdot 0\cdot c_{0}^n,\\
# \frac{L}{2}c_i &= \frac{L}{2}c_{i}^n - \Delta t
# \frac{{\alpha}\pi^2 i^2}{2L} c_{i}^n,\quad i>0{\thinspace .}
# \end{align*}
# $$
# The first equation leads to $c_0=0$ for any $n$ since we start with $c_{0}^0=0$ and $K_{0,0}=0$.
# The others imply
# $$
# c_i = (1-\Delta t\,{\alpha} (\frac{\pi i}{L})^2) c_{i}^n{\thinspace .}
# $$
# With the notation $c^n_i$ for $c_i$ at the $n$-th time level, we can apply
# the relation above recursively and get
# $$
# c^n_i = (1-\Delta t\,{\alpha} (\frac{\pi i}{L})^2)^n c^0_i{\thinspace .}
# $$
# Since only two of the coefficients are nonzero at time $t=0$, we have
# the closed-form discrete solution
# $$
# u^n = A(1-\Delta t\,{\alpha} (\frac{\pi}{L})^2)^n \cos(\pi x/L)
# + B(1-\Delta t\,{\alpha} (\frac{10\pi }{L})^2)^n \cos(10\pi x/L){\thinspace .}
# $$
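# As a quick sanity check of the hand calculation, the short NumPy snippet
# below advances the two nonzero coefficients with the recursion
# $c_i \leftarrow (1-\Delta t\,{\alpha}(\pi i/L)^2)c_i$ and compares the result with the
# closed-form amplification factors. The parameter values are arbitrary
# choices for illustration only.

# In[ ]:


import numpy as np

A_val, B_val, L_val, alpha, dt, N_t = 1.0, 0.5, 1.0, 1.0, 1e-4, 100  # arbitrary values

c1, c10 = A_val, B_val                       # c_1^0 = A, c_10^0 = B
for n in range(N_t):                         # c_i <- (1 - dt*alpha*(pi*i/L)^2) c_i
    c1  *= 1 - dt*alpha*(np.pi*1/L_val)**2
    c10 *= 1 - dt*alpha*(np.pi*10/L_val)**2

print(np.isclose(c1,  A_val*(1 - dt*alpha*(np.pi/L_val)**2)**N_t))
print(np.isclose(c10, B_val*(1 - dt*alpha*(10*np.pi/L_val)**2)**N_t))


# # Discretization in time by a Backward Euler scheme
#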
# # ## Time discretization # # The Backward Euler scheme in time applied to our diffusion problem # can be expressed as follows using the finite difference operator notation: # $$ # [D_t^- u = {\alpha}\nabla^2 u + f(\boldsymbol{x}, t)]^n # {\thinspace .} # $$ # Here $[D_t^- u]^n\approx (u^{n}-u^{n-1})/\Delta t$. # Written out, and collecting the unknown $u^n$ on the left-hand side # and all the known terms on the right-hand side, # the time-discrete differential equation becomes # #
# # $$ # \begin{equation} # u^{n} - \Delta t {\alpha}\nabla^2 u^n = # u^{n-1} + \Delta t f(\boldsymbol{x}, t_{n}) # \label{fem:deq:diffu:BE:eq:un} \tag{22} # {\thinspace .} # \end{equation} # $$ # From equation ([22](#fem:deq:diffu:BE:eq:un)) we can compute # $u^1,u^2,\dots,u^{N_t}$, # if we have a start $u^0=I$ from the initial condition. # However, ([22](#fem:deq:diffu:BE:eq:un)) is a partial differential # equation in space and needs a solution method based on discretization # in space. For this purpose we use an expansion as in # ([8](#fem:deq:diffu:femapprox:n))-([9](#fem:deq:diffu:femapprox:np1)). # # ## Variational forms # # Inserting ([8](#fem:deq:diffu:femapprox:n))-([9](#fem:deq:diffu:femapprox:np1)) # in ([22](#fem:deq:diffu:BE:eq:un)), multiplying by any $v\in V$ # (or ${\psi}_i\in V$), # and integrating by parts, as we did in the Forward Euler case, results # in the variational form # #
#
# $$
# \begin{equation}
# \int_{\Omega} \left( u^{n}v
# + \Delta t {\alpha}\nabla u^n\cdot\nabla v\right){\, \mathrm{d}x}
# = \int_{\Omega} u^{n-1} v{\, \mathrm{d}x} +
# \Delta t\int_{\Omega}f^n v{\, \mathrm{d}x},\quad\forall v\in V
# \label{fem:deq:diffu:BE:vf:u:n} \tag{23}
# {\thinspace .}
# \end{equation}
# $$
# Expressed with $u$ for the unknown $u^{n}$, and with $u^{n}$ now denoting the known
# solution $u^{n-1}$ at the previous time level, in line with our earlier naming convention,
# the variational form becomes
#
#
# # $$ # \begin{equation} # \int_{\Omega} \left( uv # + \Delta t {\alpha}\nabla u\cdot\nabla v\right){\, \mathrm{d}x} # = \int_{\Omega} u^{n} v{\, \mathrm{d}x} + # \Delta t\int_{\Omega}f^n v{\, \mathrm{d}x}, # \label{fem:deq:diffu:BE:vf:u} \tag{24} # \end{equation} # $$ # or with the more compact inner product notation, # #
#
# $$
# \begin{equation}
# (u,v) + \Delta t ({\alpha}\nabla u,\nabla v)
# = (u^{n},v) +
# \Delta t (f^n,v)
# \label{fem:deq:diffu:BE:vf:u:short} \tag{25}
# {\thinspace .}
# \end{equation}
# $$
# ## Linear systems
#
# Inserting $u=\sum_j c_j{\psi}_j$ and $u^{n}=\sum_j c_{j}^n{\psi}_j$,
# and choosing $v$ to be the basis functions ${\psi}_i\in V$,
# $i=0,\ldots,N$, together with doing some algebra, lead
# to the following linear system to be
# solved at each time level:
#
#
#
# $$
# \begin{equation}
# (M + \Delta t K)c = Mc_1 + \Delta t f,
# \label{fem:deq:diffu:BE:vf:linsys} \tag{26}
# \end{equation}
# $$
# where $M$, $K$, and $f$ are as in the Forward Euler case
# and we use the previously introduced notation $c = \{c_i\}$ and $c_1 = \{c_{i}^n\}$.
#
#
# This time we really have to solve a linear system at each time level.
# The computational algorithm goes as follows.
#
# 1. Compute $M$, $K$, and $A=M + \Delta t K$
#
# 2. Initialize $u^0$ by interpolation or projection
#
# 3. For $n=1,2,\ldots,N_t$:
#
#     a. compute $b = Mc_1 + \Delta t f$
#
#     b. solve $Ac = b$
#
#     c. set $c_1 = c$
#
#
# In case of finite element basis functions, interpolation of the
# initial condition at the nodes means $c_{j}^0 = I(\boldsymbol{x}_j)$. Otherwise
# one has to solve the linear system $\sum_j{\psi}_j(\boldsymbol{x}_i)c_{j}^0 =
# I(\boldsymbol{x}_i)$, where $\boldsymbol{x}_i$ denotes an interpolation point. Projection
# (or Galerkin's method) implies solving a linear system with $M$ as
# coefficient matrix: $\sum_j M_{i,j}c_{j}^0 = (I,{\psi}_i)$,
# $i\in{\mathcal{I}_s}$.
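# Since the coefficient matrix $A=M+\Delta t K$ does not change between time
# levels, it pays off to factorize it once outside the time loop. A minimal
# sketch of the Backward Euler loop, under the same assumptions as the
# Forward Euler sketch above (pre-assembled sparse $M$ and $K$ and a
# hypothetical load-vector function `f_vec(t)`):

# In[ ]:


import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def backward_euler_fem(M, K, f_vec, c0, dt, T):
    """Sketch of the Backward Euler algorithm: solve (M + dt*K) c = M c_1 + dt*f."""
    M = sp.csr_matrix(M)
    A = sp.csc_matrix(M + dt*sp.csr_matrix(K))  # constant system matrix
    solve_A = spla.factorized(A)                # factorize once, reuse in the loop
    c_1 = np.asarray(c0, dtype=float)
    t, N_t = 0.0, int(round(T/dt))
    for n in range(N_t):
        t += dt                                 # the unknown is sought at the new level
        b = M @ c_1 + dt*f_vec(t)
        c_1 = solve_A(b)                        # becomes the "previous" solution next pass
    return c_1


# # Dirichlet boundary conditions
#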
# # # Suppose now that the boundary condition ([3](#fem:deq:diffu:bcN)) is # replaced by a mixed Neumann and Dirichlet condition, # #
# # $$ # \begin{equation} # u(\boldsymbol{x},t) = u_0(\boldsymbol{x},t),\quad \boldsymbol{x}\in\partial\Omega_D, # \label{_auto3} \tag{29} # \end{equation} # $$ # #
# # $$ # \begin{equation} # -{\alpha}\frac{\partial}{\partial n} u(\boldsymbol{x},t) = g(\boldsymbol{x},t),\quad # \boldsymbol{x}\in\partial{\Omega}_N{\thinspace .} # \label{_auto4} \tag{30} # \end{equation} # $$ # Using a Forward Euler discretization in time, the variational # form at a time level becomes # #
#
# $$
# \begin{equation}
# \int\limits_\Omega u^{n+1}v{\, \mathrm{d}x} =
# \int\limits_\Omega (u^n v - \Delta t{\alpha}\nabla u^n\cdot\nabla v){\, \mathrm{d}x} +
# \Delta t\int\limits_\Omega fv {\, \mathrm{d}x} -
# \Delta t\int\limits_{\partial\Omega_N} gv{\, \mathrm{d}s},\quad \forall v\in V{\thinspace .}
# \label{_auto5} \tag{31}
# \end{equation}
# $$
# ## Boundary function
#
# The Dirichlet condition $u=u_0$ at $\partial\Omega_D$ can be incorporated
# through a boundary function $B(\boldsymbol{x},t)=u_0(\boldsymbol{x},t)$ and by demanding that the basis functions ${\psi}_j=0$
# at $\partial\Omega_D$. The expansion for $u^n$ is written as
# $$
# u^n(\boldsymbol{x}) = u_0(\boldsymbol{x},t_n) + \sum_{j\in{\mathcal{I}_s}}c_j^n{\psi}_j(\boldsymbol{x}){\thinspace .}
# $$
# Inserting this expansion in the variational formulation and letting it
# hold for all test functions $v\in V$, i.e., all basis functions ${\psi}_i$, leads to the linear system
# $$
# \begin{align*}
# \sum_{j\in{\mathcal{I}_s}} \left(\int\limits_\Omega {\psi}_i{\psi}_j{\, \mathrm{d}x}\right)
# c^{n+1}_j &= \sum_{j\in{\mathcal{I}_s}}
# \left(\int\limits_\Omega\left( {\psi}_i{\psi}_j -
# \Delta t{\alpha}\nabla {\psi}_i\cdot\nabla{\psi}_j\right){\, \mathrm{d}x}\right) c_j^n - \\
# &\quad \int\limits_\Omega\left( \left(u_0(\boldsymbol{x},t_{n+1}) - u_0(\boldsymbol{x},t_n)\right){\psi}_i
# + \Delta t{\alpha}\nabla u_0(\boldsymbol{x},t_n)\cdot\nabla
# {\psi}_i\right){\, \mathrm{d}x} \\
# & \quad + \Delta t\int\limits_\Omega f{\psi}_i{\, \mathrm{d}x} -
# \Delta t\int\limits_{\partial\Omega_N} g{\psi}_i{\, \mathrm{d}s},
# \quad i\in{\mathcal{I}_s}{\thinspace .}
# \end{align*}
# $$
# ## Finite element basis functions
#
# When using finite elements, each basis function ${\varphi}_i$ is associated
# with a node $\boldsymbol{x}_{i}$. We have a collection of nodes
# $\{\boldsymbol{x}_i\}_{i\in{I_b}}$ on the boundary $\partial\Omega_D$.
# Suppose $U_k^n$ is the known
# Dirichlet value at $\boldsymbol{x}_{k}$ at time $t_n$ ($U_k^n=u_0(\boldsymbol{x}_{k},t_n)$).
# The appropriate boundary function is then
# $$
# B(\boldsymbol{x},t_n)=\sum_{j\in{I_b}} U_j^n{\varphi}_j{\thinspace .}
# $$
# The unknown coefficients $c_j$ are associated with the rest of the nodes,
# which have numbers $\nu(i)$, $i\in{\mathcal{I}_s} = \{0,\ldots,N\}$. The basis
# functions of $V$ are chosen as ${\psi}_i = {\varphi}_{\nu(i)}$, $i\in{\mathcal{I}_s}$,
# and all of these vanish at the boundary nodes as they should.
# The expansions for $u^n$ and $u^{n+1}$ become
# $$
# \begin{align*}
# u^n &= \sum_{j\in{I_b}} U_j^n{\varphi}_j + \sum_{j\in{\mathcal{I}_s}}c_{j}^n{\varphi}_{\nu(j)},\\
# u^{n+1} &= \sum_{j\in{I_b}} U_j^{n+1}{\varphi}_j +
# \sum_{j\in{\mathcal{I}_s}}c_{j}{\varphi}_{\nu(j)}{\thinspace .}
# \end{align*}
# $$
# The equations for the unknown coefficients $\left\{ {c}_j \right\}_{j\in{\mathcal{I}_s}}$ become
# $$
# \begin{align*}
# \sum_{j\in{\mathcal{I}_s}} \left(\int\limits_\Omega {\varphi}_i{\varphi}_j{\, \mathrm{d}x}\right)
# c_j &= \sum_{j\in{\mathcal{I}_s}}
# \left(\int\limits_\Omega\left( {\varphi}_i{\varphi}_j -
# \Delta t{\alpha}\nabla {\varphi}_i\cdot\nabla{\varphi}_j\right){\, \mathrm{d}x}\right) c_{j}^n
# - \\
# &\quad \sum_{j\in{I_b}}\int\limits_\Omega\left( {\varphi}_i{\varphi}_j(U_j^{n+1} - U_j^n)
# + \Delta t{\alpha}\nabla {\varphi}_i\cdot\nabla
# {\varphi}_jU_j^n\right){\, \mathrm{d}x} \\
# &\quad + \Delta t\int\limits_\Omega f{\varphi}_i{\, \mathrm{d}x} -
# \Delta t\int\limits_{\partial\Omega_N} g{\varphi}_i{\, \mathrm{d}s},
# \quad i\in{\mathcal{I}_s}{\thinspace .}
# \end{align*}
# $$
# ## Modification of the linear system
#
# Instead of introducing a boundary function $B$ we can work with
# basis functions associated with all the nodes and incorporate the
# Dirichlet conditions by modifying the linear system.
# Let ${\mathcal{I}_s}$ be the index set that counts all the nodes:
# $\{0,1,\ldots,N=N_n-1\}$. The
# expansion for $u^n$ is then $\sum_{j\in{\mathcal{I}_s}}c^n_j{\varphi}_j$ and the
# variational form becomes
# $$
# \begin{align*}
# \sum_{j\in{\mathcal{I}_s}} \left(\int\limits_\Omega {\varphi}_i{\varphi}_j{\, \mathrm{d}x}\right)
# c_j &= \sum_{j\in{\mathcal{I}_s}}
# \left(\int\limits_\Omega\left( {\varphi}_i{\varphi}_j -
# \Delta t{\alpha}\nabla {\varphi}_i\cdot\nabla{\varphi}_j\right){\, \mathrm{d}x}\right) c_{1,j}
# \\
# &\quad + \Delta t\int\limits_\Omega f{\varphi}_i{\, \mathrm{d}x} -
# \Delta t\int\limits_{\partial\Omega_N} g{\varphi}_i{\, \mathrm{d}s}{\thinspace .}
# \end{align*}
# $$
# We introduce the matrices $M$ and $K$ with entries
# $M_{i,j}=\int\limits_\Omega{\varphi}_i{\varphi}_j{\, \mathrm{d}x}$ and
# $K_{i,j}=\int\limits_\Omega{\alpha}\nabla{\varphi}_i\cdot\nabla{\varphi}_j{\, \mathrm{d}x}$,
# respectively.
# In addition, we define the vectors $c$, $c_1$, and $f$ with
# entries $c_i$, $c_{1,i}$, and
# $\int\limits_\Omega f{\varphi}_i{\, \mathrm{d}x} - \int\limits_{\partial\Omega_N}g{\varphi}_i{\, \mathrm{d}s}$, respectively.
# The equation system can then be written as
#
#
#
# $$
# \begin{equation}
# Mc = Mc_1 - \Delta t Kc_1 + \Delta t f{\thinspace .}
# \label{_auto6} \tag{32}
# \end{equation}
# $$
# When $M$, $K$, and $f$ are assembled without paying attention to
# Dirichlet boundary conditions, we need to replace equation $k$
# by $c_k=U_k$ for $k$ corresponding to all boundary nodes ($k\in{I_b}$).
# The modification of $M$ consists in setting $M_{k,j}=0$, $j\in{\mathcal{I}_s}$, and
# then setting $M_{k,k}=1$. Alternatively, a modification that preserves
# the symmetry of $M$ can be applied. At each time level one forms
# $b = Mc_1 - \Delta t Kc_1 + \Delta t f$ and sets $b_k=U^{n+1}_k$,
# $k\in{I_b}$, and solves the system $Mc=b$.
#
# In case of a Backward Euler method, the system becomes
# ([26](#fem:deq:diffu:BE:vf:linsys)). We can write the system
# as $Ac=b$, with $A=M + \Delta t K$ and $b = Mc_1 + \Delta t f$.
# Both $M$ and $K$ need to be modified because of the Dirichlet
# boundary conditions: the rows $k\in{I_b}$ are zeroed out, the diagonal entries in $K$ are
# set to zero, and those in $M$ to unity. In this way, for $k\in{I_b}$ we
# have $A_{k,k}=1$.
# The right-hand side must read $b_k=U^n_k$ for $k\in{I_b}$ (assuming
# the unknown is sought at time level $t_n$).
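# The row modification is easy to express in code. Below is a minimal sketch,
# using dense NumPy arrays for clarity and assuming two hypothetical arrays:
# `boundary_nodes`, holding the indices in $I_b$, and `U`, holding the
# corresponding Dirichlet values at the current time level.

# In[ ]:


import numpy as np

def apply_dirichlet(A, b, boundary_nodes, U):
    """Replace equation k by c_k = U_k for every boundary node k.

    A is the assembled system matrix (M for Forward Euler, M + dt*K for
    Backward Euler) and b is the assembled right-hand side.
    """
    for k, U_k in zip(boundary_nodes, U):
        A[k, :] = 0.0   # zero out row k ...
        A[k, k] = 1.0   # ... put 1 on the diagonal ...
        b[k] = U_k      # ... so that equation k reads c_k = U_k
    return A, b


# ## Example: Oscillating Dirichlet boundary condition
#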
# # We shall address the one-dimensional initial-boundary value problem # #
# # $$ # \begin{equation} # u_t = ({\alpha} u_x)_x + f,\quad x\in\Omega =[0,L],\ t\in (0,T], # \label{fem:deq:diffu:Dirichlet:ex:pde} \tag{33} # \end{equation} # $$ # #
# # $$ # \begin{equation} # u(x,0) = 0,\quad x\in\Omega, # \label{fem:deq:diffu:Dirichlet:ex:uic} \tag{34} # \end{equation} # $$ # #
# # $$ # \begin{equation} # u(0,t) = a\sin\omega t,\quad t\in (0,T], # \label{fem:deq:diffu:Dirichlet:ex:uL} \tag{35} # \end{equation} # $$ # #
# # $$ # \begin{equation} # u_x(L,t) = 0,\quad t\in (0,T]{\thinspace .} # \label{fem:deq:diffu:Dirichlet:ex:uR} \tag{36} # \end{equation} # $$ # A physical interpretation may be that $u$ is the temperature # deviation from a constant mean temperature in a body $\Omega$ # that is subject to an oscillating temperature (e.g., day and # night, or seasonal, variations) at $x=0$. # # We use a Backward Euler scheme in time and P1 elements of # constant length $h$ in space. # Incorporation of the Dirichlet condition at $x=0$ through # modifying the linear system at each time level means that we # carry out the computations as explained in the section [Discretization in time by a Backward Euler scheme](#fem:deq:diffu:BE) and get a system ([26](#fem:deq:diffu:BE:vf:linsys)). # The $M$ and $K$ matrices computed without paying attention to # Dirichlet boundary conditions become # #
# # $$ # \begin{equation} # M = \frac{h}{6} # \left( # \begin{array}{cccccccccc} # 2 & 1 & 0 # &\cdots & \cdots & \cdots & \cdots & \cdots & 0 \\ # 1 & 4 & 1 & \ddots & & & & & \vdots \\ # 0 & 1 & 4 & 1 & # \ddots & & & & \vdots \\ # \vdots & \ddots & & \ddots & \ddots & 0 & & & \vdots \\ # \vdots & & \ddots & \ddots & \ddots & \ddots & \ddots & & \vdots \\ # \vdots & & & 0 & 1 & 4 & 1 & \ddots & \vdots \\ # \vdots & & & & \ddots & \ddots & \ddots &\ddots & 0 \\ # \vdots & & & & &\ddots & 1 & 4 & 1 \\ # 0 &\cdots & \cdots &\cdots & \cdots & \cdots & 0 & 1 & 2 # \end{array} # \right) # \label{_auto7} \tag{37} # \end{equation} # $$ # and # #
# # $$ # \begin{equation} # K = \frac{{\alpha}}{h} # \left( # \begin{array}{cccccccccc} # 1 & -1 & 0 &\cdots & \cdots & \cdots & \cdots & \cdots & 0 \\ # -1 & 2 & -1 & \ddots & & & & & \vdots \\ # 0 & -1 & 2 & -1 & \ddots & & & & \vdots \\ # \vdots & \ddots & & \ddots & \ddots & 0 & & & \vdots \\ # \vdots & & \ddots & \ddots & \ddots & \ddots & \ddots & & \vdots \\ # \vdots & & & 0 & -1 & 2 & -1 & \ddots & \vdots \\ # \vdots & & & & \ddots & \ddots & \ddots &\ddots & 0 \\ # \vdots & & & & &\ddots & -1 & 2 & -1 \\ # 0 &\cdots & \cdots &\cdots & \cdots & \cdots & 0 & -1 & 1 # \end{array} # \right) # \label{_auto8} \tag{38} # \end{equation} # $$ # The right-hand side of the variational form contains no source term ($f$) and no boundary term from the # integration by parts ($u_x=0$ at $x=L$ and we compute as if $u_x=0$ at # $x=0$ too) and we are therefore left with $Mc_1$. However, we must incorporate the Dirichlet boundary # condition $c_0=a\sin\omega t_n$. Let us assume that our numbering of nodes is such that # ${\mathcal{I}_s} = \{0,1,\ldots,N=N_n-1\}$. # The Dirichlet condition can then be incorporated # by ensuring that this is the # first equation in the linear system. # To this end, # the first row in $K$ and $M$ is set to zero, but the diagonal # entry $M_{0,0}$ is set to 1. The right-hand side is $b=Mc_1$, # and we set $b_0 = a\sin\omega t_n$. # We can write the complete linear system as # #
# # $$ # \begin{equation} # c_0 = a\sin\omega t_n, # \label{_auto9} \tag{39} # \end{equation} # $$ # #
# # $$ # \begin{equation} # \frac{h}{6}(c_{i-1} + 4c_i + c_{i+1}) + \Delta t\frac{{\alpha}}{h}(-c_{i-1} # +2c_i - c_{i+1}) = \frac{h}{6}(c_{1,i-1} + 4c_{1,i} + c_{1,i+1}), # \label{_auto10} \tag{40} # \end{equation} # $$ # $$ # \qquad i=1,\ldots,N_n-2,\nonumber # $$ # #
# # $$ # \begin{equation} # \frac{h}{6}(c_{i-1} + 2c_i) + \Delta t\frac{{\alpha}}{h}(-c_{i-1} # +c_i) = \frac{h}{6}(c_{1,i-1} + 2c_{1,i}), # \label{_auto11} \tag{41} # \end{equation} # $$ # $$ # \qquad i=N_n-1{\thinspace .}\nonumber # $$ # The Dirichlet boundary condition can alternatively be implemented # through a boundary function $B(x,t)=a\sin\omega t\,{\varphi}_0(x)$: # $$ # u^n(x) = a\sin\omega t_n\,{\varphi}_0(x) + # \sum_{j\in{\mathcal{I}_s}} c_j{\varphi}_{\nu(j)}(x),\quad # \nu(j) = j+1{\thinspace .} # $$ # Now, $N=N_n-2$ and the $c$ vector contains values of $u$ at nodes # $1,2,\ldots,N_n-1$. The right-hand side gets a contribution # #
# # $$ # \begin{equation} # \int\limits_0^L \left( # a(\sin\omega t_n - \sin\omega t_{n-1}){\varphi}_0{\varphi}_i # - \Delta t{\alpha} a\sin\omega t_n\nabla{\varphi}_0\cdot\nabla{\varphi}_i\right){\, \mathrm{d}x} # {\thinspace .} # \label{fem:deq:diffu:Dirichlet:ex:bterm} \tag{42} # \end{equation} # $$
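# A compact sketch of the first strategy (modifying the linear system) for
# this example is given below. The tridiagonal $M$ and $K$ matrices of equations (37) and (38)
# are built directly with SciPy, while the number of elements, the time step, and the
# physical parameters are arbitrary illustration values, not prescribed by the text.

# In[ ]:


import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative parameter choices
L_val, alpha, a, omega = 1.0, 1.0, 1.0, 2*np.pi
N_e, dt, T = 20, 0.01, 1.0

N_n = N_e + 1                        # number of nodes
h = L_val/N_e                        # element length

# Tridiagonal M and K for P1 elements, cf. (37)-(38)
dM = np.full(N_n, 4.0); dM[[0, -1]] = 2.0
M = sp.diags([np.ones(N_n-1), dM, np.ones(N_n-1)], [-1, 0, 1]) * (h/6)
dK = np.full(N_n, 2.0); dK[[0, -1]] = 1.0
K = sp.diags([-np.ones(N_n-1), dK, -np.ones(N_n-1)], [-1, 0, 1]) * (alpha/h)

# Backward Euler system matrix with the first row replaced by the
# Dirichlet condition u(0, t_n) = a*sin(omega*t_n)
A = (M + dt*K).tolil()
A[0, :] = 0.0; A[0, 0] = 1.0
solve_A = spla.factorized(A.tocsc())

c = np.zeros(N_n)                    # u(x, 0) = 0
t = 0.0
while t < T - 1e-12:
    t += dt
    b = M @ c                        # right-hand side M c_1 ...
    b[0] = a*np.sin(omega*t)         # ... with b_0 set to the boundary value
    c = solve_A(b)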