Almost all the problems that we will want to solve involve systems of more than one variable. For example, the Navier-Stokes equations for fluid flow, $C^0$-conforming discretisations of the Cahn-Hilliard equation for phase separation, or systems that couple different physical processes.

Firedrake supports specifying and solving such systems in *mixed* function spaces. Here we will use a dual formulation of the Poisson equation as a simple introductory example.

These systems of equations have a block algebraic form, and we will also look at some of the support for efficient preconditioning strategies that Firedrake offers.

Recall the primal form of the Poisson equation. Find $u$ satisfying

$$ \begin{align} \nabla^2 u &= -f \quad\text{in $\Omega$}, \\ u &= u_0 \quad\text{on $\Gamma_D$},\\ \nabla u \cdot n &= g \quad\text{on $\Gamma_N$.} \end{align}$$

We obtain the dual form by introducing a vector-valued flux $\sigma = \nabla u$. Substituting, the problem becomes: find $(\sigma, u)$ satisfying

$$ \begin{align} \sigma - \nabla u &= 0 \quad \text{in $\Omega$}, \\ \nabla \cdot \sigma &= -f \quad \text{in $\Omega$}, \\ u &= u_0 \quad \text{on $\Gamma_D$},\\ \sigma \cdot n &= g \quad\text{on $\Gamma \setminus \Gamma_D =: \Gamma_N$.} \end{align} $$

For the variational formulation, we introduce a pair of function spaces, $\Sigma$ and $V$, and seek $(\sigma, u) \in \Sigma \times V$ such that:

$$ \begin{align} \int_\Omega (\sigma \cdot \tau + (\nabla \cdot \tau)\, u)\,\text{d} x &= \int_\Gamma u\, \tau \cdot n\,\text{d}s \quad \forall \tau \in \Sigma,\\ \int_\Omega (\nabla\cdot\sigma)v\,\text{d} x &= - \int_\Omega f v\,\text{d}x \quad \forall v \in V. \end{align} $$

Notice how the weak condition in the primal form turns into a strong condition on the auxiliary variable $\sigma$, while the strong condition on $u$ in the primal form appears as a weak condition on $\sigma$.

For this problem we will solve with homogeneous Dirichlet (strong) conditions $u_0 = 0$ on the boundary of a square domain $\Omega = [0, 1]^2$, and will choose $f = 10\exp(-100((x-0.5)^2 + (y-0.5)^2))$.

In [1]:

```
%matplotlib notebook
import matplotlib.pyplot as plt
from firedrake import *
from time import time
N = 20
mesh = UnitSquareMesh(N, N)
```

In [2]:

```
rt = FiniteElement("Raviart-Thomas", triangle, 2, variant="integral")
Sigma = FunctionSpace(mesh, rt)
V = FunctionSpace(mesh, "DG", 1)
```

Note that Firedrake inherits the convention from FEniCS that the specified element degree is the degree of the smallest complete polynomial space that contains the element's function space. The lowest-order Raviart-Thomas element contains all constants and some (but not all) linear functions, and so has degree 1 under this convention; here we ask for the next element in the family by specifying degree 2.

Now we build the mixed finite element space $W$, and create test and trial functions.

In [3]:

```
W = Sigma * V
sigma, u = TrialFunctions(W)
tau, v = TestFunctions(W)
```

In [4]:

```
x, y = SpatialCoordinate(mesh)
f = 10*exp(-100*((x - 0.5)**2 + (y - 0.5)**2))
a = dot(sigma, tau)*dx + div(tau)*u*dx + div(sigma)*v*dx
L = -f*v*dx
```

In [5]:

```
wh = Function(W)
solve(a == L, wh, solver_parameters={"ksp_type": "minres", "pc_type": "none"})
```

In [6]:

```
# NBVAL_IGNORE_OUTPUT
sigmah, uh = wh.subfunctions
fig, axes = plt.subplots(ncols=2, sharex=True, sharey=True)
quiver(sigmah, axes=axes[0])
axes[0].set_aspect("equal")
axes[0].set_title(r"$\sigma$")
tripcolor(uh, axes=axes[1])
axes[1].set_aspect("equal")
axes[1].set_title("$u$")
plt.tight_layout()
```

Most problems you'll encounter will require effective *preconditioning* to achieve mesh- and parameter-*independent* convergence. By this we mean that the number of iterations needed to obtain the solution is independent of the mesh refinement and, hopefully, of any coefficient variability as well. Here we have a constant-coefficient problem, so we only need to worry about mesh refinement.

If we had a primal Poisson problem, we could treat the issue quite easily using an algebraic multigrid preconditioner such as hypre. However, for this dual formulation it is slightly more complicated.

First, we'll illustrate the issue, then look at some possible solutions.

We're going to run the same problem at a range of mesh resolutions, so to avoid repeating ourselves we'll wrap the setup in a function.

In [7]:

```
def create_solver(N, solver_parameters, aP=None):
    mesh = UnitSquareMesh(N, N)
    Sigma = FunctionSpace(mesh, rt)
    V = FunctionSpace(mesh, "DG", 1)
    W = Sigma * V
    sigma, u = TrialFunctions(W)
    tau, v = TestFunctions(W)
    x, y = SpatialCoordinate(mesh)
    f = 10*exp(-100*((x - 0.5)**2 + (y - 0.5)**2))
    a = dot(sigma, tau)*dx + div(tau)*u*dx + div(sigma)*v*dx
    L = -f*v*dx
    wh = Function(W)
    if aP is not None:
        aP = aP(W)
    problem = LinearVariationalProblem(a, L, wh, aP=aP)
    solver = LinearVariationalSolver(problem, solver_parameters=solver_parameters)
    return solver
```

Let's try solving the problem using MINRES with incomplete Cholesky as a preconditioner.

In [8]:

```
# NBVAL_SKIP
parameters = {"ksp_type": "minres", "mat_type": "aij", "pc_type": "icc"}
from firedrake.solving_utils import KSPReasons
for N in [5, 10, 25, 50]:
    solver = create_solver(N, parameters)
    cpu_timestamp = time()
    solver.solve()
    cpu_time = time() - cpu_timestamp
    print("N = {:3d}, iterations = {:3d}, converged reason = {:s}, time = {:.3f}s".format(
        N, solver.snes.ksp.getIterationNumber(),
        KSPReasons[solver.snes.ksp.getConvergedReason()],
        cpu_time))
```

Many state-of-the-art solution methods for systems of equations rely on block factorisations and block inverses to construct good preconditioners. Firedrake supports these through PETSc's `fieldsplit` preconditioning type. The mixed Poisson problem is a block system with matrix

$$ \mathcal{A} = \begin{split}\left(\begin{matrix} \color{#800020}A & \color{#2A52BE}B \\ \color{#BE962A}C & 0 \end{matrix}\right),\end{split} $$

which admits the factorisation

$$ \begin{split}\left(\begin{matrix} I & 0 \\ \color{#BE962A}C \color{#800020}{A^{-1}} & I\end{matrix}\right) \left(\begin{matrix}\color{#800020}A & 0 \\ 0 & S\end{matrix}\right) \left(\begin{matrix} I & \color{#800020}{A^{-1}} \color{#2A52BE}B \\ 0 & I\end{matrix}\right),\end{split} $$

with $S = -\color{#BE962A}C \color{#800020}{A^{-1}} \color{#2A52BE}B$ the *Schur complement*. Since each triangular and diagonal factor can be inverted separately, this yields an inverse

$$ \mathcal{A}^{-1} = \begin{split}\left(\begin{matrix} I & -\color{#800020}{A^{-1}} \color{#2A52BE}B \\ 0 & I\end{matrix}\right) \left(\begin{matrix}\color{#800020}{A^{-1}} & 0 \\ 0 & S^{-1}\end{matrix}\right) \left(\begin{matrix} I & 0 \\ -\color{#BE962A}C \color{#800020}{A^{-1}} & I\end{matrix}\right).\end{split} $$

In particular, note that if we drop some of the terms in the factorisation and write

$$ \mathcal{P} = \begin{split}\left(\begin{matrix} \color{#800020}A & 0 \\ 0 & -\color{#BE962A}C \color{#800020}{A^{-1}} \color{#2A52BE}B\end{matrix}\right)\end{split}, $$

then the preconditioned operator $\mathcal{P}^{-1}\mathcal{A}$ has at most four distinct eigenvalues, and so a Krylov method applied to this preconditioned system will converge quickly (Murphy, Golub, and Wathen, 2000).

This approach therefore reduces the problem of finding an inverse of $\mathcal{A}$ into the (hopefully simpler) problem of finding good inverses for blocks thereof.
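We can check this eigenvalue clustering numerically on a small dense saddle-point system, using NumPy in place of Firedrake. The block sizes and random matrices below are arbitrary illustrations, not drawn from the discretisation above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)      # symmetric positive-definite A block
B = rng.standard_normal((n, m))  # full-column-rank coupling block
C = B.T                          # symmetric saddle-point system

S = -C @ np.linalg.inv(A) @ B    # Schur complement

# Block system and the block-diagonal preconditioner P.
Ablk = np.block([[A, B], [C, np.zeros((m, m))]])
P = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]])

# Eigenvalues of P^{-1} A cluster onto a handful of distinct values
# (the theory predicts at most four).
eigs = np.linalg.eigvals(np.linalg.solve(P, Ablk))
distinct = {complex(round(e.real, 4), round(e.imag, 4)) for e in eigs}
print(len(distinct), sorted(distinct, key=lambda z: (z.real, z.imag)))
```

Despite the system having $n + m = 13$ eigenvalues, only a few distinct values appear, which is exactly why a Krylov method converges in a handful of iterations.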

Let's try this out on our problem:

In [9]:

```
# NBVAL_SKIP
parameters = {"ksp_type": "minres",
              "mat_type": "aij",
              "pc_type": "fieldsplit",
              "pc_fieldsplit_type": "schur",
              "pc_fieldsplit_schur_fact_type": "diag",
              "fieldsplit_0_ksp_type": "cg",
              "fieldsplit_0_pc_type": "icc",
              "fieldsplit_0_ksp_rtol": 1e-12,
              "fieldsplit_1_ksp_type": "cg",
              "fieldsplit_1_pc_type": "none",
              "fieldsplit_1_ksp_rtol": 1e-12}
from firedrake.solving_utils import KSPReasons
for N in [5, 10, 25, 50]:
    solver = create_solver(N, parameters)
    cpu_timestamp = time()
    solver.solve()
    cpu_time = time() - cpu_timestamp
    print("N = {:3d}, iterations = {:1d}, converged reason = {:s}, time = {:.3f}s".format(
        N, solver.snes.ksp.getIterationNumber(),
        KSPReasons[solver.snes.ksp.getConvergedReason()],
        cpu_time))
```

Above we saw a larger set of PETSc options than previously; this warrants a more in-depth explanation. PETSc objects are configurable at *runtime*, and the primary mechanism for controlling their behaviour is the PETSc *options database*. The "programming language" of these options consists of two operations:

- value assignment
- string concatenation

Each object has an options *prefix* that can be used to distinguish it from other objects (of the same, or different, type). For example, the linear solver object in PETSc is a `KSP`, and its prefix is `"ksp_"`. Hence `"ksp_type": "minres"` says "set the type of the KSP object in this solve to minres".

Similarly, the preconditioner is wrapped up in a `PC` object, with prefix `"pc_"`. Here, `"pc_type": "fieldsplit"` sets the preconditioner type to `"fieldsplit"`.

The type of the fieldsplit `PC` is controlled with `"pc_fieldsplit_type": "schur"`. This kind of block preconditioner needs inverses for the blocks, and these are provided by inner KSP objects. So that we can configure them separately from the main "outer" KSP, they have separate prefixes: `"fieldsplit_0_"` and `"fieldsplit_1_"`. The former controls the behaviour of the solver for $A^{-1}$, the latter for $S^{-1}$.

Although this system takes some getting used to, and looks somewhat verbose, it provides a lot of flexibility. We do not need to modify the code itself, only the options used, to change configuration of the solver.
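As an illustration of that concatenation, a fully qualified option name is simply an object's prefix glued onto a base option name. The helper below is hypothetical (it is not part of PETSc or Firedrake); it merely mimics how the database composes names:

```python
def prefixed_options(prefix, options):
    """Qualify base option names with an object's options prefix.

    Hypothetical helper, illustrating PETSc's naming scheme only.
    """
    return {prefix + name: value for name, value in options.items()}

# The inner KSP for the A^{-1} block carries the "fieldsplit_0_" prefix,
# so its options never clash with the outer solver's "ksp_..." options.
inner = prefixed_options("fieldsplit_0_", {"ksp_type": "cg", "pc_type": "icc"})
print(inner)
# {'fieldsplit_0_ksp_type': 'cg', 'fieldsplit_0_pc_type': 'icc'}
```

This is why the same base name, `ksp_type`, can appear three times in one parameters dictionary without ambiguity.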

Determine how the number of iterations required to invert $S$ changes as the mesh is refined.

- Hint: the KSPs for the fieldsplit solves are available via
`solver.snes.ksp.pc.getFieldSplitSubKSP()`

In [ ]:

```
```

Our problem is that we haven't provided a preconditioning matrix to invert $S$. If $A$ is such that $\text{diag}(A)^{-1}$ is a good approximation to $A^{-1}$, then a sparse approximation $\tilde{S} = -C \text{diag}(A)^{-1} B$ is a good approximation to $S$. PETSc allows us to create this purely using options:

In [10]:

```
# NBVAL_SKIP
parameters = {"ksp_type": "minres",
              "mat_type": "aij",
              "pc_type": "fieldsplit",
              "pc_fieldsplit_type": "schur",
              "pc_fieldsplit_schur_fact_type": "diag",
              # Construct \tilde{S} using C diag(A)^{-1} B
              "pc_fieldsplit_schur_precondition": "selfp",
              "fieldsplit_0_ksp_type": "cg",
              "fieldsplit_0_pc_type": "icc",
              "fieldsplit_0_ksp_rtol": 1e-12,
              "fieldsplit_1_ksp_type": "cg",
              "fieldsplit_1_ksp_converged_reason": None,
              # Use algebraic multigrid on \tilde{S}
              "fieldsplit_1_pc_type": "hypre",
              "fieldsplit_1_ksp_rtol": 1e-12}
from firedrake.solving_utils import KSPReasons
for N in [5, 10, 25, 50, 100]:
    solver = create_solver(N, parameters)
    cpu_timestamp = time()
    solver.solve()
    cpu_time = time() - cpu_timestamp
    print("N = {:3d}, iterations = {:1d}, converged reason = {:s}, time = {:.3f}s\n".format(
        N, solver.snes.ksp.getIterationNumber(),
        KSPReasons[solver.snes.ksp.getConvergedReason()],
        cpu_time))
```
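The quality of the `selfp` approximation depends on how well $\text{diag}(A)^{-1}$ approximates $A^{-1}$. This can be sketched with NumPy on a small random system (purely illustrative; this reproduces only the formula PETSc assembles, not its implementation): when $A$ is strongly diagonally dominant, $\tilde{S}$ is close to $S$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 5
# Diagonally dominant symmetric A, so diag(A)^{-1} approximates A^{-1} well.
R = rng.standard_normal((n, n)) * 0.1
A = (R + R.T) / 2 + np.diag(rng.uniform(5, 10, n))
B = rng.standard_normal((n, m))
C = B.T

S = -C @ np.linalg.inv(A) @ B                 # exact Schur complement
S_tilde = -C @ np.diag(1.0 / np.diag(A)) @ B  # the selfp-style approximation

rel_err = np.linalg.norm(S - S_tilde) / np.linalg.norm(S)
print(f"relative error in S_tilde: {rel_err:.2e}")
```

When $A$ is far from diagonal, this approximation degrades, which is one reason solver configuration remains problem-dependent.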

The inner tolerances of $10^{-12}$ are far tighter than necessary. Rather than solving the blocks accurately at every step, we can apply a single application of each preconditioner per outer iteration by setting the inner KSP types to `"preonly"`:

In [11]:

```
# NBVAL_SKIP
parameters = {"ksp_type": "minres",
              "mat_type": "aij",
              "pc_type": "fieldsplit",
              "pc_fieldsplit_type": "schur",
              "pc_fieldsplit_schur_fact_type": "diag",
              # Construct \tilde{S} using C diag(A)^{-1} B
              "pc_fieldsplit_schur_precondition": "selfp",
              "fieldsplit_0_ksp_type": "preonly",
              "fieldsplit_0_pc_type": "icc",
              "fieldsplit_1_ksp_type": "preonly",
              # Use algebraic multigrid on \tilde{S}
              "fieldsplit_1_pc_type": "hypre"}
from firedrake.solving_utils import KSPReasons
for N in [5, 10, 25, 50, 100]:
    solver = create_solver(N, parameters)
    cpu_timestamp = time()
    solver.solve()
    cpu_time = time() - cpu_timestamp
    print("N = {:3d}, iterations = {:2d}, converged reason = {:s}, time = {:.3f}s".format(
        N, solver.snes.ksp.getIterationNumber(),
        KSPReasons[solver.snes.ksp.getConvergedReason()],
        cpu_time))
```

Developing efficient solvers for block systems using this approach is quite an experimental science. The best approach will vary depending on how good the preconditioners you have available for the individual blocks are, how many degrees of freedom the system has, any coefficient variations, the tolerance to which you wish to solve the problem, and probably many more.

The strength of Firedrake and PETSc in this set up is the ease with which we can experiment, without needing to recompile code. For a few more details, and further pointers, you can look at the extended Firedrake demo on this topic.
