As we saw in the previous chapter, the best way to describe collisions is to use particles labeled by invariant quantities of motion (rest mass $m$ and charge $q$) as well as quantities that vary with motion (the position $\vec r$ and the velocity $\vec v$).

For a closed system with no source or sink of:

- particles (i.e. mass conservation)
- momentum (i.e. momentum conservation)
- energy (i.e. energy conservation)

we have $$df(\vec r,\vec p,t)=0\tag{4.1}$$

Note that Eq. $(4.1)$ is a total derivative, since both $\vec r$ and $\vec p$ are functions of time. Evidently, we can use the chain rule to get $$df(\vec r,\vec p,t)=\frac{\partial f}{\partial t}dt+\frac{\partial f}{\partial \vec r}\cdot d\vec r+\frac{\partial f}{\partial \vec p}\cdot d\vec p$$
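
As a quick sanity check (a sketch using sympy, with a one-dimensional phase space and an arbitrary choice of $f$ for brevity), we can verify that the total derivative matches its chain-rule expansion:

```
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')(t)   # position as a function of time
p = sp.Function('p')(t)   # momentum as a function of time

# an arbitrary (illustrative) distribution f(x, p, t)
f = sp.exp(-x**2 - p**2) * sp.cos(t)

# total derivative df/dt computed directly by sympy
total = sp.diff(f, t)

# chain rule: df/dt = del f/del t + (del f/del x) dx/dt + (del f/del p) dp/dt
X, P = sp.symbols('X P')
g = sp.exp(-X**2 - P**2) * sp.cos(t)
chain = (sp.diff(g, t) + sp.diff(g, X)*sp.diff(x, t)
         + sp.diff(g, P)*sp.diff(p, t)).subs({X: x, P: p})

print(sp.simplify(total - chain))  # 0: both expressions agree
```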

**Fig. 1 Change of $f$ as a function of time inside the volume $dV$ due to a spatial variation of $f$ along the x axis and a velocity parallel to that gradient.**

The figure above shows that $f$ changes in time due to a change of $f$ along the x-direction and a velocity $v_x$ that moves this change inside the volume $dV$.

Vlasov realized that, to properly capture Coulomb collisions, one should not use the collision operator $(\frac{\partial f}{\partial t})_{coll}$ but rather split the charged-particle distribution function into two distribution functions:

- one for the electrons, $f_e$
- one for the ions, $f_i$

We suppose here that there is only one ion species with ionization state $Z_i$. However, if other collisions (e.g. between charged particles and neutrals) are not negligible, then they have to be taken into account.

By *global*, Vlasov meant that these fields are the fields created by all the particles at the location $(x,y,z)$, averaged over all the particle momenta at this location.

What do we obtain if we integrate the distribution function $f_i$ or $f_e$ over all velocities?

What is the physical meaning of $m_{i,e}n_{i,e}v^2$?

In [1]:

```
import numpy as np
from scipy.integrate import nquad
from math import exp

def integrand(vx,vy,vz,A0,v0):
    return A0*exp(-(vx**2+vy**2+vz**2)/v0**2)

I = nquad(integrand,[[-np.inf,np.inf],[-np.inf,np.inf],[-np.inf,np.inf]], args=(1e3,.1))
print('the value is',I[0],'particles/m^3 with error',I[1])
```

the value is 5.568327996831708 particles/m^3 with error 2.0286636909816417e-08

Yes. The integral of $\exp(-x^2)$ from $-\infty$ to $+\infty$ is

$$\int_{-\infty}^{+\infty}\exp(-x^2)dx=\sqrt{\pi}$$
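
We can verify this identity numerically with `scipy.integrate.quad`, the one-dimensional counterpart of the `nquad` call above:

```
import numpy as np
from math import sqrt, pi
from scipy.integrate import quad

# integrate exp(-x^2) over the whole real line
val, err = quad(lambda x: np.exp(-x**2), -np.inf, np.inf)
print(val, sqrt(pi))  # both give 1.7724538509...
```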

Now that we have integrated $f$, $f\vec v$ and $f\vec v^2$ over all possible velocities, let's redo the same thing with $df$, $df\vec v$ and $df \vec v^2$.

The conservation of mass is obtained by integrating Vlasov's equation $$\int\int\int_{-\infty}^{+\infty}\Big(\frac{\partial f_e}{\partial t} + \vec v\cdot\vec \nabla_{\vec r}\, f_e +\vec a\cdot\vec \nabla_{\vec v}\,f_e\Big)d^3v=\int\int\int_{-\infty}^{+\infty}\Big(\frac{\partial f}{\partial t}\Big)_{e\,coll}d^3v\tag{4.11}$$ Since the left-hand side of Eq. $(4.11)$ is integrated over all possible velocities, we will get the same result for electrons and ions. However, the right-hand side of Eq. $(4.11)$ depends on cross-sections and has to be treated separately for ions and electrons.

When we integrate eq. $(4.11)$ over all velocities, we find the first conservation equation, called the conservation of mass or *continuity equation*
$$\frac{\partial n_e}{\partial t}+\vec \nabla . \big(n_e\vec u_e\big)=S_e\tag{4.12}$$
$S_e$ is the source of particles in this equation and comes from
$$S_e=\int\int\int_{-\infty}^{+\infty}\Big(\frac{\partial f}{\partial t}\Big)_{e\,coll}d^3v$$
Note that, in the context set by Vlasov, these collisions include everything but Coulomb collisions. If $S_e$ is positive, then the electron density $n_e$ at the location $(x,y,z)$ is increasing *or* the flux of particles leaving this location is non-zero and points away from this volume. This is the meaning of the divergence operator $\vec\nabla\cdot\bullet$.

Physically, what can be considered a source of electrons at a given location?

Ionizations.

Note that injecting electrons at this location does not count, because they represent an electron flux, and this would be accounted for in $\vec \nabla \cdot \big(n_e\vec u_e\big)$. When $S_e$ is negative, there is a sink of electrons. Physically, what can be considered a sink of electrons at a given location?

Recombinations.

It is customary to split $S_e$ into two **positive** quantities: $G_e$, the term corresponding to electron sources or *gains*, and $L_e$, the term corresponding to electron sinks or *losses*. So, we get
$$\frac{\partial n_e}{\partial t}+\vec \nabla . \big(n_e\vec u_e\big)=G_e-L_e.\tag{4.13}$$
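
As a sketch of how gains and losses drive the density, consider a spatially uniform (zero-dimensional) plasma: the divergence term vanishes and Eq. $(4.13)$ reduces to $dn_e/dt=G_e-L_e$. The rates below are hypothetical, purely illustrative values:

```
# hypothetical constant rates (1/s): ionization gain and recombination-like loss
nu_iz = 1.0e5      # G_e = nu_iz * n_e
nu_loss = 4.0e4    # L_e = nu_loss * n_e

n_e = 1.0e16       # initial electron density (1/m^3)
dt = 1.0e-7        # time step (s)

for _ in range(100):
    # dn_e/dt = G_e - L_e (uniform plasma, so the divergence term vanishes)
    n_e += dt * (nu_iz - nu_loss) * n_e

# the net rate is positive, so the density grows roughly as exp((nu_iz - nu_loss) t)
print(n_e)
```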

If we look at ionizations only, we can write $$G_e=\nu_{iz}n_e$$ or $$G_e=n_en_g\langle\sigma v\rangle_{iz}$$ Note that $G_e=G_i=L_g$, together with $G_g=0$ (no recombination). This effectively couples the electron, ion and neutral-gas equations together.

If ions can be ionized multiple times, then we will have a conservation equation for each charged ion species (e.g. single ionized, double ionized, ...). But all the conservation equations are still coupled via their source terms.

We now look at what happens when the Vlasov equations for ions and electrons are multiplied by $m_{i,e}\vec v$. Integrating over all velocities, we get the conservation of momentum. $$\frac{\partial n_e\vec u_e}{\partial t}+\vec \nabla \cdot (n_e\vec u_e \vec u_e)=\frac{1}{m_e}\big(-en_e[\vec E +\vec u_e \times \vec B]-\vec \nabla p_e+\vec F|_{e\,coll}\big)$$ and $$\frac{\partial n_i\vec u_i}{\partial t}+\vec \nabla \cdot (n_i\vec u_i \vec u_i)=\frac{1}{m_i}\big(Z_ien_i[\vec E +\vec u_i \times \vec B]-\vec \nabla p_i+\vec F|_{i\,coll}\big)$$

The vector $\vec F|_{coll}$ represents the force density due to collisions with other species (other ions or neutrals). We will ignore such effects here. We also drop secondary effects, such as viscosity, from this treatment. In this context, the pressure follows from our integral calculus and has the form $$p_{i,e}= \frac{1}{3}m_{i,e}n_{i,e}\Big<\big(v_{i,e}-u_{i,e}\big)^2\Big>_{\vec v}$$ where $\big<\bullet\big>_{\vec v}$ is the average operator over all velocities.

If we now suppose that the electron and ion plasmas behave like ideal gases, then this operator yields, for both pressures, $p_e=n_eeT_e$ and $p_i=Z_in_ieT_i$ (with temperatures expressed in eV).
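
A quick numerical check of this ideal-gas closure, using illustrative values (solid density and room temperature; since the factor $e$ is included, $T_e$ is expressed in eV):

```
n_e = 6.0e28     # electron density (1/m^3), illustrative value
T_e = 0.026      # electron temperature in eV (about room temperature)
e = 1.6e-19      # elementary charge (C)

p_e = n_e * e * T_e   # p_e = n_e e T_e gives the pressure in Pa when T_e is in eV
print(p_e)            # about 2.5e8 Pa
```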

We now look at what happens when the Vlasov equations for ions and electrons are multiplied by $m_{i,e}v^2$ and then integrated over all velocities. This gives the conservation of energy for both species. $$\frac{\partial \epsilon_e}{\partial t}+\vec \nabla \cdot (\epsilon_e\vec u_e)=\frac{1}{m_e}\big(-\vec \nabla \cdot Q_e+\vec j_e\cdot\vec E\big)\tag{4.18}$$ Clearly, we have here a similar structure in the equation: a change in time of a certain quantity $\sigma$ is related to its flux $\sigma \vec u$ via the divergence operator $\vec \nabla$. And as before we have introduced a new quantity $\vec Q_e$, the flux of energy at the microscopic scale, i.e. $\frac{1}{2}m_en_e\big<u_e^2\vec u_e\big>_e$, also called the heat flux. The energy density source term $\vec j_e\cdot\vec E$ is the work of the electric field on the moving electrons.

For any fluid, ions or electrons, we have shown that an explicit change in time of any moment ($n$, $n\vec u$ or $\epsilon$) can only be caused by an explicit change of flux in space ($n\vec u$, $n\vec u \vec u$ or $\epsilon \vec u$) or by explicit sources and sinks. Sources and sinks come from collisions or from electromagnetic fields (according to Vlasov's theory).

The physical meaning of the moments on the left-hand side of every equation is tightly bound to the integral operator, but the moments and their fluxes can always be defined as long as an integral operator can be defined. This is not true for most terms on the right-hand side. If they come from collision operators, the collision operators have to be defined physically (a non-trivial task when dealing with plasmas), and some quantities have to be purely and simply defined: the closure quantities (like $p$ or $\vec Q$).

$$\begin{align} &\frac{\partial n_e}{\partial t}+\vec \nabla . \big(n_e\vec u_e\big)=0\\ &\frac{\partial n_i}{\partial t}+\vec \nabla . \big(n_i\vec u_i\big)=0\\ &\frac{\partial n_e\vec u_e}{\partial t}+\vec \nabla \cdot (n_e\vec u_e \vec u_e)=\frac{1}{m_e}\big(-en_e\vec E +\vec j_e \times \vec B-\vec \nabla p_e\big)\\ &\frac{\partial n_i\vec u_i}{\partial t}+\vec \nabla \cdot (n_i\vec u_i \vec u_i)=\frac{1}{m_i}\big(Z_ien_i\vec E +\vec j_i \times \vec B-\vec \nabla p_i\big)\\ &\frac{\partial \epsilon_e}{\partial t}+\vec \nabla \cdot (\epsilon_e\vec u_e)=\frac{1}{m_e}\big(-\vec \nabla \cdot Q_e+\vec j_e\cdot\vec E\big)\\ &\frac{\partial \epsilon_i}{\partial t}+\vec \nabla \cdot (\epsilon_i\vec u_i)=\frac{1}{m_i}\big(-\vec \nabla \cdot Q_i+\vec j_i\cdot\vec E\big)\\ &\vec \nabla\times\vec {B} =\mu_0\vec {j}+\mu_0\varepsilon_0 \frac{\partial\vec {E}}{\partial t} \\ &\vec \nabla\times\vec {E} =-\frac{\partial\vec {B}}{\partial t} \\ &\vec \nabla\cdot\vec {E} =\frac{\rho_E}{\varepsilon_0} \\ &\vec \nabla\cdot\vec {B} =0\\ \end{align}$$

The major difficulty in going from two-fluid equations to equations for a single fluid is the disparity of both temporal and spatial scales between the ions and the electrons. The two species carry charges of the same magnitude and share similar momentum sources and sinks, but because the electrons are so much lighter, they react much more quickly, roughly 2000 times more quickly if we compare protons and electrons. As a result, two-fluid equations have to be solved on the electron time scale, requiring a numerical treatment that calls for very small time steps.

Because the bulk of the mass lies with the ion fluid, macroscopic effects are most visible at the ion scale. Yet the equations have to be advanced with very small time steps to account for electron physics and its interaction with the ions.

To simplify the problem, we assume macroscopic ambipolarity ($\vec u_i\approx\vec u_e$) and quasi-neutrality ($n_e\approx Z_in_i$). The symbol $\approx$ (i.e. almost equal) allows for the existence of an electrical current density $\vec j=Z_ien_i\vec u_i-en_e\vec u_e$, which can be seen as a difference in flow velocity between ions and electrons. Perfect neutrality and ambipolarity would force the electrical current to be $\vec j=\vec 0$.

One way to simplify the system without *breaking the bank* is to look at the impact of the electrons on the electric and magnetic fields felt by the ions. This is called Ohm's law.

Ohm's law (or the generalized Ohm's law, as we are going to study it) is simply that. It is a law that averages over electron motion at the microscopic (fast) scale to give the effective electric field felt by the ions at the macroscopic (ion) scale.

One might think that quasi-neutrality means that there is no electric field inside a plasma. In fact, it states that there is no **source** of electrostatic field inside a plasma, i.e. $\rho_E=Z_ien_i-en_e=0$. However, $\vec \nabla\cdot\vec E=0$ does not mean that there is no electric field. It just means there exists a vector $\vec A_E$ such that $\vec E=\vec \nabla\times\vec A_E.$
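
A minimal symbolic check with sympy, using a hypothetical field $\vec E=(-y,x,0)$: its divergence vanishes everywhere, yet the field is nonzero, and it is indeed the curl of a vector potential $\vec A_E=(0,0,-(x^2+y^2)/2)$.

```
import sympy as sp

x, y, z = sp.symbols('x y z')

# a hypothetical field with no sources
Ex, Ey, Ez = -y, x, sp.Integer(0)

# divergence: dEx/dx + dEy/dy + dEz/dz
div_E = sp.diff(Ex, x) + sp.diff(Ey, y) + sp.diff(Ez, z)
print(div_E)  # 0, although E itself is nonzero away from the z axis

# E is the curl of A_E = (0, 0, Az): curl = (dAz/dy, -dAz/dx, 0)
Az = -(x**2 + y**2)/2
print(sp.diff(Az, y), -sp.diff(Az, x))  # -y, x: matches (Ex, Ey)
```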

In its simplest form, Ohm's law states that the electric field is proportional to the current density *and* depends on the type of material via the factor of proportionality $\eta$, called the resistivity:
$$\vec E=\eta \vec j$$

More generally, electric fields can be generated by different phenomena tightly connected to electron physics. We will look at the following in the next chapter:

- Dynamo
- Hall effect
- Electron pressure (also known as the Biermann battery term)
- Electron inertia

While the meaning of electron closure quantities and collision operators can be recast on the ion time scale, their mathematical definitions become very restrictive. We lose a sizable portion of electron physics by doing so. If the physical system indeed allows it, then the equations will closely match the physical behavior of this system. If not, then they will be a poor approximation. Using the one-fluid quantities as defined above, we get $$\begin{align} &\frac{\partial n}{\partial t}+\vec \nabla . \big(n\vec u\big)=0\tag{4.20}\\ &\frac{\partial n\vec u}{\partial t}+\vec \nabla . \big(n\vec u\vec u\big)=\frac{1}{m}(\vec j\times\vec B-\vec\nabla p)\tag{4.21}\\ &\frac{\partial \epsilon}{\partial t}+\vec \nabla \cdot (\epsilon\vec u)=\frac{1}{m}\big(-\vec \nabla \cdot Q+\eta j^2\big)\tag{4.22} \end{align}$$

$$\begin{align}
&\frac{\partial\vec {B}}{\partial t}=-\vec \nabla\times\big( {\frac{\eta}{\mu_0}}\vec \nabla\times\vec {B}\big) \tag{4.23}\\
&\vec \nabla\cdot\vec {B}=0 \tag{4.24}
\end{align}
$$

Before solving flows where electromagnetism plays an important role, let's first look at what happens when flows are solely driven by kinetic energy, i.e. $\vec B=\vec 0$ and $\vec E=\vec 0$. In this case, we need to solve Eqs. $(4.20)$ to $(4.22)$ with $\vec j\times \vec B=\vec 0$. We will also suppose that all the heat is transferred via convection rather than conduction, so we ignore $\vec Q$ as well.

To do this we recast our conservation laws using a more general framework. Regardless of their sources (even if fields are involved), Eqs. $(4.20)$ to $(4.22)$ can be written as $$\frac{\partial U}{\partial t}+\vec \nabla \cdot \vec F(U)=S_U\tag{4.25}$$ Here $U$ can be a scalar or a vector. If we discretize our domain into an ensemble of adjacent subdomains $\Omega_j$ with boundary $\partial\Omega_j$ we can rewrite Eq. $(4.25)$ as $$\iiint_{\Omega_j}\frac{\partial U}{\partial t}dV+\iiint_{\Omega_j}\vec \nabla \cdot \vec F(U)dV=\iiint_{\Omega_j}S_U\,dV$$

$$\iiint_{\Omega_j}\frac{\partial U}{\partial t}dV+\iint_{\partial\Omega_j}\vec F(U)\cdot \vec n\,dS=\iiint_{\Omega_j}S_U\,dV$$

By using the divergence theorem, we turned the volume integral of a spatial derivative into a surface integral of the flux itself. To first order, we also suppose that $U$ and $S_U$ are constant inside the volume (this is actually why we discretized our equations), so we get

$$\frac{\partial U_j}{\partial t}+\frac{1}{V_j}\iint_{\partial\Omega_j}\vec F(U)\cdot \vec n\,dS=S_{U_j}\tag{4.26}$$

In our case, $U$ is $n$, $n\vec u$ or $\epsilon$, and $\vec F(U)$ is $n\vec u$, $n\vec u\vec u$ or $\epsilon\vec u$ respectively.
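
To make Eq. $(4.26)$ concrete, here is a minimal one-dimensional sketch for the continuity equation ($U=n$, $\vec F=n\vec u$, constant $u>0$, no sources), with first-order upwind face fluxes and hypothetical grid parameters:

```
import numpy as np

nx, dx, u, dt = 100, 0.01, 1.0, 0.005            # CFL number u*dt/dx = 0.5
n = np.exp(-((np.arange(nx)*dx - 0.3)/0.05)**2)  # initial density bump

mass0 = n.sum()*dx
for _ in range(50):
    F = u*n                             # flux at cell centers; with u > 0 the
                                        # upwind face flux is the left cell value
    n[1:] -= (dt/dx)*(F[1:] - F[:-1])   # finite-volume update of interior cells

# without sources, mass only changes through what crosses the boundaries,
# which is negligible here since the bump stays inside the domain
print(n.sum()*dx, mass0)
```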

We are now ready to design a code to handle hydrodynamic flows. Ignoring electric and magnetic fields allows us not only to simplify the code but also to run it faster, as we discuss later.

We also use numba to improve code performance (it is python, after all) and some python widgets to monitor the remaining computation time.

Keep in mind that such computations are very intensive even on large computers. Using python with neither an accelerator (i.e. a graphics card) nor parallelization makes any computation inherently slow. So be wise when you choose your resolution.

The code presented here is two-dimensional. It can be used in slab mode, i.e. $z$ is the direction of symmetry, or in cylindrical mode, i.e. $\phi$ is the direction of symmetry. In other words, in slab geometry any quantity is such that $$U(x,y,z)=U(x,y).$$ In cylindrical geometry we have $$U(r,\phi,z)=U(r,z).$$

In [2]:

```
import numpy as np
import math
from math import sqrt
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from numba import jit
from ipywidgets.widgets.interaction import show_inline_matplotlib_plots
from IPython.display import clear_output
from ipywidgets import FloatProgress
from IPython.display import display
```

For practical reasons, we define some constants that will be used to remember which component in the array corresponds to which quantity; e.g. the temperature at the location `i,j` can be accessed as `Q[i,j,tp]`. `rh` is the number density, `mx` is the momentum density in the x-direction, `my` and `mz` in the y- and z-directions, `en` is the energy density, and `tp` the temperature. `nQ` is the total number of variables, from `rh` to `tp`.

In [3]:

```
rh=0
mx=1
my=2
mz=3
en=4
tp=5
nQ=6
```

`mu` is the atomic mass number of the fluid, in this case aluminum. `aindex` is the adiabatic index.

In [4]:

```
mu=27.
aindex=1.1
```

`L0` is the geometrical scale length, `t0` the time scale, `n0` the density scale, `v0` the velocity scale, `p0` the pressure scale, and `te0` the temperature scale. We also define a couple of floor quantities below which the density, temperature and pressure cannot go.

In [5]:

```
L0=1.0e-3
t0=100.0e-9
n0=6.0e28
v0=L0/t0
p0=mu*1.67e-27*n0*v0**2
te0=p0/n0/1.6e-19
rh_floor = 1.0E-8
T_floor = 0.0026/te0
rh_mult = 1.1
P_floor = rh_floor*T_floor
```

We now define our geometrical functions. They are used quite often, so we will compile them inside the code to accelerate computations. To avoid writing two codes, one for slab symmetry and one for cylindrical symmetry, we wrote this code for slab symmetry in Cartesian coordinates, i.e. $(x,y,z)$. When using the code for a problem with cylindrical symmetry, then $$\begin{align} &x\rightarrow r\\ &y\rightarrow z\\ &z\rightarrow \phi\end{align}$$

The `rc` function computes the radius in cylindrical coordinates, which appears in source terms and in the conservation equations. Clearly, this is simply $x$ as mentioned above; that's why `rc` returns `xc`. To turn the code into slab symmetry, just have the function `rc` return `1`.

In [6]:

```
@jit(nopython=True)
def xc(i):
    return lxd + (i-ngu+1)/dxi

@jit(nopython=True)
def rc(i):
    return xc(i)

@jit(nopython=True)
def yc(j):
    return lyd + (j-ngu+1)/dyi
```

The functions below give the radius and angle measured from the center of the domain, unlike the cell-centered function `rc` above. Note that the letter `c`, for center, is now gone.

In [7]:

```
@jit(nopython=True)
def r(i,j):
    return math.sqrt((xc(i)-(lxd+lxu)/2)**2 + (yc(j)-(lyd+lyu)/2)**2)

@jit(nopython=True)
def rz(i,j):
    return sqrt(xc(i)**2+yc(j)**2)

@jit(nopython=True)
def theta(i,j):
    #math.atan takes a single argument; the two-argument angle needs atan2
    return math.atan2(yc(j)-(lyd+lyu)/2, xc(i)-(lxd+lxu)/2)
```

We now compute the sources for all our conservation equations.

In [8]:

```
def get_sources(Qin):
    sourcesin=np.zeros((nx,ny,nQ))
    for j in range(ngu, ny-ngu):
        for i in range(ngu, nx-ngu):
            dn = Qin[i,j,rh]
            dni = 1./dn
            vx = Qin[i,j,mx]*dni
            vy = Qin[i,j,my]*dni
            vz = Qin[i,j,mz]*dni
            T = (aindex - 1)*(Qin[i,j,en]*dni - 0.5*(vx**2 + vy**2 + vz**2))
            if(T < T_floor):
                T = T_floor
            Qin[i,j,tp] = T
            P = dn*T
            #geometrical source terms of the cylindrical coordinate system
            sourcesin[i,j,rh] = 0
            sourcesin[i,j,mx] = (Qin[i,j,mz]*vz + P)
            sourcesin[i,j,mz] = - Qin[i,j,mz]*vx
            sourcesin[i,j,en] = 0
    return sourcesin
```

In [9]:

```
@jit(nopython=True)
def advance_time_level(Qin,flux_x,flux_y,source):
    Qout=np.zeros((nx,ny,nQ))
    dxt = dxi*dt
    dyt = dyi*dt
    for j in range(ngu, ny-ngu):
        for i in range(ngu, nx-ngu):
            rbp = 0.5*(rc(i+1) + rc(i))
            rbm = 0.5*(rc(i) + rc(i-1))
            rci = 1./rc(i)
            Qout[i,j,rh] = Qin[i,j,rh]*rc(i) - dxt*(flux_x[i,j,rh]*rbp-flux_x[i-1,j,rh]*rbm) - dyt*(flux_y[i,j,rh]-flux_y[i,j-1,rh])*rc(i)
            Qout[i,j,mx] = Qin[i,j,mx]*rc(i) - dxt*(flux_x[i,j,mx]*rbp-flux_x[i-1,j,mx]*rbm) - dyt*(flux_y[i,j,mx]-flux_y[i,j-1,mx])*rc(i) + dt*source[i,j,mx]
            Qout[i,j,my] = Qin[i,j,my]*rc(i) - dxt*(flux_x[i,j,my]*rbp-flux_x[i-1,j,my]*rbm) - dyt*(flux_y[i,j,my]-flux_y[i,j-1,my])*rc(i) + dt*source[i,j,my]
            Qout[i,j,mz] = Qin[i,j,mz]*rc(i) - dxt*(flux_x[i,j,mz]*rbp-flux_x[i-1,j,mz]*rbm) - dyt*(flux_y[i,j,mz]-flux_y[i,j-1,mz])*rc(i) + dt*source[i,j,mz]
            Qout[i,j,en] = Qin[i,j,en]*rc(i) - dxt*(flux_x[i,j,en]*rbp-flux_x[i-1,j,en]*rbm) - dyt*(flux_y[i,j,en]-flux_y[i,j-1,en])*rc(i) + dt*source[i,j,en]
            Qout[i,j,rh:en+1] = Qout[i,j,rh:en+1]*rci
    return Qout
```

Below we compute the fluxes `flx` and the freezing speeds `cfx` along the x-direction. These speeds correspond to the largest speed found in the hyperbolic equation at the given location $(x,y)$. Using freezing speeds avoids using a Riemann solver.

In [10]:

```
@jit(nopython=True)
def calc_flux_x(Qx):
    cfx=np.zeros((nx,nQ))
    flx=np.zeros((nx,nQ))
    for i in range(0, nx):
        dn = Qx[i,rh]
        dni = 1./dn
        vx = Qx[i,mx]*dni
        vy = Qx[i,my]*dni
        vz = Qx[i,mz]*dni
        P = (aindex - 1)*(Qx[i,en] - 0.5*dn*(vx**2 + vy**2 + vz**2))
        if(P < P_floor):
            P = P_floor
        flx[i,rh] = Qx[i,mx]
        flx[i,mx] = Qx[i,mx]*vx + P
        flx[i,my] = Qx[i,my]*vx
        flx[i,mz] = Qx[i,mz]*vx
        flx[i,en] = (Qx[i,en] + P)*vx
        vf1 = sqrt(vx**2 + aindex*P*dni) #freezing speed: flow speed and sound speed combined
        cfx[i,rh] = vf1
        cfx[i,mx] = vf1
        cfx[i,my] = vf1
        cfx[i,mz] = vf1
        cfx[i,en] = vf1
    return cfx,flx
```
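
The freezing speed computed above combines the local flow speed with the adiabatic sound speed. A standalone sketch, with hypothetical dimensionless values:

```
from math import sqrt

aindex = 1.1                 # adiabatic index, as defined earlier
rho, P, vx = 1.0, 0.5, 2.0   # hypothetical dimensionless density, pressure, velocity

cs = sqrt(aindex * P / rho)          # adiabatic sound speed
vf = sqrt(vx**2 + aindex * P / rho)  # freezing speed: fastest signal at this point
print(cs, vf)
```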

We do the same for the fluxes `fly` and the freezing speeds `cfy` along the y-direction.

In [11]:

```
@jit(nopython=True)
def calc_flux_y(Qy):
    cfy=np.zeros((ny,nQ))
    fly=np.zeros((ny,nQ))
    for j in range(0, ny):
        dn = Qy[j,rh]
        dni = 1./dn
        vx = Qy[j,mx]*dni
        vy = Qy[j,my]*dni
        vz = Qy[j,mz]*dni
        P = (aindex - 1)*(Qy[j,en] - 0.5*dn*(vx**2 + vy**2 + vz**2))
        if(P < P_floor):
            P = P_floor
        fly[j,rh] = Qy[j,my]
        fly[j,mx] = Qy[j,mx]*vy
        fly[j,my] = Qy[j,my]*vy + P
        fly[j,mz] = Qy[j,mz]*vy
        fly[j,en] = (Qy[j,en] + P)*vy
        vf1 = sqrt(vy**2 + aindex*P*dni) #freezing speed along y
        cfy[j,rh] = vf1
        cfy[j,mx] = vf1
        cfy[j,my] = vf1
        cfy[j,mz] = vf1
        cfy[j,en] = vf1
    return cfy,fly
```

In [12]:

```
def tvd2(Qin,n,ff,cfr):
    sl = 1 #use 0.75 for first order
    wr = cfr*Qin + ff
    wl = cfr*Qin - ff
    fr = wr
    fl = np.roll(wl,-1,axis=0)
    dfrp = np.roll(fr,-1,axis=0) - fr
    dfrm = fr - np.roll(fr,+1,axis=0)
    dflp = fl - np.roll(fl,-1,axis=0)
    dflm = np.roll(fl,+1,axis=0) - fl
    dfr=np.zeros((n,nQ))
    dfl=np.zeros((n,nQ))
    flux2 = tvd2_cycle(nQ, n, dfrp, dfrm, dflp, dflm, dfr, dfl, fr, fl, sl)
    return flux2

@jit(nopython=True)
def tvd2_cycle(nQ, n, dfrp, dfrm, dflp, dflm, dfr, dfl, fr, fl, sl):
    for l in range(nQ):
        for i in range(n):
            if(dfrp[i,l]*dfrm[i,l] > 0):
                dfr[i,l] = dfrp[i,l]*dfrm[i,l]/(dfrp[i,l] + dfrm[i,l])
            else:
                dfr[i,l] = 0
            if(dflp[i,l]*dflm[i,l] > 0):
                dfl[i,l] = dflp[i,l]*dflm[i,l]/(dflp[i,l] + dflm[i,l])
            else:
                dfl[i,l] = 0
    flux2 = 0.5*(fr - fl + sl*(dfr - dfl))
    return flux2
```

Here we assemble all the fluxes for the whole domain.

In [15]:

```
def get_flux(Qin):
    flux_x=np.zeros((nx,ny,nQ))
    flux_y=np.zeros((nx,ny,nQ))
    for j in range(0, ny):
        cfx,ffx=calc_flux_x(Qin[:,j,:])
        flux_x[:,j,:]=tvd2(Qin[:,j,:],nx,ffx,cfx)
    for i in range(0, nx):
        cfy,ffy=calc_flux_y(Qin[i,:,:])
        flux_y[i,:,:]=tvd2(Qin[i,:,:],ny,ffy,cfy)
    return flux_x,flux_y
```

We now limit flows to avoid minor instabilities caused by round-off errors.

In [16]:

```
def limit_flow(Qin):
    en_floor = rh_floor*T_floor/(aindex-1)
    for j in range(ngu-1, ny-ngu+1):
        for i in range(ngu-1, nx-ngu+1):
            if(Qin[i,j,rh] <= rh_floor):
                Qin[i,j,rh] = rh_floor
                Qin[i,j,mx] = 0.0
                Qin[i,j,my] = 0.0
                Qin[i,j,mz] = 0.0
                Qin[i,j,en] = en_floor
            if(MMask[i,j]==1):
                Qin[i,j,mx] = 0.0
                Qin[i,j,my] = 0.0
                Qin[i,j,mz] = 0.0
                Qin[i,j,en] = en_floor
```

Everything you find below should be the only lines of code you need to change.
We now initialize the computational domain. `ngu` corresponds to the number of points at the boundary; leave it alone. `nx` and `ny` are the numbers of grid points along the x- and y-directions. `lxu` and `lxd` are the upper and lower corners of the mesh along the x-direction; `lyu` and `lyd` are the corners for the y-direction. `dxi` and `dyi` are the inverses of the grid resolutions `dx` and `dy`. Since we use both `1/dx` and `1/dy` a lot and it is more expensive to compute inverses, we compute them once and for all. Since python is interactive, we can see them as constants here.

**Every time you want to restart a new computation you need to recompile what's below.**

In [17]:

```
ngu=2
nx=45
ny=31
lxu=15e-3/L0 #note that 15e-3 is in m while 15e-3/L0 is dimensionless
lxd=0
lyd=0
lyu = lyd + (ny-2*ngu)*(lxu-lxd)/(nx-2*ngu)
lyd=-lyu/2. #we do this to center the grid on 0
lyu=lyu/2.
dxi = (nx-2*ngu)/(lxu-lxd)
dyi = (ny-2*ngu)/(lyu-lyd)
```

As discussed earlier, it is important to keep the time step `dt` small enough to keep the code stable. For that we use a combination of flow speed and sound speed. The stability criterion is called the *Courant–Friedrichs–Lewy* (or CFL) condition. The fastest speed $U$ in the computation (the maximum of the flow speed and the sound speed) defines the largest time step $\Delta t$ that can be taken for a given resolution $\Delta x$:
$$ \Delta t=C_{CFL}\frac{\Delta x}{U}$$
with $C_{CFL}<1$. Here we take it to be 0.1 for second-order time advance. Note that higher resolution (i.e. smaller $\Delta x$) requires a smaller $\Delta t$. So every time the resolution is increased by a factor of 2 in both x and y, the computational time is $2^3=8$ times longer.
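
Under this condition, the time step follows directly from the resolution and the fastest speed, which is what the `get_min_dt` function below implements. A sketch with hypothetical values:

```
dx = 1.0e-3 / 41     # grid spacing (m), hypothetical
U = 3.0e3            # fastest of flow speed and sound speed (m/s), hypothetical
C_cfl = 0.1          # Courant number, < 1 for stability

dt = C_cfl * dx / U  # largest stable time step
print(dt)            # well below the signal-crossing time dx/U
```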

To find the right value for your problem, start with a large number, e.g. 0.5, and decrease it until the code runs without errors. Then decrease it even more, until the final solution of the problem looks the same. You will then have the optimum value for `cfl`. If `cfl` is too small, there are a lot of time steps and sharp features in the solution will start to be washed out. So it is important to find the right `cfl` number. But once it is found, similar problems can be solved using the same condition.

In [18]:

```
def get_min_dt(Q):
    cfl=0.05 #this is where the CFL condition is changed. Keep the rest identical
    vmax = sqrt(aindex*1.6e-19*T_floor*te0/(mu*1.67e-27))/v0 #1.67e-27 kg: proton mass, as in p0
    for j in range(ny):
        for i in range(nx):
            dn = Q[i,j,rh]
            dni=1./dn
            vx = Q[i,j,mx]*dni
            vy = Q[i,j,my]*dni
            vz = Q[i,j,mz]*dni
            T = (aindex - 1)*(Q[i,j,en]*dni - 0.5*(vx**2 + vy**2 + vz**2))
            if(T < T_floor):
                T = T_floor
            cs=sqrt(aindex*1.6e-19*T*te0/(mu*1.67e-27))/v0
            if(vmax < cs and T > rh_mult*T_floor):
                vmax = cs
            v = sqrt(vx**2+vy**2+vz**2)
            if(vmax < v and Q[i,j,rh] > rh_mult*rh_floor):
                vmax = v
    return min(1./(vmax*dxi),1./(vmax*dyi))*cfl
```

We now set the initial `ti` and final `tf` times. `n_out` corresponds to the total number of outputs. The rest are variable initializations. Note the most important one, the time `t`.

In [19]:

```
ti = 00.0E-2
tf = 5.e-6/t0
n_out=20
n_steps=0
t_count=0
dt=0.
t = ti
```

We also define a mask `MMask` for parts of the simulation space where we want to preclude motion. That will come in handy later in the course.

In [20]:

```
Q=np.zeros((nx,ny,nQ))
MMask=np.zeros((nx,ny))
Q1=np.copy(Q)
Q2=np.copy(Q)
sources=np.zeros((nx,ny,nQ))
flux_x=np.zeros((nx,ny,nQ))
flux_y=np.zeros((nx,ny,nQ))
```

These initial conditions correspond to an empty domain.

In [21]:

```
Q[:,:,rh]=rh_floor
Q[:,:,en]=0.026/te0*Q[:,:,rh]/(aindex-1)
```

Below we define our boundary conditions. In this example, we do not use the time `t` and focus only on steady-state solutions. The first block defines the default boundary conditions for the whole domain. Note that the conditions for the y-direction are pretty simple. These boundary conditions correspond to an open boundary: everything can go through it.

Boundary conditions are a bit more complex for the x-direction. We are using a cylindrical coordinate system with open boundary conditions on the right but reflecting boundary conditions on the left. That's because nothing can pass through the axis of symmetry $r=0$.

At the end we have the boundary conditions that make up our problem, e.g. material free-falling toward the axis of our coordinate system. Note that we are using a very large density because the author works in high-energy-density plasmas. The initial parameters `L0`, `t0` and `n0` can be changed to accommodate other regimes. But chances are that the CFL number `cfl` may have to be adjusted.

In [22]:

```
def set_bc(Qin,t):
    if ((rc(0)==rc(1)) or (lxd!=0)):
        #left open boundary conditions at x=lxd
        Qin[1,:,:] = Qin[2,:,:]*rc(2)/rc(1)
        Qin[0,:,:] = Qin[1,:,:]*rc(1)/rc(0)
    else:
        #left reflecting boundary conditions at x=lxd when the symmetry axis is located there
        for l in range(ngu):
            for k in range(nQ):
                if(k == mx):
                    Qin[l,:,k] = -Qin[2*ngu-l-1,:,k]
                if(k == mz):
                    Qin[l,:,k] = -Qin[2*ngu-l-1,:,k]
                if(k == my or k == rh or k == en):
                    Qin[l,:,k] = Qin[2*ngu-l-1,:,k]
    #right open boundary conditions at x=lxu
    Qin[nx-ngu,:,:] = Qin[nx-ngu-1,:,:]*rc(nx-ngu-1)/rc(nx-ngu)
    Qin[nx-ngu+1,:,:] = Qin[nx-ngu,:,:]*rc(nx-ngu)/rc(nx-ngu+1)
    #bottom open boundary conditions at y=lyd
    Qin[:,1,:] = Qin[:,2,:]
    Qin[:,0,:] = Qin[:,1,:]
    #top open boundary conditions at y=lyu
    Qin[:,ny-ngu,:] = Qin[:,ny-ngu-1,:]
    Qin[:,ny-ngu+1,:] = Qin[:,ny-ngu,:]
    #our boundary conditions for injecting material radially into our domain
    #note that all the values used are dimensionless
    N=3
    for j in range(int(ny/2)-N, int(ny/2)+N):
        for k in range(ngu):
            Qin[nx-1-k,j,rh]=6e28/n0
            Qin[nx-1-k,j,mx]=-4e2/v0*Qin[nx-1-k,j,rh]
            Qin[nx-1-k,j,mz]=-1e2/v0*Qin[nx-1-k,j,rh]
            Qin[nx-1-k,j,en]=0.026/te0*Qin[nx-1-k,j,rh]/(aindex-1)+0.5*(Qin[nx-1-k,j,mx]**2+Qin[nx-1-k,j,my]**2+Qin[nx-1-k,j,mz]**2)/Qin[nx-1-k,j,rh]
```

Let's define some plotting parameters

In [23]:

```
matplotlib.rcParams.update({'font.size': 22})
xi = np.linspace(lxd*L0, lxu*L0, nx-2*ngu-1)
yi = np.linspace(lyd*L0, lyu*L0, ny-2*ngu-1)
progress_bar = FloatProgress(min=ti, max=tf,description='Progress')
output_bar = FloatProgress(min=0, max=(tf-ti)/n_out,description='Next output')
columns = 2
rows = 3
box=np.array([lxd,lxu,lyd,lyu])*L0/1e-3
```

And here we go!

In [24]:

```
while t<tf:
    dt=get_min_dt(Q) #compute the optimal dt
    set_bc(Q,t) #compute the boundary conditions
    sources=get_sources(Q) #compute the sources
    flux_x,flux_y=get_flux(Q) #and then the fluxes
    Q1=advance_time_level(Q,flux_x,flux_y,sources) #advance the solution
    limit_flow(Q1) #limit flows that are too fast
    set_bc(Q1,t+dt) #let's do it again
    sources=get_sources(Q1)
    flux_x,flux_y=get_flux(Q1)
    Q2=advance_time_level(Q1,flux_x,flux_y,sources)
    limit_flow(Q2)
    Q=0.5*(Q+Q2) #here we do a simple second-order average
    limit_flow(Q)
    n_steps+=1 #time step count increased by 1
    t_count+=dt #and the time by dt
    progress_bar.value=t #we have defined some progress bars because python is slow
    output_bar.value+=dt #we need to monitor our progress and reduce resolution if it takes too long
    if ((t==ti) or (t_count>(tf-ti)/n_out)): #everything below is plotting....
        fig=plt.figure(figsize=(20, 19))
        for i in range(1, rows+1):
            for j in range(1, columns+1):
                k=j+(i-1)*columns
                fig.add_subplot(rows, columns, k)
                if (k==1):
                    data= np.flip(np.transpose(Q[ngu:nx-ngu-1,ngu:ny-ngu-1,rh]*n0),0)
                    im = plt.imshow(data, cmap=plt.cm.nipy_spectral, norm=colors.LogNorm(vmin=data.min(), vmax=data.max()),extent=box)
                    cb = fig.colorbar(im,fraction=0.046, pad=0.04)
                    cb.ax.set_ylabel('$n_i(m^{-3})$', rotation=-90)
                if (k==3):
                    data=Q[ngu:nx-ngu-1,ngu:ny-ngu-1,mx]**2
                    data+=Q[ngu:nx-ngu-1,ngu:ny-ngu-1,my]**2
                    data+=Q[ngu:nx-ngu-1,ngu:ny-ngu-1,mz]**2
                    data/=Q[ngu:nx-ngu-1,ngu:ny-ngu-1,rh]**2
                    data=np.sqrt(data)*L0/t0
                    data= np.flip(np.transpose(data),0)
                    im = plt.imshow(data, extent=box)
                    cb = fig.colorbar(im,fraction=0.046, pad=0.04)
                    cb.ax.set_ylabel('$u(m/s)$', rotation=-90)
                if (k==2):
                    data=Q[ngu:nx-ngu-1,ngu:ny-ngu-1,mx]**2
                    data/=Q[ngu:nx-ngu-1,ngu:ny-ngu-1,rh]**2
                    data=np.sqrt(data)*L0/t0
                    data= np.flip(np.transpose(data),0)
                    im = plt.imshow(data, extent=box)
                    cb = fig.colorbar(im,fraction=0.046, pad=0.04)
                    cb.ax.set_ylabel('$u_r(m/s)$', rotation=-90)
                if (k==4):
                    data=Q[ngu:nx-ngu-1,ngu:ny-ngu-1,my]**2
                    data/=Q[ngu:nx-ngu-1,ngu:ny-ngu-1,rh]**2
                    data=np.sqrt(data)*L0/t0
                    data= np.flip(np.transpose(data),0)
                    im = plt.imshow(data, extent=box)
                    cb = fig.colorbar(im,fraction=0.046, pad=0.04)
                    cb.ax.set_ylabel('$u_z(m/s)$', rotation=-90)
                if (k==6):
                    data=Q[ngu:nx-ngu-1,ngu:ny-ngu-1,mz]**2
                    data/=Q[ngu:nx-ngu-1,ngu:ny-ngu-1,rh]**2
                    data=np.sqrt(data)*L0/t0
                    data= np.flip(np.transpose(data),0)
                    im = plt.imshow(data, extent=box)
                    cb = fig.colorbar(im,fraction=0.046, pad=0.04)
                    cb.ax.set_ylabel(r'$u_\phi(m/s)$', rotation=-90)
                if (k==5):
                    data=Q[ngu:nx-ngu-1,ngu:ny-ngu-1,tp]*te0
                    data= np.flip(np.transpose(data),0)
                    im = plt.imshow(data, cmap=plt.cm.jet, extent=box)
                    cb = fig.colorbar(im,fraction=0.046, pad=0.04)
                    cb.ax.set_ylabel('$T(eV)$', rotation=-90)
                im.set_interpolation('quadric')
                cb.ax.yaxis.set_label_coords(6.5, 0.5)
                plt.gca().axes.get_yaxis().set_visible(False)
                plt.gca().axes.get_xaxis().set_visible(False)
                if (j==1):
                    plt.ylabel('z(mm)', rotation=90)
                    plt.gca().axes.get_yaxis().set_visible(True)
                if (i==rows):
                    plt.xlabel('r(mm)', rotation=0)
                    plt.gca().axes.get_xaxis().set_visible(True)
        plt.tight_layout()
        clear_output()
        output_bar.value=0
        display(progress_bar) # display the bar
        display(output_bar) # display the bar
        t_count=0
        show_inline_matplotlib_plots()
    t=t+dt
#we are now done. Let's turn off the progress bars.
output_bar.close()
del output_bar
progress_bar.close()
del progress_bar
print("DONE")
```
