Once collisions are numerous enough to average particle motion to that of a fluid, we can use a simpler model. This model uses macroscopic quantities like the mass density $\rho$ $(kg/m^3)$, the velocity $\vec v$ $(m/s)$ or the kinetic energy density $\rho v^2$ $(J/m^3)$.
As we saw in the previous chapter, the best way to describe collisions is to use particles which are labeled using invariant quantities of motion (rest mass $m$ and charge $q$) as well as quantities that vary with motion (the position $\vec r$ and the velocity $\vec v$).
If we sit at a fixed position in space $\vec r=(x,y,z)$ we can count all the particles inside an infinitesimally small volume $dV=dxdydz$. Let's say that this number is $dN$. Once all these particles have been counted we can sort them as a function of their velocities. We end up with a distribution of particles across all available velocities at the location $\vec r$. Now, this distribution $f$ is a function of both $\vec r$ and $\vec v$ and we can write it as $f(\vec r,\vec v,t)$. $f$ is simply the number of particles per $m^3$ per $(m/s)^3$.
One can use the momentum $\vec p=m\vec v$ instead and then $f$ is a function of both $\vec r$ and $\vec p$. $$f=f(x,y,z,p_x,p_y,p_z,t)$$ So $f$ has a total of six coordinates plus time. To differentiate between the two different spaces, the distribution function is said to be a function of space $(x,y,z)$ and phase space $(p_x,p_y,p_z)$.
For a closed system with no source or sink of particles,
we have $$df(\vec r,\vec p,t)=0\tag{4.1}$$
Note that Eq. $(4.1)$ is a total derivative since $\vec r$ and $\vec p$ are both functions of time. Evidently, we can use the chain rule to get $$df(\vec r,\vec p,t)=\frac{\partial f}{\partial t}dt+\frac{\partial f}{\partial \vec r}d\vec r+\frac{\partial f}{\partial \vec p}d\vec p$$
Clearly, we can rewrite this equation using the fact that $$\frac{\partial f}{\partial \vec r}d\vec r=\frac{\partial f}{\partial x}dx+\frac{\partial f}{\partial y}dy+\frac{\partial f}{\partial z}dz=\vec \nabla_{\vec r}\,f.d\vec r\tag{4.2}$$ and $$\frac{\partial f}{\partial \vec p}d\vec p=\frac{\partial f}{\partial p_x}dp_x+\frac{\partial f}{\partial p_y}dp_y+\frac{\partial f}{\partial p_z}dp_z=\vec \nabla_{\vec p}\,f.d\vec p\tag{4.3}$$
So we are left with $$\frac{\partial f}{\partial t}dt+\vec \nabla_{\vec r}\,f.d\vec r+\vec \nabla_{\vec p}\,f.d\vec p=0$$ Clearly, we can divide by $dt$ and use the facts that $$\frac{d\vec p}{dt}=\vec F$$ and that, since $\vec p=m\vec v$, $\vec \nabla_{\vec p}=\frac{1}{m}\vec \nabla_{\vec v}$, to get $$\frac{\partial f}{\partial t}+\frac{\vec p}{m}. \vec \nabla_{\vec r}\,f+\frac{\vec F}{m}.\vec \nabla_{\vec v}\,f=0\tag{4.4}$$
Note that, besides using $d\vec p/dt=\vec F$, there is no physics added to Eq. $(4.4)$. It's all math! From now on, we are going to use the velocity phase-space coordinate $\vec v$ instead of the momentum phase-space coordinate $\vec p$, so Eq. $(4.4)$ becomes $$\frac{\partial f}{\partial t}+\vec v. \vec \nabla_{\vec r}\,f+\frac{\vec F}{m}.\vec \nabla_{\vec v}\,f=0\tag{4.4}$$
So when does the physics come into play? During interactions between particles, a.k.a. collisions. These collisions are sources or sinks of particles in phase space. For instance, consider collisions that push particles from a velocity $\vec v_1$ to a velocity $\vec v_2$; then $f(\vec r,\vec v_1,t)$ decreases with time while $f(\vec r,\vec v_2,t)$ increases with time. For now, we encapsulate collisions inside a symbolic function $$\Big(\frac{\partial f}{\partial t}\Big)_{coll}$$ and we get, $$\frac{\partial f}{\partial t}+\vec v. \vec \nabla_{\vec r}\,f+\frac{\vec F}{m}.\vec \nabla_{\vec v}\,f=\Big(\frac{\partial f}{\partial t}\Big)_{coll}$$ Since $\vec F/m=\vec a$ we get the final equation, known as the Boltzmann equation, $$\frac{\partial f}{\partial t}+\vec v. \vec \nabla_{\vec r}\,f+\vec a.\vec \nabla_{\vec v}\,f=\Big(\frac{\partial f}{\partial t}\Big)_{coll}\tag{4.5}.$$
Fig.1 Change of $f$ as a function of time inside the volume $dV$ due to a spatial variation of $f$ along the x axis and a velocity parallel to that gradient.
The figure above shows that $f$ changes in time due to a change in $f$ along the x-direction and a velocity $v_x$ that moves this change inside the volume $dV$.
Vlasov realized that to properly capture Coulomb collisions, one should not use the collision operator $(\frac{\partial f}{\partial t})_{coll}$ but split the charged particle distribution function into two distribution functions: one for the electrons, $f_e$, and one for the ions, $f_i$.
We suppose here that there is only one ion species with ionization $Z_i$. However, if other collisions (e.g. between charged particles and neutrals) are not negligible then they have to be taken into account.
The total system used by Vlasov lets the two distribution functions interact via the electric and magnetic fields that these particles generate. $$\begin{split} \frac{\partial f_e}{\partial t} + \vec v.\vec \nabla_{\vec r}\, f_e - \frac{e}{m_e}\left(\vec {E}+\vec v\times\vec {B}\right).\vec \nabla_{\vec v}\,f_e = 0 \\ \frac{\partial f_i}{\partial t} + \vec v.\vec \nabla_{\vec r}\,f_i +\frac{ Z_i e}{m_i}\left(\vec E+\vec v\times\vec B\right).\vec \nabla_{\vec v}\,f_i = 0 \end{split} \tag{4.6} $$ So how do we find these global fields?
We use Maxwell's equations $$ \begin{split} \vec \nabla\times\vec {B} =\mu_0\vec {j}+\mu_0\varepsilon_0 \frac{\partial\vec {E}}{\partial t} \\ \vec \nabla\times\vec {E} =-\frac{\partial\vec {B}}{\partial t} \\ \vec \nabla\cdot\vec {E} =\frac{\rho_E}{\varepsilon_0} \\ \vec \nabla\cdot\vec {B} =0 \end{split} \tag{4.7}$$ By global, Vlasov meant that these fields are the fields created by all the particles at the location $(x,y,z)$ but averaged over all the particle momenta at this location.
The global fields are defined by the charge density $\rho_E(x,y,z)$ and by the electrical current density $\vec j(x,y,z)$. Note that these quantities are now independent of the particle momenta. This is only possible by using $$\rho_E=e\int(Z_i f_i-f_e)d^3v \\ \vec j=e\int(Z_i f_i \vec v -f_e \vec v)d^3v $$ Basically, we integrated the distribution functions over all velocities at each location $(x,y,z)$ to get local quantities where the velocity information has been encapsulated. This approach reduces the amount of information we have to deal with.
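As an illustration, here is a minimal numerical sketch of these two moments on a one-dimensional velocity grid; the distributions, densities and drift are assumed (hypothetical) values, not quantities taken from the text above.

```python
import numpy as np

# A minimal 1D sketch (assumed, hypothetical distributions): compute the charge
# density and current density as velocity-space moments of f_i and f_e.
e, Z_i = 1.602e-19, 1
v = np.linspace(-5e4, 5e4, 801)          # velocity grid (m/s)
dv = v[1] - v[0]

# assumed Maxwellians: electrons drift at 100 m/s, ions are at rest
n0, vth_e, vth_i = 1.0e18, 2.0e4, 5.0e2
f_e = n0/(np.sqrt(np.pi)*vth_e)*np.exp(-((v - 100.0)/vth_e)**2)
f_i = n0/(np.sqrt(np.pi)*vth_i)*np.exp(-(v/vth_i)**2)

rho_E = e*np.sum(Z_i*f_i - f_e)*dv       # ~0 since both species have density n0
j     = e*np.sum((Z_i*f_i - f_e)*v)*dv   # net current carried by the electron drift
print(rho_E, j)
```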
What do we obtain if we integrate the distribution function $f_i$ or $f_e$ over all velocities?
We obtain the number density in $particles/m^3$ $$n_{i,e}=\int f_{i,e}(x,y,z,v_x,v_y,v_z,t)d^3v\tag{4.8}$$ Note that we used the subscripts $i$ and $e$ in the same equation rather than writing one equation for the electrons and one for the ions.
What happens if we integrate the flux of the distribution function $\vec \Gamma_{f_{i,e}}=f_{i,e}\vec v$ over all velocities?
We obtain the total flux of particles $\vec \Gamma_{i,e}$ $$\vec \Gamma_{i,e}=\int f_{i,e}(x,y,z,v_x,v_y,v_z,t)\vec v d^3v$$ Since we have defined the flux as a number density times a velocity, we can get a flow velocity $\vec u_{i,e}$ by dividing the flux by the particle density $n_{i,e}$ $$\vec u_{i,e}=\frac{\vec \Gamma_{i,e}}{n_{i,e}}$$ which we can rewrite as $$n_{i,e}\vec u_{i,e}=\int f_{i,e}(x,y,z,v_x,v_y,v_z,t)\vec v d^3v\tag{4.9}$$ Note that the flow velocity $\vec u$ is different for ions and electrons, while the velocity $\vec v$ is a phase space coordinate of the distribution function, like $\vec x$ is a spatial coordinate. So if $\vec x$ has no subscripts, then $\vec v$ should not have any either!
What is the physical meaning of $m_{i,e}n_{i,e}v^2$?
This is the kinetic energy density. What happens if we multiply the distribution function by $\vec v^2/2$ and integrate over all velocities?
We get the density of kinetic energy $\epsilon_{i,e}$ (not to be confused with $\varepsilon$, which is the permittivity of materials). $$\epsilon_{i,e}=\int f_{i,e}(x,y,z,v_x,v_y,v_z,t)\frac{\vec v^2}{2} d^3v\tag{4.10}$$ This energy density of the fluid is made up of energy at the macroscopic scale (i.e. flow kinetic energy) and energy at the microscopic scale, i.e. internal energy. In our case, internal energy is simply pressure. $$m_{i,e}\epsilon_{i,e}=\frac{1}{2}m_{i,e}n_{i,e}\vec u_{i,e}^2+\frac{3}{2}p_{i,e}$$ Since $\epsilon_{i,e}$, $n_{i,e}$ and $\vec u_{i,e}$ are known from the previous moment equations, we can define the pressure as $$\frac{3}{2}p_{i,e}=m_{i,e}\epsilon_{i,e}-\frac{1}{2}m_{i,e}n_{i,e}\vec u_{i,e}^2$$ Here $p_{i,e}(\vec r,t)$ is the isotropic pressure. Note that $\frac{3}{2}p_{i,e}$ corresponds to the internal (i.e. microscopic) energy density, while $\frac{1}{2}m_{i,e}n_{i,e}\vec u_{i,e}^2$ corresponds to the flow kinetic energy density.
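To make these definitions concrete, here is a small sketch (with assumed numbers, reduced to one velocity dimension) that samples a drifting Maxwellian on a velocity grid and recovers its density, flow velocity and pressure numerically:

```python
import numpy as np

# Sketch with assumed parameters: moments of a drifting 1D Maxwellian
# (Eqs. 4.8-4.10 reduced to one velocity dimension).
m = 9.11e-31                                  # electron mass (kg)
n_true, u_true, vth = 1.0e18, 2.0e3, 1.0e4    # m^-3, m/s, m/s (assumed)
v = np.linspace(u_true - 6*vth, u_true + 6*vth, 1201)
dv = v[1] - v[0]
f = n_true/(np.sqrt(np.pi)*vth)*np.exp(-((v - u_true)/vth)**2)

n = np.sum(f)*dv                  # zeroth moment: density
u = np.sum(f*v)*dv/n              # first moment: flow velocity
p = m*np.sum(f*(v - u)**2)*dv     # second centered moment: 1D pressure
print(n, u, p)                    # expect n_true, u_true and n*m*vth**2/2
```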
Let's compute numerically the particle density for the distribution function given by $$f(\vec r,\vec v,t)=A_0\exp\Big(-\sqrt{\frac{x^2+y^2+z^2}{L_0^2}}\Big)\exp\Big(-\frac{v_x^2+v_y^2+v_z^2}{v_0^2}\Big)$$ where $A_0=1000\,particles\cdot s^3/m^6$, $L_0=1\mu m$ and $v_0=0.1\,m/s$.
import numpy as np
from scipy.integrate import nquad
from math import exp
def integrand(vx,vy,vz,A0,v0):
return A0*exp(-(vx**2+vy**2+vz**2)/v0**2)
I = nquad(integrand,[[-np.inf,np.inf],[-np.inf,np.inf],[-np.inf,np.inf]], args=(1e3,.1))
print('the value is',I[0],'particles/m^3 with error',I[1])
the value is 5.568327996831708 particles/m^3 with error 2.0286636909816417e-08
So the answer is $n(x,y,z)=5.568327996831708\exp\Big(-\sqrt{\frac{x^2+y^2+z^2}{L_0^2}}\Big)\,particles/m^3$. Can we compute the previous integral analytically?
Yes. The integral of $\exp(-x^2)$ from $-\infty$ to $+\infty$ is $\sqrt{\pi}$.
Since we know that $$\int\int\int_{-\infty}^{+\infty}\exp\Big(-\frac{v_x^2+v_y^2+v_z^2}{v_0^2}\Big)dv_xdv_ydv_z=\Big(\int_{-\infty}^{+\infty}\exp(-\frac{v_x^2}{v_0^2})dv_x\Big)^3$$
we get $$\int\int\int_{-\infty}^{+\infty}\exp\Big(-\frac{v_x^2+v_y^2+v_z^2}{v_0^2}\Big)dv_xdv_ydv_z=(v_0\sqrt{\pi})^3$$
So the answer is $n(x,y,z)=A_0(v_0\sqrt{\pi})^3\exp\Big(-\sqrt{\frac{x^2+y^2+z^2}{L_0^2}}\Big)\,particles/m^3$. The particle velocities follow a Gaussian (Maxwellian) distribution.
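As a quick sanity check, the one-dimensional Gaussian integral used above can also be verified symbolically, assuming sympy is available:

```python
import sympy as sp

v, v0 = sp.symbols('v v0', positive=True)
# 1D Gaussian integral: should return v0*sqrt(pi)
I1 = sp.integrate(sp.exp(-v**2/v0**2), (v, -sp.oo, sp.oo))
print(I1, sp.simplify(I1**3))   # v0*sqrt(pi) and its cube (v0*sqrt(pi))**3
```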
Now that we have integrated $f$, $f\vec v$ and $f\vec v^2$ over all possible velocities, let's redo the same thing with $df$, $df\vec v$ and $df \vec v^2$.
The conservation of mass is obtained by integrating Vlasov's equation $$\int\int\int_{-\infty}^{+\infty}\Big(\frac{\partial f_e}{\partial t} + \vec v.\vec \nabla_{\vec r}\, f_e +\vec a.\vec \nabla_{\vec v}\,f_e\Big)d^3v=\int\int\int_{-\infty}^{+\infty}\Big(\frac{\partial f}{\partial t}\Big)_{e\,coll}d^3v\tag{4.11}$$ Since the left hand side of Eq. $(4.11)$ is integrated over all possible velocities, we get the same result for electrons and ions. However, the right hand side of Eq. $(4.11)$ depends on cross-sections and has to be treated separately for ions and electrons.
When we integrate Eq. $(4.11)$ over all velocities, we find the first conservation equation, called the conservation of mass or continuity equation $$\frac{\partial n_e}{\partial t}+\vec \nabla . \big(n_e\vec u_e\big)=S_e\tag{4.12}$$ $S_e$ is the source of particles in this equation and comes from $$S_e=\int\int\int_{-\infty}^{+\infty}\Big(\frac{\partial f}{\partial t}\Big)_{e\,coll}d^3v$$ Note that, in the context set by Vlasov, these collisions include everything but Coulomb collisions. If $S_e$ is positive then either the electron density $n_e$ at the location $(x,y,z)$ is increasing or the flux of particles leaving this location is non-zero and points away from this volume. This is the meaning of the divergence operator $\vec\nabla.\bullet$
Physically, what can be considered a source of electrons at a given location?
Ionizations.
Note that injecting electrons at this location does not count because the injected electrons represent an electron flux, and this would be accounted for in $\vec \nabla . \big(n_e\vec u_e\big)$. When $S_e$ is negative then there is a sink of electrons. Physically, what can be considered a sink of electrons at a given location?
Recombinations.
It is usual to break $S_e$ down into two positive quantities, $G_e$, the term corresponding to electron sources or gains, and $L_e$, the term corresponding to electron sinks or losses. So, we get $$\frac{\partial n_e}{\partial t}+\vec \nabla . \big(n_e\vec u_e\big)=G_e-L_e.\tag{4.13}$$
One way to understand conservation of mass (with no subscripts) $\frac{\partial n}{\partial t}+\vec \nabla . \big(n\vec u\big)=0$ is to look at Fig. 1 and replace $f$ with $n$. If $n$ changes along one spatial direction and there is also a velocity along this direction then $n$ changes with time. In other words, $n\vec u$ represents a flux of particles. If this flux of particles is forced inside a volume, then the number of particles inside that volume increases with time, i.e. $\frac{\partial n}{\partial t}$ is positive.
Clearly, we can repeat the whole process for ions $$\frac{\partial n_i}{\partial t}+\vec \nabla . \big(n_i\vec u_i\big)=G_i-L_i.\tag{4.14}$$
And we can also apply this method when particles are neutrals (i.e. gas) and we get $$\frac{\partial n_g}{\partial t}+\vec \nabla . \big(n_g\vec u_g\big)=G_g-L_g\tag{4.15}$$
If we look at ionization only, we can write $$G_e=\nu_{iz}n_e$$ or, in terms of the ionization rate coefficient, $$G_e=n_en_g\langle\sigma v\rangle_{ionization}$$ Note that $G_e=G_i=L_g$, together with $G_g=0$ (no recombination). This effectively couples all three equations together.
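The coupling can be illustrated with a zero-dimensional sketch (no transport); the rate coefficient, time step and initial densities below are assumed values for illustration only:

```python
# 0D sketch of the coupled source terms (assumed values, ionization only).
k_iz = 1.0e-14                             # <sigma*v>_ionization in m^3/s (assumed)
n_e, n_i, n_g = 1.0e16, 1.0e16, 1.0e19     # initial densities in m^-3 (assumed)
dt, nsteps = 1.0e-9, 1000

for _ in range(nsteps):
    G_e = k_iz*n_e*n_g                     # electron gain = ion gain = neutral loss
    n_e += dt*G_e
    n_i += dt*G_e
    n_g -= dt*G_e

print(n_e, n_i, n_g, n_e + n_g)            # n_e + n_g stays constant: the equations are coupled
```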
If ions can be ionized multiple times, then we will have a conservation equation for each charged ion species (e.g. singly ionized, doubly ionized, ...). But all the conservation equations are still coupled via their source terms.
We now look at what happens when Vlasov equations for ions and electrons are multiplied by $m_{i,e}\vec v$. Integrating over all velocities we get conservation of momentum. $$\frac{\partial n_e\vec u_e}{\partial t}+\vec \nabla \cdot (n_e\vec u_e \vec u_e)=\frac{1}{m_e}\big(-en_e[\vec E +\vec u_e \times \vec B]-\vec \nabla p_e+\vec F|_{e\,coll}\big)$$ and $$\frac{\partial n_i\vec u_i}{\partial t}+\vec \nabla \cdot (n_i\vec u_i \vec u_i)=\frac{1}{m_i}\big(Z_ien_i[\vec E +\vec u_i \times \vec B]-\vec \nabla p_i+\vec F|_{i\,coll}\big)$$
Here by $\vec u\vec v$ we mean the outer (dyadic) product, not the dot product, which is a matrix $$\begin{align}\vec u\vec v = \begin{bmatrix}u_x \\ u_y \\ u_z \end{bmatrix} \begin{bmatrix}v_x & v_y & v_z\end{bmatrix} = \begin{bmatrix}u_xv_x & u_xv_y & u_xv_z \\ u_yv_x & u_yv_y & u_yv_z \\ u_zv_x & u_zv_y & u_zv_z \end{bmatrix}.\end{align}$$ Note that $\vec \nabla \cdot \big(\vec u \vec v\big )$ is now a vector!
While the momentum equation seems more complex than mass conservation, it is in fact the same equation but where $n$ has been replaced by $n\vec u$. So one can see that $mn\vec u \vec u$ is the flux of momentum $mn\vec u$ traversing a surface with velocity $\vec u$.
The vector $\vec F|_{coll}$ represents the force density due to collisions with the other species (ions or neutrals). We will ignore such effects here. We also drop in this treatment secondary effects, such as viscosity. In this context, the pressure is given by the moment integral and has the following form. $$p_{i,e}= \frac{1}{3}m_{i,e}n_{i,e}\Big<\big(\vec v-\vec u_{i,e}\big)^2\Big>_{\vec v}$$ where $\big<\bullet\big>_{\vec v}$ is the average operator over all velocities.
If we now suppose that the electron and ion fluids behave like ideal gases, then this average yields, for both pressures, $p_e=n_eeT_e$ and $p_i=Z_in_ieT_i$.
In summary, we get conservation laws for momentum $$\frac{\partial n_e\vec u_e}{\partial t}+\vec \nabla \cdot (n_e\vec u_e \vec u_e)=\frac{1}{m_e}\big(-en_e\vec E +\vec j_e \times \vec B-\vec \nabla p_e\big)\tag{4.16}$$ and $$\frac{\partial n_i\vec u_i}{\partial t}+\vec \nabla \cdot (n_i\vec u_i \vec u_i)=\frac{1}{m_i}\big(Z_ien_i\vec E +\vec j_i \times \vec B-\vec \nabla p_i\big)\tag{4.17}$$ after ignoring collisions and using the fact that $qn_q\vec u_q=\vec j_q$, the current density for the particle with charge q.
We now look at what happens when the Vlasov equations for ions and electrons are multiplied by $m_{i,e}v^2$ and then integrated over all velocities. This gives conservation of energy for both species. $$\frac{\partial \epsilon_e}{\partial t}+\vec \nabla \cdot (\epsilon_e\vec u_e)=\frac{1}{m_e}\big(-\vec \nabla \cdot \vec Q_e+\vec j_e.\vec E\big)\tag{4.18}$$ Clearly, we have here a similar structure in the equation, namely a change in time of a certain quantity $\sigma$ is related to its flux $\sigma \vec u$ via the divergence operator $\vec \nabla$. And as before we have introduced a new quantity $\vec Q_e$, the flux of energy at the microscopic scale, i.e. $\frac{1}{2}m_en_e<u_e^2\vec u_e>_e$, also called the heat flux. The energy density source term $\vec j_e.\vec E$ is the work of the electric field on the moving electrons.
We see here that we were not able to define $\vec Q_e$ from any previously defined quantities. So we will need to come up with a model for this effect, purely caused by collisions. A popular approximation for the heat flux is Fourier's law $$\vec Q_e=-\kappa_e\vec \nabla T_e\tag{4.19}$$ where $\kappa_e$ is the electron heat conductivity.
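As a simple illustration of this closure, the sketch below evaluates the heat flux and its divergence on a one-dimensional grid; the conductivity and temperature profile are assumed values, not quantities from the text:

```python
import numpy as np

# Sketch of the closure (4.19) in 1D, with an assumed kappa_e and T_e profile.
kappa_e = 5.0                                          # heat conductivity (assumed constant)
x = np.linspace(0.0, 1.0e-2, 101)                      # 1 cm domain
T_e = 10.0 + 5.0*np.exp(-((x - 5.0e-3)/1.0e-3)**2)     # assumed temperature profile

Q_e = -kappa_e*np.gradient(T_e, x)   # heat flows down the temperature gradient
divQ = np.gradient(Q_e, x)           # the term entering the energy equation
print(Q_e.max(), divQ.max())
```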
Of course, we get a similar equation for the ions $$\frac{\partial \epsilon_i}{\partial t}+\vec \nabla \cdot (\epsilon_i\vec u_i)=\frac{1}{m_i}\big(-\vec \nabla \cdot \vec Q_i+\vec j_i.\vec E\big)\tag{4.19}$$
For any fluid, ions or electrons, we have shown that an explicit change in time of any moment ($n$, $n\vec u$ or $\epsilon$) can only be caused by an explicit change of flux in space ($n\vec u$, $n\vec u \vec u$ or $\epsilon \vec u$) or by explicit sources and sinks. Sources and sinks come from collisions or from the electromagnetic fields (according to Vlasov's theory).
The physical meaning of the moments on the left hand side of every equation is tightly bound to the integral operator, but the moments and their fluxes can always be defined as long as an integral operator can be defined. This is not true for most terms on the right hand side. If they come from collision operators, the collision operators have to be defined physically (a non-trivial task when dealing with plasmas) and some quantities have to be purely and simply defined: the closure quantities (like $p$ or $\vec Q$).
$$\begin{align} &\frac{\partial n_e}{\partial t}+\vec \nabla . \big(n_e\vec u_e\big)=0\\ &\frac{\partial n_i}{\partial t}+\vec \nabla . \big(n_i\vec u_i\big)=0\\ &\frac{\partial n_e\vec u_e}{\partial t}+\vec \nabla \cdot (n_e\vec u_e \vec u_e)=\frac{1}{m_e}\big(-en_e\vec E +\vec j_e \times \vec B-\vec \nabla p_e\big)\\ &\frac{\partial n_i\vec u_i}{\partial t}+\vec \nabla \cdot (n_i\vec u_i \vec u_i)=\frac{1}{m_i}\big(Z_ien_i\vec E +\vec j_i \times \vec B-\vec \nabla p_i\big)\\ &\frac{\partial \epsilon_e}{\partial t}+\vec \nabla \cdot (\epsilon_e\vec u_e)=\frac{1}{m_e}\big(-\vec \nabla \cdot Q_e+\vec j_e.\vec E\big)\\ &\frac{\partial \epsilon_i}{\partial t}+\vec \nabla \cdot (\epsilon_i\vec u_i)=\frac{1}{m_i}\big(-\vec \nabla \cdot Q_i+\vec j_i.\vec E\big)\\ &\vec \nabla\times\vec {B} =\mu_0\vec {j}+\mu_0\varepsilon_0 \frac{\partial\vec {E}}{\partial t} \\ &\vec \nabla\times\vec {E} =-\frac{\partial\vec {B}}{\partial t} \\ &\vec \nabla\cdot\vec {E} =\frac{\rho_E}{\varepsilon_0} \\ &\vec \nabla\cdot\vec {B} =0\\ \end{align}$$

The major difficulty in going from the two-fluid equations to equations for a single fluid is the disparity of both temporal and spatial scales between the ions and the electrons. The two species carry the same magnitude of charge and share similar momentum sources and sinks, but because the electrons are so much lighter, they react more quickly, roughly 2000 times more quickly if we compare protons and electrons. As a result, the two-fluid equations have to be solved on the electron time scale, requiring a numerical treatment that calls for very small time steps.
Because the bulk of the mass lies with the ion fluid, macroscopic effects are most visible at the ion scale. Yet the equations have to be advanced very slowly to account for the electron physics and its interaction with the ions.
To simplify the problem, we assume macroscopic ambipolarity ($\vec u_i\approx\vec u_e$) and quasi-neutrality ($n_e\approx Z_in_i$). The symbol $\approx$ (i.e. almost equal) allows for the existence of an electrical current density $\vec j=Z_ien_i\vec u_i-en_e\vec u_e$, which can be seen as a difference in flow velocity between ions and electrons. Perfect neutrality and ambipolarity force the electrical current to be $\vec j=\vec 0$.
Because of these two assumptions, we can redefine our electron distribution function so it takes into account many time cycles on the electron scale inside volumes compatible with the ion scale length. By doing so, we have virtually not changed the left hand side of Eqs. $(4.13)$, $(4.16)$ and $(4.18)$. We have put the mathematical burden on the right hand side, i.e. on the closure quantities (e.g. $p$ or $\vec Q$) and the collision operators. Assumptions will be made on these quantities later. In this case, we can rewrite all our moments as $$\begin{align} &n=\sum_{i,e}n_{i,e}\\ &\rho=\sum_{i,e}n_{i,e}m_{i,e}\\ &m=\sum_{i,e}m_{i,e}\\ &\vec u=\frac{1}{\rho}\sum_{i,e}n_{i,e}m_{i,e}\vec u_{i,e}\\ &\vec j=Z_ien_i\vec u_i-en_e\vec u_e\end{align}$$
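A small sketch (with assumed numbers) of how these one-fluid quantities are built from the two species:

```python
# Sketch with assumed values: build the one-fluid quantities from the two species.
e = 1.602e-19
m_i, m_e = 1.67e-27, 9.11e-31
Z_i, n_i = 1, 1.0e19
n_e = Z_i*n_i                          # quasi-neutrality
u_i, u_e = 1.0e3, 1.2e3                # assumed flow velocities (m/s)

rho = n_i*m_i + n_e*m_e                # mass density, dominated by the ions
u = (n_i*m_i*u_i + n_e*m_e*u_e)/rho    # mass-weighted flow velocity ~ u_i
j = Z_i*e*n_i*u_i - e*n_e*u_e          # current from the small velocity difference
print(rho, u, j)
```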
One way to keep as much electron physics as possible without breaking the bank is to look at the impact of the electrons on the electric and magnetic fields felt by the ions. This is called Ohm's law.
Ohm's law (or the generalized Ohm's law as we are going to study it) is simply that. It is a law that averages over electron motion at the microscopic (fast) scale to give the effective electric field felt by the ion at the macroscopic (ion) scale.
One might think that quasi-neutrality means that there is no electric field inside a plasma. In fact, it states that there is no source of electrostatic field inside a plasma, i.e. $\rho_E=Z_ien_i-en_e=0$. However, $\vec \nabla\cdot\vec E=0$ does not mean that there is no electric field. It just means there exists a vector $\vec A_E$ such that $\vec E=\vec \nabla\times\vec A_E.$
The simplest case comes from electron collisions with the ions inside materials. When averaged over many collisions, the force experienced collectively by the electrons is a kind of drag (fluid view). This is called resistivity in physics. If there is no external electric field, thermal motion averages out to zero. If there is an external electric field, then there is a net motion in the direction of the field. But this flow is not proportional to the average force $<-e\vec E>$. This is because the ions deflect the electrons faster than the electric field can entrain them. As a result, the flow is reduced and depends on the electric field and the type of material via the factor of proportionality $\eta$, called resistivity $$\vec E=\eta \vec j$$
Ohm was inspired by Fourier's work on heat conduction (a thermodynamic or statistical approach) and used experimental measurements to deduce the linear correspondence between field and current. While $\vec j$ is a consequence of the electric field in this framework, the reverse is also true. A flowing current (generated by gravity for instance) can generate an electric field.
More generally, electric fields can be generated by different phenomena tightly connected to electron physics. We will look at the following in the next chapter:
While the meaning of the electron closure quantities and collisional operators can be recast on the ion time scale, their mathematical definitions become very restrictive. We lose a sizable portion of electron physics by doing so. If the physical system indeed allows it, then the equations will closely match the physical behavior of this system. If not, then it will be a poor approximation. Using the one-fluid quantities as defined above, we get $$\begin{align} &\frac{\partial n}{\partial t}+\vec \nabla . \big(n\vec u\big)=0\tag{4.20}\\ &\frac{\partial n\vec u}{\partial t}+\vec \nabla . \big(n\vec u\vec u\big)=\frac{1}{m}(\vec j\times\vec B-\vec\nabla p)\tag{4.21}\\ &\frac{\partial \epsilon}{\partial t}+\vec \nabla \cdot (\epsilon\vec u)=\frac{1}{m}\big(-\vec \nabla \cdot \vec Q+\eta j^2\big)\tag{4.22} \end{align}$$
Note that the electric field has been completely eliminated from these equations by using Ohm's law ($\vec E=\eta \vec j$). Further, when ignoring electromagnetic radiation ($\varepsilon_0\mu_0\frac{\partial \vec E}{\partial t}\approx \vec 0$), the displacement current can be dropped from Maxwell's equations $(4.7)$; Ampère's law then reduces to $\vec \nabla\times\vec {B} =\mu_0\vec {j}$.
Before solving flows where electromagnetism plays an important role, let's first look at what happens when flows are solely driven by kinetic energy, i.e. $\vec B=\vec 0$ and $\vec E=\vec 0$. In this case, we need to solve Eqs. $(4.20)$ to $(4.22)$ with $\vec j\times \vec B=\vec 0$. We will also suppose that all the heat is transferred via convection rather than conduction, so we also ignore $\vec Q$.
To do this we recast our conservation laws using a more general framework. Regardless of their sources (even if fields are involved), Eqs. $(4.20)$ to $(4.22)$ can be written as $$\frac{\partial U}{\partial t}+\vec \nabla \cdot \vec F(U)=S_U\tag{4.25}$$ Here $U$ can be a scalar or a vector. If we discretize our domain into an ensemble of adjacent domains $\Omega_j$ with boundary $\partial\Omega_j$ we can rewrite Eq. $(4.25)$ as $$\iiint_{\Omega_j}\frac{\partial U}{\partial t}dV+\iiint_{\Omega_j}\vec \nabla \cdot \vec F(U)dV=\iiint_{\Omega_j}S_U\,dV$$
Using Gauss's theorem $$\iiint_V\left(\mathbf{\nabla}\cdot\mathbf{F}\right)\,dV=\iint_S\vec F\cdot\vec n\,dS $$ where $\vec n$ is the normal to $\partial\Omega_j$, we get $$\iiint_{\Omega_j}\frac{\partial U}{\partial t}dV+\iint_{\partial\Omega_j}\vec F(U)\cdot\vec n\,dS=\iiint_{\Omega_j}S_U\,dV$$
By using the theorem we turned the volume integral of the spatial derivative into a surface integral of the function itself. To first order we also suppose that $U$ and $S_U$ are constant inside the volume (this is actually why we discretized our equations) so we get $$V_j\frac{\partial U_j}{\partial t}+\iint_{\partial\Omega_j}\vec F(U)\cdot\vec n\,dS=V_jS_{U_j}\tag{4.26}$$ where $V_j$ is the volume of $\Omega_j$ and $U_j$, $S_{U_j}$ are the constant values inside this volume.
In our case, $U$ is $n$, $n\vec u$ or $\epsilon$, and $\vec F(U)$ is $n\vec u$, $n\vec u\vec u$ or $\epsilon\vec u$, respectively.
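Before diving into the full two-dimensional code, here is a minimal one-dimensional finite-volume sketch of Eq. $(4.25)$ for the continuity equation, using a constant advection velocity and a simple upwind flux. This is not the scheme used below; the grid size, velocity and initial profile are assumed values chosen only to illustrate the conservative update.

```python
import numpy as np

# 1D finite-volume update of dn/dt + d(n*u0)/dx = 0 with a constant, positive u0.
nx, u0, cfl = 200, 1.0, 0.4
dx = 1.0/nx
dt = cfl*dx/u0
x = (np.arange(nx) + 0.5)*dx
n = np.exp(-((x - 0.3)/0.05)**2)           # initial density bump

for _ in range(100):
    F = u0*n                               # flux at cell centers
    F_face = F                             # upwind face flux (u0 > 0), periodic domain
    n = n - dt/dx*(F_face - np.roll(F_face, 1))

print(n.sum()*dx)                          # total particle number is conserved
```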
We are now ready to design a code to handle hydrodynamic flows. Ignoring electric and magnetic fields allows us not only to simplify the code but also to run it faster, as we discuss later.
We also use numba to improve code performance (it is python after all) and some python widgets to monitor the time remaining in the computation.
Keep in mind that such computations are very intensive, even on large computers. Using python with neither an accelerator (i.e. a graphics card) nor parallelization makes any computation inherently slow. So be wise when you choose your resolution.
The code presented here is two-dimensional. It can be used in slab mode, i.e. $z$ is the direction of symmetry, or in cylindrical mode, i.e. $\phi$ is the direction of symmetry. In other words, in slab geometry any quantity is such that $$U(x,y,z)=U(x,y).$$ In cylindrical geometry we have $$U(r,\phi,z)=U(r,z).$$
import numpy as np
import math
from math import sqrt
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from numba import jit
from ipywidgets.widgets.interaction import show_inline_matplotlib_plots
from IPython.display import clear_output
from ipywidgets import FloatProgress
from IPython.display import display
For practical reasons we will define some constants that will be used to remember which component in the array corresponds to which quantity, e.g. the temperature at the location `i,j` can be accessed using `tp` as `Q[i,j,tp]`. `rh` is the number density, `mx` is the momentum density in the x-direction, `my` and `mz` in the y and z directions. `en` is for the energy density, `tp` the temperature. `nQ` is the total number of variables, from `rh` to `tp`.
rh=0
mx=1
my=2
mz=3
en=4
tp=5
nQ=6
`mu` is the atomic mass of the fluid, in this case aluminum. `aindex` is the adiabatic index.
mu=27.
aindex=1.1
Here we define our dimensional parameters. `L0` is the geometrical scale length, `t0` the time scale, `n0` the density scale, `v0` the velocity scale, `p0` the pressure scale, and `te0` the temperature scale. We also define a few floor quantities below which the density, temperature and pressure are not allowed to go.
L0=1.0e-3
t0=100.0e-9
n0=6.0e28
v0=L0/t0
p0=mu*1.67e-27*n0*v0**2
te0=p0/n0/1.6e-19
rh_floor = 1.0E-8
T_floor = 0.0026/te0
rh_mult = 1.1
P_floor = rh_floor*T_floor
We now define our geometrical functions. They are used quite often so we will compile them inside the code to accelerate computations. To avoid writing two codes, one for slab symmetry and one for cylindrical symmetry, we wrote this code for slab symmetry and in Cartesian coordinates, i.e. $(x,y,z)$. When using the code for a problem with cylindrical symmetry then $$\begin{align} &x\rightarrow r\\ &y\rightarrow z\\ &z\rightarrow \phi\end{align}$$
The `rc` function computes the radius in cylindrical coordinates, which appears in source terms and in the conservation equations. Clearly, this is simply $x$ as mentioned above. That's why `rc` returns `xc`. To turn the code into slab symmetry, just have the function `rc` return `1` (a sketch of this variant is given after the next cell).
@jit(nopython=True)
def xc(i):
return lxd + (i-ngu+1)/dxi
@jit(nopython=True)
def rc(i):
return xc(i)
@jit(nopython=True)
def yc(j):
return lyd + (j-ngu+1)/dyi
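For reference, the slab-symmetry variant mentioned above would simply be (a sketch, replacing the cylindrical definition):

```python
# Slab-symmetry variant: the geometrical radius factor is simply 1 everywhere.
@jit(nopython=True)
def rc(i):
    return 1.0
```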
We now add radius and angle functions for axi-symmetric distributions in slab geometries, not to be confused with `rc` above. Note that the letter `c`, for center, is now gone.
@jit(nopython=True)
def r(i,j):
return math.sqrt((xc(i)-(lxd+lxu)/2)**2 + (yc(j)-(lyd+lyu)/2)**2)
@jit(nopython=True)
def rz(i,j):
return sqrt(xc(i)**2+yc(j)**2)
@jit(nopython=True)
def theta(i,j):
return math.atan2(yc(j)-(lyd+lyu)/2,xc(i)-(lxd+lxu)/2)
We now compute the sources for all our conservation equations.
def get_sources(Qin):
sourcesin=np.zeros((nx,ny,nQ))
for j in range(ngu, ny-ngu):
for i in range(ngu, nx-ngu):
rbp = 0.5*(rc(i+1) + rc(i))
rbm = 0.5*(rc(i) + rc(i-1))
dn = Qin[i,j,rh]
dni=1./dn
vx = Qin[i,j,mx]*dni
vy = Qin[i,j,my]*dni
vz = Qin[i,j,mz]*dni
T = (aindex - 1)*(Qin[i,j,en]*dni - 0.5*(vx**2 + vy**2 + vz**2))
if(T < T_floor):
T = T_floor
Qin[i,j,tp]=T
sourcesin[i,j,rh] = 0
P = dn*T
sourcesin[i,j,mx] = (Qin[i,j,mz]*vz + P)
sourcesin[i,j,mz] = - Qin[i,j,mz]*vx
sourcesin[i,j,en] = 0
return sourcesin
Here we do the time advance of the conservation equations. The fluxes are used here as seen in Eq. $(4.26)$. The fluxes themselves are computed later.
@jit(nopython=True)
def advance_time_level(Qin,flux_x,flux_y,source):
Qout=np.zeros((nx,ny,nQ))
dxt = dxi*dt
dyt = dyi*dt
for j in range(ngu, ny-ngu):
for i in range(ngu, nx-ngu):
rbp = 0.5*(rc(i+1) + rc(i))
rbm = 0.5*(rc(i) + rc(i-1))
rci = 1./rc(i)
Qout[i,j,rh] = Qin[i,j,rh]*rc(i) - dxt*(flux_x[i,j,rh]*rbp-flux_x[i-1,j,rh]*rbm) - dyt*(flux_y[i,j,rh]-flux_y[i,j-1,rh])*rc(i)
Qout[i,j,mx] = Qin[i,j,mx]*rc(i) - dxt*(flux_x[i,j,mx]*rbp-flux_x[i-1,j,mx]*rbm) - dyt*(flux_y[i,j,mx]-flux_y[i,j-1,mx])*rc(i) + dt*source[i,j,mx]
Qout[i,j,my] = Qin[i,j,my]*rc(i) - dxt*(flux_x[i,j,my]*rbp-flux_x[i-1,j,my]*rbm) - dyt*(flux_y[i,j,my]-flux_y[i,j-1,my])*rc(i) + dt*source[i,j,my]
Qout[i,j,mz] = Qin[i,j,mz]*rc(i) - dxt*(flux_x[i,j,mz]*rbp-flux_x[i-1,j,mz]*rbm) - dyt*(flux_y[i,j,mz]-flux_y[i,j-1,mz])*rc(i) + dt*source[i,j,mz]
Qout[i,j,en] = Qin[i,j,en]*rc(i) - dxt*(flux_x[i,j,en]*rbp-flux_x[i-1,j,en]*rbm) - dyt*(flux_y[i,j,en]-flux_y[i,j-1,en])*rc(i) + dt*source[i,j,en]
Qout[i,j,rh:en+1] = Qout[i,j,rh:en+1]*rci
return Qout
We now write the function that computes the flux `flx` and the freezing speeds `cfx` along the x-direction. These speeds correspond to the largest speed found in the hyperbolic equation at the given location $(x,y)$. Using freezing speeds avoids having to use a Riemann solver.
@jit(nopython=True)
def calc_flux_x(Qx):
cfx=np.zeros((nx,nQ))
flx=np.zeros((nx,nQ))
for i in range(0, nx):
dn = Qx[i,rh]
dni = 1./dn
vx = Qx[i,mx]*dni
vy = Qx[i,my]*dni
vz = Qx[i,mz]*dni
P = (aindex - 1)*(Qx[i,en] - 0.5*dn*(vx**2 + vy**2 + vz**2))
if(P < P_floor):
P = P_floor
flx[i,rh] = Qx[i,mx]
flx[i,mx] = Qx[i,mx]*vx + P
flx[i,my] = Qx[i,my]*vx
flx[i,mz] = Qx[i,mz]*vx
flx[i,en] = (Qx[i,en] + P)*vx
asqr = sqrt(aindex*P*dni)
vf1 = sqrt(vx**2 + aindex*P*dni)
cfx[i,rh] = vf1
cfx[i,mx] = vf1
cfx[i,my] = vf1
cfx[i,mz] = vf1
cfx[i,en] = vf1
return cfx,flx
We now write the function that computes the flux `fly` and the freezing speeds `cfy` along the y-direction.
@jit(nopython=True)
def calc_flux_y(Qy):
cfy=np.zeros((ny,nQ))
fly=np.zeros((ny,nQ))
for j in range(0, ny):
dn = Qy[j,rh]
dni = 1./dn
vx = Qy[j,mx]*dni
vy = Qy[j,my]*dni
vz = Qy[j,mz]*dni
P = (aindex - 1)*(Qy[j,en] - 0.5*dn*(vx**2 + vy**2 + vz**2))
if(P < P_floor):
P = P_floor
fly[j,rh] = Qy[j,my]
fly[j,mx] = Qy[j,mx]*vy
fly[j,my] = Qy[j,my]*vy + P
fly[j,mz] = Qy[j,mz]*vy
fly[j,en] = (Qy[j,en] + P)*vy
asqr = sqrt(aindex*P*dni)
vf1 = sqrt(vy**2 + aindex*P*dni)
cfy[j,rh] = vf1
cfy[j,mx] = vf1
cfy[j,my] = vf1
cfy[j,mz] = vf1
cfy[j,en] = vf1
return cfy,fly
We now smooth out the different fluxes using a total variation diminishing (TVD) scheme. Basically, this scheme applies numerical viscosity to smooth out sharp transitions caused by shocks or sharp interfaces. Typically, sharp transitions cause numerical oscillations (the Gibbs phenomenon). Note that this scheme is conservative, in the sense that it does conserve all global quantities $n, n\vec u,\epsilon$.
def tvd2(Qin,n,ff,cfr):
sl = 1 #use 0.75 for first order
wr = cfr*Qin + ff
wl = cfr*Qin - ff
fr = wr
fl = np.roll(wl,-1,axis=0)
dfrp = np.roll(fr,-1,axis=0) - fr
dfrm = fr - np.roll(fr,+1,axis=0)
dflp = fl - np.roll(fl,-1,axis=0)
dflm = np.roll(fl,+1,axis=0) - fl
dfr=np.zeros((n,nQ))
dfl=np.zeros((n,nQ))
flux2 = tvd2_cycle(nQ, n, dfrp, dfrm, dflp, dflm, dfr, dfl, fr, fl, sl)
return flux2
@jit(nopython=True)
def tvd2_cycle(nQ, n, dfrp, dfrm, dflp, dflm, dfr, dfl, fr, fl, sl):
flux2=np.zeros((n,nQ))
for l in range(nQ):
for i in range(n):
if(dfrp[i,l]*dfrm[i,l] > 0) :
dfr[i,l] = dfrp[i,l]*dfrm[i,l]/(dfrp[i,l] + dfrm[i,l])
else:
dfr[i,l] = 0
if(dflp[i,l]*dflm[i,l] > 0) :
dfl[i,l] = dflp[i,l]*dflm[i,l]/(dflp[i,l] + dflm[i,l])
else:
dfl[i,l] = 0
flux2 = 0.5*(fr - fl + sl*(dfr - dfl))
return flux2
We now compute all the fluxes over the whole domain.
def get_flux(Qin):
flux_x=np.zeros((nx,ny,nQ))
flux_y=np.zeros((nx,ny,nQ))
fx=np.zeros((nx,nQ))
cfx=np.zeros((nx,nQ))
ffx=np.zeros((nx,nQ))
fy=np.zeros((ny,nQ))
cfy=np.zeros((ny,nQ))
ffy=np.zeros((ny,nQ))
for j in range(0, ny):
cfx,ffx=calc_flux_x(Qin[:,j,:])
flux_x[:,j,:]=tvd2(Qin[:,j,:],nx,ffx,cfx)
for i in range(0, nx):
cfy,ffy=calc_flux_y( Qin[i,:,:])
flux_y[i,:,:]=tvd2(Qin[i,:,:],ny,ffy,cfy)
return flux_x,flux_y
We now limit flows to avoid minor instabilities caused by round-off errors.
def limit_flow(Qin):
en_floor = rh_floor*T_floor/(aindex-1)
for j in range(ngu-1, ny-ngu+1):
for i in range(ngu-1, nx-ngu+1):
if(Qin[i,j,rh] <= rh_floor):
Qin[i,j,rh] = rh_floor
Qin[i,j,mx] = 0.0
Qin[i,j,my] = 0.0
Qin[i,j,mz] = 0.0
Qin[i,j,en] = en_floor
if(MMask[i,j]==1):
Qin[i,j,mx] = 0.0
Qin[i,j,my] = 0.0
Qin[i,j,mz] = 0.0
Qin[i,j,en] = en_floor
Everything you find below should be the only lines of code you need to change.
We now initialize the computational domain. `ngu` corresponds to the number of guard points at the boundary. Leave it alone. `nx` and `ny` are the number of grid points along the x- and y-directions. `lxu` and `lxd` are the upper and lower corners of the mesh along the x-direction. `lyu` and `lyd` are the corners for the y-direction. `dxi` and `dyi` are the inverses of the grid resolutions `dx` and `dy`. Since we use both `1/dx` and `1/dy` a lot and it is more expensive to compute an inverse, we compute them once and for all. Since python is interactive, we can treat them as constants here.
Every time you want to start a new computation you need to recompile what's below.
ngu=2
nx=45
ny=31
lxu=15e-3/L0 #note that 15e-3 is in m while 15e-3/L0 is dimensionless
lxd=0
lyd=0
lyu = lyd + (ny-2*ngu)*(lxu-lxd)/(nx-2*ngu)
lyd=-lyu/2. #we do this to center the grid on 0
lyu=lyu/2.
dxi = (nx-2*ngu)/(lxu-lxd)
dyi = (ny-2*ngu)/(lyu-lyd)
As discussed earlier, it is important to keep the time step `dt` small enough to keep the code stable. For that we use a combination of flow speed and sound speed. The stability criterion is the Courant–Friedrichs–Lewy (CFL) condition: the fastest speed $U$ in the computation (the maximum of the flow speed and sound speed) defines the largest time step $\Delta t$ that can be taken as a function of the resolution $\Delta x$,
$$ \Delta t=C_{CFL}\frac{\Delta x}{U}$$
with $C_{CFL}<1$. Here we take it of order 0.1 for the second order time advance (0.05 in the code below). Note that higher resolution (i.e. smaller $\Delta x$) requires a smaller $\Delta t$. So every time the resolution is increased by a factor of 2 in both x and y, the computational time is $2^3=8$ times longer.
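For instance, with assumed numbers, the CFL constraint gives:

```python
# Quick illustration of the CFL constraint (assumed numbers).
U = 3.0e3          # fastest speed in the box (m/s), assumed
dx = 1.0e-4        # grid resolution (m), assumed
C_cfl = 0.1        # CFL number
dt_max = C_cfl*dx/U
print(dt_max)      # ~3.3e-9 s: the largest stable time step
```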
To find the right value for your problem, start with a large number, e.g. 0.5, and decrease it until the code runs without error. Then decrease it even more until the final solution of the problem looks the same. You will then have the optimum value for `dt`. If `cfl` is too small, there are a lot of time steps and sharp features in the solution will start to be washed out. So it is important to find the right `cfl` number. But once it is found, similar problems can be solved using the same condition.
def get_min_dt(Q):
cfl=0.05 #this is where the CFL condition is changed. Keep the rest identical
vmax =sqrt(aindex*1.6e-19*T_floor*te0/(mu*1.6e-27))/v0
for j in range(ny):
for i in range(nx):
dn = Q[i,j,rh]
dni=1./dn
vx = Q[i,j,mx]*dni
vy = Q[i,j,my]*dni
vz = Q[i,j,mz]*dni
T = (aindex - 1)*(Q[i,j,en]*dni - 0.5*(vx**2 + vy**2 + vz**2))
if(T < T_floor):
T = T_floor
cs=sqrt(aindex*1.6e-19*T*te0/(mu*1.6e-27))/v0
if(vmax < cs and T > rh_mult*T_floor):
vmax = cs
v = sqrt(vx**2+vy**2+vz**2)
if(vmax < v and Q[i,j,rh] > rh_mult*rh_floor):
vmax = v
return min(1./(vmax*dxi),1./(vmax*dyi))*cfl
Below we define the initial time `ti` and the final time `tf`. `n_out` corresponds to the total number of outputs. The rest are variable initializations. Note the most important one, the time `t`.
ti = 00.0E-2
tf = 5.e-6/t0
n_out=20
n_steps=0
t_count=0
dt=0.
t = ti
And now all our arrays are defined here. Note that we use a mask `MMask` for parts of the simulation space where we want to preclude motion. That will come in handy later in the course.
Q=np.zeros((nx,ny,nQ))
MMask=np.zeros((nx,ny))
Q1=np.copy(Q)
Q2=np.copy(Q)
sources=np.zeros((nx,ny,nQ))
flux_x=np.zeros((nx,ny,nQ))
flux_y=np.zeros((nx,ny,nQ))
These initial conditions correspond to an empty domain.
Q[:,:,rh]=rh_floor
Q[:,:,en]=0.026/te0*Q[:,:,rh]/(aindex-1)
Below we define our boundary conditions. In this example, we do not use the time `t` and focus only on steady state solutions. The first block defines the default boundary conditions for the whole domain. Note that the conditions for the y-direction are pretty simple. These boundary conditions correspond to an open boundary: everything can go through it.
Boundary conditions are a bit more complex for the x-direction. We are using a cylindrical coordinate system with open boundary conditions on the right but reflecting boundary conditions on the left. That's because nothing can pass through the axis of symmetry $r=0$.
At the end we have the boundary conditions that make up our problem, e.g. material free-falling toward the axis of our coordinate system. Note that we are using a very large density because the author works in high energy density plasmas. The initial parameters `L0`, `t0` and `n0` can be changed to accommodate other regimes. But chances are that the CFL number `cfl` may have to be adjusted.
def set_bc(Qin,t):
if ((rc(0)==rc(1))or(lxd!=0)):
#left open boundary conditions at x=lxd
Qin[1,:,:] = Qin[2,:,:]*rc(2)/rc(1)
Qin[0,:,:] = Qin[1,:,:]*rc(1)/rc(0)
else:
#left reflecting boundary conditions at x=lxd when the symmetry axis is located there
for l in range(ngu):
for k in range(nQ):
if(k == mx):
Qin[l,:,k] = -Qin[2*ngu-l-1,:,k]
if (k == mz) :
Qin[l,:,k] = -Qin[2*ngu-l-1,:,k]
if (k == my or k == rh or k == en) :
Qin[l,:,k] = Qin[2*ngu-l-1,:,k]
#right open boundary conditions at x=lxu
Qin[nx-ngu,:,:] = Qin[nx-ngu-1,:,:]*rc(nx-ngu-1)/rc(nx-ngu)
Qin[nx-ngu+1,:,:] = Qin[nx-ngu,:,:]*rc(nx-ngu)/rc(nx-ngu+1)
#bottom open boundary conditions at y=lyd
Qin[:,1,:] = Qin[:,2,:]
Qin[:,0,:] = Qin[:,1,:]
#top open boundary conditions at ly=lyu
Qin[:,ny-ngu,:] = Qin[:,ny-ngu-1,:]
Qin[:,ny-ngu+1,:] = Qin[:,ny-ngu,:]
#our boundary conditions for injecting material radially into our domain
#note that all the values used are dimensionless
N=3
for j in range (int(ny/2)-N,int(ny/2)+N):
for k in range(ngu):
Qin[nx-1-k,j,rh]=6e28/n0
Qin[nx-1-k,j,mx]=-4e2/v0*Qin[nx-1-k,j,rh]
Qin[nx-1-k,j,mz]=-1e2/v0*Qin[nx-1-k,j,rh]
Qin[nx-1-k,j,en]=0.026/te0*Qin[nx-1-k,j,rh]/(aindex-1)+0.5*(Qin[nx-1-k,j,mx]**2+Qin[nx-1-k,j,my]**2+Qin[nx-1-k,j,mz]**2)/Qin[nx-1-k,j,rh]
Let's define some plotting parameters
matplotlib.rcParams.update({'font.size': 22})
xi = np.linspace(lxd*L0, lxu*L0, nx-2*ngu-1)
yi = np.linspace(lyd*L0, lyu*L0, ny-2*ngu-1)
progress_bar = FloatProgress(min=ti, max=tf,description='Progress')
output_bar = FloatProgress(min=0, max=(tf-ti)/n_out,description='Next output')
columns = 2
rows = 3
box=np.array([lxd,lxu,lyd,lyu])*L0/1e-3
And here we go!
while t<tf:
dt=get_min_dt(Q) #compute the optimal dt
set_bc(Q,t) #compute the boundary conditions
sources=get_sources(Q) #compute the sources
flux_x,flux_y=get_flux(Q) #and then the fluxes
Q1=advance_time_level(Q,flux_x,flux_y,sources) #advance the solution
limit_flow(Q1) #limit flows that are too fast
set_bc(Q1,t+dt) #let's do it again
sources=get_sources(Q1)
flux_x,flux_y=get_flux(Q1)
Q2=advance_time_level(Q1,flux_x,flux_y,sources)
limit_flow(Q2)
Q=0.5*(Q+Q2) #here we do a simple second-order average
limit_flow(Q)
n_steps+=1 #time step increased by 1
t_count+=dt #and the time by dt
progress_bar.value=t #we have defined some progress bars cause python is slow
output_bar.value+=dt # we need to monitor our progress and reduce resolution if it takes too long
if ((t==ti) or (t_count>(tf-ti)/n_out)): #everything below is plotting....
fig=plt.figure(figsize=(20, 19))
for i in range(1, rows+1):
for j in range(1, columns+1):
k=j+(i-1)*columns
fig.add_subplot(rows, columns, k)
if (k==1):
data= np.flip(np.transpose(Q[ngu:nx-ngu-1,ngu:ny-ngu-1,rh]*n0),0)
im = plt.imshow(data, cmap=plt.cm.nipy_spectral, norm=colors.LogNorm(vmin=data.min(), vmax=data.max()),extent=box)
cb = fig.colorbar(im,fraction=0.046, pad=0.04)
cb.ax.set_ylabel('$n_i(m^{-3})$', rotation=-90)
if (k==3):
data=Q[ngu:nx-ngu-1,ngu:ny-ngu-1,mx]**2
data+=Q[ngu:nx-ngu-1,ngu:ny-ngu-1,my]**2
data+=Q[ngu:nx-ngu-1,ngu:ny-ngu-1,mz]**2
data/=Q[ngu:nx-ngu-1,ngu:ny-ngu-1,rh]**2
data=np.sqrt(data)*L0/t0
data= np.flip(np.transpose(data),0)
im = plt.imshow(data, extent=box)
cb = fig.colorbar(im,fraction=0.046, pad=0.04)
cb.ax.set_ylabel('$u(m/s)$', rotation=-90)
if (k==2):
data=Q[ngu:nx-ngu-1,ngu:ny-ngu-1,mx]**2
data/=Q[ngu:nx-ngu-1,ngu:ny-ngu-1,rh]**2
data=np.sqrt(data)*L0/t0
data= np.flip(np.transpose(data),0)
im = plt.imshow(data, extent=box)
cb = fig.colorbar(im,fraction=0.046, pad=0.04)
cb.ax.set_ylabel('$u_r(m/s)$', rotation=-90)
if (k==4):
data=Q[ngu:nx-ngu-1,ngu:ny-ngu-1,my]**2
data/=Q[ngu:nx-ngu-1,ngu:ny-ngu-1,rh]**2
data=np.sqrt(data)*L0/t0
data= np.flip(np.transpose(data),0)
im = plt.imshow(data, extent=box)
cb = fig.colorbar(im,fraction=0.046, pad=0.04)
cb.ax.set_ylabel('$u_z(m/s)$', rotation=-90)
if (k==6):
data=Q[ngu:nx-ngu-1,ngu:ny-ngu-1,mz]**2
data/=Q[ngu:nx-ngu-1,ngu:ny-ngu-1,rh]**2
data=np.sqrt(data)*L0/t0
data= np.flip(np.transpose(data),0)
im = plt.imshow(data, extent=box)
cb = fig.colorbar(im,fraction=0.046, pad=0.04)
cb.ax.set_ylabel('$u_\phi(m/s)$', rotation=-90)
if (k==5):
data=Q[ngu:nx-ngu-1,ngu:ny-ngu-1,tp]*te0
data= np.flip(np.transpose(data),0)
im = plt.imshow(data, cmap=plt.cm.jet, extent=box)
cb = fig.colorbar(im,fraction=0.046, pad=0.04)
cb.ax.set_ylabel('$T(eV)$', rotation=-90)
im.set_interpolation('quadric')
cb.ax.yaxis.set_label_coords(6.5, 0.5)
plt.gca().axes.get_yaxis().set_visible(False)
plt.gca().axes.get_xaxis().set_visible(False)
if (j==1):
plt.ylabel('z(mm)', rotation=90)
plt.gca().axes.get_yaxis().set_visible(True)
if (i==rows):
plt.xlabel('r(mm)', rotation=0)
plt.gca().axes.get_xaxis().set_visible(True)
plt.tight_layout()
clear_output()
output_bar.value=0
display(progress_bar) # display the bar
display(output_bar) # display the bar
t_count=0
show_inline_matplotlib_plots()
t=t+dt
#we are now done. Let's turn off the progress bars.
output_bar.close()
del output_bar
progress_bar.close()
del progress_bar
print("DONE")