Authors: B. Donvil and P. Menczel, 2023
In this tutorial, we will discuss two examples where systems are described by time-local non-Markovian master equations, that is, Lindblad-like master equations with "rates" that may become negative. We will demonstrate how these master equations arise in physical scenarios and how they can be simulated using QuTiP's Non-Markovian Monte Carlo solver. This solver is based on the influence martingale formalism, which is described in Refs. [1, 2]. The examples, taken from Ref. [1], are a two-level atom in a photonic band gap (based on Ref. [3]) and a Redfield master equation for two non-interacting qubits coupling collectively to a common environment.
An advantage of the quantum Monte Carlo technique is that simulations can be easily parallelized. QuTiP can interact with the mpi4py package, which makes it possible to take advantage of the massive parallelization capabilities of high performance computing clusters. At the end of Example 2, we demonstrate how that example can be run in an MPI environment.
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
from matplotlib.colors import LogNorm
import numpy as np
import qutip as qt
from scipy import special, optimize
from scipy.interpolate import CubicSpline
import os
We discuss here the master equation for a 2-level atom in a photonic band gap based on Ref. [3]. The total Hamiltonian describing the 2-level system plus the radiation field in a three-dimensional periodic dielectric is
$$ H = \sum_\lambda \hbar \omega_\lambda a^\dagger_\lambda a_\lambda + \omega \sigma_z + i\hbar \sum_\lambda g_\lambda (a_\lambda^\dagger\sigma_- - \sigma_+ a_\lambda) $$where $a_\lambda$, $a_\lambda^\dagger$ are the canonical ladder operators and $\sigma_\pm=(\sigma_x\pm i \sigma_y)/2$ with the Pauli matrices $\sigma_{x,y,z}$. The sums enumerate the electromagnetic field modes and polarizations $\lambda = (\vec k, \sigma)$ and $\omega$ (respectively $\omega_\lambda$) denotes the characteristic frequency of the atom (of the mode $\lambda$). Finally, $g_\lambda \propto \cos(\theta)\, /\, \omega_\lambda^{1/2}$ are the atom-field coupling constants, where $\theta$ is the angle between the mode's polarization vector and the atomic dipole moment.
We assume that the atom is initially in its excited state, that the radiation field is initially in its vacuum state, and that the atomic frequency $\omega$ lies close to the band edge frequency $\omega_c$ of the photonic band gap. The dynamics of the full system can then be solved analytically. The combined state at time $t$ is
$$ |\psi(t)\rangle = e^{-i\omega t} c(t) |1,\{0\}\rangle + \sum_\lambda c_\lambda(t) |0,\{\lambda\}\rangle e^{-i\omega_\lambda t}$$where $|1,\{0\}\rangle$ is the excited 2-level state with the radiation field in the vacuum, and $|0,\{\lambda\}\rangle$ the state with the 2-level system in the ground state and radiation mode $\lambda$ excited. The functions $c_\lambda(t)$ are defined as $$c_\lambda(t) = g_\lambda \int_0^t c(\tau) e^{i(\omega_\lambda-\omega)\tau} d\tau$$
and $c(t)$ is the complicated expression defined in the code below (see Eq. (2.21) of Ref. [3]). It depends on the two parameters
$$ \begin{aligned} \delta &= \omega-\omega_c \quad \text{and} \\ \beta^{3/2} &= \frac{\omega^2 \omega_c^{3/2} d^2 }{6\pi \epsilon_0 \hbar c^3} , \end{aligned} $$where $d$ is the absolute value of the atomic dipole moment.
Finally, the field modes can be traced out exactly [4] to obtain the master equation
$$ \frac{d}{dt} \rho(t) = -2i S(t) [\sigma_+\sigma_-,\rho(t)] + \Gamma(t) \left(\sigma_-\rho(t)\sigma_+ -\frac{1}{2}\{\sigma_+\sigma_-,\rho(t)\}\right) , $$for the reduced density matrix $\rho(t)$ of the atom. Here,
$$ \begin{aligned} S(t) &= -2 \operatorname{Im} \frac{\dot{c}(t)}{c(t)} \quad \text{and} \\ \Gamma(t) &= -2 \operatorname{Re} \frac{\dot{c}(t)}{c(t)} . \end{aligned} $$We choose the model parameters $\delta$ and $\beta$:
delta = -1
beta = 1
Some derived parameters and the function $c(t)$ introduced above:
Ap = (1 / 2 + 1 / 2 * (1 + 4 / 27 * (delta / beta) ** 3) ** (1 / 2)) ** (1 / 3)
Am = (1 / 2 - 1 / 2 * (1 + 4 / 27 * (delta / beta) ** 3) ** (1 / 2)) ** (1 / 3)
x1 = (Ap + Am) * np.exp(1j * np.pi / 4)
x2 = ((Ap * np.exp(-1j * np.pi / 6) - Am * np.exp(1j * np.pi / 6))
* np.exp(-1j * np.pi / 4))
x3 = ((Ap * np.exp(1j * np.pi / 6) - Am * np.exp(-1j * np.pi / 6))
* np.exp(3 * 1j * np.pi / 4))
a1 = x1 / ((x1 - x2) * (x1 - x3))
a2 = x2 / ((x2 - x1) * (x2 - x3))
a3 = x3 / ((x3 - x2) * (x3 - x1))
y1 = (x1**2) ** (1 / 2)
y2 = (x2**2) ** (1 / 2)
y3 = (x3**2) ** (1 / 2)
def c(t):
return (
2 * x1 * a1 * np.exp((beta * x1**2 + 1j * delta) * t)
+ a2 * (x2 + y2) * np.exp((beta * x2**2 + 1j * delta) * t)
- (a1 * y1 * (1 - special.erf((beta * x1**2 * t) ** (1 / 2)))
* np.exp((beta * x1**2 + 1j * delta) * t))
- (a2 * y2 * (1 - special.erf((beta * x2**2 * t) ** (1 / 2)))
* np.exp((beta * x2**2 + 1j * delta) * t))
- (a3 * y3 * (1 - special.erf((beta * x3**2 * t) ** (1 / 2)))
* np.exp((beta * x3**2 + 1j * delta) * t))
)
def cd(t): # time derivative
return (
((beta * x1**2 + 1j * delta) * 2 * x1 * a1
* np.exp((beta * x1**2 + 1j * delta) * t))
+ ((beta * x2**2 + 1j * delta) * a2 * (x2 + y2)
* np.exp((beta * x2**2 + 1j * delta) * t))
- ((beta * x1**2 + 1j * delta) * a1 * y1
* (1 - special.erf((beta * x1**2 * t) ** (1 / 2)))
* np.exp((beta * x1**2 + 1j * delta) * t))
- ((beta * x2**2 + 1j * delta) * a2 * y2
* (1 - special.erf((beta * x2**2 * t) ** (1 / 2)))
* np.exp((beta * x2**2 + 1j * delta) * t))
- ((beta * x3**2 + 1j * delta) * a3 * y3
* (1 - special.erf((beta * x3**2 * t) ** (1 / 2)))
* np.exp((beta * x3**2 + 1j * delta) * t))
+ (a1 * y1 * x1 * np.exp(-beta * t * x1**2)
* (beta / (t * np.pi + 0.00001)) ** (1 / 2)
* np.exp((beta * x1**2 + 1j * delta) * t))
+ (a2 * y2 * x2 * np.exp(-beta * t * x2**2)
* (beta / (t * np.pi + 0.00001)) ** (1 / 2)
* np.exp((beta * x2**2 + 1j * delta) * t))
+ (a3 * y3 * x3 * np.exp(-beta * t * x3**2)
* (beta / (t * np.pi + 0.00001)) ** (1 / 2)
* np.exp((beta * x3**2 + 1j * delta) * t))
)
def S(t):
return -2 * np.imag(cd(t) / c(t))
def Gamma(t):
return -2 * np.real(cd(t) / c(t))
Define the time interval. The initial time is shifted to avoid negative values of $\Gamma(t)$ at $t=0$ (which is unphysical):
ti = optimize.root_scalar(Gamma, bracket=(1.4, 1.5)).root
duration = 10
steps = 100
times = np.linspace(ti, ti + duration, steps + 1)
We plot the functions $\Gamma(t)$ and $S(t)$ over the time interval, demonstrating that $\Gamma(t)$ becomes negative:
Gamma_values = Gamma(times)
S_values = S(times)
plt.plot(times - ti, Gamma_values, label=r"$\Gamma(t)$")
plt.plot(times - ti, S_values, label=r"$S(t)$")
plt.xlabel(r"$t$")
plt.legend()
plt.show()
To speed up the following calculation by a factor of 3 to 4, we store $\Gamma(t)$ and $S(t)$ as cubic spline interpolations.
Gamma_int = CubicSpline(times, np.complex128(Gamma_values))
S_int = CubicSpline(times, np.complex128(S_values))
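As a quick, hedged sanity check (not part of the original computation) that a cubic spline on a grid of this density reproduces a smooth function accurately, we can interpolate an arbitrary test function:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Spline a smooth test function on a grid with the same density as `times`
# (about 100 points over an interval of length 10) and evaluate the
# interpolation error at the interval midpoints, where it is largest.
xs = np.linspace(0, 10, 101)


def f(x):
    return np.exp(-x) * np.cos(x)  # arbitrary smooth test function


spline = CubicSpline(xs, f(xs))
midpoints = (xs[:-1] + xs[1:]) / 2
max_err = np.max(np.abs(spline(midpoints) - f(midpoints)))
print(max_err)
```

The interpolation error is orders of magnitude below the scale of $\Gamma(t)$ and $S(t)$, so the spline approximation does not noticeably affect the simulation results.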
We specify some numerical parameters, i.e., the number of trajectories and whether to use parallel computation. Then we define the Hamiltonian and the jump operator corresponding to the master equation, and the initial state.
nmmc_options = {"map": "parallel",
"norm_steps": 10} # options specific to nm_mcsolve
options = {"progress_bar": "enhanced"} # options shared by all solvers
ntraj = 5000
H = 2 * qt.sigmap() * qt.sigmam()
ops_and_rates = [[qt.sigmam(), Gamma_int]]
psi0 = qt.basis(2, 0)
e_ops = [H]
In analogy with mcsolve and the corresponding MCSolver class, we may either instantiate a NonMarkovianMCSolver object and call its run, start or step methods to run the Monte-Carlo simulation, or make use of the nm_mcsolve convenience function.

Here, we construct a NonMarkovianMCSolver in order to explicitly verify that the solver ensures that the jump operators satisfy the completeness relation $\sum_n A^\dagger_n A_n \propto 1$. We can see that a second jump operator is added in order to satisfy this relation.
solver = qt.NonMarkovianMCSolver(qt.QobjEvo([H, S_int]),
ops_and_rates,
options=(options | nmmc_options))
for L in solver.ops:
print(L, "\n")
completeness_check = sum(L.dag() * L for L in solver.ops)
with qt.CoreOptions(atol=1e-5):
assert completeness_check == qt.qeye(2)
Quantum object: dims=[[2], [2]], shape=(2, 2), type='oper', dtype=CSR, isherm=False
Qobj data =
[[0. 0.]
 [1. 0.]]

Quantum object: dims=[[2], [2]], shape=(2, 2), type='oper', dtype=Dense, isherm=True
Qobj data =
[[0. 0.]
 [0. 1.]]
Finally, we run the Monte-Carlo simulation:
MCSol = solver.run(psi0, tlist=times, ntraj=ntraj, e_ops=e_ops)
Total run time: 32.83s
We will compare our results to the following exact mesolve simulation:
d_ops = [[qt.lindblad_dissipator(qt.sigmam(), qt.sigmam()), Gamma]]
MESol = qt.mesolve([H, S], psi0, times, d_ops, e_ops, options=options)
Total run time: 0.11s
Using the e_ops parameter of both solvers, we have computed the expectation value $\langle H \rangle$ for the exact simulation, and its average over ntraj trajectories in the Monte-Carlo simulation. In the following plot, we show both the exact solution and the Monte-Carlo estimate, together with an estimate of the error of the Monte-Carlo simulation, which is given by
$$ \textrm{Error}_{\text{MC}} = \sigma / \sqrt{N} $$
where $\sigma$ is the standard deviation of the values returned by the individual trajectories (which is automatically computed by QuTiP's solver) and $N$ is the number of trajectories.
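To illustrate this error estimate with hypothetical stand-in data (QuTiP's solver computes it automatically), one can proceed as follows:

```python
import numpy as np

# Hypothetical per-trajectory expectation values at a single time point
rng = np.random.default_rng(0)
traj_values = rng.normal(loc=0.5, scale=0.1, size=5000)

N = len(traj_values)
mc_estimate = traj_values.mean()           # Monte-Carlo estimate
mc_error = traj_values.std() / np.sqrt(N)  # Error_MC = sigma / sqrt(N)
print(f"{mc_estimate:.4f} +/- {mc_error:.4f}")
```

The error bar shrinks as $1/\sqrt{N}$, so halving the error requires four times as many trajectories.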
In the influence martingale approach to Monte-Carlo simulations of non-Markovian master equations, the trace of the state is not conserved along each trajectory. In fact, the trace of the state on a trajectory is given by the influence martingale. Due to its martingale property, the expectation value of the trace when averaging over sufficiently many trajectories is one. QuTiP automatically stores the average trace and its standard deviation in the result object. We read it out and display it in the following plot; the deviation of the average trace from $1$ gives an indication of how well the Monte-Carlo simulation has converged.
Note that ntraj has been set to 5000 to keep the evaluation time of this notebook reasonable. Increasing it to 10000 or more improves the convergence.
plt.plot(times - ti, MESol.expect[0] / 2, "k-", label="Exact")
plt.plot(times - ti, MCSol.expect[0] / 2, "kx", label="Monte-Carlo")
plt.fill_between(
times - ti,
(MCSol.expect[0] - MCSol.std_expect[0] / np.sqrt(ntraj)) / 2,
(MCSol.expect[0] + MCSol.std_expect[0] / np.sqrt(ntraj)) / 2,
alpha=0.5,
)
plt.plot(times - ti, np.ones_like(times), "-", color="0.5")
plt.plot(times - ti, MCSol.trace, "x", color="0.5")
plt.fill_between(
times - ti,
MCSol.trace - MCSol.std_trace / np.sqrt(ntraj),
MCSol.trace + MCSol.std_trace / np.sqrt(ntraj),
alpha=0.5,
)
plt.xlabel(r"$t$")
plt.ylabel(r"$\langle H \rangle\, /\, 2$")
plt.legend()
plt.show()
We consider two qubits that couple collectively to the same bosonic bath but do not interact with each other directly. The full model Hamiltonian is
$$ H = \omega_1 \sigma^{(1)}_+\sigma^{(1)}_- + \omega_2 \sigma^{(2)}_+\sigma^{(2)}_- + \sum_k \epsilon_k b_k^\dagger b_k +\sum_k g_k [b_k(\sigma^{(1)}_++\sigma^{(2)}_+)+b_k^\dagger(\sigma^{(1)}_-+\sigma^{(2)}_-)]$$Here, $b_k$, $b_k^\dagger$ are bosonic ladder operators and $\sigma^{(j)}_\pm=(\sigma_x^{(j)}\pm i\sigma_y^{(j)})/2$ and $\sigma_{x,y,z}^{(j)}$ are the Pauli matrices acting on the $j$th qubit. Further, $\omega_j$ and $\epsilon_k$ denote the characteristic frequencies of the qubits and the bath modes, and $g_k$ are coupling constants. The initial state of the total system is $\rho_{2q}\otimes |0\rangle\langle 0|$, where $\rho_{2q}$ is the two-qubit initial state and $|0\rangle\langle 0|$ the boson bath vacuum state.
Following Ref. [5], we perform Born-Markov approximations to arrive at the following Redfield master equation for the qubits: $$ \quad \frac{d}{dt}\rho(t) = -i \sum_{i,j=1}^2 A_{i,j} [\sigma_+^{(j)}\sigma_-^{(i)},\rho(t)] + \sum_{i,j=1}^2B_{i,j}\left( \sigma_-^{(i)}\rho(t)\sigma_+^{(j)} -\frac{1}{2}\{\sigma_+^{(j)}\sigma_-^{(i)},\rho(t)\}\right) . \tag 1 $$ Here, we introduced the matrices $$ \begin{aligned} A &= \begin{pmatrix} \omega_1 + \alpha & \alpha + \frac{\kappa}{2} - i\frac{\gamma_1-\gamma_2}{8} \\ \alpha + \frac{\kappa}{2} - i\frac{\gamma_2-\gamma_1}{8} & \omega_2 + \alpha + \kappa \end{pmatrix} \quad \text{and} \\ B &= \frac{1}{2}\begin{pmatrix} \gamma_1 & \frac{\gamma_1+\gamma_2}{2} - 2i\kappa \\ \frac{\gamma_1+\gamma_2}{2} + 2i\kappa & \gamma_2 \end{pmatrix} , \end{aligned} $$ where the parameters $\alpha$, $\kappa$ and $\gamma_{1,2} \geq 0$ are related to the full Hamiltonian as follows: $$ \begin{aligned} \gamma_j &= 2\int_{-\infty}^\infty dt\, \sum_k g_k^2\, e^{i(\omega_j - \epsilon_k)t} , \\ \alpha &= \operatorname{Im} \int_0^\infty dt\, \sum_k g_k^2\, e^{i(\omega_1 - \epsilon_k)t} , \\ \kappa &= \operatorname{Im} \int_0^\infty dt\, \sum_k g_k^2\, \Bigl( e^{i(\omega_2 - \epsilon_k)t} - e^{i(\omega_1 - \epsilon_k)t} \Bigr) . \end{aligned} $$
The matrix $B$ is self-adjoint, and therefore it can be diagonalized with some unitary $U$: $$ U^\dagger\, B\, U = \begin{pmatrix} \lambda_{1}&0\\0&\lambda_{2} \end{pmatrix} $$ with $\lambda_{i} = \frac{\gamma_1+\gamma_2}{4}+(-1)^{i}\sqrt{\frac{\gamma_1^2+\gamma_2^2+8\,\kappa^2}{8}}$. We define the Lindblad operators $$ L_j = \sum_{i=1}^2 \sigma_-^{(i)} U_{ij} $$ such that equation (1) becomes $$ \frac{d}{dt}\rho(t) = -i \sum_{i,j=1}^2 A_{i,j} [\sigma_+^{(j)}\sigma_-^{(i)},\rho(t)] + \sum_{i=1}^2\lambda_i\left( L_i\rho(t)L_i^\dagger -\frac{1}{2}\{L_i^\dagger L_i,\rho(t)\}\right) . \tag 2 $$ Now it is clear that when $\kappa>0$, $\lambda_1<0$ and thus Eq. (2) is not in strict Lindblad form.
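As a small numerical cross-check (not part of the derivation), we can diagonalize $B$ for the parameter values chosen below and compare with the closed-form eigenvalues:

```python
import numpy as np

# Parameters matching the choices made below
gam1, gam2, kappa = 1, 4, 1

# The Hermitian matrix B from the Redfield master equation
B = 0.5 * np.array([[gam1, (gam1 + gam2) / 2 - 2j * kappa],
                    [(gam1 + gam2) / 2 + 2j * kappa, gam2]])

# Closed-form eigenvalues lambda_i (i = 1, 2) from the expression above
lambs = [(gam1 + gam2) / 4
         + (-1) ** i * np.sqrt((gam1**2 + gam2**2 + 8 * kappa**2) / 8)
         for i in (1, 2)]

# eigvalsh returns the eigenvalues of a Hermitian matrix in ascending order;
# lambda_1 is negative, so Eq. (2) is not in strict Lindblad form
print(np.linalg.eigvalsh(B), lambs)
```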
We choose the model parameters $\alpha$, $\kappa$ and $\gamma_{1,2}$, and calculate the rates $\lambda_1$ and $\lambda_2$.
omeg1 = 0.25
omeg2 = 0.5
gam1 = 1
gam2 = 4
alpha = 3
kappa = 1
lamb1 = (gam1 + gam2) / 4 - np.sqrt((gam1**2 + gam2**2 + 8 * kappa**2) / 8)
lamb2 = (gam1 + gam2) / 4 + np.sqrt((gam1**2 + gam2**2 + 8 * kappa**2) / 8)
We choose the time interval and numerical parameters.
times2 = np.linspace(0, 2.5, 100)
nmmc_options = {"map": "parallel",
"keep_runs_results": True,
"norm_steps": 10} # options specific to nm_mcsolve
options = {"progress_bar": "enhanced"} # options shared by all solvers
ntraj = 5000
We will now define our system in the basis where $L_1^\dagger |0\rangle = |1\rangle$, $L_2^\dagger |0\rangle = |2\rangle$ and $L_1^\dagger L_2^\dagger |0\rangle = |3\rangle$. Note that the Hamiltonian must be rotated into this basis before it can be defined. The initial state is chosen as $|\psi_0\rangle = \sqrt{0.4}\, |0\rangle + \sqrt{0.4}\, |1\rangle + \sqrt{0.2}\, |2\rangle$.
# Initial state of the system
psi0 = (
np.sqrt(0.4) * qt.basis(4, 0)
+ np.sqrt(0.4) * qt.basis(4, 1)
+ np.sqrt(0.2) * qt.basis(4, 2)
)
L1 = qt.Qobj(np.array([[0, 0, 0, 0],
[1, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 1, 0]])).dag()
L2 = qt.Qobj(np.array([[0, 0, 0, 0],
[0, 0, 0, 0],
[1, 0, 0, 0],
[0, 1, 0, 0]])).dag()
norm1 = np.sqrt(
1 + (gam2 - gam1 + np.sqrt(2) * np.sqrt(gam1**2 + gam2**2 + 8*kappa**2))**2
/ ((gam1 + gam2) ** 2 + 16 * kappa**2))
norm2 = np.sqrt(
1 + (gam2 - gam1 - np.sqrt(2) * np.sqrt(gam1**2 + gam2**2 + 8*kappa**2))**2
/ ((gam1 + gam2) ** 2 + 16 * kappa**2))
U = qt.Qobj(np.array([
[(gam1 - gam2 - np.sqrt(2) * np.sqrt(gam1**2 + gam2**2 + 8*kappa**2))
/ (gam1 + gam2 + 4j*kappa) / norm1,
(gam1 - gam2 + np.sqrt(2) * np.sqrt(gam1**2 + gam2**2 + 8*kappa**2))
/ (gam1 + gam2 + 4j*kappa) / norm2],
[1 / norm1, 1 / norm2]]))
# Write \sigma_\pm^{1,2} in terms of L_{1,2} using the explicit definition of U
Udag = U.dag()
sigmam1 = Udag[0, 0] * L1 + Udag[1, 0] * L2
sigmam2 = Udag[0, 1] * L1 + Udag[1, 1] * L2
sigmap1 = sigmam1.dag()
sigmap2 = sigmam2.dag()
H = ((omeg1 + alpha) * sigmap1 * sigmam1
+ (omeg2 + alpha + kappa) * sigmap2 * sigmam2
+ (alpha + kappa / 2 - 1j * (gam1 - gam2) / 8) * sigmap2 * sigmam1
+ (alpha + kappa / 2 - 1j * (gam2 - gam1) / 8) * sigmap1 * sigmam2)
eops = [qt.basis(4, 0) * qt.basis(4, 0).dag(),
qt.basis(4, 1) * qt.basis(4, 1).dag(),
qt.basis(4, 2) * qt.basis(4, 2).dag()]
MCSol2 = qt.nm_mcsolve(H, psi0, times2, ntraj=ntraj,
options=(options | nmmc_options),
ops_and_rates=[[L1, lamb1], [L2, lamb2]],
e_ops=eops)
Total run time: 32.17s
d_ops = [lamb1 * qt.lindblad_dissipator(L1, L1),
lamb2 * qt.lindblad_dissipator(L2, L2)]
MESol2 = qt.mesolve(H, psi0, times2, d_ops,
e_ops=eops,
options=options)
Total run time: 0.06s
We will first make a plot similar to that in the first example. We plot the populations of the ground state and of the states $|1\rangle$ and $|2\rangle$; note that the population of $|3\rangle$ remains zero for our initial conditions. The solid lines and crosses are the exact solution of the master equation and the Monte-Carlo average, respectively. The shaded areas show the Monte-Carlo error $\text{Error}_\text{MC}$ of the respective quantities. We see that the technique works best at small and intermediate times, and that the error seems to grow exponentially at larger times. We will investigate this effect below.
plt.plot(times2, MESol2.expect[0], color='C0',
label=r"$\langle e_0 \mid \rho \mid e_0 \rangle$")
plt.plot(times2, MCSol2.average_expect[0], 'x', color='C0')
plt.fill_between(
times2,
MCSol2.average_expect[0] - MCSol2.std_expect[0] / np.sqrt(ntraj),
MCSol2.average_expect[0] + MCSol2.std_expect[0] / np.sqrt(ntraj),
alpha=0.2, color='C0'
)
plt.plot(times2, MESol2.expect[1], color='C1',
label=r"$\langle e_1 \mid \rho \mid e_1 \rangle$")
plt.plot(times2, MCSol2.average_expect[1], 'x', color='C1')
plt.fill_between(
times2,
MCSol2.average_expect[1] - MCSol2.std_expect[1] / np.sqrt(ntraj),
MCSol2.average_expect[1] + MCSol2.std_expect[1] / np.sqrt(ntraj),
alpha=0.2, color='C1'
)
plt.plot(times2, MESol2.expect[2], color='C2',
label=r"$\langle e_2 \mid \rho \mid e_2 \rangle$")
plt.plot(times2, MCSol2.average_expect[2], 'x', color='C2')
plt.fill_between(
times2,
MCSol2.average_expect[2] - MCSol2.std_expect[2] / np.sqrt(ntraj),
MCSol2.average_expect[2] + MCSol2.std_expect[2] / np.sqrt(ntraj),
alpha=0.2, color='C2'
)
plt.plot(times2, np.ones_like(times2), color="0.5",
label=r"$\operatorname{tr} \rho$")
plt.plot(times2, MCSol2.average_trace, "x", color="0.5")
plt.fill_between(
times2,
MCSol2.average_trace - MCSol2.std_trace / np.sqrt(ntraj),
MCSol2.average_trace + MCSol2.std_trace / np.sqrt(ntraj),
alpha=0.2, color="0.5"
)
plt.xlabel(r"$t$")
plt.ylabel('Expectation values')
plt.legend()
plt.show()
When computing a quantity such as the expectation value of $H$ with nm_mcsolve, the solver first computes that quantity along each trajectory and then averages over the trajectory values, weighting each trajectory with the value of the associated influence martingale $\mu$. In the plot below, we show the evolution of this expectation value along a number of individual trajectories. (The plots do not include the martingale weight; therefore, the vertical axis is labelled as $\langle H \rangle\, /\, \mu$.) The shading of the lines in this plot represents the value of the martingale. The solid red line is the exact value of $\langle H \rangle$, which would be obtained by averaging $( \langle H \rangle\, /\, \mu )$ with weights $\mu$ over sufficiently many trajectories.
The plot shows that the number of trajectories contributing to the average declines rapidly, due to two effects. First, jumps with the jump operators $L_1$ or $L_2$ bring the system into the ground state. A trajectory that enters the ground state at any time cannot leave it again. Second, the solver adds a third jump channel with zero jump rate. A jump in this channel sets the martingale of the corresponding trajectory to zero for the remainder of the time. In the plot below, we visualize this effect by making the trajectories stop before the end of the time interval.
If $\langle H \rangle$ went to zero, a declining number of contributing trajectories would not be an issue. However, due to the negative rates, the ensemble average does not converge to $|0\rangle\langle 0|$. Therefore, the weights of the remaining contributing trajectories (as indicated by their shading) must grow rapidly to counteract the non-contributing trajectories, leading to a rapidly growing error because of insufficient sampling of such trajectories.
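The effect of the martingale weights can be illustrated with a self-contained toy calculation using hypothetical data (QuTiP performs this weighted averaging internally). Since the weights $\mu_k$ average to one, the weighted average of the values $O_k = \langle H \rangle / \mu$ reduces to the plain mean of $\mu_k O_k$:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.5

# Hypothetical martingale weights with mean 1 (the martingale property)
mu = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=100_000)
# Hypothetical per-trajectory values <H> / mu, with true mean 0.3
O = rng.normal(loc=0.3, scale=0.05, size=100_000)

estimate = np.mean(mu * O)  # martingale-weighted Monte-Carlo estimate
avg_trace = np.mean(mu)     # average trace; close to 1 for many trajectories
print(estimate, avg_trace)
```

In the actual simulation the weights are not independent of the trajectory values, and their spread grows in time, which is precisely why the sampling error explodes at long times.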
Note that the trajectories are plotted with a small offset since otherwise, they would overlap.
# Adapted from matplotlib tutorial,
# https://matplotlib.org/stable/gallery/lines_bars_and_markers/multicolored_line.html
# --- Settings ---
# Lines are plotted w/ offset, spread is difference between max and min offset
spread = 0.015
N = 150 # Number of trajectories
# ----------------
all_traces = [np.abs(tr) for traj in MCSol2.trajectories[0:N]
for tr in traj.trace if np.abs(tr) > 0]
min_martingale = min(all_traces)
max_martingale = max(all_traces)
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
norm = LogNorm(min_martingale, max_martingale)
for i, traj in enumerate(MCSol2.trajectories[0:N]):
offset = -spread / 2 + spread * i / N
traj_defined = np.abs(traj.trace) > 0
traj_times = times2[traj_defined]
traj_trace = np.array(traj.trace)[traj_defined]
traj_ex0 = traj.expect[0][traj_defined] / traj_trace
traj_ex1 = traj.expect[1][traj_defined] / traj_trace
traj_ex2 = traj.expect[2][traj_defined] / traj_trace
traj_exH = omeg1 * traj_ex1 + omeg2 * traj_ex2
points = np.array([traj_times, traj_exH + offset]).T.reshape(-1, 1, 2)
segments = np.concatenate([points[:-1], points[1:]], axis=1)
colors = np.abs(traj_trace[:-1])
lc = LineCollection(segments, cmap='PuBu', norm=norm)
lc.set_array(colors)
line = ax.add_collection(lc)
ax.plot(times2, omeg1 * MESol2.expect[1] + omeg2 * MESol2.expect[2],
color='red', label=r"$\langle H \rangle$ (exact)")
ax.plot(times2,
omeg1 * MCSol2.average_expect[1] + omeg2 * MCSol2.average_expect[2],
color='blue', label=r"$\langle H \rangle$ (MC)")
ax.legend()
fig.colorbar(line, ax=ax, label=r"Martingale $\mu$")
ax.set_xlabel(r"$t$")
ax.set_xlim(times2[0], times2[-1])
ax.set_ylabel(r"$\langle H \rangle\, /\, \mu$")
ax.set_ylim(-0.02, 0.22)
plt.show()
To improve the convergence behavior of the example above at long times, many trajectories are needed. We use this fact as a motivation to demonstrate QuTiP's MPI capabilities. On the QuTiP side, running Monte Carlo simulations on massively parallel infrastructure through MPI is as easy as replacing 'map': 'parallel' with 'map': 'mpi' in the provided options. QuTiP will then use the MPIPoolExecutor provided by the mpi4py module to simulate the trajectories in parallel.

Such calculations typically cannot be started from within a Jupyter notebook. The code below is an example of a standalone script that could be submitted to a job scheduler on a supercomputer.
# Beginning of `qutip-mpi-example.py` file
import numpy as np
import qutip as qt
# --- SETTINGS ---
# Maximum number of MPI worker processes that can be used
NUM_WORKER_PROCESSES = 500
# Create batches averaged over this number of trajectories
BATCH_SIZE = 1000
# Create this number of batches
# (total number of trajectories is BATCH_SIZE * NUM_BATCHES)
NUM_BATCHES = 500
def setup_system():
omeg1, omeg2 = 0.25, 0.5
gam1, gam2, alpha, kappa = 1, 4, 3, 1
lamb1 = (gam1 + gam2) / 4 - np.sqrt((gam1**2 + gam2**2 + 8 * kappa**2) / 8)
lamb2 = (gam1 + gam2) / 4 + np.sqrt((gam1**2 + gam2**2 + 8 * kappa**2) / 8)
    L1 = qt.Qobj(np.array([[0, 0, 0, 0],
                           [1, 0, 0, 0],
                           [0, 0, 0, 0],
                           [0, 0, 1, 0]])).dag()
    L2 = qt.Qobj(np.array([[0, 0, 0, 0],
                           [0, 0, 0, 0],
                           [1, 0, 0, 0],
                           [0, 1, 0, 0]])).dag()
    norm1 = np.sqrt(
        1 + (gam2 - gam1 + np.sqrt(2) * np.sqrt(gam1**2 + gam2**2 + 8*kappa**2))**2
        / ((gam1 + gam2) ** 2 + 16 * kappa**2))
    norm2 = np.sqrt(
        1 + (gam2 - gam1 - np.sqrt(2) * np.sqrt(gam1**2 + gam2**2 + 8*kappa**2))**2
        / ((gam1 + gam2) ** 2 + 16 * kappa**2))
Udag = qt.Qobj(np.array([
[(gam1 - gam2 - np.sqrt(2) * np.sqrt(gam1**2 + gam2**2 + 8*kappa**2)) / (gam1 + gam2 + 4j*kappa) / norm1,
(gam1 - gam2 + np.sqrt(2) * np.sqrt(gam1**2 + gam2**2 + 8*kappa**2)) / (gam1 + gam2 + 4j*kappa) / norm2],
[1 / norm1,
1 / norm2]
])).dag()
sigmam1 = Udag[0, 0] * L1 + Udag[1, 0] * L2
sigmam2 = Udag[0, 1] * L1 + Udag[1, 1] * L2
sigmap1 = sigmam1.dag()
sigmap2 = sigmam2.dag()
H = ((omeg1 + alpha) * sigmap1 * sigmam1
+ (omeg2 + alpha + kappa) * sigmap2 * sigmam2
+ (alpha + kappa / 2 - 1j * (gam1 - gam2) / 8) * sigmap2 * sigmam1
+ (alpha + kappa / 2 - 1j * (gam2 - gam1) / 8) * sigmap1 * sigmam2)
psi0 = np.sqrt(0.4) * qt.basis(4, 0) + np.sqrt(0.4) * qt.basis(4, 1) + np.sqrt(0.2) * qt.basis(4, 2)
ops_and_rates = [[L1, lamb1], [L2, lamb2]]
return H, psi0, ops_and_rates
def main():
times = np.linspace(0, 10, 250)
H, psi0, ops_and_rates = setup_system()
for i in range(NUM_BATCHES):
result = qt.nm_mcsolve(H, psi0, times, ops_and_rates, ntraj=BATCH_SIZE,
options={'store_states': True,
'progress_bar': False,
'map': 'mpi',
'num_cpus': NUM_WORKER_PROCESSES,
'norm_steps': 10}
)
qt.qsave(result, f"./result-{i}")
if __name__ == "__main__":
main()
How to run this script in practice will depend on your infrastructure. Some guidance is provided in the mpi4py.futures users' guide. The authors of this tutorial used the batch script below to submit the task to a SLURM workload manager on the supercomputer HOKUSAI, using the MPICH implementation of the MPI standard. Using 501 CPU cores (1 manager and 500 workers) distributed over 5 nodes on this supercomputer, the task executed in approximately 50 minutes, generating 500k trajectories.
#!/bin/bash
#SBATCH --partition=XXXXX
#SBATCH --account=XXXXX
#SBATCH --nodes=5
#SBATCH --ntasks=501
#SBATCH --mem-per-cpu=1G
#SBATCH --time=0-10:00
source ~/.bashrc
module purge
module load mpi/mpich-x86_64
conda activate qutip-environment
mpirun -np $SLURM_NTASKS -bind-to core python -m mpi4py.futures qutip-mpi-example.py
The code above generates a number of result files. Copy these files into an mpi_results folder to execute the following code. We will first plot the averaged result as above.
# copied from script above
BATCH_SIZE = 1000
times3 = np.linspace(0, 10, 250)
# load all available result files
results_folder_exists = os.path.isdir('mpi_results')
if results_folder_exists:
batches = []
while True:
i = len(batches)
filename = f'mpi_results/result-{i}'
        if not os.path.exists(f'{filename}.qu'):
break
batches.append(qt.qload(filename))
NUM_BATCHES = len(batches)
# combine result files
if results_folder_exists and NUM_BATCHES > 0:
combined_result = sum(batches[1:], start=batches[0])
exact_solution = qt.mesolve(H, psi0, times3, d_ops, options=options)
if results_folder_exists and NUM_BATCHES > 0:
plt.plot(times3, qt.expect(exact_solution.states, eops[0]),
color='C0', label=r"$\langle e_0 \mid \rho \mid e_0 \rangle$")
plt.plot(times3, qt.expect(combined_result.states, eops[0]),
'x', color='C0')
plt.plot(times3, qt.expect(exact_solution.states, eops[1]),
color='C1', label=r"$\langle e_1 \mid \rho \mid e_1 \rangle$")
plt.plot(times3, qt.expect(combined_result.states, eops[1]),
'x', color='C1')
plt.plot(times3, qt.expect(exact_solution.states, eops[2]),
color='C2', label=r"$\langle e_2 \mid \rho \mid e_2 \rangle$")
plt.plot(times3, qt.expect(combined_result.states, eops[2]),
'x', color='C2')
plt.xlabel(r"$t$")
plt.xlim((0, 5))
plt.ylabel('Expectation values')
plt.ylim((-.1, .7))
plt.legend()
plt.show()
else:
print('No result files found.')
No result files found.
We can now do the following analysis, adapted from Ref. [6].
Let $k \leq N$, where $N$ is the total number of trajectories. Let $I = [0, 10]$ be the total considered time interval, and let $$ I_k = \{ t \in I : \lVert \rho_{\text{MC},k} - \rho_{\text{exact}} \rVert \leq 0.1 \} . $$ Here, $\rho_{\text{MC},k}$ is the estimated state using only $k$ trajectories. We plot $\mu(I_k)$ as a function of $k$, where $\mu(I_k)$ is the measure of the set $I_k$.
if results_folder_exists and NUM_BATCHES > 0:
result = {}
progress = qt.ui.TqdmProgressBar(NUM_BATCHES)
for k, batch in enumerate(batches):
# average of the batches up to index k
if k == 0:
average = batch
else:
average = average + batch
# how many points are close to the exact solution
diff = [(mc - exact).norm()
for mc, exact in zip(average.states, exact_solution.states)]
good_points = np.count_nonzero( # count 'True' entries
np.isclose(diff, np.zeros_like(diff), rtol=0, atol=0.1)
)
num_trajectories = (k + 1) * BATCH_SIZE
result.update({num_trajectories: good_points / len(times3)})
progress.update()
progress.finished()
xval = np.array(list(result.keys()))
yval = np.array(list(result.values()))
fit = np.polyfit(np.log(xval), yval, 1)
print(('Approximate number of trajectories required for convergence until '
'time t (according to linear fit):\n'
f'N = {np.exp(-fit[1] / fit[0]) :.2f} * '
f'exp( {1 / fit[0] / times3[-1] :.2f} * t )\n'))
plt.semilogx(xval, yval, label='Simulation result')
plt.semilogx(xval, fit[0] * np.log(xval) + fit[1], '--', label='Fit')
plt.xlabel('Number of trajectories, $k$')
plt.ylabel(r'$\mu(I_k)$')
plt.legend()
plt.show()
else:
print('No result files found.')
No result files found.
[1] Donvil and Muratore-Ginanneschi, Nat Commun (2022).
[2] Donvil and Muratore-Ginanneschi, arXiv:2209.08958 [quant-ph].
[3] John and Quang, Phys. Rev. A (1994).
[4] Breuer and Petruccione, The Theory of Open Quantum Systems.
[5] Mozgunov and Lidar, Quantum (2020).
[6] Menczel et al., arXiv:2401.11830 [quant-ph].
qt.about()
QuTiP: Quantum Toolbox in Python
================================
Copyright (c) QuTiP team 2011 and later.
Current admin team: Alexander Pitchford, Nathan Shammah, Shahnawaz Ahmed, Neill Lambert, Eric Giguère, Boxi Li, Jake Lishman, Simon Cross and Asier Galicia.
Board members: Daniel Burgarth, Robert Johansson, Anton F. Kockum, Franco Nori and Will Zeng.
Original developers: R. J. Johansson & P. D. Nation.
Previous lead developers: Chris Granade & A. Grimsmo.
Currently developed through wide collaboration. See https://github.com/qutip for details.

QuTiP Version: 5.1.0.dev0+50fc9b7
Numpy Version: 1.22.4
Scipy Version: 1.13.0
Cython Version: 3.0.10
Matplotlib Version: 3.5.2
Python Version: 3.10.4
Number of CPUs: 4
BLAS Info: Generic
INTEL MKL Ext: False
Platform Info: Linux (x86_64)
Installation path: /usr/share/miniconda3/envs/test-environment/lib/python3.10/site-packages/qutip
================================================================================
Please cite QuTiP in your publication.
================================================================================
For your convenience a bibtex reference can be easily generated using `qutip.cite()`
# -- first example --
assert np.any(np.array([Gamma(t) for t in times]) < 0)
np.testing.assert_array_less(MESol.expect[0][1:] / 2, 1)
np.testing.assert_array_less(0, MESol.expect[0] / 2)
np.testing.assert_allclose(MCSol.trace, 1, atol=0, rtol=0.25)
np.testing.assert_allclose(MCSol.expect[0], MESol.expect[0], atol=0, rtol=0.25)
# -- second example --
MAX_TIME = 1
mc_ex0 = MCSol2.average_expect[0][times2 <= MAX_TIME]
me_ex0 = MESol2.expect[0][times2 <= MAX_TIME]
np.testing.assert_allclose(mc_ex0, me_ex0, atol=0.2, rtol=0)
mc_ex1 = MCSol2.average_expect[1][times2 <= MAX_TIME]
me_ex1 = MESol2.expect[1][times2 <= MAX_TIME]
np.testing.assert_allclose(mc_ex1, me_ex1, atol=0.2, rtol=0)
mc_ex2 = MCSol2.average_expect[2][times2 <= MAX_TIME]
me_ex2 = MESol2.expect[2][times2 <= MAX_TIME]
np.testing.assert_allclose(mc_ex2, me_ex2, atol=0.2, rtol=0)