For complete written solutions, please see. We will mainly be concerned with the Python code here.
import numpy as np

# rows of the generator matrix Q
q1 = np.array([-6, 3, 3])
q2 = np.array([2, -3, 1])
q3 = np.array([2, 7, -9])
Q = np.array([q1, q2, q3])
print(Q)
Qt = Q.transpose()
eigen = np.linalg.eig(Qt)
print(eigen[0])  # eigenvalues
print("######################")
print(eigen[1])  # eigenvectors as columns
# you need to identify which column is the eigenvector
# corresponding to the zero eigenvalue
print("######################")
eigen = eigen[1]
v = eigen[:, 0]  # here the zero eigenvalue is the first one listed
print(v)
print("######################")
print("######################")
stat = v/np.sum(v)  # normalize so the entries sum to 1
np.sum(stat)  # sanity check: should equal 1
print("stationary distribution obtained")
print(stat)
print("######################")
print("######################")
[[-6  3  3]
 [ 2 -3  1]
 [ 2  7 -9]]
[ 1.77635684e-15 -8.00000000e+00 -1.00000000e+01]
######################
[[-3.74765844e-01 -4.08248290e-01 -1.91206752e-16]
 [-8.99438027e-01  8.16496581e-01 -7.07106781e-01]
 [-2.24859507e-01 -4.08248290e-01  7.07106781e-01]]
######################
[-0.37476584 -0.89943803 -0.22485951]
######################
######################
stationary distribution obtained
[0.25 0.6  0.15]
######################
######################
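As a cross-check (an addition of mine, not part of the original solution), we can verify directly that the vector we obtained is a stationary distribution: it should be a left null vector of $Q$, i.e. satisfy $\pi Q = 0$, and its entries should sum to $1$.

```python
import numpy as np

Q = np.array([[-6, 3, 3],
              [2, -3, 1],
              [2, 7, -9]])

# the stationary distribution recovered above
pi = np.array([0.25, 0.6, 0.15])

# a stationary distribution of a CTMC satisfies pi Q = 0 and sums to 1
print(np.allclose(pi @ Q, np.zeros(3)))  # True
print(np.isclose(np.sum(pi), 1.0))       # True
```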
We code one jump of the chain, using the competing clocks description. Because of Python's zero-based indexing, state $i$ is stored as state $i-1$ in the code.
def jump(i):  # one jump starting at state i (which should be input as i-1)
    q = Q[i]
    clocks = np.array([np.inf, np.inf, np.inf])  # initialize to infinity
    index = np.array([0, 1, 2])
    index = np.delete(index, i)  # competing clocks only for the other states
    clocks[index[0]] = np.random.exponential(scale=1/q[index[0]])
    clocks[index[1]] = np.random.exponential(scale=1/q[index[1]])
    j = np.argmin(clocks)  # which clock rings first
    time = clocks[j]
    return j, time  # if you return an np.array you might get type errors later
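An equivalent way to simulate one jump (a sketch of mine, not the original solution's method; the name `jump_alt` is my own) uses the jump-chain description instead of competing clocks: hold at state $i$ for an exponential time with rate $-q_{ii}$, then jump to $j \neq i$ with probability $q_{ij}/(-q_{ii})$.

```python
import numpy as np

Q = np.array([[-6, 3, 3],
              [2, -3, 1],
              [2, 7, -9]])

def jump_alt(i):
    rate = -Q[i, i]                     # total rate out of state i
    time = np.random.exponential(scale=1/rate)
    p = Q[i].astype(float)              # copy of row i
    p[i] = 0.0
    p = p / rate                        # jump probabilities q_ij / (-q_ii)
    j = np.random.choice(3, p=p)
    return j, time
```

The two descriptions give the same law for `(j, time)`: the minimum of independent exponential clocks is exponential with the summed rate, and the index of the winning clock is independent of the minimum, with probabilities proportional to the rates.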
Now we code a chain running for time $t$, starting at state $i$, by applying a while loop on the jump function. The function returns the state of the Markov chain at time t.
def runMC(i, t):
    totaltime = 0
    while totaltime < t:
        new = jump(i)
        totaltime = totaltime + new[1]
        if totaltime < t:  # only jump if the clock rang before time t
            i = new[0]
    return i
Now we run the chain for a long time $t=150$, repeat this experiment $n=500$ times, and record the number of times we end in each state; our convergence theory says that dividing the counts by $500$ should give something close to the stationary distribution. Starting at state $1$ means starting at $0$ in this code.
y = [runMC(0, 150) for _ in range(500)]
freq = np.unique(y, return_counts=True)  # counts of runs ending in each state
state1 = (1/500)*freq[1][0]
state2 = (1/500)*freq[1][1]
state3 = (1/500)*freq[1][2]
# compare the empirical frequencies with the stationary distribution
print(state1 - stat[0])
print(state2 - stat[1])
print(state3 - stat[2])
0.0
-0.008000000000000007
0.008000000000000063
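One caveat with the tally above (my observation, not from the original): `np.unique` only reports states that actually occur, so if some state were never visited in the $500$ runs, `freq[1]` would have fewer than three entries and the indexing would silently misalign. `np.bincount` with `minlength` sidesteps this:

```python
import numpy as np

# toy sample for illustration; note state 2 never appears
y = [0, 1, 1, 0, 1, 0]
counts = np.bincount(y, minlength=3)  # always returns three entries
print(counts / len(y))                # state 2 correctly gets frequency 0
```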
from datetime import datetime
print(datetime.now())
2021-11-08 18:12:50.090767