Daisuke Oyama
Faculty of Economics, University of Tokyo
Julia translation of the Python version, by Ryumei Nakada
This notebook demonstrates how to analyze finite-state Markov chains with the MarkovChain class.
For basic concepts and properties of Markov chains, and for algorithmic issues in detecting their reducibility and periodicity, see, for example, Jarvis and Shier, "Graph-Theoretic Analysis of Finite Markov Chains," from which we draw some examples below.
using Plots
using QuantEcon
using StatsBase
plotlyjs()
Plots.PlotlyJSBackend()
Define a custom pretty printing function:
prettyprint(arr) = Base.showarray(STDOUT, arr, false, header=false)
prettyprint (generic function with 1 method)
Consider the Markov chain given by the following stochastic matrix, taken from Exercise 3 in Jarvis and Shier (where the actual values of non-zero probabilities are not important):
P = zeros(6, 6)
P[1, 1] = 1
P[2, 5] = 1
P[3, 3], P[3, 4], P[3, 5] = 1/3, 1/3, 1/3
P[4, 1], P[4, 6] = 1/2, 1/2
P[5, 2], P[5, 5] = 1/2, 1/2
P[6, 1], P[6, 4] = 1/2, 1/2
P
6×6 Array{Float64,2}:
 1.0  0.0  0.0       0.0       0.0       0.0
 0.0  0.0  0.0       0.0       1.0       0.0
 0.0  0.0  0.333333  0.333333  0.333333  0.0
 0.5  0.0  0.0       0.0       0.0       0.5
 0.0  0.5  0.0       0.0       0.5       0.0
 0.5  0.0  0.0       0.5       0.0       0.0
Create a MarkovChain instance:
mc1 = MarkovChain(P)
Discrete Markov Chain stochastic matrix of type Array{Float64,2}: [1.0 0.0 … 0.0 0.0; 0.0 0.0 … 1.0 0.0; … ; 0.0 0.5 … 0.5 0.0; 0.5 0.0 … 0.0 0.0]
This Markov chain is reducible:
is_irreducible(mc1)
length(communication_classes(mc1))
Determine the communication classes:
communication_classes(mc1)
4-element Array{Array{Int64,1},1}:
 [1]
 [2, 5]
 [4, 6]
 [3]
Classify the states of this Markov chain:
recurrent_classes(mc1)
2-element Array{Array{Int64,1},1}:
 [1]
 [2, 5]
Obtain a list of the recurrent states:
recurrent_states = vcat(recurrent_classes(mc1)...)
3-element Array{Int64,1}:
 1
 2
 5
Obtain a list of the transient states:
transient_states = setdiff(collect(1:n_states(mc1)), recurrent_states)
3-element Array{Int64,1}:
 3
 4
 6
A Markov chain is reducible (i.e., its directed graph is not strongly connected) if and only if, by a symmetric permutation of rows and columns, its transition probability matrix can be written in the form ("canonical form") $$ \begin{pmatrix} U & 0 \\ W & V \end{pmatrix}, $$ where $U$ and $V$ are square matrices.
Such a form for mc1 is obtained by the following:
permutation = vcat(recurrent_states, transient_states)
mc1.p[permutation, permutation]
6×6 Array{Float64,2}:
 1.0  0.0  0.0       0.0       0.0       0.0
 0.0  0.0  1.0       0.0       0.0       0.0
 0.0  0.5  0.5       0.0       0.0       0.0
 0.0  0.0  0.333333  0.333333  0.333333  0.0
 0.5  0.0  0.0       0.0       0.0       0.5
 0.5  0.0  0.0       0.0       0.5       0.0
This Markov chain is aperiodic (i.e., the least common multiple of the periods of the recurrent sub-chains is one):
is_aperiodic(mc1)
Indeed, each of the sub-chains corresponding to the recurrent classes has period $1$, i.e., every recurrent state is aperiodic:
for recurrent_class in recurrent_classes(mc1)
sub_matrix = mc1.p[recurrent_class, recurrent_class]
d = period(MarkovChain(sub_matrix))
println("Period of the sub-chain")
prettyprint(sub_matrix)
println("\n = $d")
end
Period of the sub-chain
1.0
 = 1
Period of the sub-chain
0.0  1.0
0.5  0.5
 = 1
For each recurrent class $C$, there is a unique stationary distribution $\psi^C$
such that $\psi^C_i > 0$ for all $i \in C$ and $\psi^C_i = 0$ otherwise.
MarkovChain.stationary_distributions
returns
these unique stationary distributions for the recurrent classes.
Any stationary distribution is written as a convex combination of these distributions.
stationary_distributions(mc1)
2-element Array{Array{Float64,1},1}:
 [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
 [0.0, 0.333333, 0.0, 0.0, 0.666667, 0.0]
These are indeed stationary distributions:
stationary_distributions(mc1)[1]'* mc1.p
1×6 RowVector{Float64,Array{Float64,1}}: 1.0 0.0 0.0 0.0 0.0 0.0
stationary_distributions(mc1)[2]'* mc1.p
1×6 RowVector{Float64,Array{Float64,1}}: 0.0 0.333333 0.0 0.0 0.666667 0.0
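The claim that any convex combination of these distributions is again stationary can also be checked directly. Here is a small pure-Python sketch, independent of the Julia code in this notebook; the matrix P and the two distributions are hard-coded from the values above, and `is_stationary` is a little helper written for this check, not a library function:

```python
# The 6x6 stochastic matrix P from above (recurrent classes {1} and {2, 5}).
P = [[1,   0,   0,   0,   0,   0  ],
     [0,   0,   0,   0,   1,   0  ],
     [0,   0,   1/3, 1/3, 1/3, 0  ],
     [1/2, 0,   0,   0,   0,   1/2],
     [0,   1/2, 0,   0,   1/2, 0  ],
     [1/2, 0,   0,   1/2, 0,   0  ]]

psi1 = [1, 0, 0, 0, 0, 0]          # stationary distribution for class {1}
psi2 = [0, 1/3, 0, 0, 2/3, 0]      # stationary distribution for class {2, 5}

def is_stationary(psi, P, tol=1e-12):
    """Check psi' P == psi' componentwise, up to tol."""
    n = len(P)
    return all(abs(sum(psi[i] * P[i][j] for i in range(n)) - psi[j]) <= tol
               for j in range(n))

lam = 0.3  # any weight in [0, 1] works
mix = [lam * a + (1 - lam) * b for a, b in zip(psi1, psi2)]
print(is_stationary(mix, P))  # -> True
```

By contrast, a distribution that puts mass on the transient states, such as the uniform distribution, fails this check.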
Plot these distributions.
"""
Plot the given distribution.
"""
function draw_histogram(distribution;
title="", xlabel="", ylabel="", ylim=(0, 1),
show_legend=false, show_grid=false)
n = length(distribution)
p = bar(collect(1:n),
distribution,
xlim=(0.5, n+0.5),
ylim=ylim,
title=title,
xlabel=xlabel,
ylabel=ylabel,
xticks=1:n,
legend=show_legend,
grid=show_grid)
return p
end
draw_histogram
Stationary distributions for the recurrent classes:
titles = ["$recurrent_class"
for recurrent_class in recurrent_classes(mc1)]
ps = []
for (title, dist) in zip(titles, stationary_distributions(mc1))
push!(ps, draw_histogram(dist, title=title, xlabel="States"))
end
plot(ps..., layout=(1, 2))
Let us simulate our Markov chain mc1. The simulate method generates a sample path of length given by the first argument, ts_length, with an initial state as specified by an optional argument init; if not specified, the initial state is randomly drawn.
A sample path from state 1:
(Note: Transposing the output array here is just for visualization.)
simulate(mc1, 50, init=1)'
1×50 RowVector{Int64,Array{Int64,1}}: 1 1 1 1 1 1 1 1 1 1 1 1 1 … 1 1 1 1 1 1 1 1 1 1 1 1
As is clear from the transition matrix P, if the chain starts at state 1, it stays there forever; i.e., 1 is an absorbing state, a state that constitutes a singleton recurrent class.
Start with state 2:
simulate(mc1, 50, init=2)'
1×50 RowVector{Int64,Array{Int64,1}}: 2 5 2 5 5 5 2 5 2 5 5 2 5 … 5 2 5 2 5 5 2 5 2 5 2 5
You can observe that the chain stays in the recurrent class $\{2,5\}$ and visits states 2 and 5 with certain frequencies.
If init is not specified, the initial state is randomly chosen:
simulate(mc1, 50)'
1×50 RowVector{Int64,Array{Int64,1}}: 2 5 2 5 2 5 5 2 5 5 5 5 2 … 5 2 5 5 2 5 2 5 5 5 5 5
Now, let us compute the frequency distribution along a sample path, given by $$ \frac{1}{t} \sum_{\tau=0}^{t-1} \mathbf{1}\{X_{\tau} = s\} \quad (s \in S). $$
"""
Return the frequency distributions of visits over a sample path of mc
with an initial state init, evaluated at each length in ts.
"""
function time_series_dist(mc, ts::Array{Int}; init=rand(1:n_states(mc)))
t_max = maximum(ts)
ts_size = length(ts)
X = simulate(mc, t_max, init=init)
dists = Array{Float64}(n_states(mc), ts_size)
bins = 1:n_states(mc)+1
for (i, t) in enumerate(ts)
h = fit(Histogram, X[1:t], bins, closed=:left)
dists[:, i] = h.weights / t
end
return dists
end
function time_series_dist(mc, t::Int; init=rand(1:n_states(mc)))
return time_series_dist(mc, [t], init=init)[:, 1]
end
time_series_dist (generic function with 2 methods)
Here is a frequency distribution along a sample path of length 100, from initial state 2, which is a recurrent state:
time_series_dist(mc1, 100, init=2)
6-element Array{Float64,1}:
 0.0
 0.33
 0.0
 0.0
 0.67
 0.0
Length 10,000:
time_series_dist(mc1, 10^4, init=2)
6-element Array{Float64,1}:
 0.0
 0.3311
 0.0
 0.0
 0.6689
 0.0
The distribution becomes close to the stationary distribution (0, 1/3, 0, 0, 2/3, 0).
Plot the frequency distributions for a couple of different time lengths:
function plot_time_series_dists(mc, ts; init=rand(1:n_states(mc)), layout=(1,length(ts)))
dists = time_series_dist(mc, ts, init=init)
titles = ["t=$t" for t in ts]
ps = []
for (i, title) in enumerate(titles)
p = draw_histogram(dists[:, i], title=title, xlabel="States")
push!(ps, p)
end
plot(ps..., layout=layout)
end
plot_time_series_dists (generic function with 1 method)
init = 2
ts = [5, 10, 50, 100]
plot_time_series_dists(mc1, ts, init=init)
Start with state 3, which is a transient state:
init = 3
ts = [5, 10, 50, 100]
plot_time_series_dists(mc1, ts, init=init)
Run the above cell several times; you will observe that the limit distribution differs across sample paths. Sometimes the state is absorbed into the recurrent class $\{1\}$, while other times it is absorbed into the recurrent class $\{2,5\}$.
One sample path with init=3:
init = 3
ts = [5, 10, 50, 100]
plot_time_series_dists(mc1, ts, init=init)
Another sample path with init=3:
plot_time_series_dists(mc1, ts, init=init)
In fact, for almost every sample path of a finite Markov chain $\{X_t\}$, for some recurrent class $C$ we have $$ \frac{1}{t} \sum_{\tau=0}^{t-1} \mathbf{1}\{X_{\tau} = s\} \to \psi^C[s] \quad \text{as $t \to \infty$} $$ for all states $s$, where $\psi^C$ is the stationary distribution associated with the recurrent class $C$.
If the initial state $s_0$ is a recurrent state, then the recurrent class $C$ above is the one that contains $s_0$, while if it is a transient state, then the recurrent class to which the convergence occurs depends on the sample path.
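For a transient initial state, the probabilities of absorption into each recurrent class can be computed exactly from $P$: writing $Q$ for $P$ restricted to the transient states $\{3, 4, 6\}$ and $r$ for the one-step probabilities of jumping into $\{1\}$, the absorption probabilities $h$ into $\{1\}$ solve $(I - Q)h = r$. A pure-Python sketch (Q, r are hard-coded from the matrix P above, and `solve` is a small helper written here, not a library routine):

```python
# P restricted to the transient states 3, 4, 6 (in that order) ...
Q = [[1/3, 1/3, 0.0],
     [0.0, 0.0, 1/2],
     [0.0, 1/2, 0.0]]
# ... and one-step probabilities of jumping into the class {1}:
r = [0.0, 1/2, 1/2]

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

n = 3
A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] for i in range(n)]
h = solve(A, r)
print(h)  # h ~ [0.5, 1.0, 1.0]
```

So from state 3 the chain is absorbed into $\{1\}$ and into $\{2,5\}$ with probability $1/2$ each, while from states 4 and 6 absorption into $\{1\}$ is certain.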
Let us simulate with the remaining states, 4, 5, and 6.
Time series distributions for t=100:
inits = [4, 5, 6]
t = 100
ps = []
for init in inits
p = draw_histogram(
time_series_dist(mc1, t, init=init),
title="Initial state = $init",
xlabel="States")
push!(ps, p)
end
plot(ps..., layout=(1, 3))
Next, let us repeat the simulation many times (say, 10,000 times) and obtain the distribution of visits to each state at a given time period t. That is, we want to simulate the marginal distribution at time t.
function QuantEcon.simulate(mc, ts_length, num_reps::Int=10^4; init=1)
X = Array{Int}(ts_length, num_reps)
for i in 1:num_reps
X[:, i] = simulate(mc, ts_length, init=init)
end
return X
end
"""
Return the distribution of visits at each time in ts over num_reps
simulations of mc with an initial state init.
"""
function cross_sectional_dist(mc, ts::Array{Int}, num_reps=10^4; init=rand(1:n_states(mc)))
t_max = maximum(ts)
ts_size = length(ts)
dists = Array{Float64}(n_states(mc), ts_size)
X = simulate(mc, t_max+1, num_reps, init=init)
bins = 1:n_states(mc)+1
for (i, t) in enumerate(ts)
h = fit(Histogram, X[t, :], bins, closed=:left)
dists[:, i] = h.weights / num_reps
end
return dists
end
function cross_sectional_dist(mc, t::Int, num_reps=10^4; init=rand(1:n_states(mc)))
return cross_sectional_dist(mc, [t], num_reps, init=init)[:, 1]
end
cross_sectional_dist (generic function with 4 methods)
Start with state 2:
init = 2
t = 10
cross_sectional_dist(mc1, t, init=init)
6-element Array{Float64,1}:
 0.0
 0.3396
 0.0
 0.0
 0.6604
 0.0
t = 100
cross_sectional_dist(mc1, t, init=init)
6-element Array{Float64,1}:
 0.0
 0.3398
 0.0
 0.0
 0.6602
 0.0
The distribution is close to the stationary distribution (0, 1/3, 0, 0, 2/3, 0).
Plot the simulated marginal distribution at t for some values of t.
function plot_cross_sectional_dists(mc, ts, num_reps=10^4; init=1)
dists = cross_sectional_dist(mc, ts, num_reps, init=init)
titles = map(t -> "t=$t", ts)
ps = []
for (i, title) in enumerate(titles)
p = draw_histogram(dists[:, i], title=title, xlabel="States")
push!(ps, p)
end
plot(ps..., layout=(1, length(ts)))
end
plot_cross_sectional_dists (generic function with 2 methods)
init = 2
ts = [2, 3, 5, 10]
plot_cross_sectional_dists(mc1, ts, init=init)
Starting with the transient state 3:
init = 3
t = 10
cross_sectional_dist(mc1, t, init=init)
6-element Array{Float64,1}:
 0.4965
 0.1658
 0.0
 0.0027
 0.333
 0.002
t = 100
dist = cross_sectional_dist(mc1, t, init=init)
6-element Array{Float64,1}:
 0.5044
 0.1636
 0.0
 0.0
 0.332
 0.0
draw_histogram(dist,
title="Cross sectional distribution at t=$t with init=$init",
xlabel="States")
Observe that the distribution is close to a convex combination of the stationary distributions (1, 0, 0, 0, 0, 0) and (0, 1/3, 0, 0, 2/3, 0), which is itself a stationary distribution.
How the simulated marginal distribution evolves:
init = 3
ts = [2, 3, 5, 10]
plot_cross_sectional_dists(mc1, ts, init=init)
Since our Markov chain is aperiodic (i.e., every recurrent class is aperiodic), the marginal distribution at time $t$ converges as $t \to \infty$ to some stationary distribution, and the limit distribution depends on the initial state, according to the probabilities that the state is absorbed into the recurrent classes.
For initial states 4, 5, and 6:
inits = [4, 5, 6]
t = 10
ps = []
for init in inits
p = draw_histogram(cross_sectional_dist(mc1, t, init=init),
title="Initial state = $init",
xlabel="States")
push!(ps, p)
end
plot(ps..., layout=(1, length(inits)))
The marginal distributions at time $t$ are obtained by $P^t$.
ts = [10, 20, 30]
for t in ts
P_t = mc1.p^t
println("P^$t =")
prettyprint(P_t)
println()
end
P^10 =
1.0       0.0       0.0         0.0          0.0       0.0
0.0       0.333984  0.0         0.0          0.666016  0.0
0.498072  0.166792  1.69351e-5  0.000767702  0.3332    0.00115155
0.999023  0.0       0.0         0.000976563  0.0       0.0
0.0       0.333008  0.0         0.0          0.666992  0.0
0.999023  0.0       0.0         0.0          0.0       0.000976563

P^20 =
1.0       0.0       0.0          0.0         0.0       0.0
0.0       0.333334  0.0          0.0         0.666666  0.0
0.499998  0.166667  2.86797e-10  7.6271e-7   0.333333  1.14407e-6
0.999999  0.0       0.0          9.53674e-7  0.0       0.0
0.0       0.333333  0.0          0.0         0.666667  0.0
0.999999  0.0       0.0          0.0         0.0       9.53674e-7

P^30 =
1.0  0.0       0.0          0.0          0.0       0.0
0.0  0.333333  0.0          0.0          0.666667  0.0
0.5  0.166667  4.85694e-15  7.45054e-10  0.333333  1.11758e-9
1.0  0.0       0.0          9.31323e-10  0.0       0.0
0.0  0.333333  0.0          0.0          0.666667  0.0
1.0  0.0       0.0          0.0          0.0       9.31323e-10
In the canonical form:
Q = mc1.p[permutation, permutation]
println("Q =")
prettyprint(Q)
for t in ts
Q_t = Q^t
println("\nQ^$t =")
prettyprint(Q_t)
end
Q =
1.0  0.0  0.0       0.0       0.0       0.0
0.0  0.0  1.0       0.0       0.0       0.0
0.0  0.5  0.5       0.0       0.0       0.0
0.0  0.0  0.333333  0.333333  0.333333  0.0
0.5  0.0  0.0       0.0       0.0       0.5
0.5  0.0  0.0       0.0       0.5       0.0

Q^10 =
1.0       0.0       0.0       0.0         0.0          0.0
0.0       0.333984  0.666016  0.0         0.0          0.0
0.0       0.333008  0.666992  0.0         0.0          0.0
0.498072  0.166792  0.3332    1.69351e-5  0.000767702  0.00115155
0.999023  0.0       0.0       0.0         0.000976563  0.0
0.999023  0.0       0.0       0.0         0.0          0.000976563

Q^20 =
1.0       0.0       0.0       0.0          0.0         0.0
0.0       0.333334  0.666666  0.0          0.0         0.0
0.0       0.333333  0.666667  0.0          0.0         0.0
0.499998  0.166667  0.333333  2.86797e-10  7.6271e-7   1.14407e-6
0.999999  0.0       0.0       0.0          9.53674e-7  0.0
0.999999  0.0       0.0       0.0          0.0         9.53674e-7

Q^30 =
1.0  0.0       0.0       0.0          0.0          0.0
0.0  0.333333  0.666667  0.0          0.0          0.0
0.0  0.333333  0.666667  0.0          0.0          0.0
0.5  0.166667  0.333333  4.85694e-15  7.45054e-10  1.11758e-9
1.0  0.0       0.0       0.0          9.31323e-10  0.0
1.0  0.0       0.0       0.0          0.0          9.31323e-10
Observe that the first three rows, which correspond to the recurrent states, are close to the stationary distributions associated with the corresponding recurrent classes.
Consider the Markov chain given by the following stochastic matrix, taken from Exercise 9 (see also Exercise 11) in Jarvis and Shier (where the actual values of non-zero probabilities are not important):
P = zeros(10, 10)
P[1, 4] = 1
P[2, [1, 5]] = 1/2
P[3, 7] = 1
P[4, [2, 3, 8]] = 1/3
P[5, 4] = 1
P[6, 5] = 1
P[7, 4] = 1
P[8, [7, 9]] = 1/2
P[9, 10] = 1
P[10, 6] = 1
P
10×10 Array{Float64,2}:
 0.0  0.0       0.0       1.0  0.0  0.0  0.0  0.0       0.0  0.0
 0.5  0.0       0.0       0.0  0.5  0.0  0.0  0.0       0.0  0.0
 0.0  0.0       0.0       0.0  0.0  0.0  1.0  0.0       0.0  0.0
 0.0  0.333333  0.333333  0.0  0.0  0.0  0.0  0.333333  0.0  0.0
 0.0  0.0       0.0       1.0  0.0  0.0  0.0  0.0       0.0  0.0
 0.0  0.0       0.0       0.0  1.0  0.0  0.0  0.0       0.0  0.0
 0.0  0.0       0.0       1.0  0.0  0.0  0.0  0.0       0.0  0.0
 0.0  0.0       0.0       0.0  0.0  0.0  0.5  0.0       0.5  0.0
 0.0  0.0       0.0       0.0  0.0  0.0  0.0  0.0       0.0  1.0
 0.0  0.0       0.0       0.0  0.0  1.0  0.0  0.0       0.0  0.0
mc2 = MarkovChain(P)
Discrete Markov Chain stochastic matrix of type Array{Float64,2}: [0.0 0.0 … 0.0 0.0; 0.5 0.0 … 0.0 0.0; … ; 0.0 0.0 … 0.0 1.0; 0.0 0.0 … 0.0 0.0]
This Markov chain is irreducible:
is_irreducible(mc2)
This Markov chain is periodic:
is_aperiodic(mc2)
Its period, which we denote by $d$:
d = period(mc2)
Cyclic classes are:
Note: The function to identify the cyclic classes is implemented only in the Python version and is not available in the Julia version. The result below is taken from the paper cited above, [Graph-Theoretic Analysis of Finite Markov Chains](http://www.ces.clemson.edu/~shierd/Shier/markov.pdf).
cyclic_classes = [[1, 5, 7, 9], [4, 10], [2, 3, 6, 8]]
3-element Array{Array{Int64,1},1}:
 [1, 5, 7, 9]
 [4, 10]
 [2, 3, 6, 8]
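These classes can also be computed with a standard breadth-first-search argument: for a strongly connected transition graph, the period is the gcd of level[u] + 1 - level[v] over all edges (u, v), where level is the BFS depth from an arbitrary root, and the cyclic classes are the states grouped by level modulo d. A pure-Python sketch (the adjacency list is hard-coded from the matrix P above; state labels in the result are 1-based as in the notebook):

```python
from math import gcd
from collections import deque

def period_and_cyclic_classes(adj):
    """Period d of a strongly connected digraph and its cyclic classes,
    via BFS levels: d = gcd of (level[u] + 1 - level[v]) over all edges."""
    n = len(adj)
    level = [None] * n
    level[0] = 0
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if level[v] is None:
                level[v] = level[u] + 1
                queue.append(v)
    d = 0
    for u in range(n):
        for v in adj[u]:
            d = gcd(d, level[u] + 1 - level[v])
    classes = [[] for _ in range(d)]
    for s in range(n):
        classes[level[s] % d].append(s + 1)  # back to 1-based state labels
    return d, classes

# Transition graph of mc2 (0-based), read off the matrix P above:
adj = [[3], [0, 4], [6], [1, 2, 7], [3], [4], [3], [6, 8], [9], [5]]
d, classes = period_and_cyclic_classes(adj)
print(d, classes)  # -> 3 [[1, 5, 7, 9], [4, 10], [2, 3, 6, 8]]
```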
If a Markov chain is periodic with period $d \geq 2$, then its transition probability matrix is written in the form ("cyclic normal form") $$ \begin{pmatrix} 0 & P_1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & P_2 & 0 & \cdots & 0 \\ 0 & 0 & 0 & P_3 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & P_{d-1} \\ P_d & 0 & 0 & 0 & \cdots & 0 \end{pmatrix}. $$
Represent our Markov chain in cyclic normal form:
permutation = vcat(cyclic_classes...)
Q = mc2.p[permutation, permutation]
10×10 Array{Float64,2}:
 0.0  0.0  0.0  0.0  1.0  0.0  0.0       0.0       0.0  0.0
 0.0  0.0  0.0  0.0  1.0  0.0  0.0       0.0       0.0  0.0
 0.0  0.0  0.0  0.0  1.0  0.0  0.0       0.0       0.0  0.0
 0.0  0.0  0.0  0.0  0.0  1.0  0.0       0.0       0.0  0.0
 0.0  0.0  0.0  0.0  0.0  0.0  0.333333  0.333333  0.0  0.333333
 0.0  0.0  0.0  0.0  0.0  0.0  0.0       0.0       1.0  0.0
 0.5  0.5  0.0  0.0  0.0  0.0  0.0       0.0       0.0  0.0
 0.0  0.0  1.0  0.0  0.0  0.0  0.0       0.0       0.0  0.0
 0.0  1.0  0.0  0.0  0.0  0.0  0.0       0.0       0.0  0.0
 0.0  0.0  0.5  0.5  0.0  0.0  0.0       0.0       0.0  0.0
Re-define the Markov chain with the above matrix Q:
mc2 = MarkovChain(Q)
Discrete Markov Chain stochastic matrix of type Array{Float64,2}: [0.0 0.0 … 0.0 0.0; 0.0 0.0 … 0.0 0.0; … ; 0.0 1.0 … 0.0 0.0; 0.0 0.0 … 0.0 0.0]
Obtain the block components $P_1, \cdots, P_{d}$:
cyclic_classes = [[1, 2, 3, 4], [5, 6], [7, 8, 9, 10]]
3-element Array{Array{Int64,1},1}:
 [1, 2, 3, 4]
 [5, 6]
 [7, 8, 9, 10]
P_blocks = []
for i in 1:d
push!(P_blocks, mc2.p[cyclic_classes[(i-1)%d+1], :][:, cyclic_classes[i%d+1]])
println("P_$i =")
prettyprint(P_blocks[i])
println()
end
P_1 =
1.0  0.0
1.0  0.0
1.0  0.0
0.0  1.0

P_2 =
0.333333  0.333333  0.0  0.333333
0.0       0.0       1.0  0.0

P_3 =
0.5  0.5  0.0  0.0
0.0  0.0  1.0  0.0
0.0  1.0  0.0  0.0
0.0  0.0  0.5  0.5
$P^d$ is block diagonal:
P_power_d = mc2.p^d
prettyprint(P_power_d)
0.166667  0.166667  0.5  0.166667  0.0       0.0       0.0       0.0       0.0  0.0
0.166667  0.166667  0.5  0.166667  0.0       0.0       0.0       0.0       0.0  0.0
0.166667  0.166667  0.5  0.166667  0.0       0.0       0.0       0.0       0.0  0.0
0.0       1.0       0.0  0.0       0.0       0.0       0.0       0.0       0.0  0.0
0.0       0.0       0.0  0.0       0.833333  0.166667  0.0       0.0       0.0  0.0
0.0       0.0       0.0  0.0       1.0       0.0       0.0       0.0       0.0  0.0
0.0       0.0       0.0  0.0       0.0       0.0       0.333333  0.333333  0.0  0.333333
0.0       0.0       0.0  0.0       0.0       0.0       0.333333  0.333333  0.0  0.333333
0.0       0.0       0.0  0.0       0.0       0.0       0.333333  0.333333  0.0  0.333333
0.0       0.0       0.0  0.0       0.0       0.0       0.166667  0.166667  0.5  0.166667
P_power_d_blocks = []
ordinals = ["1st", "2nd", "3rd"]
for (i, ordinal) in enumerate(ordinals)
push!(P_power_d_blocks, P_power_d[cyclic_classes[i], :][:, cyclic_classes[i]])
println("$ordinal diagonal block of P^d =")
prettyprint(P_power_d_blocks[i])
println()
end
1st diagonal block of P^d =
0.166667  0.166667  0.5  0.166667
0.166667  0.166667  0.5  0.166667
0.166667  0.166667  0.5  0.166667
0.0       1.0       0.0  0.0

2nd diagonal block of P^d =
0.833333  0.166667
1.0       0.0

3rd diagonal block of P^d =
0.333333  0.333333  0.0  0.333333
0.333333  0.333333  0.0  0.333333
0.333333  0.333333  0.0  0.333333
0.166667  0.166667  0.5  0.166667
The $i$th diagonal block of $P^d$ equals $P_i P_{i+1} \cdots P_d P_1 \cdots P_{i-1}$:
products = []
for i in 1:d
R = P_blocks[i]
string = "P_$i"
for j in 1:d-1
R = R * P_blocks[(i+j-1)%d+1]
string *= " P_$((i+j-1)%d+1)"
end
push!(products, R)
println(string, " =")
prettyprint(R)
println()
end
P_1 P_2 P_3 =
0.166667  0.166667  0.5  0.166667
0.166667  0.166667  0.5  0.166667
0.166667  0.166667  0.5  0.166667
0.0       1.0       0.0  0.0

P_2 P_3 P_1 =
0.833333  0.166667
1.0       0.0

P_3 P_1 P_2 =
0.333333  0.333333  0.0  0.333333
0.333333  0.333333  0.0  0.333333
0.333333  0.333333  0.0  0.333333
0.166667  0.166667  0.5  0.166667
for (matrix0, matrix1) in zip(P_power_d_blocks, products)
println(matrix0 == matrix1)
end
true
true
true
The Markov chain mc2 has a unique stationary distribution, which we denote by $\psi$:
length(stationary_distributions(mc2))
psi = stationary_distributions(mc2)[1]
10-element Array{Float64,1}:
 0.047619
 0.0952381
 0.142857
 0.047619
 0.285714
 0.047619
 0.0952381
 0.0952381
 0.047619
 0.0952381
draw_histogram(psi,
title="Stationary distribution",
xlabel="States",
ylim=(0, 0.35))
Obtain the stationary distributions $\psi^1, \ldots, \psi^{d}$ each associated with the diagonal blocks of $P^d$:
psi_s = []
for i in 1:d
push!(psi_s, stationary_distributions(MarkovChain(P_power_d_blocks[i]))[1])
println("psi^$i =")
println(psi_s[i])
end
psi^1 =
[0.142857, 0.285714, 0.428571, 0.142857]
psi^2 =
[0.857143, 0.142857]
psi^3 =
[0.285714, 0.285714, 0.142857, 0.285714]
ps = []
for i in 1:d
psi_i_full_dim = zeros(n_states(mc2))
psi_i_full_dim[cyclic_classes[i]] = psi_s[i]
p = draw_histogram(psi_i_full_dim,
title="ψ^$i",
xlabel="States")
push!(ps, p)
end
plot(ps..., layout=(1, 3))
Verify that $\psi^{i+1} = \psi^i P_i$:
for i in 1:d
println("psi^$i P_$i =")
println(psi_s[i]' * P_blocks[i])
end
psi^1 P_1 =
[0.857143 0.142857]
psi^2 P_2 =
[0.285714 0.285714 0.142857 0.285714]
psi^3 P_3 =
[0.142857 0.285714 0.428571 0.142857]
Verify that $\psi = (\psi^1 + \cdots + \psi^d)/d$:
# Right hand side of the above identity
rhs = zeros(n_states(mc2))
for i in 1:d
rhs[cyclic_classes[i]] = psi_s[i]
end
rhs /= d
rhs
10-element Array{Float64,1}:
 0.047619
 0.0952381
 0.142857
 0.047619
 0.285714
 0.047619
 0.0952381
 0.0952381
 0.047619
 0.0952381
maximum(abs.(psi - rhs))
The maximum difference is negligibly small, meaning the right hand side is very close to $\psi$.
Since the Markov chain under consideration is periodic, the marginal distribution does not converge; it changes periodically.
Let us compute the powers of the transition probability matrix (in cyclic normal form):
Print $P^1, P^2, \ldots, P^{d+1}$:
for i in 1:d+1
println("P^$i =")
prettyprint(mc2.p^i)
println()
end
P^1 =
0.0  0.0  0.0  0.0  1.0  0.0  0.0       0.0       0.0  0.0
0.0  0.0  0.0  0.0  1.0  0.0  0.0       0.0       0.0  0.0
0.0  0.0  0.0  0.0  1.0  0.0  0.0       0.0       0.0  0.0
0.0  0.0  0.0  0.0  0.0  1.0  0.0       0.0       0.0  0.0
0.0  0.0  0.0  0.0  0.0  0.0  0.333333  0.333333  0.0  0.333333
0.0  0.0  0.0  0.0  0.0  0.0  0.0       0.0       1.0  0.0
0.5  0.5  0.0  0.0  0.0  0.0  0.0       0.0       0.0  0.0
0.0  0.0  1.0  0.0  0.0  0.0  0.0       0.0       0.0  0.0
0.0  1.0  0.0  0.0  0.0  0.0  0.0       0.0       0.0  0.0
0.0  0.0  0.5  0.5  0.0  0.0  0.0       0.0       0.0  0.0

P^2 =
0.0       0.0       0.0  0.0       0.0  0.0  0.333333  0.333333  0.0  0.333333
0.0       0.0       0.0  0.0       0.0  0.0  0.333333  0.333333  0.0  0.333333
0.0       0.0       0.0  0.0       0.0  0.0  0.333333  0.333333  0.0  0.333333
0.0       0.0       0.0  0.0       0.0  0.0  0.0       0.0       1.0  0.0
0.166667  0.166667  0.5  0.166667  0.0  0.0  0.0       0.0       0.0  0.0
0.0       1.0       0.0  0.0       0.0  0.0  0.0       0.0       0.0  0.0
0.0       0.0       0.0  0.0       1.0  0.0  0.0       0.0       0.0  0.0
0.0       0.0       0.0  0.0       1.0  0.0  0.0       0.0       0.0  0.0
0.0       0.0       0.0  0.0       1.0  0.0  0.0       0.0       0.0  0.0
0.0       0.0       0.0  0.0       0.5  0.5  0.0       0.0       0.0  0.0

P^3 =
0.166667  0.166667  0.5  0.166667  0.0       0.0       0.0       0.0       0.0  0.0
0.166667  0.166667  0.5  0.166667  0.0       0.0       0.0       0.0       0.0  0.0
0.166667  0.166667  0.5  0.166667  0.0       0.0       0.0       0.0       0.0  0.0
0.0       1.0       0.0  0.0       0.0       0.0       0.0       0.0       0.0  0.0
0.0       0.0       0.0  0.0       0.833333  0.166667  0.0       0.0       0.0  0.0
0.0       0.0       0.0  0.0       1.0       0.0       0.0       0.0       0.0  0.0
0.0       0.0       0.0  0.0       0.0       0.0       0.333333  0.333333  0.0  0.333333
0.0       0.0       0.0  0.0       0.0       0.0       0.333333  0.333333  0.0  0.333333
0.0       0.0       0.0  0.0       0.0       0.0       0.333333  0.333333  0.0  0.333333
0.0       0.0       0.0  0.0       0.0       0.0       0.166667  0.166667  0.5  0.166667

P^4 =
0.0        0.0       0.0   0.0        0.833333  0.166667  0.0       0.0       0.0       0.0
0.0        0.0       0.0   0.0        0.833333  0.166667  0.0       0.0       0.0       0.0
0.0        0.0       0.0   0.0        0.833333  0.166667  0.0       0.0       0.0       0.0
0.0        0.0       0.0   0.0        1.0       0.0       0.0       0.0       0.0       0.0
0.0        0.0       0.0   0.0        0.0       0.0       0.277778  0.277778  0.166667  0.277778
0.0        0.0       0.0   0.0        0.0       0.0       0.333333  0.333333  0.0       0.333333
0.166667   0.166667  0.5   0.166667   0.0       0.0       0.0       0.0       0.0       0.0
0.166667   0.166667  0.5   0.166667   0.0       0.0       0.0       0.0       0.0       0.0
0.166667   0.166667  0.5   0.166667   0.0       0.0       0.0       0.0       0.0       0.0
0.0833333  0.583333  0.25  0.0833333  0.0       0.0       0.0       0.0       0.0       0.0
Print $P^{2d}$, $P^{4d}$, and $P^{6d}$:
for i in [k*d for k in [2, 4, 6]]
println("P^$i =")
prettyprint(mc2.p^i)
println()
end
P^6 =
0.138889  0.305556  0.416667  0.138889  0.0       0.0       0.0       0.0       0.0        0.0
0.138889  0.305556  0.416667  0.138889  0.0       0.0       0.0       0.0       0.0        0.0
0.138889  0.305556  0.416667  0.138889  0.0       0.0       0.0       0.0       0.0        0.0
0.166667  0.166667  0.5       0.166667  0.0       0.0       0.0       0.0       0.0        0.0
0.0       0.0       0.0       0.0       0.861111  0.138889  0.0       0.0       0.0        0.0
0.0       0.0       0.0       0.0       0.833333  0.166667  0.0       0.0       0.0        0.0
0.0       0.0       0.0       0.0       0.0       0.0       0.277778  0.277778  0.166667   0.277778
0.0       0.0       0.0       0.0       0.0       0.0       0.277778  0.277778  0.166667   0.277778
0.0       0.0       0.0       0.0       0.0       0.0       0.277778  0.277778  0.166667   0.277778
0.0       0.0       0.0       0.0       0.0       0.0       0.305556  0.305556  0.0833333  0.305556

P^12 =
0.142747  0.286265  0.428241  0.142747  0.0       0.0       0.0       0.0       0.0       0.0
0.142747  0.286265  0.428241  0.142747  0.0       0.0       0.0       0.0       0.0       0.0
0.142747  0.286265  0.428241  0.142747  0.0       0.0       0.0       0.0       0.0       0.0
0.143519  0.282407  0.430556  0.143519  0.0       0.0       0.0       0.0       0.0       0.0
0.0       0.0       0.0       0.0       0.857253  0.142747  0.0       0.0       0.0       0.0
0.0       0.0       0.0       0.0       0.856481  0.143519  0.0       0.0       0.0       0.0
0.0       0.0       0.0       0.0       0.0       0.0       0.285494  0.285494  0.143519  0.285494
0.0       0.0       0.0       0.0       0.0       0.0       0.285494  0.285494  0.143519  0.285494
0.0       0.0       0.0       0.0       0.0       0.0       0.285494  0.285494  0.143519  0.285494
0.0       0.0       0.0       0.0       0.0       0.0       0.286265  0.286265  0.141204  0.286265

P^18 =
0.142854  0.28573   0.428562  0.142854  0.0       0.0       0.0       0.0       0.0       0.0
0.142854  0.28573   0.428562  0.142854  0.0       0.0       0.0       0.0       0.0       0.0
0.142854  0.28573   0.428562  0.142854  0.0       0.0       0.0       0.0       0.0       0.0
0.142876  0.285622  0.428627  0.142876  0.0       0.0       0.0       0.0       0.0       0.0
0.0       0.0       0.0       0.0       0.857146  0.142854  0.0       0.0       0.0       0.0
0.0       0.0       0.0       0.0       0.857124  0.142876  0.0       0.0       0.0       0.0
0.0       0.0       0.0       0.0       0.0       0.0       0.285708  0.285708  0.142876  0.285708
0.0       0.0       0.0       0.0       0.0       0.0       0.285708  0.285708  0.142876  0.285708
0.0       0.0       0.0       0.0       0.0       0.0       0.285708  0.285708  0.142876  0.285708
0.0       0.0       0.0       0.0       0.0       0.0       0.28573   0.28573   0.142811  0.28573
$P^{kd}$ converges as $k \to \infty$ to a matrix that contains $\psi^1, \ldots, \psi^d$.
Print $P^{kd+1}, \ldots, P^{kd+d}$ with $k = 10$ for example:
for i in 10*d+1:11*d
println("P^$i =")
prettyprint(mc2.p^i)
println()
end
P^31 =
0.0       0.0       0.0       0.0       0.857143  0.142857  0.0       0.0       0.0       0.0
0.0       0.0       0.0       0.0       0.857143  0.142857  0.0       0.0       0.0       0.0
0.0       0.0       0.0       0.0       0.857143  0.142857  0.0       0.0       0.0       0.0
0.0       0.0       0.0       0.0       0.857143  0.142857  0.0       0.0       0.0       0.0
0.0       0.0       0.0       0.0       0.0       0.0       0.285714  0.285714  0.142857  0.285714
0.0       0.0       0.0       0.0       0.0       0.0       0.285714  0.285714  0.142857  0.285714
0.142857  0.285714  0.428571  0.142857  0.0       0.0       0.0       0.0       0.0       0.0
0.142857  0.285714  0.428571  0.142857  0.0       0.0       0.0       0.0       0.0       0.0
0.142857  0.285714  0.428571  0.142857  0.0       0.0       0.0       0.0       0.0       0.0
0.142857  0.285714  0.428571  0.142857  0.0       0.0       0.0       0.0       0.0       0.0

P^32 =
0.0       0.0       0.0       0.0       0.0       0.0       0.285714  0.285714  0.142857  0.285714
0.0       0.0       0.0       0.0       0.0       0.0       0.285714  0.285714  0.142857  0.285714
0.0       0.0       0.0       0.0       0.0       0.0       0.285714  0.285714  0.142857  0.285714
0.0       0.0       0.0       0.0       0.0       0.0       0.285714  0.285714  0.142857  0.285714
0.142857  0.285714  0.428571  0.142857  0.0       0.0       0.0       0.0       0.0       0.0
0.142857  0.285714  0.428571  0.142857  0.0       0.0       0.0       0.0       0.0       0.0
0.0       0.0       0.0       0.0       0.857143  0.142857  0.0       0.0       0.0       0.0
0.0       0.0       0.0       0.0       0.857143  0.142857  0.0       0.0       0.0       0.0
0.0       0.0       0.0       0.0       0.857143  0.142857  0.0       0.0       0.0       0.0
0.0       0.0       0.0       0.0       0.857143  0.142857  0.0       0.0       0.0       0.0

P^33 =
0.142857  0.285714  0.428571  0.142857  0.0       0.0       0.0       0.0       0.0       0.0
0.142857  0.285714  0.428571  0.142857  0.0       0.0       0.0       0.0       0.0       0.0
0.142857  0.285714  0.428571  0.142857  0.0       0.0       0.0       0.0       0.0       0.0
0.142857  0.285714  0.428571  0.142857  0.0       0.0       0.0       0.0       0.0       0.0
0.0       0.0       0.0       0.0       0.857143  0.142857  0.0       0.0       0.0       0.0
0.0       0.0       0.0       0.0       0.857143  0.142857  0.0       0.0       0.0       0.0
0.0       0.0       0.0       0.0       0.0       0.0       0.285714  0.285714  0.142857  0.285714
0.0       0.0       0.0       0.0       0.0       0.0       0.285714  0.285714  0.142857  0.285714
0.0       0.0       0.0       0.0       0.0       0.0       0.285714  0.285714  0.142857  0.285714
0.0       0.0       0.0       0.0       0.0       0.0       0.285714  0.285714  0.142857  0.285714
But $P^i$ itself does not converge.
Plot the frequency distribution of visits to the states along a sample path starting at state 1:
init = 1
dist = time_series_dist(mc2, 10^4, init=init)
10-element Array{Float64,1}:
 0.0449
 0.1008
 0.1372
 0.0505
 0.2828
 0.0505
 0.0951
 0.0908
 0.0505
 0.0969
draw_histogram(dist,
title="Time series distribution with init=$init",
xlabel="States", ylim=(0, 0.35))
Observe that the distribution is close to the (unique) stationary distribution $\psi$.
psi
10-element Array{Float64,1}:
 0.047619
 0.0952381
 0.142857
 0.047619
 0.285714
 0.047619
 0.0952381
 0.0952381
 0.047619
 0.0952381
bar(psi, legend=false, xlabel="States", title="ψ")
Next, plot the simulated marginal distributions at $T = 10d+1, \ldots, 11d, 11d+1, \ldots, 12d$ with initial state 1:
init = 1
k = 10
ts = [k*d + i for i in 1:2*d]  # t = 10d+1, ..., 12d
num_reps = 10^2
dists = cross_sectional_dist(mc2, ts, num_reps, init=init)
ps = []
for (i, t) in enumerate(ts)
p = draw_histogram(dists[:, i],
title="t = $t")
push!(ps, p)
end
plot(ps..., layout=(2, d))
Compare these with the rows of $P^{10d+1}, \ldots, P^{10d+d}$.
Consider the Markov chain given by the following stochastic matrix $P^{\varepsilon}$, parameterized by $\varepsilon$:
function P_epsilon(eps, p=0.5)
P = [1-(p+eps) p eps;
p 1-(p+eps) eps;
eps eps 1-2*eps]
return P
end
P_epsilon (generic function with 2 methods)
If $\varepsilon = 0$, then the Markov chain is reducible into two recurrent classes, [1, 2] and [3]:
P_epsilon(0)
3×3 Array{Float64,2}:
 0.5  0.5  0.0
 0.5  0.5  0.0
 0.0  0.0  1.0
recurrent_classes(MarkovChain(P_epsilon(0)))
2-element Array{Array{Int64,1},1}:
 [1, 2]
 [3]
If $\varepsilon > 0$ but small, the chain is irreducible, but transition within each of the subsets [1, 2] and [3] is much more likely than transition between these sets.
P_epsilon(0.001)
3×3 Array{Float64,2}:
 0.499  0.5    0.001
 0.5    0.499  0.001
 0.001  0.001  0.998
recurrent_classes(MarkovChain(P_epsilon(0.001)))
1-element Array{Array{Int64,1},1}:
 [1, 2, 3]
Analytically, since $P^{\varepsilon}$ is doubly stochastic, the unique stationary distribution of the chain with $\varepsilon > 0$ is (1/3, 1/3, 1/3), independent of the value of $\varepsilon$.
However, for such matrices with small values of $\varepsilon > 0$, general purpose eigenvalue solvers are numerically unstable.
For example, if we use Base.LinAlg.eig to compute the eigenvector that corresponds to the dominant (i.e., largest in magnitude) eigenvalue:
epsilons = [10.0^(-i) for i in 11:17]
for eps in epsilons
println("epsilon = $eps")
w, v = eig(P_epsilon(eps)')
i = indmax(w)
println(v[:, i]/sum(v[:, i]))
end
epsilon = 1.0e-11
[0.333333, 0.333333, 0.333333]
epsilon = 1.0e-12
[0.333316, 0.333316, 0.333368]
epsilon = 1.0e-13
[0.333304, 0.333304, 0.333393]
epsilon = 1.0e-14
[0.331651, 0.331651, 0.336699]
epsilon = 1.0e-15
[0.297638, 0.297638, 0.404725]
epsilon = 1.0e-16
[0.5, 0.5, -0.0]
epsilon = 1.0e-17
[0.5, 0.5, -0.0]
The same applies to Base.LinAlg.eigs:
epsilons = [10.0^(-i) for i in 11:17]
for eps in epsilons
println("epsilon = $eps")
w, v = eigs(P_epsilon(eps)', nev=2)
i = indmax(w)
println(v[:, i]/sum(v[:, i]))
end
epsilon = 1.0e-11
[0.333334, 0.333334, 0.333332]
epsilon = 1.0e-12
[0.333333, 0.333333, 0.333333]
epsilon = 1.0e-13
[0.333309, 0.333309, 0.333383]
epsilon = 1.0e-14
[0.333185, 0.333185, 0.33363]
epsilon = 1.0e-15
[0.332257, 0.332257, 0.335485]
epsilon = 1.0e-16
[0.300527, 0.300527, 0.398946]
epsilon = 1.0e-17
[-0.852598, -0.852598, 2.7052]
The output becomes farther from the actual stationary distribution (1/3, 1/3, 1/3) as $\varepsilon$ becomes smaller.
MarkovChain in QuantEcon employs the "GTH algorithm," a numerically stable variant of Gaussian elimination specialized for Markov chains.
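To see why this is stable, here is a minimal pure-Python sketch of the GTH idea (an illustration of the algorithm, not the QuantEcon implementation): at each elimination step the pivot is computed as an off-diagonal row sum instead of as $1 - A_{kk}$; the two are mathematically equal for a stochastic matrix, but the row sum involves no subtraction and hence no catastrophic cancellation:

```python
def gth_stationary(P):
    """Stationary distribution of an irreducible stochastic matrix via the
    GTH algorithm: Gaussian elimination whose pivots are off-diagonal row
    sums rather than 1 - A[k][k], so no subtraction ever occurs."""
    n = len(P)
    A = [row[:] for row in P]  # work on a copy
    for k in range(n - 1):
        scale = sum(A[k][j] for j in range(k + 1, n))  # pivot, no subtraction
        for i in range(k + 1, n):
            A[i][k] /= scale
            for j in range(k + 1, n):
                A[i][j] += A[i][k] * A[k][j]
    # Back substitution, then normalize.
    x = [0.0] * n
    x[n - 1] = 1.0
    for k in range(n - 2, -1, -1):
        x[k] = sum(x[i] * A[i][k] for i in range(k + 1, n))
    s = sum(x)
    return [v / s for v in x]

eps = 1e-100  # far below machine epsilon, yet the result stays exact
P = [[0.5 - eps, 0.5, eps], [0.5, 0.5 - eps, eps], [eps, eps, 1 - 2 * eps]]
print(gth_stationary(P))
# -> [0.3333333333333333, 0.3333333333333333, 0.3333333333333333]
```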
epsilons = [10.0^(-i) for i in 12:17]
push!(epsilons, 1e-100)
for eps in epsilons
println("epsilon = $eps")
println(stationary_distributions(MarkovChain(P_epsilon(eps)))[1])
end
epsilon = 1.0e-12
[0.333333, 0.333333, 0.333333]
epsilon = 1.0e-13
[0.333333, 0.333333, 0.333333]
epsilon = 1.0e-14
[0.333333, 0.333333, 0.333333]
epsilon = 1.0e-15
[0.333333, 0.333333, 0.333333]
epsilon = 1.0e-16
[0.333333, 0.333333, 0.333333]
epsilon = 1.0e-17
[0.333333, 0.333333, 0.333333]
epsilon = 1.0e-100
[0.333333, 0.333333, 0.333333]
It succeeds in obtaining the correct stationary distribution for any value of $\varepsilon$.