This notebook is part of a computational appendix that accompanies the paper
MATLAB, Python, Julia: What to Choose in Economics?
Coleman, Lyon, Maliar, and Maliar (2017)
In order to run the code in this notebook, you will need to install and configure a few Python packages. We recommend downloading Anaconda and/or following the instructions on quantecon.org. Once your Python installation is up and running, there are a few additional packages you will need. To install them, run the following commands in your terminal
pip install interpolation
pip install quantecon
import numpy as np
import scipy.linalg as la
import scipy.optimize as opt
import time
import quantecon as qe
from collections import namedtuple
from interpolation.complete_poly import (CompletePolynomial,
n_complete, complete_polynomial,
complete_polynomial_der,
_complete_poly_impl,
_complete_poly_impl_vec,
_complete_poly_der_impl,
_complete_poly_der_impl_vec)
from numba import jit, vectorize
This section gives a short description of the commonly used stochastic Neoclassical growth model.
There is a single infinitely-lived representative agent who consumes and saves using capital. The consumer discounts the future with factor $\beta$ and derives utility from only consumption. Additionally, saved capital will depreciate at $\delta$.
The consumer has access to a Cobb-Douglas technology which uses capital saved from the previous period to produce and is subject to stochastic productivity shocks.
Productivity shocks follow an AR(1) in logs.
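Concretely, the law of motion $\log z_{t+1} = \rho \log z_t + \sigma \varepsilon_{t+1}$, with $\varepsilon_{t+1} \sim N(0, 1)$, can be stepped forward in levels. A minimal standalone sketch (the values $\rho = 0.95$ and $\sigma = 0.01$ match the defaults used later in the notebook):

```python
import numpy as np

def ar1_log_step(z, rho, sigma, eps):
    "One step of the log-AR(1) productivity process: log z' = rho*log z + sigma*eps"
    return z**rho * np.exp(sigma * eps)

# With no shock, z = 1 is the deterministic steady state of the process
z = 1.0
for _ in range(5):
    z = ar1_log_step(z, rho=0.95, sigma=0.01, eps=0.0)
# z remains at 1.0
```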
The agent's problem can be written recursively using the following Bellman equation
\begin{align} V(k_t, z_t) &= \max_{k_{t+1}} u(c_t) + \beta E \left[ V(k_{t+1}, z_{t+1}) \right] \\ &\text{subject to } \\ c_t &= z_t f(k_t) + (1 - \delta) k_t - k_{t+1} \\ \log z_{t+1} &= \rho \log z_t + \sigma \varepsilon_{t+1} \end{align}

We begin by defining a namedtuple that contains the parameters of our model. This is useful for passing the parameters to functions that are just-in-time (JIT) compiled by numba.
#
# Create a named tuple type that we can pass into the jitted functions
# so that we don't have to pass parameters one by one
#
Params = namedtuple("Params", ["A", "alpha", "beta", "delta", "gamma",
"rho", "sigma"])
@jit(nopython=True)
def param_unpack(params):
"Unpack parameters from the Params type"
out = (params.A, params.alpha, params.beta, params.delta,
params.gamma, params.rho, params.sigma)
return out
We will then define various helper functions to ensure that we don't repeat ourselves and that the inner functions can be JIT compiled.
#
# Helper functions to make sure things are jitted
#
@vectorize(nopython=True)
def u(c, gamma):
"CRRA utility function"
return -1e10 if c < 1e-10 else (c**(1-gamma) - 1.0)/(1-gamma)
@vectorize(nopython=True)
def du(c, gamma):
"Derivative of CRRA utility function"
return 1e10 if c < 1e-10 else c**(-gamma)
@vectorize(nopython=True)
def duinv(u, gamma):
"Inverse of the derivative of the CRRA utility function"
return u**(-1.0 / gamma)
@vectorize(nopython=True)
def f(k, z, A, alpha):
"C-D production function"
return A * z * k**alpha
@vectorize(nopython=True)
def df(k, z, A, alpha):
"Derivative of C-D production function"
return alpha*A*z*k**(alpha - 1.0)
@vectorize(nopython=True)
def expendables_t(k, z, A, alpha, delta):
"Budget constraint"
return (1-delta)*k + f(k, z, A, alpha)
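As a quick sanity check on these primitives, plain-NumPy versions (redefined here so the check does not require numba or the helpers above) should satisfy two identities: `duinv` inverts `du`, and `du` matches a finite-difference derivative of `u`:

```python
import numpy as np

def u_plain(c, gamma):
    "CRRA utility (interior region only)"
    return (c**(1 - gamma) - 1.0) / (1 - gamma)

def du_plain(c, gamma):
    "Marginal utility u'(c)"
    return c**(-gamma)

def duinv_plain(x, gamma):
    "Inverse of marginal utility"
    return x**(-1.0 / gamma)

gamma, c = 2.0, 1.3
# duinv inverts du
assert abs(duinv_plain(du_plain(c, gamma), gamma) - c) < 1e-12
# du matches a centered finite difference of u
h = 1e-6
fd = (u_plain(c + h, gamma) - u_plain(c - h, gamma)) / (2 * h)
assert abs(fd - du_plain(c, gamma)) < 1e-6
```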
This code block contains other functions that are useful for computing the policy using the envelope condition (to be discussed later), simulating, and computing Euler errors.
@jit(nopython=True)
def env_cond_kp(temp, params, degree, v_coeffs, kt, zt):
# Unpack parameters
A, alpha, beta, delta, gamma, rho, sigma = param_unpack(params)
# Compute derivative of VF wrt k
_complete_poly_der_impl_vec(np.array([kt, zt]), degree, 0, temp)
c = duinv(np.dot(temp, v_coeffs) / (1.0-delta+df(kt, zt, A, alpha)), gamma)
return expendables_t(kt, zt, A, alpha, delta) - c
@jit(nopython=True)
def jit_simulate_ncgm(params, degree, v_coeffs, T, nburn, shocks):
"Simulates economy using envelope condition as policy rule"
# Unpack parameters
A, alpha, beta, delta, gamma, rho, sigma = param_unpack(params)
# Allocate space for output
ksim = np.empty(T+nburn)
zsim = np.empty(T+nburn)
ksim[0], zsim[0] = 1.0, 1.0
# Allocate space for temporary vector to fill with complete polynomials
temp = np.empty(n_complete(2, degree))
# Simulate
for t in range(1, T+nburn):
# Evaluate policy for today given yesterday's state
kp = env_cond_kp(temp, params, degree, v_coeffs, ksim[t-1], zsim[t-1])
# Draw new z and update k using policy from above
zsim[t] = zsim[t-1]**rho * np.exp(sigma*shocks[t])
ksim[t] = kp
return ksim[nburn:], zsim[nburn:]
@jit(nopython=True)
def jit_ee(params, degree, v_coeffs, nodes, weights, ks, zs):
# Unpack parameters
A, alpha, beta, delta, gamma, rho, sigma = param_unpack(params)
# Allocate space for temporary vector to fill with complete polynomials
temp = np.empty(n_complete(2, degree))
T = ks.size
Qn = weights.size
# Allocate space to store euler errors
ee = np.empty(T)
# Iterate over all ks and zs
for t in range(T):
# Current states
k, z = ks[t], zs[t]
# Compute decision for kp and implied c
k1 = env_cond_kp(temp, params, degree, v_coeffs, k, z)
c = expendables_t(k, z, A, alpha, delta) - k1
# Compute euler error for period t
lhs = du(c, gamma)
rhs = 0.0
for i in range(Qn):
# Get productivity tomorrow
z1 = z**rho * np.exp(nodes[i])
# Compute decision for kpp and implied c
k2 = env_cond_kp(temp, params, degree, v_coeffs, k1, z1)
c1 = expendables_t(k1, z1, A, alpha, delta) - k2
rhs = rhs + weights[i]*du(c1, gamma)*(1-delta+df(k1, z1, A, alpha))
ee[t] = np.abs(1.0 - beta*rhs/lhs)
return ee
We now define a class that contains the model parameters and grids. This again helps us keep all of the relevant information in one place and simplifies passing it to other functions.
class NeoclassicalGrowth(object):
"""
The stochastic Neoclassical Growth model. Its parameters include
* alpha: Capital share in output
* beta: discount factor
* delta: depreciation rate
* gamma: risk aversion
* rho: persistence of the log productivity level
* sigma: standard deviation of shocks to log productivity
"""
def __init__(self, alpha=0.36, beta=0.99, delta=0.02, gamma=2.0,
rho=0.95, sigma=0.01, kmin=0.9, kmax=1.1, nk=10,
zmin=0.9, zmax=1.1, nz=10, Qn=5):
# Household parameters
self.beta, self.gamma = beta, gamma
# Firm/technology parameters
self.alpha, self.delta, self.rho, self.sigma = alpha, delta, rho, sigma
# Make A such that CE steady state k is roughly 1
self.A = (1.0/beta - (1-delta)) / alpha
# Create t grids
self.kgrid = np.linspace(kmin, kmax, nk)
self.zgrid = np.linspace(zmin, zmax, nz)
self.grid = qe.gridtools.cartesian([self.kgrid, self.zgrid])
k, z = self.grid[:, 0], self.grid[:, 1]
# Create t+1 grids
self.ns, self.Qn = nz*nk, Qn
eps_nodes, weights = qe.quad.qnwnorm(Qn, 0.0, sigma**2)
self.weights = weights
self.z1 = z[:, None]**rho * np.exp(eps_nodes[None, :])
def _unpack_params(self):
out = (self.A, self.alpha, self.beta, self.delta,
self.gamma, self.rho, self.sigma)
return out
def _unpack_grids(self):
out = (self.kgrid, self.zgrid, self.grid, self.z1, self.weights)
return out
In this notebook, we describe the following solution methods:

* Iteration on a fixed policy (IterateOnPolicy), used to initialize and to polish the other methods
* Value function iteration (VFI)
* Value function iteration with an endogenous grid (VFI_EGM)
* Value function iteration using the envelope condition (VFI_ECM)
* Iteration on the derivative of the value function (dVFI_ECM)
* Policy function iteration (PFI)
* Policy function iteration using the envelope condition (PFI_ECM)
* Euler equation methods (EulEq)

Each of these solution methods has a very similar structure that follows a few basic steps:

1. Update the value function and policy on the grid (this step is method specific).
2. Fit new approximation coefficients to the updated value and policy by least squares.
3. Compute the distance between successive iterates and repeat until it falls below a tolerance.
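The shared structure can be sketched as a generic fixed-point loop (a simplified standalone sketch, not the class implementation below; `update` and `fit` stand in for the method-specific steps):

```python
import numpy as np

def generic_solve(update, fit, x0, coeffs0, tol=1e-6, maxiter=2500):
    """Generic structure shared by the solution methods: a method-specific
    update step followed by refitting coefficients, until convergence."""
    x, coeffs = x0, coeffs0
    for it in range(maxiter):
        x_new = update(coeffs)                 # method-specific update on the grid
        dist = np.max(np.abs(1.0 - x_new / x))  # relative change between iterates
        x = x_new
        coeffs = fit(x_new)                    # refit approximation coefficients
        if dist < tol:
            break
    return x, coeffs

# Toy example: "update" is a contraction toward 2.0, "fit" is the identity
x, _ = generic_solve(lambda c: 0.5 * c + 1.0, lambda x_new: x_new,
                     x0=np.array([10.0]), coeffs0=np.array([10.0]))
```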
In order to reduce the amount of repeated code and keep the exposition as clean as possible (the notebook is plenty long as is...), we will define a class for each solution method that inherits various properties from a general parent class. The parent class contains methods that apply to all of the solution methods, such as a general solve method, computing expectations, and simulating. This implementation may feel strange at first if one isn't familiar with object-oriented programming, but these concepts can be powerful when properly used.
class GeneralSolution:
"""
This is a general solution method. We define this, so that we can
sub-class it and define specific update methods for each particular
solution method
"""
def __init__(self, ncgm, degree, prev_sol=None):
# Save model and approximation degree
self.ncgm, self.degree = ncgm, degree
# Unpack some info from ncgm
A, alpha, beta, delta, gamma, rho, sigma = self._unpack_params()
grid = self.ncgm.grid
k = grid[:, 0]
z = grid[:, 1]
# Use parameter values from model to create a namedtuple with
# parameters saved inside
self.params = Params(A, alpha, beta, delta, gamma, rho, sigma)
self.Phi = complete_polynomial(grid.T, degree).T
self.dPhi = complete_polynomial_der(grid.T, degree, 0).T
# Update to fill initial value and policy matrices
# If we give it another solution type then use it to
# generate values and policies
if issubclass(type(prev_sol), GeneralSolution):
oldPhi = complete_polynomial(ncgm.grid.T, prev_sol.degree).T
self.VF = oldPhi @ prev_sol.v_coeffs
self.KP = oldPhi @ prev_sol.k_coeffs
# If we give it a tuple then assume it is (policy, value) pair
elif type(prev_sol) is tuple:
self.KP = prev_sol[0]
self.VF = prev_sol[1]
# Otherwise guess a constant value function and a policy
# of roughly steady state
else:
# VF is 0 everywhere
self.VF = np.zeros(ncgm.ns)
# Roughly ss policy
c_pol = f(k, z, A, alpha) * (A-delta)/A
self.KP = expendables_t(k, z, A, alpha, delta) - c_pol
# Coefficients based on guesses
self.v_coeffs = la.lstsq(self.Phi, self.VF)[0]
self.k_coeffs = la.lstsq(self.Phi, self.KP)[0]
def _unpack_params(self):
return self.ncgm._unpack_params()
def build_VF(self):
"""
Using the current coefficients, this builds the value function
for all states
"""
VF = self.Phi @ self.v_coeffs
return VF
def build_KP(self):
"""
Using the current coefficients, this builds the policy function
for all states
"""
KP = self.Phi @ self.k_coeffs
return KP
def update_v(self, new_v_coeffs, dampen=1.0):
"""
Updates the coefficients for the value function
"""
self.v_coeffs = (1-dampen)*self.v_coeffs + dampen*new_v_coeffs
return None
def update_k(self, new_k_coeffs, dampen=1.0):
"""
Updates the coefficients for the policy function
"""
self.k_coeffs = (1-dampen)*self.k_coeffs + dampen*new_k_coeffs
return None
def update(self):
"""
Given the current state of everything in solution, update the
value and policy coefficients
"""
emsg = "The update method is implemented in solution specific classes"
emsg += "\nand cannot be called from `GeneralSolution`"
raise ValueError(emsg)
def compute_coefficients(self, kp, VF):
"""
Given a policy and value return corresponding coefficients
"""
new_k_coeffs = la.lstsq(self.Phi, kp)[0]
new_v_coeffs = la.lstsq(self.Phi, VF)[0]
return new_k_coeffs, new_v_coeffs
def compute_EV_scalar(self, istate, kp):
# Unpack parameters
A, alpha, beta, delta, gamma, rho, sigma = self._unpack_params()
# All possible exogenous states tomorrow
z1 = self.ncgm.z1[istate, :]
phi = complete_polynomial(np.vstack([np.ones(self.ncgm.Qn)*kp, z1]),
self.degree).T
val = self.ncgm.weights@(phi@self.v_coeffs)
return val
def compute_dEV_scalar(self, istate, kp):
# Unpack parameters
A, alpha, beta, delta, gamma, rho, sigma = self._unpack_params()
# All possible exogenous states tomorrow
z1 = self.ncgm.z1[istate, :]
phi = complete_polynomial_der(np.vstack([np.ones(self.ncgm.Qn)*kp, z1]),
self.degree, 0).T
val = self.ncgm.weights@(phi@self.v_coeffs)
return val
def compute_EV(self, kp=None):
"""
Compute the expected value
"""
# Unpack parameters
A, alpha, beta, delta, gamma, rho, sigma = self._unpack_params()
grid = self.ncgm.grid
ns, Qn = self.ncgm.ns, self.ncgm.Qn
# Use policy to compute kp and c
if kp is None:
kp = self.Phi @ self.k_coeffs
# Evaluate E[V_{t+1}]
Vtp1 = np.empty((Qn, grid.shape[0]))
for iztp1 in range(Qn):
grid_tp1 = np.vstack([kp, self.ncgm.z1[:, iztp1]])
Phi_tp1 = complete_polynomial(grid_tp1, self.degree).T
Vtp1[iztp1, :] = Phi_tp1 @ self.v_coeffs
EV = self.ncgm.weights @ Vtp1
return EV
def compute_dEV(self, kp=None):
"""
Compute the derivative of the expected value
"""
# Unpack parameters
A, alpha, beta, delta, gamma, rho, sigma = self._unpack_params()
grid = self.ncgm.grid
ns, Qn = self.ncgm.ns, self.ncgm.Qn
# Use policy to compute kp and c
if kp is None:
kp = self.Phi @ self.k_coeffs
# Evaluate E[V_{t+1}]
dVtp1 = np.empty((Qn, grid.shape[0]))
for iztp1 in range(Qn):
grid_tp1 = np.vstack([kp, self.ncgm.z1[:, iztp1]])
dPhi_tp1 = complete_polynomial_der(grid_tp1, self.degree, 0).T
dVtp1[iztp1, :] = dPhi_tp1 @ self.v_coeffs
dEV = self.ncgm.weights @ dVtp1
return dEV
def envelope_policy(self):
"""
Applies the envelope condition to compute the policy for
k_{t+1} at every point on the grid
"""
# Unpack parameters
A, alpha, beta, delta, gamma, rho, sigma = self._unpack_params()
grid = self.ncgm.grid
k, z = grid[:, 0], grid[:, 1]
dV = self.dPhi@self.v_coeffs
# Compute the consumption
temp = dV / (1 - delta + df(k, z, A, alpha))
c = duinv(temp, gamma)
return expendables_t(k, z, A, alpha, delta) - c
def compute_distance(self, kp, VF):
"""
Computes distance between policy functions
"""
return np.max(np.abs(1.0 - kp/self.KP))
def solve(self, tol=1e-6, maxiter=2500, verbose=False, nskipprint=25):
# Iterate until convergence
dist, it = 10.0, 0
while (dist>tol) and (it<maxiter):
# Run solution specific update code
kp, VF = self.update()
# Compute new policy and value coeffs
new_k_coeffs, new_v_coeffs = self.compute_coefficients(kp, VF)
# Update distance and iterations
dist = self.compute_distance(kp, VF)
self.KP, self.VF = kp, VF
it += 1
if verbose and it%nskipprint == 0:
print(it, dist)
# Update all coefficients
self.update_k(new_k_coeffs)
self.update_v(new_v_coeffs)
# After finishing iteration, iterate to convergence using policy
if not isinstance(self, IterateOnPolicy):
sol_iop = IterateOnPolicy(self.ncgm, self.degree, self)
kp, VF = sol_iop.solve(tol=1e-10)
# Save final versions of everything
self.KP, self.VF = kp, VF
new_k_coeffs, new_v_coeffs = sol_iop.compute_coefficients(kp, VF)
self.update_k(new_k_coeffs)
self.update_v(new_v_coeffs)
return self.KP, self.VF
def simulate(self, T=10000, nburn=200, shocks=None, seed=42):
"""
Simulates the neoclassical growth model with policy function
given by self.KP. Simulates for `T` periods and discards the
first `nburn` observations
"""
if shocks is None:
np.random.seed(seed)
shocks = np.random.randn(T+nburn)
return jit_simulate_ncgm(self.params, self.degree, self.v_coeffs,
T, nburn, shocks)
def ee_residuals(self, ksim=None, zsim=None, Qn=10, seed=42):
"""
Computes the Euler Equation residuals of a simulated capital and
productivity levels (ksim and zsim) and uses Qn nodes for computing
the expectation
"""
if (ksim is None) or (zsim is None):
ksim, zsim = self.simulate(T=10000, nburn=200, seed=seed)
nodes, weights = qe.quad.qnwnorm(Qn, 0.0, self.ncgm.sigma**2)
ee = jit_ee(self.params, self.degree, self.v_coeffs,
nodes, weights, ksim, zsim)
return np.log10(np.mean(ee)), np.log10(np.max(ee))
This isn't one of the methods described above, but it is used as an element of a few of our methods (and also as a way to get a first guess at the value function). This method takes an initial policy function, $\bar{k}(k_t, z_t)$, as given, and then, without changing the policy, iterates until the value function has converged.

Thus the "update section" of the algorithm in this instance simply computes the $c_t$ implied by the fixed policy and sets $V(k_t, z_t) = u(c_t) + \beta E\left[V(k_{t+1}, z_{t+1})\right]$.

We override two of the methods from GeneralSolution:

* compute_distance, because when we are iterating to convergence on the value function we want to check the distance using the value function rather than the policy function.
* compute_coefficients, because we don't need to update the policy function's coefficients.

The update method just repeatedly applies a particular policy function to update the value function.
class IterateOnPolicy(GeneralSolution):
"""
Subclass of the general solution method. The update method for this
class simply computes the fixed point of the value function given
a specific policy
"""
def compute_distance(self, kp, VF):
"""
Computes distance between policy functions. When we are
iterating on a specific policy, we would like to compute
distances by the difference between VFs
"""
return np.max(np.abs(1.0 - VF/self.VF))
def compute_coefficients(self, kp, VF):
"""
Given a policy and value return corresponding coefficients.
When we are iterating on a specific policy, we don't want to
update the policy coefficients.
"""
new_v_coeffs = la.lstsq(self.Phi, VF)[0]
return self.k_coeffs, new_v_coeffs
def update(self):
# Unpack parameters
A, alpha, beta, delta, gamma, rho, sigma = self._unpack_params()
grid = self.ncgm.grid
# Get the policy and update value function
c = expendables_t(grid[:, 0], grid[:, 1], A, alpha, delta) - self.KP
# Update the value function
VF = u(c, gamma) + beta*self.compute_EV(self.KP)
return self.KP, VF
This is one of the first solution methods a graduate student in economics typically learns.
In this solution method, one takes as given a value function, $V(k_t, z_t)$, and then solves for the optimal policy given the value function.
The update section takes the form: at each grid point, choose $k_{t+1}$ to satisfy the first-order condition $u'(c_t) = \beta E\left[\frac{\partial V(k_{t+1}, z_{t+1})}{\partial k_{t+1}}\right]$, then set $V(k_t, z_t) = u(c_t) + \beta E\left[V(k_{t+1}, z_{t+1})\right]$.
class VFI(GeneralSolution):
"""
Updates the coefficients and value functions using the VFI
method
"""
def update(self):
"""
Updates the coefficients and value functions using the VFI
method
"""
# Unpack parameters
A, alpha, beta, delta, gamma, rho, sigma = self._unpack_params()
grid = self.ncgm.grid
n_state = grid.shape[0]
# Get the policy and update it
kp = np.empty(n_state)
VF = np.empty(n_state)
for i_s in range(n_state):
# Pull out current vals
k, z = grid[i_s, :]
# Compute possible expendables
expend_t = expendables_t(k, z, A, alpha, delta)
# First-order condition as a function of kp; we find its root with brentq
_f = lambda kp: (du(expend_t-kp, gamma) -
beta*self.compute_dEV_scalar(i_s, kp))
_kp = opt.brentq(_f, 0.25, expend_t - 1e-2, rtol=1e-12)
kp[i_s] = _kp
VF[i_s] = u(expend_t-_kp, gamma)+beta*self.compute_EV_scalar(i_s, _kp)
return kp, VF
This method (the endogenous grid method) was introduced by Chris Carroll. The key to this method is that the grid of points being used to approximate is over $(k_{t+1}, z_{t})$ instead of $(k_t, z_t)$. The insightful piece of this algorithm is that the transformation allows one to write a closed form for the consumption function, $c^*(k_{t+1}, z_t) = u'^{-1} \left( \beta E \left[ V_1(k_{t+1}, z_{t+1}) \right] \right)$.
Then, for a given $(k_{t+1}, z_{t})$, the update section recovers the $k_t$ consistent with the budget constraint and sets $V = u(c^*) + \beta E\left[V(k_{t+1}, z_{t+1})\right]$.
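The only numerical work left in this update is inverting the budget constraint to find the $k_t$ consistent with a given $(c_t, k_{t+1})$ pair. A standalone sketch using hand-rolled bisection (the parameter values are illustrative, not the model's calibration):

```python
def expendables(k, z, A, alpha, delta):
    "Resources available at the start of the period: (1-delta)k + A z k^alpha"
    return (1 - delta) * k + A * z * k**alpha

def solve_kt(c, kp, z, A, alpha, delta, lo=0.1, hi=5.0, tol=1e-12):
    "Bisect on k_t so that expendables(k_t) = c + k_{t+1}"
    f = lambda k: expendables(k, z, A, alpha, delta) - c - kp
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:   # root lies in the lower half
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round trip: start from k_t = 1, split resources into (c, kp), recover k_t
A, alpha, delta, z = 1.0, 0.36, 0.02, 1.0
total = expendables(1.0, z, A, alpha, delta)
c, kp = 0.6 * total, 0.4 * total
k_rec = solve_kt(c, kp, z, A, alpha, delta)
```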
class VFI_EGM(GeneralSolution):
def update(self):
"""
Updates the coefficients and value functions using the VFI_EGM
method
"""
# Unpack parameters
A, alpha, beta, delta, gamma, rho, sigma = self._unpack_params()
ns = self.ncgm.ns
grid = self.ncgm.grid
# Use the grid as capital tomorrow instead of capital today
k1 = grid[:, 0]
z = grid[:, 1]
dEV = self.compute_dEV(kp=k1)
c = duinv(beta*dEV, gamma)
# Now find corresponding capital today such that kp is optimal
kt = np.empty(ns)
for i in range(ns):
ct = c[i]
ktp1 = k1[i]
zt = z[i]
obj = lambda k: expendables_t(k, zt, A, alpha, delta) - ct - ktp1
kt[i] = opt.bisect(obj, 0.25, 2.0)
# Compute value today
EV = self.compute_EV(kp=k1)
V = u(c, gamma) + beta*EV
# Get implied coefficients and evaluate kp and VF
phi = complete_polynomial(np.vstack([kt, z]), self.degree).T
temp_k_coeffs = la.lstsq(phi, k1)[0]
temp_v_coeffs = la.lstsq(phi, V)[0]
kp = self.Phi@temp_k_coeffs
VF = self.Phi@temp_v_coeffs
return kp, VF
Very similar to the previous method. The insight of this algorithm is that, since we are already approximating the value function and can evaluate its derivative, we can skip the numerical optimization step of the update method and compute the policy directly using the envelope condition (hence the name).
The envelope condition says:
$$c^*(k_t, z_t) = u'^{-1} \left( \frac{\partial V(k_t, z_t)}{\partial k_t} \left(1 - \delta + z_t f'(k_t)\right)^{-1} \right)$$

so

$$k^*(k_t, z_t) = z_t f(k_t) + (1-\delta)k_t - c^*(k_t, z_t)$$

We defined functions which compute the policy using the envelope condition in the helper function section and inside the GeneralSolution class.

The update method is then very similar to other value-iteration style methods, but avoids the numerical solver.
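As a small numeric illustration of the envelope condition (the value-function derivative below is a made-up number, not model output): given $\partial V / \partial k$ and the gross return, consumption follows in closed form, and plugging it back into $u'$ recovers the derivative:

```python
def df_cd(k, z, A, alpha):
    "Marginal product of capital for the Cobb-Douglas technology"
    return alpha * A * z * k**(alpha - 1.0)

gamma, delta = 2.0, 0.02
A, alpha = 1.0, 0.36
k, z = 1.0, 1.0
dV = 1.5                                 # hypothetical dV/dk at (k, z)
R = 1 - delta + df_cd(k, z, A, alpha)    # gross return on capital
c = (dV / R)**(-1.0 / gamma)             # c* = u'^{-1}( dV / R )
# Consistency: u'(c) * R recovers the original derivative
residual = c**(-gamma) * R - dV
```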
class VFI_ECM(GeneralSolution):
"""
This class implements the Envelope Condition Method for Value
Function Iteration.
"""
def update(self):
"""
Updates the coefficients and value functions using the VFI_ECM
method
"""
# Unpack parameters
A, alpha, beta, delta, gamma, rho, sigma = self._unpack_params()
grid = self.ncgm.grid
# Get the policy and update it
kp = self.envelope_policy()
c = expendables_t(grid[:, 0], grid[:, 1], A, alpha, delta) - kp
# Update the value function
VF = u(c, gamma) + beta*self.compute_EV(kp)
return kp, VF
This method uses the same insight as "Envelope Condition Value Function Iteration," but, rather than iterate directly on the value function, it iterates on the derivative of the value function. The update steps are: compute the policy from the envelope condition, then update the derivative via $V_1(k_t, z_t) = \beta \left(1 - \delta + z_t f'(k_t)\right) E\left[V_1(k_{t+1}, z_{t+1})\right]$.
Once it has converged, we use the implied policy rule and iterate to convergence using the "iterate to convergence (given policy)" method.
Here we override the compute_coefficients
method to ensure that we are using the derivative of the basis matrices to update the coefficients.
class dVFI_ECM(GeneralSolution):
def compute_coefficients(self, kp, dVF):
"""
Given a policy and derivative of the value function
return corresponding coefficients
"""
new_k_coeffs = la.lstsq(self.Phi, kp)[0]
new_v_coeffs = la.lstsq(self.dPhi, dVF)[0]
return new_k_coeffs, new_v_coeffs
def update(self):
# Unpack parameters
A, alpha, beta, delta, gamma, rho, sigma = self._unpack_params()
grid = self.ncgm.grid
# Get the policy and update it
kp = self.envelope_policy()
c = expendables_t(grid[:, 0], grid[:, 1], A, alpha, delta) - kp
dEV = self.compute_dEV(kp=kp)
dV = (1-delta+df(grid[:, 0], grid[:, 1], A, alpha))*beta*dEV
return kp, dV
Policy function iteration differs from value function iteration in that it starts with a policy function, then updates the value function, and finally finds the new optimal policy function. Given a policy $c(k_t, z_t)$, for each pair $(k_t, z_t)$ we solve the first-order condition for $k_{t+1}$ and then, holding that policy fixed, iterate the value function to convergence.
class PFI(GeneralSolution):
def update(self):
"""
Updates the coefficients and value functions using the PFI
method
"""
# Unpack parameters
A, alpha, beta, delta, gamma, rho, sigma = self._unpack_params()
ns = self.ncgm.ns
grid = self.ncgm.grid
# Update policy and use new policy to iterate to convergence
kp = np.empty(ns)
for i_s in range(ns):
# Pull out current vals
k, z = grid[i_s, :]
# Compute possible expendables
expend_t = expendables_t(k, z, A, alpha, delta)
# First-order condition as a function of kp; we find its root with brentq
_f = lambda kp: (du(expend_t-kp, gamma) -
beta*self.compute_dEV_scalar(i_s, kp))
_kp = opt.brentq(_f, 0.25, expend_t - 1e-2, rtol=1e-12)
kp[i_s] = _kp
sol_itp = IterateOnPolicy(self.ncgm, self.degree, (kp, self.VF))
_, VF = sol_itp.solve(maxiter=5000)
return kp, VF
Similar to policy function iteration, but, rather than numerically solve for new policies, it uses the envelope condition to directly compute them. Given a starting policy $c(k_t, z_t)$, for each pair $(k_t, z_t)$ we compute $k_{t+1}$ directly from the envelope condition and then, holding that policy fixed, iterate the value function to convergence.
class PFI_ECM(GeneralSolution):
"""
Updates the coefficients and value functions using the PFI_ECM
method
"""
def update(self):
"""
Updates the coefficients and value functions using the PFI_ECM
method
"""
# Unpack parameters
A, alpha, beta, delta, gamma, rho, sigma = self._unpack_params()
ns = self.ncgm.ns
grid = self.ncgm.grid
# Update policy and use new policy to iterate to convergence
kp = self.envelope_policy()
sol_itp = IterateOnPolicy(self.ncgm, self.degree, (kp, self.VF))
_, VF = sol_itp.solve(maxiter=5000)
return kp, VF
Euler equation methods operate directly on the Euler equation: $u'(c_t) = \beta E \left[ u'(c_{t+1}) \left(1 - \delta + z_{t+1} f'(k_{t+1})\right) \right]$.
Given an initial policy $c(k_t, z_t)$, for each grid point $(k_t, z_t)$ we evaluate the right-hand side of the Euler equation, invert $u'$ to obtain an updated $c_t$, and recover $k_{t+1}$ from the budget constraint.
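As a consistency check on the Euler equation: at the deterministic steady state $c_t = c_{t+1}$, so it reduces to $\beta\left(1 - \delta + z f'(k)\right) = 1$; with the normalization $A = (1/\beta - (1-\delta))/\alpha$ used by `NeoclassicalGrowth`, this holds exactly at $k = z = 1$:

```python
beta, delta, alpha = 0.99, 0.02, 0.36
A = (1.0 / beta - (1 - delta)) / alpha   # normalization used by NeoclassicalGrowth
k = 1.0
gross_return = 1 - delta + alpha * A * k**(alpha - 1.0)   # 1 - delta + z f'(k) at z = 1
# At the steady state the Euler equation reduces to beta * gross_return = 1
residual = beta * gross_return - 1.0
```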
class EulEq(GeneralSolution):
def update(self):
# Unpack parameters
A, alpha, beta, delta, gamma, rho, sigma = self._unpack_params()
ns, Qn = self.ncgm.ns, self.ncgm.Qn
ncgm, grid = self.ncgm, self.ncgm.grid
k, z = grid[:, 0], grid[:, 1]
# Evaluate RHS of EE
temp = np.empty((n_complete(2, self.degree), ns))
rhs_ee = 0.0
for iz1 in range(Qn):
# Build t+1 decisions
zp = ncgm.z1[:, iz1]
_complete_poly_impl(np.vstack([self.KP, zp]), self.degree, temp)
# Compute k_{t+2} and the corresponding c_{t+1}
kpp = self.k_coeffs@temp
cp = expendables_t(self.KP, zp, A, alpha, delta) - kpp
foo = beta*ncgm.weights[iz1]*(1.0-delta+df(self.KP, zp, A, alpha))
foo *= du(cp, gamma)
rhs_ee += foo
# Determine c_t using euler equation
c = duinv(rhs_ee, gamma)
kp = expendables_t(k, z, A, alpha, delta) - c
VF = u(c, gamma) + beta*self.compute_EV(kp=kp)
return kp, VF
We have already defined functions which simulate and compute Euler errors for these solution methods.
We can now run a horse race to compare the methods in terms of both accuracy and speed.
ncgm = NeoclassicalGrowth()
# First guess
vp = IterateOnPolicy(ncgm, 2)
vp.solve(tol=1e-9)
np.random.seed(61089)
shocks = np.random.randn(10200)
for sol_method in [VFI, VFI_ECM, VFI_EGM, PFI, PFI_ECM, dVFI_ECM, EulEq]:
# Set prev sol as iterate on policy
new_sol = vp
print("Solution Method: {}\n".format(sol_method))
for d in range(2, 6):
new_sol = sol_method(ncgm, d, new_sol)
ts = time.time()
new_sol.solve(tol=1e-9, verbose=False, nskipprint=25)
time_took = time.time() - ts
print("\tDegree {} took {}\n".format(d, time_took))
# Compute Euler Errors
ks, zs = new_sol.simulate(10000, 200, shocks=shocks)
mean_ee, max_ee = new_sol.ee_residuals(ks, zs, Qn=10)
print("\t\tMean and Max EE are {} & {}\n".format(mean_ee, max_ee))
Output (times in seconds; Euler errors reported as $\log_{10}$ of the mean and maximum absolute error). Accuracy is essentially identical across methods at a given degree; the methods differ mainly in speed:

| Method | Degree | Time (s) | Mean EE | Max EE |
|---|---|---|---|---|
| VFI | 2 | 26.39 | -3.83 | -2.76 |
| VFI | 3 | 25.39 | -4.97 | -3.32 |
| VFI | 4 | 23.62 | -6.06 | -4.03 |
| VFI | 5 | 19.28 | -7.00 | -4.70 |
| VFI_ECM | 2 | 1.16 | -3.83 | -2.76 |
| VFI_ECM | 3 | 0.74 | -4.97 | -3.32 |
| VFI_ECM | 4 | 0.89 | -6.06 | -4.03 |
| VFI_ECM | 5 | 0.47 | -7.00 | -4.70 |
| VFI_EGM | 2 | 21.11 | -3.83 | -2.76 |
| VFI_EGM | 3 | 15.01 | -4.97 | -3.32 |
| VFI_EGM | 4 | 13.40 | -6.06 | -4.03 |
| VFI_EGM | 5 | 10.94 | -7.00 | -4.70 |
| PFI | 2 | 27.04 | -3.83 | -2.76 |
| PFI | 3 | 24.62 | -4.97 | -3.32 |
| PFI | 4 | 31.33 | -6.06 | -4.03 |
| PFI | 5 | 22.67 | -7.00 | -4.70 |
| PFI_ECM | 2 | 1.59 | -3.83 | -2.76 |
| PFI_ECM | 3 | 1.26 | -4.97 | -3.32 |
| PFI_ECM | 4 | 1.50 | -6.06 | -4.03 |
| PFI_ECM | 5 | 0.96 | -7.00 | -4.70 |
| dVFI_ECM | 2 | 2.59 | -3.83 | -2.76 |
| dVFI_ECM | 3 | 2.90 | -4.97 | -3.32 |
| dVFI_ECM | 4 | 2.81 | -6.06 | -4.03 |
| dVFI_ECM | 5 | 3.37 | -7.00 | -4.70 |
| EulEq | 2 | 1.53 | -3.82 | -2.75 |
| EulEq | 3 | 1.01 | -4.97 | -3.32 |
| EulEq | 4 | 0.84 | -6.06 | -4.03 |
| EulEq | 5 | 0.44 | -7.00 | -4.70 |