Projection Methods - Parametrized Expectations Algorithm

Author: Mohammed Aït Lahcen, University of Basel

In this notebook, I use the Parametrized Expectations Algorithm proposed by den Haan & Marcet (1990) to solve a slightly modified version of the discrete time neo-classical growth model. This notebook is heavily inspired by an assignment from the LSE macro tools summer school taught by Professor Wouter den Haan. The original code from the assignment was written in Matlab. This is a Python version of the code with some improvements.

In [1]:
import numpy as np
from scipy import optimize as opt
from numba import jit

# Graphics imports
import matplotlib.pyplot as plt
import seaborn as sns  # Better quality figures
%matplotlib notebook
# Displays figures inside the notebook
from matplotlib import rcParams
rcParams['figure.figsize'] = (12, 8)  # Sets the size of the figures in the notebook

from matplotlib import cm
from mpl_toolkits.mplot3d.axes3d import Axes3D


In this notebook, we look at a modified version of the neoclassical growth model where agents can invest in zero coupon bonds with maturity $j$ which can be bought for $q_{j,t}$ and pay off 1 unit in period $t + j$.

The first order conditions are as follows: $$ \begin{align} c_t^{-\nu} &= \mathbb{E}_t \left[ \beta c_{t+1}^{-\nu}(\alpha z_{t+1} k_{t+1}^{\alpha-1} +1 - \delta)\right] \\ q_{j,t} c_t^{-\nu} &= \mathbb{E}_t \left[ \beta q_{j-1,t+1} c_{t+1}^{-\nu} \right] \\ c_t + k_t + \sum_{j=1}^J q_{j,t}b_{j,t} &= z_t k_{t-1}^\alpha + (1 - \delta) k_{t-1} + \sum_{j=0}^{J-1} q_{j,t}b_{j,t-1} \\ \ln(z_{t}) &= \rho \ln(z_{t-1}) + \varepsilon_{t}, \quad \varepsilon_t \sim \mathcal{N}(0,\sigma^2) \end{align} $$


In [2]:
alpha   = 0.33         # Capital share
beta    = 0.99         # Time discount factor
nu      = 3.0          # Risk aversion parameter
delta   = 0.025        # Depreciation rate

sigma   = 0.02         # Standard Deviation for log noise in technology shock
rho     = 0.95         # Persistence of log technology shock

T       = 2000         # Total length of simulation
T1      = 501          # First observation used
In [3]:
n_herm  = 5            # number of Hermite nodes for numerical integration
porder_k =  1           # Order of Polynomial for k
porder_z =  1           # Order of Polynomial for z

maxiter = 2000         # Maximum Iterations to find coefficients of polynomial
psitol  = 1e-6         # Convergence criterion
lrate   = 0.7          # parameter to control updating of polynomial coefficients
                        # lrate = 1 means complete updating 
                        # lrate < 1 means partial updating

Stochastic PEA

As its name suggests, PEA is a projection method that parametrizes the expectation term of the stochastic Euler equation. In contrast to projection methods that approximate the policy functions directly, the idea behind PEA is to approximate the whole conditional expectation with a (polynomial) function of the state variables.

In the stochastic (simulation) version of PEA, the conditional expectation is treated as a regression function: given a guess for the polynomial coefficients, we simulate a long time series of the economy, compute the realized values of the term inside the expectation, and run a nonlinear least squares regression of these realizations on the polynomial in the simulated states. Iterating on this simulate-regress-update scheme until the coefficients stop changing delivers an approximation that is consistent with the model's rational expectations.

In this model, bonds are in zero net supply, which means that asset prices do not affect the real side of the economy. Hence, we can solve for the consumption policy function alone and use it to recover the capital level from the budget constraint.

We use the following approximation to the conditional expectation: $$ \exp \left(\psi_0^0 + \psi_z^0 \ln z_t + \psi_k^0 \ln k_{t-1} + \psi_{zk}^0 \ln z_t \ln k_{t-1} \right) $$

The idea is to start with a guess for the expectation term using initial values for the coefficients of the polynomial and then keep iterating on the simulate-regress-update steps until the coefficients converge.

Computing the steady state values

We compute the steady state values to be used to initialize the calculations:

In [4]:
k_ss = ((1-beta+delta*beta) / (alpha*beta))**(1/(alpha-1))
c_ss = k_ss**alpha - delta*k_ss
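As a quick sanity check, at the deterministic steady state ($z = 1$) the Euler equation reduces to $\beta(\alpha k_{ss}^{\alpha-1} + 1 - \delta) = 1$. The following standalone snippet (parameter values repeated from above so it runs on its own) verifies this:

```python
# Sanity check: at the deterministic steady state (z = 1), the Euler
# equation c^(-nu) = beta * c^(-nu) * (alpha * k^(alpha-1) + 1 - delta)
# reduces to beta * (alpha * k_ss^(alpha-1) + 1 - delta) = 1.
alpha, beta, delta = 0.33, 0.99, 0.025

k_ss = ((1 - beta + delta * beta) / (alpha * beta))**(1 / (alpha - 1))
c_ss = k_ss**alpha - delta * k_ss

euler_gap = beta * (alpha * k_ss**(alpha - 1) + 1 - delta) - 1.0
print(k_ss, c_ss, euler_gap)  # euler_gap should be 0 up to floating point
```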

Simulating the exogenous process

In [5]:
np.random.seed(20110629)  # Set the random number generator seed

epsi = sigma*np.random.randn(T)  # i.i.d. N(0, sigma^2) innovations; 1-D so epsi[t] is a scalar

lnz = np.zeros(T)

lnz[0] = 0

for t in range(1,T):
    lnz[t]= rho*lnz[t-1] + epsi[t]
z = np.exp(lnz)
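As a quick check on the simulated driving process (a standalone sketch, not part of the original assignment), the AR(1) in logs has unconditional standard deviation $\sigma/\sqrt{1-\rho^2}$, which a sufficiently long simulated path should approximately reproduce:

```python
import numpy as np

# The AR(1) process ln z_t = rho * ln z_{t-1} + eps_t has unconditional
# standard deviation sigma / sqrt(1 - rho^2); a long simulated path
# should come close to this value.
rho, sigma, T = 0.95, 0.02, 200_000

rng = np.random.default_rng(0)
lnz = np.zeros(T)
for t in range(1, T):
    lnz[t] = rho * lnz[t - 1] + sigma * rng.standard_normal()

sd_theory = sigma / np.sqrt(1 - rho**2)   # ~0.064
sd_sample = lnz.std()
print(sd_theory, sd_sample)
```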

Building the approximating Hermite polynomial

Below, I build a function that takes the order of the polynomial in a state variable as well as the vector of that state variable to generate the (probabilists') Hermite polynomial terms $H_j(x)$: $$ P_n(x) = \sum_{j=0}^n a_j H_j(x) $$ with $$ \begin{align} H_0(x) &= 1 \\ H_1(x) &= x \\ H_{j+1}(x) &= x H_{j}(x) - j H_{j-1}(x) \quad \textit{for} \,\, j \geq 1 \end{align} $$

To build a polynomial in the two variables $k$ and $z$, we multiply the Hermite polynomial of each using tensor product (outer product).

In [148]:
def hermterms(porder,x):
    hx = np.ones((x.shape[0],porder+1))  # Initialize with ones so that H_0(x) = 1
    # the attribute shape[n-1] gives the size along dimension n
    if porder >= 1:
        hx[:,1] = x  # second term H_1(x) = x
    for j in range(2,porder+1):  # empty range when porder < 2
        hx[:,j] = x * hx[:,j-1] - (j-1) * hx[:,j-2]  # recurrence: H_j(x) = x H_{j-1}(x) - (j-1) H_{j-2}(x)
    return hx

def hermpoly(porder,x):
    # Takes a couple (k_j,z_j) and returns the corresponding Hermite polynomial  
    # Extracting order of polynomial in k and z
    porder_k = porder[0]
    porder_z = porder[1]
    # Generate the Hermite polynomial terms in k and z respectively
    hk = hermterms(porder_k,x[:,0])
    hz = hermterms(porder_z,x[:,1])
    # Build the Hermite polynomial matrix
    # Number of terms in polynomial = (order in k + 1) * (order in z + 1)
    polymat = np.zeros((x.shape[0],(porder_k + 1) * (porder_z + 1) ))
    # Do a tensor product (outer product) of the two vectors
    for jk in range(0,porder_k+1):
        for jz in range(0,porder_z+1):
            polymat[:,jk*(porder_z+1)+jz] = hk[:,jk] * hz[:,jz]
    # Alternative way using tensor product formula and hstacking twice
    # polymat = np.hstack(np.hstack(np.tensordot(hk,hz,axes=0)))
    return polymat
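To gain confidence in the recurrence, we can cross-check it against NumPy's probabilists' Hermite module `numpy.polynomial.hermite_e`. The function `herm_recursive` below is a standalone restatement of the recurrence (the name is just for this check, so the snippet runs on its own):

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Cross-check of the probabilists' Hermite recurrence
# H_{j+1}(x) = x H_j(x) - j H_{j-1}(x) against NumPy's hermite_e module
# (He_0 = 1, He_1 = x, He_2 = x^2 - 1, He_3 = x^3 - 3x, ...).
def herm_recursive(porder, x):
    x = np.asarray(x, dtype=float)
    hx = np.ones((x.shape[0], porder + 1))
    if porder >= 1:
        hx[:, 1] = x
    for j in range(2, porder + 1):
        hx[:, j] = x * hx[:, j - 1] - (j - 1) * hx[:, j - 2]
    return hx

x = np.linspace(-2.0, 2.0, 7)
ours = herm_recursive(4, x)
ref = He.hermevander(x, 4)  # columns are He_0(x), ..., He_4(x)
print(np.allclose(ours, ref))  # True
```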

Calculating the RSS

The following function takes the coefficients of the regression, $X$ and $Y$ and returns the residual sum of squares (prediction error):

In [275]:
def rss(coef,Y,X):
    # Residual sum of squares of Y against exp(X @ coef)
    err = Y - np.exp(np.dot(X,coef))
    return np.dot(err.T,err)
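A standalone sanity check of this objective (with made-up data, not from the model): when $Y$ is generated exactly as $\exp(X\,\mathrm{coef})$, the residual sum of squares is zero at the true coefficients and positive elsewhere:

```python
import numpy as np

# Standalone check of the RSS objective: data generated exactly as
# Y = exp(X @ coef_true) gives zero RSS at coef_true, positive elsewhere.
def rss(coef, Y, X):
    err = Y - np.exp(np.dot(X, coef))
    return np.dot(err.T, err)

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.standard_normal(50)])
coef_true = np.array([0.5, -0.2])
Y = np.exp(X @ coef_true)

print(rss(coef_true, Y, X))            # 0.0
print(rss(coef_true + 0.1, Y, X) > 0)  # True
```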

Main loop

In [ ]:
# Initialize vectors

c = np.zeros(T)
k = np.zeros(T)
lnk = np.zeros(T)

# Initialize coefficients

# psi = np.array([0,0,0,0])
psi = np.array([0.933756226933222,0.284693110705828,-0.530763511673379,-0.178833679612226])

for psiter in range(maxiter):
    k[0] = k_ss
    lnk[0] = np.log(k[0])
    # Generate c and k using the polynomial and the simulated series of z
    for t in range(1,T):
        lnk[t-1] = np.log(k[t-1])
        x = np.array([[lnk[t-1],lnz[t]]])  # time-t states (ln k_{t-1}, ln z_t) as a 2-D array for hermpoly
        polynomial = hermpoly([porder_k, porder_z],x)
        c[t] = np.exp(,psi)/nu)  # c_t = E_t[.]^(-1/nu) since c_t^(-nu) = exp(psi . P)
        k[t] = z[t]*k[t-1]**alpha + (1-delta)*k[t-1] - c[t]

    # Compute the realized RHS of the Euler equation (term inside the expectation)
    # using the simulated series of c, k and z
    Y = beta*c[T1:T]**(-nu)*(alpha*z[T1:T]*(k[T1-1:T-1]**(alpha-1))+1-delta)
    # Regressors: the states (ln k_{t-2}, ln z_{t-1}) on which the time-(t-1) expectation conditions
    x = np.array([lnk[T1-2:T-2],lnz[T1-1:T-1]]).T
    X = hermpoly([porder_k, porder_z],x)
    results = opt.minimize(rss,psi,args=(Y,X),method='Nelder-Mead',tol=1e-7,options={'maxiter':1000000})
    psi_out = results.x
    delpsi = np.linalg.norm(psi-psi_out)
#     print('After {:3d} iterations the difference in psi was {:6.4f}'.format(psiter,delpsi))
#     print('     new estimate of psi:',psi_out)
    # Partial (damped) updating of the coefficients; stop once the change is small
    psi = lrate*psi_out + (1-lrate)*psi
    if delpsi < psitol:
        break
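The damped update `psi = lrate*psi_out + (1-lrate)*psi` is what keeps the fixed-point iteration stable. A minimal scalar illustration (a toy map, unrelated to the model) of why partial updating helps:

```python
# Minimal illustration (toy scalar example, not the model) of the damped
# fixed-point update x <- lrate * g(x) + (1 - lrate) * x.
# The map g(x) = -1.5 * x + 2.5 has fixed point x* = 1 but diverges under
# full updating (|g'| = 1.5 > 1); damping with lrate = 0.5 shrinks the
# effective slope to |-0.25| < 1 and the iteration converges.
def damped_fixed_point(g, x0, lrate, tol=1e-10, maxiter=1000):
    x = x0
    for _ in range(maxiter):
        x_new = lrate * g(x) + (1 - lrate) * x
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

g = lambda x: -1.5 * x + 2.5          # fixed point at x* = 1
x_star = damped_fixed_point(g, x0=0.0, lrate=0.5)
print(x_star)  # ~1.0
```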

Quadrature simulation PEA