%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
    return false;
}
# ABOVE CELL IS "NO SCROLLING SUBWINDOWS" SETUP
#
# keep output cells from shifting to autoscroll: little scrolling
# subwindows within the notebook are an annoyance...
# THIS CELL LOADS THE LIBRARIES
#
# set up the environment by reading in every library we might need:
# os... graphics... data manipulation... time... math... statistics...
import sys
import os
from urllib.request import urlretrieve
import matplotlib as mpl
import matplotlib.pyplot as plt
from IPython.display import Image
import pandas as pd
from pandas import DataFrame, Series
from datetime import datetime
import scipy as sp
import numpy as np
import math
import random
import seaborn as sns
import statsmodels
import statsmodels.api as sm
import statsmodels.formula.api as smf
# PRETTIER GRAPHICS SETUP
#
# graphics setup: seaborn-whitegrid and figure size;
# graphs in the notebook itself...
%matplotlib inline
plt.style.use('seaborn-whitegrid')
figure_size = plt.rcParams["figure.figsize"]
figure_size[0] = 12
figure_size[1] = 10
plt.rcParams["figure.figsize"] = figure_size
# THIS CELL IS THE KEY TO THE OKPY.ORG AUTOGRADER SYSTEM
#
# Don't change this cell; just run it.
# The result will give you directions about how to log in to the submission system, called OK.
# Once you're logged in, you can run this cell again, but it won't ask you who you are because
# it remembers you. However, you will need to log in once per assignment.
!pip install -U okpy
from client.api.notebook import Notebook
ok = Notebook('ps8.ok')
_ = ok.auth(force=True, inline=True)
Requirement already up-to-date: okpy in /Users/delong/anaconda3/lib/python3.6/site-packages
Requirement already up-to-date: requests==2.12.4 in /Users/delong/anaconda3/lib/python3.6/site-packages (from okpy)
Requirement already up-to-date: coverage==3.7.1 in /Users/delong/anaconda3/lib/python3.6/site-packages (from okpy)
=====================================================================
Assignment: PS8 Notebook
OK, version v1.13.10
=====================================================================

Open the following URL:

https://okpy.org/client/login/

After logging in, copy the code from the web page and paste it into the box.
Then press the "Enter" key on your keyboard.

Paste your code here: FmiyGZenS8AOBCpn8qN0cvidg4R52q
Successfully logged in as jbdelong@berkeley.edu
The autograder (both in the tests you run along the way as you work on the problem set, and in calculating the final score) looks in the same directory as the problem-set notebook for an "ok.tests" directory, and then runs the tests in the "q**.py" files in that directory (where "**" denotes a two-digit number, possibly with a leading zero). Each test compares a variable that should be in your namespace against the value we get when we do the problem set, and checks that the two are close.
Thus while the problem set instructions ask you to run simulations and plot graphs, what you are tested on is whether the appropriate variables in your namespace have (close to the) right values. We do not care what code you use in order to get those variables to the right values.
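For concreteness, a test file in that directory looks roughly like the following. This is a hypothetical sketch of the okpy doctest format: the actual file names, point values, target values, and tolerances are set by us, and the numbers here are only illustrative.

```python
# Hypothetical sketch of an ok test file, e.g. ok.tests/q01.py.
# okpy test files define a single dictionary named `test`; the
# doctest-style cases are run against your notebook's namespace.
test = {
    'name': 'q01',
    'points': 2,
    'suites': [
        {
            'type': 'doctest',
            'scored': True,
            'setup': '',
            'teardown': '',
            'cases': [
                {
                    # checks that a variable in your namespace is
                    # close to the value we computed (illustrative)
                    'code': r"""
                    >>> abs(cons_Jan_2000 - 7988.5) < 1.0
                    True
                    """,
                },
            ],
        },
    ],
}
```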
You can run simulations and then pick appropriate values out by slicing a series in order to get the right number.
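A minimal sketch of that simulate-then-slice approach, using the parameter values set below (the number of simulated years, the starting value, and the variable names are illustrative):

```python
# Illustrative Solow-style simulation of the capital-output ratio:
# each year K/Y moves by s - (n + g + delta) * (K/Y).
s, n, g, delta = 0.24, 0.01, 0.02, 0.03

KoverY = [3.0]  # an arbitrary initial capital-output ratio
for year in range(500):
    KoverY.append(KoverY[-1] + s - (n + g + delta) * KoverY[-1])

# slice out the last simulated value: by now it has converged
# to the steady-state value s/(n + g + delta) = 4
KoverYinitial = KoverY[-1]
```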
You can use your knowledge of the algebraic solution to the model to have Python calculate the answer, having first set the parameters to the right values, as in:
s = 0.24         # savings rate (say)
n = 0.01         # labor-force growth rate
g = 0.02         # efficiency-of-labor growth rate
delta = 0.03     # depreciation rate
Delta_n = -0.01  # change in the labor-force growth rate
KoverYinitial = s/(n+g+delta)
KoverYalternative = s/(n+Delta_n+g+delta)
You can even do all of the calculations on pen and paper, and simply code up:
KoverYinitial = 4
KoverYalternative = 4.8
Perhaps we should ask you to do all three—start with simulations, or with algebraic equations with set parameter values, or with full pen-and-paper calculations with only the final results entered into the notebook—and then ask you to check your results from one mode by doing the other two. But: ars longa, vita brevis. Focus on what works for you: the key is to get a sense of how economists' center-of-gravity analyses of long-run growth work, so that when you encounter such an analysis later, outside the university, you have the right intellectual panoply to evaluate it.
Thanks to: Rachel Grossberg, Christopher Hench, Meghana Krishnakumer, Seth Lloyd, Ronald Walker...
(Task A) Programming Practices
If it strikes you that anything should be added to this list of programming dos and don'ts, please email it to me at delong@econ.berkeley.edu
(Task B) Consumption Series I
(Task C) Consumption Series II
In the "# Task C Answers" code cell further below, set the variables "cons_Jan_2000" and "cons_Jan_2010" to their correct values
# Consumption Series Import Cell
import pandas as pd
Source_URL = 'http://delong.typepad.com/pcec96.csv'
cons_df = pd.read_csv(
    Source_URL,
    converters={'Source': str, 'Source_URL': str},
    parse_dates=True,
    index_col=0)
# Examine Consumption Series
print(cons_df.head())
print(cons_df.PCEC96.head())
cons_df.PCEC96.plot()
            PCEC96
DATE
1999-01-01  7582.0
1999-02-01  7620.0
1999-03-01  7654.1
1999-04-01  7697.8
1999-05-01  7731.6
DATE
1999-01-01    7582.0
1999-02-01    7620.0
1999-03-01    7654.1
1999-04-01    7697.8
1999-05-01    7731.6
Name: PCEC96, dtype: float64
[Plot: monthly real consumption spending (PCEC96)]
# Task C Answers
cons_Jan_2000 = cons_df.PCEC96[12]   # monthly data begin 1999-01, so position 12 is 2000-01
cons_Jan_2010 = cons_df.PCEC96[132]  # position 132 is 2010-01
print(cons_Jan_2000)
print(cons_Jan_2010)
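Positional indexing like `cons_df.PCEC96[12]` works, but it silently breaks if the series' start date ever changes. Since the import cell parses the DATE column into a DatetimeIndex, label-based lookup with `.loc` is a more robust alternative. A minimal sketch on a synthetic stand-in series (the values here are made up for illustration):

```python
import pandas as pd

# toy stand-in for cons_df.PCEC96: monthly data starting January 1999
dates = pd.date_range('1999-01-01', periods=24, freq='MS')
pcec = pd.Series(range(24), index=dates, name='PCEC96')

by_position = pcec.iloc[12]          # explicit positional lookup: 13th observation
by_label = pcec.loc['2000-01-01']    # label-based lookup by date

# both point at the same observation, but the label survives
# a change in where the series happens to start
assert by_position == by_label
```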
7988.5
9881.7
import numpy as np
ok.grade('q01')
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Running tests

---------------------------------------------------------------------
Test summary
    Passed: 2
    Failed: 0
[ooooooooook] 100.0% passed
{'failed': 0, 'locked': 0, 'passed': 2}
(Task D) GDP Series I
(Task E) GDP Series II
In the "# Task D Answers" code cell further below, set the variables "gdp_Jan_2000" and "gdp_Jan_2010" to their correct values
# GDP Series Import Cell
import pandas as pd
Source_URL = 'http://delong.typepad.com/gdpc1.csv'
gdp_df = pd.read_csv(
    Source_URL,
    converters={'Source': str, 'Source_URL': str},
    parse_dates=True,
    index_col=0)
# Examine GDP Series
print(gdp_df.head())
print(gdp_df.GDPC1.head())
gdp_df.GDPC1.plot()
                GDPC1
DATE
1999-01-01  11864.675
1999-04-01  11962.524
1999-07-01  12113.075
1999-10-01  12323.336
2000-01-01  12359.095
DATE
1999-01-01    11864.675
1999-04-01    11962.524
1999-07-01    12113.075
1999-10-01    12323.336
2000-01-01    12359.095
Name: GDPC1, dtype: float64
[Plot: quarterly real GDP (GDPC1)]
# Task D Answers
gdp_Jan_2000 = gdp_df.GDPC1[4]    # quarterly data begin 1999Q1, so position 4 is 2000Q1
gdp_Jan_2010 = gdp_df.GDPC1[44]   # position 44 is 2010Q1
print(gdp_Jan_2000)
print(gdp_Jan_2010)
12359.095
14604.845
import numpy as np
ok.grade('q02')
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Running tests

---------------------------------------------------------------------
Test summary
    Passed: 2
    Failed: 0
[ooooooooook] 100.0% passed
{'failed': 0, 'locked': 0, 'passed': 2}
(Task F) The Consumption Function I
# Graph consumption vs. income
Source_URL = 'http://delong.typepad.com/gdp_cons.csv'
gdp_cons_df = pd.read_csv(
    Source_URL,
    converters={'Source': str, 'Source_URL': str},
    parse_dates=True)
print(gdp_cons_df.head())
gdp_cons_df.plot()
        DATE   PCECC96      GDPC1
0 1947-01-01  1199.413   1934.471
1 1947-04-01  1219.329   1932.281
2 1947-07-01  1223.266   1930.315
3 1947-10-01  1223.649   1960.705
4 1948-01-01  1229.757   1989.535
[Plot: PCECC96 and GDPC1, quarterly, 1947 on]
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf
results = sm.OLS(gdp_cons_df.PCECC96, gdp_cons_df.GDPC1).fit()
print(results.summary())
                            OLS Regression Results
==============================================================================
Dep. Variable:                PCECC96   R-squared:                       0.999
Model:                            OLS   Adj. R-squared:                  0.999
Method:                 Least Squares   F-statistic:                 1.972e+05
Date:                Sat, 10 Mar 2018   Prob (F-statistic):               0.00
Time:                        07:58:05   Log-Likelihood:                -1949.7
No. Observations:                 284   AIC:                             3901.
Df Residuals:                     283   BIC:                             3905.
Df Model:                           1
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
GDPC1          0.6607      0.001    444.064      0.000       0.658       0.664
==============================================================================
Omnibus:                       71.956   Durbin-Watson:                   0.019
Prob(Omnibus):                  0.000   Jarque-Bera (JB):              121.436
Skew:                           1.476   Prob(JB):                     4.27e-27
Kurtosis:                       4.245   Cond. No.                         1.00
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
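Note that `sm.OLS` does not add an intercept automatically, so the regression above is through the origin: the slope is then essentially the average ratio of consumption to income. Whether to include a constant matters for the estimate. A minimal numpy sketch of the difference, on made-up data (with statsmodels, the analogous step would be `sm.add_constant`):

```python
import numpy as np

# made-up income and consumption data: true relation C = 100 + 0.6*Y + noise
rng = np.random.default_rng(0)
y_income = np.linspace(1000, 2000, 50)
c_spending = 100 + 0.6 * y_income + rng.normal(0, 5, 50)

# regression through the origin: what sm.OLS does with no constant column
slope_no_const = np.linalg.lstsq(y_income[:, None], c_spending, rcond=None)[0][0]

# regression with an intercept: prepend a column of ones
X = np.column_stack([np.ones_like(y_income), y_income])
intercept, slope = np.linalg.lstsq(X, c_spending, rcond=None)[0]

# omitting a positive intercept biases the through-origin slope upward
assert slope_no_const > slope
```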
# Task F answers
t = 0.25            # assumed tax share of national income
c_y = 0.6607/(1-t)  # the coefficient estimates c_y*(1-t), so divide by (1-t)
print(c_y)
0.8809333333333332
import numpy as np
ok.grade('q03')
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Running tests

---------------------------------------------------------------------
Test summary
    Passed: 1
    Failed: 0
[ooooooooook] 100.0% passed
{'failed': 0, 'locked': 0, 'passed': 1}
(Task G) Consumption Function II
Task F produced a very high estimate of the marginal propensity to consume (MPC), $ c_y $. It seemed to indicate that households do not smooth spending much at all: that they take virtually all of a change in their income and spend it. This estimate emerged because the overwhelming bulk of the identifying variance in the Task F regression consisted of permanent, long-run changes in actual and expected income. Now let us look at shorter-run, more temporary changes. We do this by performing a regression in which the dependent and independent variables are not the simple levels of consumption spending C and national income Y, but rather one-year changes in those variables scaled by the then-current level of real national income.
# Calculating one year changes
Delta_C = []
Delta_Y = []
for t in range(280):
    Delta_C.append((gdp_cons_df.PCECC96[t+4] -
                    gdp_cons_df.PCECC96[t])/gdp_cons_df.GDPC1[t])
    Delta_Y.append((gdp_cons_df.GDPC1[t+4] -
                    gdp_cons_df.GDPC1[t])/gdp_cons_df.GDPC1[t])

# pad the last four quarters, which have no four-quarter-ahead observation
Delta_C = Delta_C + [0]*4
Delta_Y = Delta_Y + [0]*4

gdp_cons_df['Delta_C'] = Delta_C
gdp_cons_df['Delta_Y'] = Delta_Y
print(gdp_cons_df.head())
plt.plot(gdp_cons_df.Delta_Y[:280], gdp_cons_df.Delta_C[:280], "o")
        DATE   PCECC96      GDPC1   Delta_C   Delta_Y
0 1947-01-01  1199.413   1934.471  0.015686  0.028465
1 1947-04-01  1219.329   1932.281  0.012799  0.046355
2 1947-07-01  1223.266   1930.315  0.011722  0.053276
3 1947-10-01  1223.649   1960.705  0.016416  0.038060
4 1948-01-01  1229.757   1989.535  0.014130  0.009041
[Scatter plot: Delta_C against Delta_Y, full sample]
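The element-by-element loop above can also be written with pandas `shift`, which handles the alignment in one line per column. A sketch on a tiny synthetic dataframe (the real notebook applies this to gdp_cons_df; the numbers here are made up):

```python
import pandas as pd

# toy quarterly stand-ins for PCECC96 and GDPC1
df = pd.DataFrame({
    'PCECC96': [100.0, 102.0, 104.0, 106.0, 108.0, 110.0],
    'GDPC1':   [150.0, 153.0, 156.0, 159.0, 162.0, 165.0],
})

# four-quarter-ahead change, scaled by current real GDP: same arithmetic as the loop
df['Delta_C'] = (df.PCECC96.shift(-4) - df.PCECC96) / df.GDPC1
df['Delta_Y'] = (df.GDPC1.shift(-4) - df.GDPC1) / df.GDPC1

# the loop pads the last four rows with 0; shift leaves NaN there,
# so fillna(0) reproduces the loop's padding exactly
df[['Delta_C', 'Delta_Y']] = df[['Delta_C', 'Delta_Y']].fillna(0)
```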
# 1 year changes regression: full sample
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf
results = sm.OLS(gdp_cons_df.Delta_C[:280], gdp_cons_df.Delta_Y[:280]).fit()
print(results.summary())
                            OLS Regression Results
==============================================================================
Dep. Variable:                Delta_C   R-squared:                       0.846
Model:                            OLS   Adj. R-squared:                  0.845
Method:                 Least Squares   F-statistic:                     1531.
Date:                Sat, 10 Mar 2018   Prob (F-statistic):          2.66e-115
Time:                        07:58:29   Log-Likelihood:                 903.98
No. Observations:                 280   AIC:                            -1806.
Df Residuals:                     279   BIC:                            -1802.
Df Model:                           1
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
Delta_Y        0.5452      0.014     39.130      0.000       0.518       0.573
==============================================================================
Omnibus:                      149.401   Durbin-Watson:                   0.567
Prob(Omnibus):                  0.000   Jarque-Bera (JB):             1385.173
Skew:                          -1.959   Prob(JB):                    1.63e-301
Kurtosis:                      13.168   Cond. No.                         1.00
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
# 1 year changes regression: early sample
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf
results = sm.OLS(gdp_cons_df.Delta_C[:140], gdp_cons_df.Delta_Y[:140]).fit()
print(results.summary())
                            OLS Regression Results
==============================================================================
Dep. Variable:                Delta_C   R-squared:                       0.808
Model:                            OLS   Adj. R-squared:                  0.807
Method:                 Least Squares   F-statistic:                     585.5
Date:                Sat, 10 Mar 2018   Prob (F-statistic):           1.10e-51
Time:                        07:58:36   Log-Likelihood:                 428.04
No. Observations:                 140   AIC:                            -854.1
Df Residuals:                     139   BIC:                            -851.1
Df Model:                           1
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
Delta_Y        0.4933      0.020     24.198      0.000       0.453       0.534
==============================================================================
Omnibus:                       69.393   Durbin-Watson:                   0.600
Prob(Omnibus):                  0.000   Jarque-Bera (JB):              320.868
Skew:                          -1.752   Prob(JB):                     2.11e-70
Kurtosis:                       9.537   Cond. No.                         1.00
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
# 1 year changes regression: late sample
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf
results = sm.OLS(gdp_cons_df.Delta_C[141:280], gdp_cons_df.Delta_Y[141:280]).fit()
print(results.summary())
                            OLS Regression Results
==============================================================================
Dep. Variable:                Delta_C   R-squared:                       0.932
Model:                            OLS   Adj. R-squared:                  0.931
Method:                 Least Squares   F-statistic:                     1881.
Date:                Sat, 10 Mar 2018   Prob (F-statistic):           2.80e-82
Time:                        07:58:41   Log-Likelihood:                 515.15
No. Observations:                 139   AIC:                            -1028.
Df Residuals:                     138   BIC:                            -1025.
Df Model:                           1
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
Delta_Y        0.6445      0.015     43.368      0.000       0.615       0.674
==============================================================================
Omnibus:                        6.614   Durbin-Watson:                   0.601
Prob(Omnibus):                  0.037   Jarque-Bera (JB):                6.176
Skew:                          -0.478   Prob(JB):                       0.0456
Kurtosis:                       3.389   Cond. No.                         1.00
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
# Task G answers
t = 0.25
c_y_1_year = 0.5452/(1-t)
c_y_1_year_early_sample = 0.4933/(1-t)
c_y_1_year_late_sample = 0.6445/(1-t)
print(c_y_1_year)
print(c_y_1_year_early_sample)
print(c_y_1_year_late_sample)
0.7269333333333333
0.6577333333333334
0.8593333333333333
import numpy as np
ok.grade('q04')
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Running tests

---------------------------------------------------------------------
Test summary
    Passed: 3
    Failed: 0
[ooooooooook] 100.0% passed
{'failed': 0, 'locked': 0, 'passed': 3}
(Task H) Consumption Function III
Task F produced a very high estimate of the marginal propensity to consume out of disposable income (MPC), $ c_y $, indicating that households do not smooth spending much at all: that they take virtually all of a change in their income and spend it. This estimate captures the MPC in response to permanent, long-run changes in actual and expected income. But the MPC that is relevant for business-cycle analysis is a short-run MPC: the response of consumption to surprise, transitory changes in national income.
Task G produced somewhat smaller estimates of the MPC by looking at the reaction of consumption spending to one-year changes in national income.
Now let us look at one-quarter changes. We do this by performing a regression in which the dependent and independent variables are one-quarter changes in those variables scaled by the then-current level of real national income.
# Calculating one quarter changes
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf
Delta_C1 = []
Delta_Y1 = []
for t in range(280):
    Delta_C1.append((gdp_cons_df.PCECC96[t+1] -
                     gdp_cons_df.PCECC96[t])/gdp_cons_df.GDPC1[t])
    Delta_Y1.append((gdp_cons_df.GDPC1[t+1] -
                     gdp_cons_df.GDPC1[t])/gdp_cons_df.GDPC1[t])

# pad the remaining four rows with zeros so the columns match the dataframe's length
Delta_C1 = Delta_C1 + [0]*4
Delta_Y1 = Delta_Y1 + [0]*4

gdp_cons_df['Delta_C1'] = Delta_C1
gdp_cons_df['Delta_Y1'] = Delta_Y1
plt.plot(gdp_cons_df.Delta_Y1[:280], gdp_cons_df.Delta_C1[:280], "o")
[Scatter plot: Delta_C1 against Delta_Y1, full sample]
# 1 quarter changes regression: full sample
results = sm.OLS(gdp_cons_df.Delta_C1[:280], gdp_cons_df.Delta_Y1[:280]).fit()
print(results.summary())
                            OLS Regression Results
==============================================================================
Dep. Variable:               Delta_C1   R-squared:                       0.596
Model:                            OLS   Adj. R-squared:                  0.595
Method:                 Least Squares   F-statistic:                     412.1
Date:                Sat, 10 Mar 2018   Prob (F-statistic):           6.89e-57
Time:                        07:58:59   Log-Likelihood:                 1110.4
No. Observations:                 280   AIC:                            -2219.
Df Residuals:                     279   BIC:                            -2215.
Df Model:                           1
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
Delta_Y1       0.4527      0.022     20.300      0.000       0.409       0.497
==============================================================================
Omnibus:                      140.997   Durbin-Watson:                   2.173
Prob(Omnibus):                  0.000   Jarque-Bera (JB):             1853.783
Skew:                          -1.681   Prob(JB):                         0.00
Kurtosis:                      15.149   Cond. No.                         1.00
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
# 1 quarter changes regression: earlier half of the sample
results = sm.OLS(gdp_cons_df.Delta_C1[:140], gdp_cons_df.Delta_Y1[:140]).fit()
print(results.summary())
                            OLS Regression Results
==============================================================================
Dep. Variable:               Delta_C1   R-squared:                       0.541
Model:                            OLS   Adj. R-squared:                  0.538
Method:                 Least Squares   F-statistic:                     164.1
Date:                Sat, 10 Mar 2018   Prob (F-statistic):           2.67e-25
Time:                        07:59:03   Log-Likelihood:                 528.18
No. Observations:                 140   AIC:                            -1054.
Df Residuals:                     139   BIC:                            -1051.
Df Model:                           1
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
Delta_Y1       0.4113      0.032     12.811      0.000       0.348       0.475
==============================================================================
Omnibus:                       73.432   Durbin-Watson:                   2.290
Prob(Omnibus):                  0.000   Jarque-Bera (JB):              532.818
Skew:                          -1.672   Prob(JB):                    2.00e-116
Kurtosis:                      11.953   Cond. No.                         1.00
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
# 1 quarter changes regression: later half of the sample
results = sm.OLS(gdp_cons_df.Delta_C1[141:280], gdp_cons_df.Delta_Y1[141:280]).fit()
print(results.summary())
                            OLS Regression Results
==============================================================================
Dep. Variable:               Delta_C1   R-squared:                       0.732
Model:                            OLS   Adj. R-squared:                  0.730
Method:                 Least Squares   F-statistic:                     376.9
Date:                Sat, 10 Mar 2018   Prob (F-statistic):           2.74e-41
Time:                        07:59:09   Log-Likelihood:                 603.62
No. Observations:                 139   AIC:                            -1205.
Df Residuals:                     138   BIC:                            -1202.
Df Model:                           1
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
Delta_Y1       0.5556      0.029     19.415      0.000       0.499       0.612
==============================================================================
Omnibus:                        4.476   Durbin-Watson:                   1.978
Prob(Omnibus):                  0.107   Jarque-Bera (JB):                5.285
Skew:                           0.157   Prob(JB):                       0.0712
Kurtosis:                       3.902   Cond. No.                         1.00
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
# Task H answers
t = 0.25
c_y_1_quarter = 0.4527/(1-t)
c_y_1_quarter_early_sample = 0.4113/(1-t)
c_y_1_quarter_late_sample = 0.5556/(1-t)
print(c_y_1_quarter)
print(c_y_1_quarter_early_sample)
print(c_y_1_quarter_late_sample)
0.6036
0.5484
0.7408
import numpy as np
ok.grade('q05')
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Running tests

---------------------------------------------------------------------
Test summary
    Passed: 3
    Failed: 0
[ooooooooook] 100.0% passed
{'failed': 0, 'locked': 0, 'passed': 3}
_ = ok.submit()
Saving notebook... Saved 'PS8.ipynb'.
Submit... 100% complete
Backup... 100% complete
Submission successful for user: jbdelong@berkeley.edu
URL: https://okpy.org/cal/econ101b/sp18/PS8/submissions/76Bx48