import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import scipy as sp
import scipy.stats
import statsmodels.api as sm
If you have studied statistics, you certainly know the famous Analysis of Variance (ANOVA) and can skip this section; if you haven't, read on.
Simply speaking, ANOVA is a technique for comparing the means of multiple ($\geq 3$) populations; the name derives from the way the calculations are performed.
For example, the common hypotheses of ANOVA are $$ H_0:\quad \mu_1=\mu_2=\mu_3=\cdots=\mu_k\\ H_1:\quad \text{At least two means differ} $$ In order to construct the $F$-statistic, we need to introduce two more statistics. The first one is the Mean Square for Treatments (MST), where $\bar{\bar{x}}$ is the grand mean, $\bar{x}_i$ is the sample mean of sample $i$, and $n_i$ is the number of observations in sample $i$ $$ MST=\frac{SST}{k-1},\qquad\text{where } SST=\sum_{i=1}^kn_i(\bar{x}_i-\bar{\bar{x}})^2 $$ The second one is the Mean Square for Error (MSE), where $s_i^2$ is the sample variance of sample $i$ and $n=n_1+n_2+\cdots+n_k$ is the total number of observations $$ MSE=\frac{SSE}{n-k},\qquad\text{where } SSE =(n_1-1)s_1^2+(n_2-1)s_2^2+\cdots+(n_k-1)s_k^2 $$ Joining them together, the $F$-statistic is constructed $$ F=\frac{MST}{MSE} $$ If the $F$-statistic is larger than the critical value with the corresponding degrees of freedom, we reject the null hypothesis.
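Before moving on, here is a minimal sketch of these formulas on three made-up samples, cross-checked against scipy.stats.f_oneway; the group sizes, means and standard deviations below are arbitrary and purely illustrative.

rng = np.random.default_rng(0)  # seeded for reproducibility
samples = [rng.normal(170, 6, 30), rng.normal(175, 6, 25), rng.normal(172, 6, 28)]

k = len(samples)  # number of populations
n = sum(len(s) for s in samples)  # total number of observations
grand_mean = np.concatenate(samples).mean()

SST = sum(len(s) * (s.mean() - grand_mean) ** 2 for s in samples)  # between-group variation
SSE = sum((len(s) - 1) * s.var(ddof=1) for s in samples)  # within-group variation
F = (SST / (k - 1)) / (SSE / (n - k))  # MST / MSE

F_check, p_value = sp.stats.f_oneway(*samples)
print(F, F_check, p_value)  # the two F values should coincide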
Here's a dataset with dummy variables, which take values of either $1$ or $0$.
df = pd.read_excel("Basic_Econometrics_practice_data.xlsx", sheet_name="Hight_ANOVA")
df
|   | Height | NL_dummpy | DM_dummpy | FI_dummy |
|---|---|---|---|---|
| 0 | 161.783130 | 0 | 0 | 0 |
| 1 | 145.329934 | 0 | 0 | 0 |
| 2 | 174.569597 | 0 | 0 | 0 |
| 3 | 160.003162 | 0 | 0 | 0 |
| 4 | 162.242898 | 0 | 0 | 0 |
| ... | ... | ... | ... | ... |
| 83 | 180.962477 | 0 | 0 | 1 |
| 84 | 172.791579 | 0 | 0 | 1 |
| 85 | 174.951880 | 0 | 0 | 1 |
| 86 | 176.059861 | 0 | 0 | 1 |
| 87 | 182.802878 | 0 | 0 | 1 |
88 rows × 4 columns
The dataset has four columns: the first column, $Height$, is a sample of $88$ male heights; the other columns are dummy variables indicating a qualitative feature, in this case nationality.
There are $4$ countries in the sample: Japan, the Netherlands, Denmark and Finland. However, there are only $3$ dummies in the dataset. This is to avoid perfect multicollinearity, also known as the dummy variable trap: the constant term is a perfect linear combination of the full set of four dummies.
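As an aside, if you start from a raw nationality column rather than ready-made dummies, pandas can build the dummies and drop one category for you; the nationality series below is hypothetical, not part of our dataset.

nationality = pd.Series(["JP", "NL", "DM", "FI", "JP"])  # hypothetical raw column
pd.get_dummies(nationality, drop_first=True)  # drops the first category, avoiding the trap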
If we use a model with only dummy variables as independent variables, we are essentially estimating an ANOVA model, i.e. $$ Y_{i}=\beta_{1}+\beta_{2} D_{2i}+\beta_{3} D_{3i}+\beta_{4} D_{4i}+u_{i} $$
where $Y_i$ is the male height, $D_{2i}=1$ if the male is from the Netherlands, $D_{3i}=1$ if he is from Denmark, and $D_{4i}=1$ if he is from Finland. Japan doesn't have a dummy variable, so we are using it as the reference, which will become clearer below.
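Taking conditional expectations of the model (assuming $E(u_i \mid D)=0$) makes the role of the reference category explicit: $$ E(Y_i \mid D_{2i}=D_{3i}=D_{4i}=0)=\beta_1 \quad\text{(Japan)}\\ E(Y_i \mid D_{2i}=1)=\beta_1+\beta_2 \quad\text{(Netherlands)}\\ E(Y_i \mid D_{3i}=1)=\beta_1+\beta_3 \quad\text{(Denmark)}\\ E(Y_i \mid D_{4i}=1)=\beta_1+\beta_4 \quad\text{(Finland)} $$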
Now we run the regression and print the result. How do we interpret the estimated coefficients?
X = df[["NL_dummpy", "DM_dummpy", "FI_dummy"]]
Y = df["Height"]
X = sm.add_constant(X) # adding a constant
model = sm.OLS(Y, X).fit()
print_model = model.summary()
print(print_model)
                            OLS Regression Results
==============================================================================
Dep. Variable:                 Height   R-squared:                       0.453
Model:                            OLS   Adj. R-squared:                  0.434
Method:                 Least Squares   F-statistic:                     23.20
Date:                Wed, 25 Aug 2021   Prob (F-statistic):           4.93e-11
Time:                        13:10:31   Log-Likelihood:                -300.08
No. Observations:                  88   AIC:                             608.2
Df Residuals:                      84   BIC:                             618.1
Df Model:                           3
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const       163.0300      1.416    115.096      0.000     160.213     165.847
NL_dummpy    17.7116      2.453      7.219      0.000      12.833      22.590
DM_dummpy    12.2196      2.194      5.569      0.000       7.856      16.583
FI_dummy     12.8530      2.041      6.296      0.000       8.794      16.912
==============================================================================
Omnibus:                       20.323   Durbin-Watson:                   2.355
Prob(Omnibus):                  0.000   Jarque-Bera (JB):              105.597
Skew:                          -0.355   Prob(JB):                     1.17e-23
Kurtosis:                       8.319   Cond. No.                         4.40
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
First, all the $p$-values are very small, so every coefficient is statistically significant. Then we examine the coefficients one by one.
The estimated constant $b_1 = 163.03$ is the mean height of Japanese males. The mean Dutch male height is $b_1+b_2 = 163.03+17.71=180.74$, the mean Danish male height is $b_1+b_3=163.03+12.22=175.25$, and the mean Finnish male height is $b_1+b_4=163.03+12.85=175.88$.
As you can see, the Japanese male height is used as the reference, also called the base category; the rest are added on top of it.
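We can verify these sums directly from the fitted parameters; each should equal the corresponding group's sample mean.

b = model.params  # pandas Series indexed by regressor name
print(b["const"],  # Japan (base category)
      b["const"] + b["NL_dummpy"],  # Netherlands
      b["const"] + b["DM_dummpy"],  # Denmark
      b["const"] + b["FI_dummy"])  # Finland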
This regression has the same effect as an ANOVA test: all dummy coefficients are significant and so is the $F$-statistic, which means we reject the null hypothesis that all countries' mean male heights are the same.
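As a cross-check, the classical one-way ANOVA on the same four groups should reproduce the regression's $F$-statistic of $23.20$; this assumes, as described above, that every row belongs to exactly one country.

jp = df.loc[(df[["NL_dummpy", "DM_dummpy", "FI_dummy"]] == 0).all(axis=1), "Height"]
nl = df.loc[df["NL_dummpy"] == 1, "Height"]
dm = df.loc[df["DM_dummpy"] == 1, "Height"]
fi = df.loc[df["FI_dummy"] == 1, "Height"]
print(sp.stats.f_oneway(jp, nl, dm, fi))  # F should match the regression's 23.20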
The example in the last section has only dummies among the independent variables, which is rare in practice. The more common situation is that the independent variables include both quantitative and qualitative (dummy) variables, and this is what we will do in this section.
Models with both quantitative and qualitative variables are called analysis of covariance (ANCOVA) models. We have another dataset that additionally contains each individual's parents' heights.
df = pd.read_excel("Basic_Econometrics_practice_data.xlsx", sheet_name="Hight_ANCOVA")
X = df[["Height of Father", "Height of Mother", "NL_dummy", "DM_dummy", "FI_dummy"]]
Y = df["Height"]
X = sm.add_constant(X) # adding a constant
model = sm.OLS(Y, X).fit()
print_model = model.summary()
print(print_model)
                            OLS Regression Results
==============================================================================
Dep. Variable:                 Height   R-squared:                       0.679
Model:                            OLS   Adj. R-squared:                  0.660
Method:                 Least Squares   F-statistic:                     34.71
Date:                Wed, 25 Aug 2021   Prob (F-statistic):           6.74e-19
Time:                        13:24:31   Log-Likelihood:                -276.61
No. Observations:                  88   AIC:                             565.2
Df Residuals:                      82   BIC:                             580.1
Df Model:                           5
Covariance Type:            nonrobust
====================================================================================
                       coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------------
const               27.8737     17.839      1.563      0.122      -7.614      63.361
Height of Father     0.3305      0.097      3.417      0.001       0.138       0.523
Height of Mother     0.5028      0.109      4.630      0.000       0.287       0.719
NL_dummy             5.3669      2.527      2.124      0.037       0.339      10.395
DM_dummy             2.9093      2.097      1.388      0.169      -1.262       7.080
FI_dummy             1.0209      2.223      0.459      0.647      -3.402       5.443
==============================================================================
Omnibus:                        5.467   Durbin-Watson:                   2.121
Prob(Omnibus):                  0.065   Jarque-Bera (JB):                6.176
Skew:                          -0.293   Prob(JB):                       0.0456
Kurtosis:                       4.158   Cond. No.                     7.09e+03
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 7.09e+03. This might indicate that there are
strong multicollinearity or other numerical problems.
In order to interpret the results, let's write out the estimated model $$ \hat{Y}= 27.87+.33 X_{f} + .5 X_{m} + 5.36 D_{NL} + 2.90 D_{DM} + 1.02 D_{FI} $$ where $X_{f}$ and $X_{m}$ are the father's and mother's heights, and the $D$'s are dummy variables representing each country.
This is effectively a function predicting a male's height from his parents' heights. For instance, if we set $D_{NL} = D_{DM}= D_{FI}=0$, the height function for Japanese males is $$ \hat{Y}= 27.87+.33 X_{f} + .5 X_{m} $$ Or, the function for Dutch male height, with $D_{NL} = 1$ and $D_{DM}= D_{FI}=0$, is $$ \hat{Y}= 27.87+.33 X_{f} + .5 X_{m} + 5.36 $$ With these results, we can define Python functions to predict male height (using the rounded coefficients)
def jp_height(fh, mh):
return 27.87 + fh * 0.33 + mh * 0.5
def nl_height(fh, mh):
return 27.87 + fh * 0.33 + mh * 0.5 + 5.36
A function to predict a Japanese male's height
jp_height(175, 170)
170.62
And a function to predict a Dutch male's height
nl_height(185, 185)
186.78000000000003
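Rather than hardcoding rounded coefficients, we can also predict straight from the fitted ANCOVA model, which uses the full-precision estimates; the two rows below mirror the calls above, and the columns follow the same order as our design matrix X.

new_obs = pd.DataFrame({"const": [1.0, 1.0],
                        "Height of Father": [175, 185],
                        "Height of Mother": [170, 185],
                        "NL_dummy": [0, 1],
                        "DM_dummy": [0, 0],
                        "FI_dummy": [0, 0]})
model.predict(new_obs)  # first row: Japanese male; second row: Dutch male

Any small differences from the hand-rolled functions come only from rounding.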
What we have discussed above are all intercept dummy variables, meaning the dummy only shifts the intercept term. However, dummies can be imposed on slope coefficients too.
Back to the height example: what if we suspect that parents' heights in the Netherlands have a different marginal effect on their sons' heights? That is, the model is $$ Y= \beta_1 + \beta_2D_{NL} + (\beta_3+ \beta_4D_{NL})X_{f} + (\beta_5+ \beta_6D_{NL})X_{m}+u $$ Rewrite it for estimation purposes $$ Y= \beta_1 + \beta_2D_{NL} + \beta_3 X_f + \beta_4 D_{NL}X_f + \beta_5X_m + \beta_6 D_{NL}X_m+u $$
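As an aside, statsmodels' formula interface can generate these interaction terms automatically; a sketch assuming df is still the ANCOVA sheet loaded above (Q() quotes column names containing spaces, and * expands to main effects plus the interaction).

import statsmodels.formula.api as smf

formula = 'Height ~ NL_dummy * Q("Height of Father") + NL_dummy * Q("Height of Mother")'
model_formula = smf.ols(formula, data=df).fit()  # same six regressors as the manual route below

Below, however, we construct the interaction columns by hand to make the mechanics explicit.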
Take a look at our data; we need to reconstruct it
df.head()
|   | Height | Height of Father | Height of Mother | NL_dummy | DM_dummy | FI_dummy |
|---|---|---|---|---|---|---|
| 0 | 161.783130 | 173 | 152 | 0 | 0 | 0 |
| 1 | 145.329934 | 166 | 145 | 0 | 0 | 0 |
| 2 | 174.569597 | 177 | 169 | 0 | 0 | 0 |
| 3 | 160.003162 | 167 | 160 | 0 | 0 | 0 |
| 4 | 162.242898 | 163 | 152 | 0 | 0 | 0 |
Drop the dummies of Denmark and Finland
df_NL = df.drop(["DM_dummy", "FI_dummy"], axis=1)
df_NL.head()
|   | Height | Height of Father | Height of Mother | NL_dummy |
|---|---|---|---|---|
| 0 | 161.783130 | 173 | 152 | 0 |
| 1 | 145.329934 | 166 | 145 | 0 |
| 2 | 174.569597 | 177 | 169 | 0 |
| 3 | 160.003162 | 167 | 160 | 0 |
| 4 | 162.242898 | 163 | 152 | 0 |
Also create columns for the interaction terms $D_{NL} \cdot X_f$ and $D_{NL}\cdot X_m$, then construct the independent variable matrix and the dependent variable
df_NL["D_NL_Xf"] = df_NL["Height of Father"] * df_NL["NL_dummy"]
df_NL["D_NL_Xm"] = df_NL["Height of Mother"] * df_NL["NL_dummy"]
X = df_NL[["NL_dummy", "Height of Father", "D_NL_Xf", "Height of Mother", "D_NL_Xm"]]
Y = df["Height"]
X = sm.add_constant(X) # adding a constant
model = sm.OLS(Y, X).fit()
print_model = model.summary()
print(print_model)
                            OLS Regression Results
==============================================================================
Dep. Variable:                 Height   R-squared:                       0.691
Model:                            OLS   Adj. R-squared:                  0.672
Method:                 Least Squares   F-statistic:                     36.63
Date:                Wed, 25 Aug 2021   Prob (F-statistic):           1.53e-19
Time:                        13:25:18   Log-Likelihood:                -275.00
No. Observations:                  88   AIC:                             562.0
Df Residuals:                      82   BIC:                             576.9
Df Model:                           5
Covariance Type:            nonrobust
====================================================================================
                       coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------------
const               11.7747     13.230      0.890      0.376     -14.544      38.093
NL_dummy           120.9563     58.160      2.080      0.041       5.257     236.655
Height of Father     0.3457      0.094      3.665      0.000       0.158       0.533
D_NL_Xf             -0.0946      0.377     -0.251      0.803      -0.845       0.656
Height of Mother     0.5903      0.098      6.000      0.000       0.395       0.786
D_NL_Xm             -0.5746      0.328     -1.750      0.084      -1.228       0.079
==============================================================================
Omnibus:                        4.383   Durbin-Watson:                   2.055
Prob(Omnibus):                  0.112   Jarque-Bera (JB):                3.950
Skew:                          -0.334   Prob(JB):                        0.139
Kurtosis:                       3.795   Cond. No.                     2.37e+04
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 2.37e+04. This might indicate that there are
strong multicollinearity or other numerical problems.
Here's the estimated regression model $$ \hat{Y}= 11.7747 + 120.9563D_{NL} + 0.3457 X_f - 0.0946 D_{NL}X_f + 0.5903X_m -0.5746 D_{NL}X_m $$ If $D_{NL}=1$, the intercept and slopes combine to $$ \hat{Y}= (11.7747+120.9563)+(0.3457-0.0946)X_f +(0.5903-0.5746)X_m = 132.731+0.2511X_f +0.0157X_m $$
Again, we define a Python function to predict a Dutch male's height based on his parents' heights
def nl_height_marginal(fh, mh):
return 132.731 + fh * 0.2511 + mh * 0.0157
nl_height_marginal(185, 180)
182.01049999999998
The prediction seems quite plausible.
However, as you can see from the results, the hypothesis tests do not support our theory that Dutch parents' heights have a different marginal effect on their sons' heights: the coefficients of $D_{NL} \cdot X_f$ and $D_{NL}\cdot X_m$ are not significant at the $5\%$ level, i.e. we fail to reject the null hypothesis that they are zero.
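Since the two interaction coefficients are individually insignificant, we can also test them jointly with statsmodels' f_test; the hypothesis string references the interaction columns created above.

# Joint null hypothesis: beta_4 = beta_6 = 0 (the NL dummy has no slope effect)
joint_test = model.f_test("D_NL_Xf = 0, D_NL_Xm = 0")
print(joint_test)  # a large p-value means we fail to reject the joint null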