Chapter 7: Analysis of Variance (Anova)
!pip install --upgrade scipy
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
from matplotlib import collections as mc
import seaborn as sns
import math
from scipy import stats
from scipy.stats import norm
from scipy.stats import chi2
from scipy.stats import t
from scipy.stats import f
from scipy.stats import bernoulli
from scipy.stats import binom
from scipy.stats import nbinom
from scipy.stats import geom
from scipy.stats import poisson
from scipy.stats import uniform
from scipy.stats import randint
from scipy.stats import expon
from scipy.stats import gamma
from scipy.stats import beta
from scipy.stats import weibull_min
from scipy.stats import hypergeom
from scipy.stats import shapiro
from scipy.stats import pearsonr
from scipy.stats import normaltest
from scipy.stats import anderson
from scipy.stats import spearmanr
from scipy.stats import kendalltau
from scipy.stats import chi2_contingency
from scipy.stats import ttest_ind
from scipy.stats import ttest_rel
from scipy.stats import mannwhitneyu
from scipy.stats import wilcoxon
from scipy.stats import kruskal
from scipy.stats import friedmanchisquare
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.stattools import kpss
from statsmodels.stats.weightstats import ztest
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression
from scipy.stats import f_oneway
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from scipy.integrate import quad
from statsmodels.stats.outliers_influence import summary_table
from statsmodels.sandbox.regression.predstd import wls_prediction_std
from statsmodels.stats.outliers_influence import variance_inflation_factor
from IPython.display import display, Latex
import warnings
warnings.filterwarnings('ignore')
warnings.simplefilter(action='ignore', category=FutureWarning)
This technique, which is rather general and can be used to make inferences about a multitude of parameters relating to population means, is known as the analysis of variance.
We suppose that we have been provided samples of size $n$ from $m$ distinct populations and that we want to use these data to test the hypothesis that the $m$ population means are equal.
Because the mean of each random variable depends on only a single factor, namely, which sample the variable comes from, this scenario is said to constitute a one-way analysis of variance.
One way of thinking about this is to imagine that we have $m$ different treatments, where the result of applying treatment $i$ on an item is a normal random variable with mean $\mu_i$ and variance $\sigma^2$. We are then interested in testing the hypothesis that all treatments have the same effect, by applying each treatment to a (different) sample of $n$ items and then analyzing the result.
Consider $m$ independent samples, each of size $n$, where the members of the $i$th sample, $X_{i1}, X_{i2}, \ldots, X_{in}$, are normal random variables with unknown mean $\mu_i$ and unknown variance $\sigma^2$.
$X_{ij} \sim N(\mu_i, \sigma^2) \quad i=1,...,m,\ j=1,...,n$
$\\ $
$H_0: \mu_1=\mu_2=...=\mu_m$
$H_1:$ not all the means are equal (at least two of them differ.)
Within samples sum of squares:
Since there are a total of $nm$ independent normal random variables $X_{ij}$, it follows that the sum of the squares of their standardized versions will be a chi-square random variable with $nm$ degrees of freedom.
$\sum_{i=1}^m \sum_{j=1}^n \frac{(X_{ij}-E[X_{ij}])^2}{\sigma^2} = \sum_{i=1}^m \sum_{j=1}^n \frac{(X_{ij}- \mu_i)^2}{\sigma^2} \sim \chi^2_{nm}$
To obtain estimators for the $m$ unknown parameters $\mu_1, \ldots, \mu_m$, let $X_{i.}$ denote the average of all the elements in sample $i$:
$X_{i.} = \sum_{j=1}^n \frac{X_{ij}}{n}$
The variable $X_{i.}$ is the sample mean of the ith population, and as such is the estimator of the population mean $\mu_i$ for $i=1,...,m$.
Then, if we substitute $X_{i.}$ for $\mu_i$, the following variable will have a chi-square distribution with $nm - m$ degrees of freedom. (Recall that 1 degree of freedom is lost for each parameter that is estimated.)
$\sum_{i=1}^m \sum_{j=1}^n \frac{(X_{ij}- X_{i.})^2}{\sigma^2} \sim \chi^2_{nm-m}$
$SS_W = \sum_{i=1}^m \sum_{j=1}^n (X_{ij}- X_{i.})^2$
$\frac{E[SS_W]}{\sigma^2} = nm-m \quad \rightarrow \quad \frac{E[SS_W]}{nm-m} = \sigma^2$
$\frac{SS_W}{nm-m}$ is an estimator of $\sigma^2$.
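As a quick numerical sketch, $SS_W$ can be computed directly; the samples below are the ones used in the f_oneway example later in this section:

```python
import numpy as np

# The three samples from the f_oneway example below (m = 3, n = 5)
samples = np.array([
    [220, 251, 226, 246, 260],
    [244, 235, 232, 242, 225],
    [252, 272, 250, 238, 256],
], dtype=float)
m, n = samples.shape

# Within-samples sum of squares: squared deviations from each sample mean X_i.
sample_means = samples.mean(axis=1, keepdims=True)
SS_W = ((samples - sample_means) ** 2).sum()

sigma2_hat = SS_W / (n * m - m)  # SS_W / (nm - m) estimates sigma^2
print(f'SS_W = {SS_W:.1f}, sigma^2 estimate = {sigma2_hat:.3f}')
```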
Between samples sum of squares:
Assume that $H_0$ is true, so that all the population means $\mu_i$ are equal, say $\mu_i = \mu$ for all $i$. Under this condition, the $m$ sample means $X_{1.}, X_{2.}, \ldots, X_{m.}$ will all be normally distributed with the same mean $\mu$ and the same variance $\frac{\sigma^2}{n}$. Hence, the sum of squares of the $m$ standardized variables $\frac{X_{i.}-\mu}{\sqrt{\sigma^2/n}} = \frac{\sqrt{n}(X_{i.}-\mu)}{\sigma}$ will be a chi-square random variable with $m$ degrees of freedom.
$\sum_{i=1}^m \frac{n(X_{i.}-\mu)^2}{\sigma^2} \sim \chi_m^2$
Now, when all the population means are equal to $\mu$, then the estimator of $\mu$ is the average of all the nm data values. That is, the estimator of $\mu$ is $X_{..}$.
$X_{..} = \frac{\sum_{i=1}^m \sum_{j=1}^n X_{ij}}{nm} = \frac{\sum_{i=1}^m X_{i.}}{m}$
If we now substitute $X_{..}$ for the unknown parameter μ in expression $\sum_{i=1}^m \frac{n(X_{i.}-\mu)^2}{\sigma^2}$ it follows, when $H_0$ is true, that the resulting quantity will be a chi-square random variable with $m − 1$ degrees of freedom.
$\sum_{i=1}^m \frac{n(X_{i.}-X_{..})^2}{\sigma^2} \sim \chi_{m-1}^2$
$SS_b = n \sum_{i=1}^m (X_{i.}-X_{..})^2$
When $H_0$ is true:
$\frac{E[SS_b]}{\sigma^2} = m-1 \quad \rightarrow \quad \frac{E[SS_b]}{m-1} = \sigma^2$
$\frac{SS_b}{m-1}$ is an estimator of $\sigma^2$.
Estimators of $\sigma^2$ | Conditions |
---|---|
$\frac{SS_W}{nm-m}$ | Always true |
$\frac{SS_b}{m-1}$ | Only when $H_0$ is true |
Because it can be shown that $\frac{SS_b}{m-1}$ will tend to exceed $\sigma^2$ when $H_0$ is not true, the test statistic is:
$F_0 = \frac{\frac{SS_b}{m-1}}{\frac{SS_W}{nm-m}}$
$\\ $
Significance level = $\alpha$
$\\ $
We accept $H_0$ if:
$F_0 < F_{m-1,\ nm-m,\ \alpha}$
P_value = $P(F_{m-1,\ nm-m} \geq F_0) > \alpha$
The sum of squares identity:
$\sum_{i=1}^m \sum_{j=1}^n X_{ij}^2 = nmX_{..}^2 + SS_b + SS_W$
Summary:
Source of Variation | Sum of Squares | Degrees of Freedom | Mean of Squares | Value of Test Statistic |
---|---|---|---|---|
Between Samples | $SS_b=n\sum_{i=1}^m(X_{i.}-X_{..})^2$ | $m-1$ | $MS_b = \frac{SS_b}{m-1}$ | $F_0 = \frac{\frac{SS_b}{m-1}}{\frac{SS_W}{nm-m}}$ |
Within Samples | $SS_W = \sum_{i=1}^m \sum_{j=1}^n (X_{ij}- X_{i.})^2$ | $nm-m$ | $MS_W = \frac{SS_W}{nm-m}$ | |
Total | $SS_T = SS_W + SS_b = \sum_{i=1}^m \sum_{j=1}^n (X_{ij}- X_{..})^2$ | $nm-1$ |
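The whole table, together with the sum-of-squares identity above, can be verified by hand. This sketch uses the samples from the f_oneway example that follows, so the resulting $F_0$ and p-value should match scipy's output:

```python
import numpy as np
from scipy.stats import f

samples = np.array([
    [220, 251, 226, 246, 260],
    [244, 235, 232, 242, 225],
    [252, 272, 250, 238, 256],
], dtype=float)
m, n = samples.shape

sample_means = samples.mean(axis=1)  # X_i.
grand_mean = samples.mean()          # X_..

SS_W = ((samples - sample_means[:, None]) ** 2).sum()
SS_b = n * ((sample_means - grand_mean) ** 2).sum()

# Sum of squares identity: sum of X_ij^2 = nm * X_..^2 + SS_b + SS_W
assert np.isclose((samples ** 2).sum(), n * m * grand_mean ** 2 + SS_b + SS_W)

F0 = (SS_b / (m - 1)) / (SS_W / (n * m - m))
p_value = f.sf(F0, m - 1, n * m - m)  # P(F_{m-1, nm-m} >= F0)
print(f'F0 = {F0:.4f}, p-value = {p_value:.4f}')
```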
You can perform this test with f_oneway from the SciPy library.
Sample1 = [220, 251, 226, 246, 260]
Sample2 = [244, 235, 232, 242, 225]
Sample3 = [252, 272, 250, 238, 256]
alpha = 0.05
results = f_oneway(Sample1, Sample2, Sample3)
print(results, '\n')
if results[1] < alpha:
    print(f'Since p_value < {alpha}, reject null hypothesis.')
else:
    print(f'Since p_value > {alpha}, the null hypothesis cannot be rejected.')
F_onewayResult(statistic=2.6009238802972487, pvalue=0.11524892355706169)
Since p_value > 0.05, the null hypothesis cannot be rejected.
Suppose that we have $m$ normal samples of respective sizes $n_1, n_2, \ldots, n_m$. That is, the data consist of the $\sum_{i=1}^m n_i$ independent random variables $X_{ij},\ j = 1, \ldots, n_i,\ i = 1, \ldots, m$, where $X_{ij} \sim N(\mu_i, \sigma^2)$.
$\\ $
$H_0: \mu_1=\mu_2=...=\mu_m$
$H_1:$ not all the means are equal (at least two of them differ.)
Within samples sum of squares:
Since there are a total of $\sum_{i=1}^m n_i$ independent normal random variables $X_{ij}$, it follows that the sum of the squares of their standardized versions will be a chi-square random variable with $\sum_{i=1}^m n_i$ degrees of freedom.
$\sum_{i=1}^m \sum_{j=1}^{n_i} \frac{(X_{ij}-E[X_{ij}])^2}{\sigma^2} = \sum_{i=1}^m \sum_{j=1}^{n_i} \frac{(X_{ij}- \mu_i)^2}{\sigma^2} \sim \chi^2_{\sum_{i=1}^m n_i}$
To obtain estimators for the $m$ unknown parameters $\mu_1, \ldots, \mu_m$, let $X_{i.}$ denote the average of all the elements in sample $i$:
$X_{i.} = \sum_{j=1}^{n_i} \frac{X_{ij}}{n_i}$
The variable $X_{i.}$ is the sample mean of the ith population, and as such is the estimator of the population mean $\mu_i$ for $i=1,...,m$.
Then, if we substitute $X_{i.}$ for $\mu_i$, the following variable will have a chi-square distribution with $\sum_{i=1}^m n_i - m$ degrees of freedom. (Recall that 1 degree of freedom is lost for each parameter that is estimated.)
$\sum_{i=1}^m \sum_{j=1}^{n_i} \frac{(X_{ij}- X_{i.})^2}{\sigma^2} \sim \chi^2_{\sum_{i=1}^m n_i - m}$
$SS_W = \sum_{i=1}^m \sum_{j=1}^{n_i} (X_{ij}- X_{i.})^2$
$\frac{E[SS_W]}{\sigma^2} = \sum_{i=1}^m n_i - m \quad \rightarrow \quad \frac{E[SS_W]}{\sum_{i=1}^m n_i - m} = \sigma^2$
$\frac{SS_W}{\sum_{i=1}^m n_i - m}$ is an estimator of $\sigma^2$.
Between samples sum of squares:
Assume that $H_0$ is true, so that all the population means $\mu_i$ are equal, say $\mu_i = \mu$ for all $i$. Under this condition, the $m$ sample means $X_{1.}, X_{2.}, \ldots, X_{m.}$ will all be normally distributed with the same mean $\mu$, with $X_{i.}$ having variance $\frac{\sigma^2}{n_i}$. Hence, the sum of squares of the $m$ standardized variables $\frac{X_{i.}-\mu}{\sqrt{\sigma^2/n_i}} = \frac{\sqrt{n_i}(X_{i.}-\mu)}{\sigma}$ will be a chi-square random variable with $m$ degrees of freedom.
$\sum_{i=1}^m \frac{n_i(X_{i.}-\mu)^2}{\sigma^2} \sim \chi_m^2$
Now, when all the population means are equal to $\mu$, the estimator of $\mu$ is the average of all $\sum_{i=1}^m n_i$ data values. That is, the estimator of $\mu$ is $X_{..}$.
$X_{..} = \frac{\sum_{i=1}^m \sum_{j=1}^{n_i} X_{ij}}{\sum_{i=1}^m n_i} = \frac{\sum_{i=1}^m n_i X_{i.}}{\sum_{i=1}^m n_i}$
If we now substitute $X_{..}$ for the unknown parameter $\mu$ in expression $\sum_{i=1}^m \frac{n_i(X_{i.}-\mu)^2}{\sigma^2}$ it follows, when $H_0$ is true, that the resulting quantity will be a chi-square random variable with $m − 1$ degrees of freedom.
$\sum_{i=1}^m \frac{n_i(X_{i.}-X_{..})^2}{\sigma^2} \sim \chi_{m-1}^2$
$SS_b = \sum_{i=1}^m n_i (X_{i.}-X_{..})^2$
When $H_0$ is true:
$\frac{E[SS_b]}{\sigma^2} = m-1 \quad \rightarrow \quad \frac{E[SS_b]}{m-1} = \sigma^2$
$\frac{SS_b}{m-1}$ is an estimator of $\sigma^2$.
Estimators of $\sigma^2$ | Conditions |
---|---|
$\frac{SS_W}{\sum_{i=1}^m n_i - m}$ | Always true |
$\frac{SS_b}{m-1}$ | Only when $H_0$ is true |
Because it can be shown that $\frac{SS_b}{m-1}$ will tend to exceed $\sigma^2$ when $H_0$ is not true, the test statistic is:
$F_0 = \frac{\frac{SS_b}{m-1}}{\frac{SS_W}{\sum_{i=1}^m n_i - m}}$
$\\ $
Significance level = $\alpha$
$\\ $
We accept $H_0$ if:
$F_0 < F_{m-1,\ \sum_{i=1}^m n_i - m,\ \alpha}$
P_value = $P(F_{m-1,\ \sum_{i=1}^m n_i - m} \geq F_0) > \alpha$
Summary:
Source of Variation | Sum of Squares | Degrees of Freedom | Mean of Squares | Value of Test Statistic |
---|---|---|---|---|
Between Samples | $SS_b = \sum_{i=1}^m n_i (X_{i.}-X_{..})^2$ | $m-1$ | $MS_b = \frac{SS_b}{m-1}$ | $F_0 = \frac{\frac{SS_b}{m-1}}{\frac{SS_W}{\sum_{i=1}^m n_i - m}}$ |
Within Samples | $SS_W = \sum_{i=1}^m \sum_{j=1}^{n_i} (X_{ij}- X_{i.})^2$ | $\sum_{i=1}^m n_i - m$ | $MS_W = \frac{SS_W}{\sum_{i=1}^m n_i - m}$ | |
Total | $SS_T = SS_W + SS_b = \sum_{i=1}^m \sum_{j=1}^{n_i} (X_{ij}- X_{..})^2$ | $\sum_{i=1}^m n_i-1$ |
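These formulas can be checked by hand. The sketch below uses the unequal-size samples from the next example and verifies the result against scipy's f_oneway:

```python
import numpy as np
from scipy.stats import f, f_oneway

groups = [np.array([220, 251, 226, 246, 260], dtype=float),
          np.array([244, 235, 232, 242], dtype=float),
          np.array([252, 272, 250], dtype=float)]
m = len(groups)
N = sum(len(g) for g in groups)  # total number of observations, sum of n_i

grand_mean = sum(g.sum() for g in groups) / N  # X_.. (weighted grand mean)

SS_W = sum(((g - g.mean()) ** 2).sum() for g in groups)
SS_b = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)

F0 = (SS_b / (m - 1)) / (SS_W / (N - m))
p_value = f.sf(F0, m - 1, N - m)
print(f'F0 = {F0:.4f}, p-value = {p_value:.4f}')

# Agrees with scipy's built-in one-way ANOVA
res = f_oneway(*groups)
assert np.isclose(F0, res.statistic) and np.isclose(p_value, res.pvalue)
```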
Sample1 = [220, 251, 226, 246, 260]
Sample2 = [244, 235, 232, 242]
Sample3 = [252, 272, 250]
alpha = 0.05
results = f_oneway(Sample1, Sample2, Sample3)
print(results, '\n')
if results[1] < alpha:
    print(f'Since p_value < {alpha}, reject null hypothesis.')
else:
    print(f'Since p_value > {alpha}, the null hypothesis cannot be rejected.')
F_onewayResult(statistic=2.2667346740503254, pvalue=0.15949612861261475)
Since p_value > 0.05, the null hypothesis cannot be rejected.
We suppose that each data value is affected by two factors, which we will refer to as the "row" factor and the "column" factor. Specifically, we suppose that the data $X_{ij},\ i = 1, \ldots, m,\ j = 1, \ldots, n$ are independent normal random variables with a common variance $\sigma^2$, and that the mean value of each datum depends in an additive manner on both its row and its column.
If we let $X_{ij}$ represent the value of the $j$th member of sample $i$, then the single-factor model could be symbolically represented as $E[X_{ij}] = \mu_i$.
However, if we let $\mu$ denote the average value of the $\mu_i$ $(\mu = \frac{\sum_{i=1}^m \mu_i}{m})$ then we can rewrite the model as $E[X_{ij}] = \mu + \alpha_i$ where $\alpha_i = \mu_i -\mu$
With this definition of $\alpha_i$ as the deviation of $\mu_i$ from the average mean value, it is easy to see that $\sum_{i=1}^m \alpha_i = 0$
A two-factor additive model can also be expressed in terms of row and column deviations.
If we let $\mu_{ij} = E[X_{ij}]$, then the additive model supposes that for some constants $a_i,\ i = 1, ... , m$ and $b_j,\ j = 1, ... , n$
$\mu_{ij} = a_i + b_j$
Continuing our use of the "dot" (or averaging) notation, we let
$\mu_{i.} = \sum_{j=1}^n \frac{\mu_{ij}}{n} \qquad \mu_{.j} = \sum_{i=1}^m \frac{\mu_{ij}}{m} \qquad \mu_{..} = \sum_{i=1}^m \sum_{j=1}^n\frac{\mu_{ij}}{nm}$
$a_. = \sum_{i=1}^m \frac{a_i}{m} \qquad b_. = \sum_{j=1}^n \frac{b_j}{n}$
Note that:
$\mu_{i.} = \sum_{j=1}^n \frac{(a_i + b_j)}{n} = a_i + b_. \qquad \mu_{.j} = a_. + b_j \qquad \mu_{..} = a_. + b_.$
If we now set
$\mu = \mu_{..} = a_. + b_. \qquad \alpha_i = \mu_{i.}-\mu = a_i - a_. \qquad \beta_j = \mu_{.j}-\mu = b_j - b_.$
Then the model can be written as
$\mu_{ij} = E[X_{ij}] = \mu + \alpha_i + \beta_j$
The value $\mu$ is called the grand mean, $\alpha_i$ is the deviation from the grand mean due to row $i$, and $\beta_j$ is the deviation from the grand mean due to column $j$.
$X_{i.}=\frac{\sum_{j=1}^n X_{ij}}{n} \qquad X_{.j}=\frac{\sum_{i=1}^m X_{ij}}{m} \qquad X_{..}=\frac{\sum_{i=1}^m \sum_{j=1}^n X_{ij}}{nm}$
Unbiased estimators of $\mu, \alpha_i, \beta_j$ — call them $\widehat{\mu},\ \widehat{\alpha_i},\ \widehat{\beta_j}$ — are given by
$\widehat{\mu} = X_{..} \qquad \widehat{\alpha_i} = X_{i.} - X_{..} \qquad \widehat{\beta_j} = X_{.j} - X_{..}$
Hypothesis Tests:
Test 1:
$H_0:$ all $\alpha_i = 0$
$H_1:$ not all the $\alpha_i$ are equal to 0
This null hypothesis states that there is no row effect, in that the value of a data point is not affected by its row factor level.
$\\ $
Test 2:
$H_0:$ all $\beta_j = 0$
$H_1:$ not all the $\beta_j$ are equal to 0
This null hypothesis states that there is no column effect, in that the value of a data point is not affected by its column factor level.
Error Sum of Squares:
To obtain tests for the above null hypotheses, we will apply the analysis of variance approach in which two different estimators are derived for the variance $\sigma^2$. The first will always be a valid estimator, whereas the second will only be a valid estimator when the null hypothesis is true. In addition, the second estimator will tend to overestimate $\sigma^2$ when the null hypothesis is not true.
To obtain our first estimator of $\sigma^2$, we start with the fact that: $\sum_{i=1}^m \sum_{j=1}^n \frac{(X_{ij}-E[X_{ij}])^2}{\sigma^2} = \sum_{i=1}^m \sum_{j=1}^n \frac{(X_{ij}-\mu-\alpha_i-\beta_j)^2}{\sigma^2} \sim \chi_{nm}^2$
If in the above expression we now replace the unknown parameters $\mu, \alpha_1, \alpha_2, ... , \alpha_m, \beta_1, \beta_2, ... , \beta_n$ by their estimators $\widehat{\mu}, \widehat{\alpha_1}, \widehat{\alpha_2}, ... , \widehat{\alpha_m}, \widehat{\beta_1}, \widehat{\beta_2}, ... , \widehat{\beta_n}$, then it turns out that the resulting expression will remain chi-square but will lose $1$ degree of freedom for each parameter that is estimated. Therefore,
$\sum_{i=1}^m \sum_{j=1}^n \frac{(X_{ij}-\widehat{\mu}-\widehat{\alpha_i}-\widehat{\beta_j})^2}{\sigma^2} = \sum_{i=1}^m \sum_{j=1}^n \frac{(X_{ij}-X_{i.}-X_{.j}+X_{..})^2}{\sigma^2} \sim \chi_{nm-(n+m-1)=(n-1)(m-1)}^2$
$SS_e = \sum_{i=1}^m \sum_{j=1}^n (X_{ij}-X_{i.}-X_{.j}+X_{..})^2$
$\frac{E[SS_e]}{\sigma^2} = (n-1)(m-1) \quad \rightarrow \quad \frac{E[SS_e]}{(n-1)(m-1)} = \sigma^2$
$\frac{SS_e}{(n-1)(m-1)}$ is an unbiased estimator of $\sigma^2$.
Row Sum of Squares:
Suppose now that we want to test the null hypothesis that there is no row effect.
To obtain a second estimator of $\sigma^2$, consider the row averages $X_{i.},\ i = 1, ... , m$. Note that, when $H_0$ is true, each $\alpha_i$ is equal to 0, and so $E[X_{i.}] = \mu+\alpha_i = \mu$. Because each $X_{i.}$ is the average of $n$ random variables, each having variance $\sigma^2$, it follows that $Var(X_{i.})=\frac{\sigma^2}{n}$.
Thus, we see that when $H_0$ is true:
$\sum_{i=1}^m \frac{(X_{i.}-E[X_{i.}])^2}{Var(X_{i.})} = n \sum_{i=1}^m \frac{(X_{i.}-\mu)^2}{\sigma^2} \sim \chi_m^2$
Substituting the estimator $X_{..}$ for the unknown $\mu$ reduces the degrees of freedom by 1, so when $H_0$ is true, $\sum_{i=1}^m \frac{n(X_{i.}-X_{..})^2}{\sigma^2} \sim \chi^2_{m-1}$. We define
$SS_r = n \sum_{i=1}^m (X_{i.}-X_{..})^2$
$\frac{E[SS_r]}{\sigma^2} = m-1 \quad \rightarrow \quad \frac{E[SS_r]}{m-1} = \sigma^2$
$\frac{SS_r}{m-1}$ is an estimator of $\sigma^2$.
By symmetry, the column sum of squares $SS_c = m \sum_{j=1}^n (X_{.j}-X_{..})^2$ plays the same role for the test of no column effect.
Estimators of $\sigma^2$ | Conditions |
---|---|
$\frac{SS_e}{(n-1)(m-1)}$ | Always true |
$\frac{SS_r}{m-1}$ | Only when $H_0$ of Test 1 (no row effect) is true |
$\frac{SS_c}{n-1}$ | Only when $H_0$ of Test 2 (no column effect) is true |
Summary:
Source | Sum of Squares | Degrees of Freedom |
---|---|---|
Row | $SS_r = n \sum_{i=1}^m (X_{i.}-X_{..})^2$ | $m-1$ |
Column | $SS_c = m \sum_{j=1}^n (X_{.j}-X_{..})^2$ | $n-1$ |
Error | $SS_e = \sum_{i=1}^m \sum_{j=1}^n (X_{ij}-X_{i.}-X_{.j}+X_{..})^2$ | $(n-1)(m-1)$ |
$\\ $
Null Hypothesis | Test Statistic | Significance Level $\alpha$ Test | P_Value |
---|---|---|---|
All $\alpha_i = 0$ | $\frac{\frac{SS_r}{m-1}}{\frac{SS_e}{(n-1)(m-1)}}$ | Reject if $F_0 \geq F_{m-1,(n-1)(m-1),\alpha}$ | $P(F_{m-1,(n-1)(m-1)} \geq F_0)$ |
All $\beta_j = 0$ | $\frac{\frac{SS_c}{n-1}}{\frac{SS_e}{(n-1)(m-1)}}$ | Reject if $F_0 \geq F_{n-1,(n-1)(m-1),\alpha}$ | $P(F_{n-1,(n-1)(m-1)} \geq F_0)$ |
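Both tests can be carried out by hand. The sketch below applies the table above to the same 3-by-5 data matrix used in the statsmodels example that follows:

```python
import numpy as np
from scipy.stats import f

X = np.array([[9, 6, 8, 4, 6],
              [10, 4, 4, 6, 5],
              [1, 2, 2, 3, 1]], dtype=float)
m, n = X.shape

row_means = X.mean(axis=1)  # X_i.
col_means = X.mean(axis=0)  # X_.j
grand = X.mean()            # X_..

SS_r = n * ((row_means - grand) ** 2).sum()
SS_c = m * ((col_means - grand) ** 2).sum()
SS_e = ((X - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()

MS_e = SS_e / ((n - 1) * (m - 1))
F_row = (SS_r / (m - 1)) / MS_e  # tests H0: all alpha_i = 0
F_col = (SS_c / (n - 1)) / MS_e  # tests H0: all beta_j = 0

p_row = f.sf(F_row, m - 1, (n - 1) * (m - 1))
p_col = f.sf(F_col, n - 1, (n - 1) * (m - 1))
print(f'row effect:    F = {F_row:.4f}, p = {p_row:.4f}')
print(f'column effect: F = {F_col:.4f}, p = {p_col:.4f}')
```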
A = [9,6,8,4,6]
B = [10,4,4,6,5]
C = [1,2,2,3,1]
data = pd.DataFrame()
data['A'] = A
data['B'] = B
data['C'] = C
model = ols('C ~ A + B + A:B', data=data).fit()
aov_table = anova_lm(model)  # sequential (Type I) sums of squares; note the keyword is typ, not type
print(aov_table.round(4))
          df  sum_sq  mean_sq       F  PR(>F)
A        1.0  1.2737   1.2737  1.1071  0.4838
B        1.0  0.0253   0.0253  0.0220  0.9062
A:B      1.0  0.3506   0.3506  0.3047  0.6789
Residual 1.0  1.1504   1.1504     NaN     NaN
The p-values for A and B are both greater than 0.05, which implies that neither factor has a statistically significant effect on C.
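Note that the formula above treats A and B as numeric predictors, so it fits a regression of C on A and B rather than the two-way row/column layout derived in this section. A more conventional statsmodels formulation of that layout reshapes all 15 observations into long format and declares each factor categorical with C(); the column names below ('value', 'group', 'pos') are my own. Because the layout is balanced, the resulting sums of squares are exactly the $SS_r$, $SS_c$, and $SS_e$ of the summary table above.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Same 15 observations in long format: 'group' is the row factor,
# 'pos' is the column factor (position within each sample)
values = [9, 6, 8, 4, 6,      # A
          10, 4, 4, 6, 5,     # B
          1, 2, 2, 3, 1]      # C
long = pd.DataFrame({'value': values,
                     'group': ['A'] * 5 + ['B'] * 5 + ['C'] * 5,
                     'pos': list(range(5)) * 3})

# Additive two-way model: E[X_ij] = mu + alpha_i + beta_j
model = ols('value ~ C(group) + C(pos)', data=long).fit()
print(anova_lm(model, typ=2).round(4))  # the keyword is typ, not type
```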