Objective and Focus:
Our primary goal is to refine marketing strategies for existing clients who are not currently purchasing, with the aim of converting them into buyers. Because each false positive represents marketing spend on a client who will not buy, this initiative emphasizes keeping false positives low.
In contrast, if our aim were to encourage repeat purchases from existing buyers, our strategy would shift toward understanding what drives those customers to return.
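In this framing, a false positive is a client the model flags as a likely buyer who in fact does not buy, so precision is the natural score to watch. A toy calculation, using the true/false positive counts that appear in the test-set confusion matrix later in this notebook:
# precision = share of flagged clients who actually buy
tp, fp = 84, 19              # true positives, false positives (from the confusion matrix below)
precision = tp / (tp + fp)
print(round(precision, 2))   # 0.82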
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
df = pd.read_csv(r"C:\Users\Teni\Desktop\Git-Github\Datasets\Logistic Regression\car_data.csv")
df
 | User ID | Gender | Age | AnnualSalary | Purchased
---|---|---|---|---|---
0 | 385 | Male | 35 | 20000 | 0 |
1 | 681 | Male | 40 | 43500 | 0 |
2 | 353 | Male | 49 | 74000 | 0 |
3 | 895 | Male | 40 | 107500 | 1 |
4 | 661 | Male | 25 | 79000 | 0 |
... | ... | ... | ... | ... | ... |
995 | 863 | Male | 38 | 59000 | 0 |
996 | 800 | Female | 47 | 23500 | 0 |
997 | 407 | Female | 28 | 138500 | 1 |
998 | 299 | Female | 48 | 134000 | 1 |
999 | 687 | Female | 44 | 73500 | 0 |
1000 rows × 5 columns
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000 entries, 0 to 999
Data columns (total 5 columns):
 #   Column        Non-Null Count  Dtype
---  ------        --------------  -----
 0   User ID       1000 non-null   int64
 1   Gender        1000 non-null   object
 2   Age           1000 non-null   int64
 3   AnnualSalary  1000 non-null   int64
 4   Purchased     1000 non-null   int64
dtypes: int64(4), object(1)
memory usage: 39.2+ KB
General observations about the data
# drop User ID — it is just a unique row identifier and carries no predictive signal
df.drop('User ID', axis=1, inplace=True)
df
 | Gender | Age | AnnualSalary | Purchased
---|---|---|---|---
0 | Male | 35 | 20000 | 0 |
1 | Male | 40 | 43500 | 0 |
2 | Male | 49 | 74000 | 0 |
3 | Male | 40 | 107500 | 1 |
4 | Male | 25 | 79000 | 0 |
... | ... | ... | ... | ... |
995 | Male | 38 | 59000 | 0 |
996 | Female | 47 | 23500 | 0 |
997 | Female | 28 | 138500 | 1 |
998 | Female | 48 | 134000 | 1 |
999 | Female | 44 | 73500 | 0 |
1000 rows × 4 columns
# one-hot encode Gender into dummy (indicator) columns
df = pd.get_dummies(df, columns=['Gender'])
df
 | Age | AnnualSalary | Purchased | Gender_Female | Gender_Male
---|---|---|---|---|---
0 | 35 | 20000 | 0 | 0 | 1 |
1 | 40 | 43500 | 0 | 0 | 1 |
2 | 49 | 74000 | 0 | 0 | 1 |
3 | 40 | 107500 | 1 | 0 | 1 |
4 | 25 | 79000 | 0 | 0 | 1 |
... | ... | ... | ... | ... | ... |
995 | 38 | 59000 | 0 | 0 | 1 |
996 | 47 | 23500 | 0 | 1 | 0 |
997 | 28 | 138500 | 1 | 1 | 0 |
998 | 48 | 134000 | 1 | 1 | 0 |
999 | 44 | 73500 | 0 | 1 | 0 |
1000 rows × 5 columns
# drop one of the two dummy columns to avoid perfect multicollinearity (the dummy-variable trap)
df = df.drop('Gender_Male', axis=1)
df
 | Age | AnnualSalary | Purchased | Gender_Female
---|---|---|---|---
0 | 35 | 20000 | 0 | 0 |
1 | 40 | 43500 | 0 | 0 |
2 | 49 | 74000 | 0 | 0 |
3 | 40 | 107500 | 1 | 0 |
4 | 25 | 79000 | 0 | 0 |
... | ... | ... | ... | ... |
995 | 38 | 59000 | 0 | 0 |
996 | 47 | 23500 | 0 | 1 |
997 | 28 | 138500 | 1 | 1 |
998 | 48 | 134000 | 1 | 1 |
999 | 44 | 73500 | 0 | 1 |
1000 rows × 4 columns
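As an aside, the encode-then-drop steps above can be collapsed into a single call. A minimal sketch on a fresh copy of the raw data — note that drop_first=True drops the first category alphabetically, so it would keep Gender_Male rather than Gender_Female:
# one-hot encode and drop one level in a single step; also avoids the dummy-variable trap
demo = pd.read_csv(r"C:\Users\Teni\Desktop\Git-Github\Datasets\Logistic Regression\car_data.csv")
demo = pd.get_dummies(demo.drop('User ID', axis=1), columns=['Gender'], drop_first=True)
demo.columns   # Index(['Age', 'AnnualSalary', 'Purchased', 'Gender_Male'], dtype='object')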
df = df.rename(columns={'Gender_Female':'female_gender'})
df
 | Age | AnnualSalary | Purchased | female_gender
---|---|---|---|---
0 | 35 | 20000 | 0 | 0 |
1 | 40 | 43500 | 0 | 0 |
2 | 49 | 74000 | 0 | 0 |
3 | 40 | 107500 | 1 | 0 |
4 | 25 | 79000 | 0 | 0 |
... | ... | ... | ... | ... |
995 | 38 | 59000 | 0 | 0 |
996 | 47 | 23500 | 0 | 1 |
997 | 28 | 138500 | 1 | 1 |
998 | 48 | 134000 | 1 | 1 |
999 | 44 | 73500 | 0 | 1 |
1000 rows × 4 columns
df.describe()
 | Age | AnnualSalary | Purchased | female_gender
---|---|---|---|---
count | 1000.000000 | 1000.000000 | 1000.000000 | 1000.000000 |
mean | 40.106000 | 72689.000000 | 0.402000 | 0.516000 |
std | 10.707073 | 34488.341867 | 0.490547 | 0.499994 |
min | 18.000000 | 15000.000000 | 0.000000 | 0.000000 |
25% | 32.000000 | 46375.000000 | 0.000000 | 0.000000 |
50% | 40.000000 | 72000.000000 | 0.000000 | 1.000000 |
75% | 48.000000 | 90000.000000 | 1.000000 | 1.000000 |
max | 63.000000 | 152500.000000 | 1.000000 | 1.000000 |
# checking for null data.
df.isnull().sum()
Age              0
AnnualSalary     0
Purchased        0
female_gender    0
dtype: int64
No null values in any column (Age, AnnualSalary, Purchased, female_gender).
# check whether the target classes are balanced
df['Purchased'].value_counts()
0    598
1    402
Name: Purchased, dtype: int64
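The split is roughly 60/40: mildly imbalanced, but not enough to demand resampling. The proportions can be read off directly:
# class proportions instead of raw counts
df['Purchased'].value_counts(normalize=True)   # 0: 0.598, 1: 0.402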
sns.countplot(data=df, x='Purchased')
plt.show()
plt.figure(figsize=(12, 8), dpi=200)
sns.scatterplot(data=df, x='Age', y='AnnualSalary', hue='Purchased')
plt.tight_layout()
plt.show()
plt.figure(figsize=(8,6), dpi=200)
sns.scatterplot(data=df, x='Age', y='AnnualSalary', hue='female_gender')
# plt.tight_layout()
plt.show()
sns.countplot(x='female_gender',data=df, hue='Purchased')
plt.title('Purchases by Gender: Male (0) vs Female (1)')
plt.legend(title='Purchased', loc='upper right')
plt.show()
Notes from the above:
sns.heatmap(df.corr(), annot=True)
plt.show()
Notes from the analysis
X = df.drop('Purchased', axis=1)
y = df.Purchased
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=101)
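One optional refinement, given the 60/40 target split: stratify=y keeps the class ratio identical in the train and test partitions. A sketch of that variant (not used below, so the results stay reproducible as shown):
# stratified variant: preserves the Purchased class ratio in both partitions
X_train_s, X_test_s, y_train_s, y_test_s = train_test_split(
    X, y, test_size=0.30, random_state=101, stratify=y)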
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
# fit the scaler on the training data only, then apply the same transform
# to the test data, to avoid leaking test-set information into training
scaled_x_train = scaler.fit_transform(X_train)
scaled_x_test = scaler.transform(X_test)
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(scaled_x_train, y_train)
LogisticRegression()
y_pred= model.predict(scaled_x_test)
y_pred
array([0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0], dtype=int64)
model.predict_proba(scaled_x_test)
array([[0.99868814, 0.00131186],
       [0.4578815 , 0.5421185 ],
       [0.21962047, 0.78037953],
       ...,
       [0.36238789, 0.63761211],
       [0.92155105, 0.07844895],
       [0.60632465, 0.39367535]])
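Because the business goal penalizes false positives, the default 0.5 cutoff applied by predict is not sacred: raising it trades recall for precision, flagging fewer clients but with higher confidence. A minimal sketch — the 0.7 threshold is an arbitrary choice for illustration:
# flag a client as a likely buyer only when the model is at least 70% confident
proba_buy = model.predict_proba(scaled_x_test)[:, 1]   # P(Purchased = 1)
y_pred_strict = (proba_buy >= 0.7).astype(int)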
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix, ConfusionMatrixDisplay
# correctly classified clients make up roughly 84% of the test set
accuracy_score(y_test, y_pred)
0.8366666666666667
confusion_matrix(y_test, y_pred)
array([[167,  19],
       [ 30,  84]], dtype=int64)
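Unpacking the four cells by name makes the marketing reading explicit: the 19 false positives are the clients a campaign would target in vain.
# rows are actual classes, columns are predicted classes
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(tn, fp, fn, tp)   # 167 19 30 84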
ConfusionMatrixDisplay.from_estimator(model, scaled_x_test, y_test, cmap=plt.cm.Blues)
plt.show()
Notes:
print(classification_report(y_test, y_pred))
              precision    recall  f1-score   support

           0       0.85      0.90      0.87       186
           1       0.82      0.74      0.77       114

    accuracy                           0.84       300
   macro avg       0.83      0.82      0.82       300
weighted avg       0.84      0.84      0.83       300
from sklearn.linear_model import LogisticRegression
# a fresh logistic regression instance, to be tuned with a grid search
lr_model = LogisticRegression(max_iter=100)
lr_model.fit(scaled_x_train, y_train)
LogisticRegression()
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer, precision_score
from sklearn.preprocessing import StandardScaler
# liblinear supports both the l1 and l2 penalties searched below
param_grid = {
    'C': [0.1, 1, 10, 100],
    'solver': ['liblinear'],
    'penalty': ['l1', 'l2'],
}
scaler = StandardScaler()
scaled_x_train = scaler.fit_transform(X_train)
scaled_x_test = scaler.transform(X_test)
scorer = make_scorer(precision_score, average='weighted')
grid_search = GridSearchCV(lr_model, param_grid, cv=7, scoring=scorer)
grid_search.fit(scaled_x_train, y_train)
print("Best hyperparameters: ", grid_search.best_params_)
print("Best cross_validation precision score: ",grid_search.best_score_)
Best hyperparameters:  {'C': 0.1, 'penalty': 'l1', 'solver': 'liblinear'}
Best cross_validation precision score:  0.8392065150308922
# evaluate the tuned model found by the grid search on the held-out test set
y_pred = grid_search.best_estimator_.predict(scaled_x_test)
print(classification_report(y_test, y_pred))
              precision    recall  f1-score   support

           0       0.85      0.90      0.87       186
           1       0.82      0.74      0.77       114

    accuracy                           0.84       300
   macro avg       0.83      0.82      0.82       300
weighted avg       0.84      0.84      0.83       300
Grid search (GridSearchCV) returned the same test-set result as the untuned model, so a different model will be tried next.
from sklearn.ensemble import RandomForestClassifier
# try a random forest for potentially better performance
rf_model = RandomForestClassifier(n_estimators=100, random_state=42)
# note: tree-based models do not require feature scaling; it is kept here
# for consistency with the earlier workflow and does no harm
scaler = StandardScaler()
rf_scaled_x_train = scaler.fit_transform(X_train)
rf_scaled_x_test = scaler.transform(X_test)
rf_model.fit(rf_scaled_x_train, y_train)
y_pred_rf = rf_model.predict(rf_scaled_x_test)
# Evaluate performance metrics
from sklearn.metrics import classification_report
print("Random Forest Classification Report:")
print(classification_report(y_test, y_pred_rf))
Random Forest Classification Report:
              precision    recall  f1-score   support

           0       0.92      0.92      0.92       186
           1       0.87      0.88      0.87       114

    accuracy                           0.90       300
   macro avg       0.90      0.90      0.90       300
weighted avg       0.90      0.90      0.90       300
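Beyond the headline scores, the forest can report which inputs drive its decisions, which feeds directly into the marketing story. A quick look — the exact values depend on the fitted model, so treat the idea rather than the numbers as the takeaway:
# relative importance of each feature in the fitted random forest
for name, score in zip(X.columns, rf_model.feature_importances_):
    print(f"{name}: {score:.3f}")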
ConfusionMatrixDisplay.from_estimator(rf_model, rf_scaled_x_test, y_test)
plt.show()
Note
confusion_matrix(y_test, y_pred_rf)
array([[171,  15],
       [ 14, 100]], dtype=int64)
# retrain the chosen model on the full dataset before saving it for deployment
final_model = RandomForestClassifier(n_estimators=100, random_state=42)
final_model.fit(X, y)
# in-sample predictions, as a quick sanity check only
y_pred = final_model.predict(X)
from joblib import dump, load
dump(final_model, 'car_sales_pred.joblib')
['car_sales_pred.joblib']
model = load('car_sales_pred.joblib')
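As a sanity check on the round trip, the reloaded model can score a new client. The values below are hypothetical, made up purely for illustration; the final model was trained on unscaled features, so raw values are passed in:
# hypothetical new client: 40 years old, 85,000 annual salary, female
new_client = pd.DataFrame([[40, 85000, 1]],
                          columns=['Age', 'AnnualSalary', 'female_gender'])
model.predict(new_client)         # predicted class: 1 = likely to purchase
model.predict_proba(new_client)   # class probabilities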
Conclusion and Recommendations for Marketers:
Key Observations:
Insights for Marketers:
Recommendations:
Targeting Strategies: Develop targeted campaigns that resonate with the preferences and priorities of different demographic groups. For instance, focus on convenience and value propositions for older buyers, while emphasizing lifestyle benefits for younger demographics.
Financial Accessibility: Consider flexible financing options or promotional offers that address potential financial constraints identified among non-purchasing clients.
Retention Strategies: Implement loyalty programs and personalized marketing to retain existing clients and encourage repeat purchases.