How it works:
Example: chess. Here, the agent decides on a series of moves depending on the state of the board (the environment), and the reward can be defined as winning or losing at the end of the game.
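To make this loop concrete, here is a minimal, hypothetical sketch of the agent-environment-reward cycle; ChessEnvironment and choose_move are illustrative stand-ins, not part of any library used later in this notebook.
# A toy sketch of the agent-environment-reward loop (hypothetical objects).
class ChessEnvironment:
    def reset(self):
        return 'initial board'            # starting state
    def step(self, move):
        # return (next_state, reward, done); reward stays 0 until the game ends
        return 'next board', 1, True      # e.g. +1 for a win, -1 for a loss

def choose_move(state):
    return 'some legal move'              # the agent's policy (hypothetical)

env = ChessEnvironment()
state, done, total_reward = env.reset(), False, 0
while not done:
    move = choose_move(state)             # decide based on the current board state
    state, reward, done = env.step(move)
    total_reward += reward                # the win/lose reward arrives only at game end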
%matplotlib inline
import sklearn
from sklearn import datasets
from sklearn import linear_model
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report
from sklearn.preprocessing import scale
# boston data
boston = datasets.load_boston()
y = boston.target
X = boston.data
boston['feature_names']
array(['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT'], dtype='<U7')
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf
# Fit an OLS regression of house price on all 13 predictors with statsmodels
results = smf.ols('boston.target ~ boston.data', data=boston).fit()
print(results.summary())
                            OLS Regression Results
==============================================================================
Dep. Variable:          boston.target   R-squared:                       0.741
Model:                            OLS   Adj. R-squared:                  0.734
Method:                 Least Squares   F-statistic:                     108.1
Date:                Mon, 22 Apr 2019   Prob (F-statistic):          6.72e-135
Time:                        16:22:28   Log-Likelihood:                -1498.8
No. Observations:                 506   AIC:                             3026.
Df Residuals:                     492   BIC:                             3085.
Df Model:                          13
Covariance Type:            nonrobust
===================================================================================
                      coef    std err          t      P>|t|      [0.025      0.975]
-----------------------------------------------------------------------------------
Intercept          36.4595      5.103      7.144      0.000      26.432      46.487
boston.data[0]     -0.1080      0.033     -3.287      0.001      -0.173      -0.043
boston.data[1]      0.0464      0.014      3.382      0.001       0.019       0.073
boston.data[2]      0.0206      0.061      0.334      0.738      -0.100       0.141
boston.data[3]      2.6867      0.862      3.118      0.002       0.994       4.380
boston.data[4]    -17.7666      3.820     -4.651      0.000     -25.272     -10.262
boston.data[5]      3.8099      0.418      9.116      0.000       2.989       4.631
boston.data[6]      0.0007      0.013      0.052      0.958      -0.025       0.027
boston.data[7]     -1.4756      0.199     -7.398      0.000      -1.867      -1.084
boston.data[8]      0.3060      0.066      4.613      0.000       0.176       0.436
boston.data[9]     -0.0123      0.004     -3.280      0.001      -0.020      -0.005
boston.data[10]    -0.9527      0.131     -7.283      0.000      -1.210      -0.696
boston.data[11]     0.0093      0.003      3.467      0.001       0.004       0.015
boston.data[12]    -0.5248      0.051    -10.347      0.000      -0.624      -0.425
==============================================================================
Omnibus:                      178.041   Durbin-Watson:                   1.078
Prob(Omnibus):                  0.000   Jarque-Bera (JB):              783.126
Skew:                           1.521   Prob(JB):                    8.84e-171
Kurtosis:                       8.281   Cond. No.                     1.51e+04
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 1.51e+04. This might indicate that there are strong multicollinearity or other numerical problems.
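The regressors in the summary above are labelled boston.data[0], boston.data[1], and so on, which is hard to read. A small sketch (assuming the boston object loaded above) that refits the same model with named columns so the summary shows CRIM, ZN, etc.:
# Refit the same OLS model with named columns for a more readable summary.
import pandas as pd
import statsmodels.api as sm

boston_df = pd.DataFrame(boston.data, columns=boston['feature_names'])
X_named = sm.add_constant(boston_df)      # add the intercept column
print(sm.OLS(boston.target, X_named).fit().summary())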
regr = linear_model.LinearRegression()
lm = regr.fit(boston.data, y)
lm.intercept_, lm.coef_, lm.score(boston.data, y)
(36.45948838508965, array([-1.08011358e-01, 4.64204584e-02, 2.05586264e-02, 2.68673382e+00, -1.77666112e+01, 3.80986521e+00, 6.92224640e-04, -1.47556685e+00, 3.06049479e-01, -1.23345939e-02, -9.52747232e-01, 9.31168327e-03, -5.24758378e-01]), 0.7406426641094095)
predicted = regr.predict(boston.data)
fig, ax = plt.subplots()
ax.scatter(y, predicted)
ax.plot([y.min(), y.max()], [y.min(), y.max()], 'k--', lw=4)
ax.set_xlabel('$Measured$', fontsize = 20)
ax.set_ylabel('$Predicted$', fontsize = 20)
plt.show()
boston.data
array([[6.3200e-03, 1.8000e+01, 2.3100e+00, ..., 1.5300e+01, 3.9690e+02, 4.9800e+00], [2.7310e-02, 0.0000e+00, 7.0700e+00, ..., 1.7800e+01, 3.9690e+02, 9.1400e+00], [2.7290e-02, 0.0000e+00, 7.0700e+00, ..., 1.7800e+01, 3.9283e+02, 4.0300e+00], ..., [6.0760e-02, 0.0000e+00, 1.1930e+01, ..., 2.1000e+01, 3.9690e+02, 5.6400e+00], [1.0959e-01, 0.0000e+00, 1.1930e+01, ..., 2.1000e+01, 3.9345e+02, 6.4800e+00], [4.7410e-02, 0.0000e+00, 1.1930e+01, ..., 2.1000e+01, 3.9690e+02, 7.8800e+00]])
from sklearn.model_selection import train_test_split
Xs_train, Xs_test, y_train, y_test = train_test_split(boston.data,
boston.target,
test_size=0.2,
random_state=42)
regr = linear_model.LinearRegression()
lm = regr.fit(Xs_train, y_train)
lm.intercept_, lm.coef_, lm.score(Xs_train, y_train)
(30.24675099392396, array([-1.13055924e-01, 3.01104641e-02, 4.03807204e-02, 2.78443820e+00, -1.72026334e+01, 4.43883520e+00, -6.29636221e-03, -1.44786537e+00, 2.62429736e-01, -1.06467863e-02, -9.15456240e-01, 1.23513347e-02, -5.08571424e-01]), 0.7508856358979673)
predicted = regr.predict(Xs_test)
fig, ax = plt.subplots()
ax.scatter(y_test, predicted)
ax.plot([y.min(), y.max()], [y.min(), y.max()], 'k--', lw=4)
ax.set_xlabel('$Measured$', fontsize = 20)
ax.set_ylabel('$Predicted$', fontsize = 20)
plt.show()
In k-fold CV, the training set is split into k smaller sets (other approaches are described below, but they generally follow the same principle). For each of the k "folds", a model is trained on the other k-1 folds and validated on the held-out fold; the k scores are then averaged. A minimal sketch of this procedure is shown below.
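As a sketch of what cross_val_score (used next) does internally, here is the k-fold procedure written out with scikit-learn's KFold, reusing the X and y arrays defined above:
# Manual k-fold: train on k-1 folds, score on the held-out fold, then average.
from sklearn.model_selection import KFold

kf = KFold(n_splits=3, shuffle=True, random_state=42)
fold_scores = []
for train_idx, test_idx in kf.split(X):
    model = linear_model.LinearRegression()
    model.fit(X[train_idx], y[train_idx])                       # fit on k-1 folds
    fold_scores.append(model.score(X[test_idx], y[test_idx]))   # score on the held-out fold
print(fold_scores, sum(fold_scores) / len(fold_scores))
Note that cross_val_score with an integer cv does not shuffle by default, which is part of why the unshuffled 3-fold score below can come out negative on this ordered data set.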
from sklearn.model_selection import cross_val_score
regr = linear_model.LinearRegression()
scores = cross_val_score(regr, boston.data , boston.target, cv = 3)
scores.mean()
-1.5841985220997412
scores = [cross_val_score(regr, boston.data,\
boston.target,\
cv = int(i)).mean() \
for i in range(3, 50)]
plt.plot(range(3, 50), scores,'r-o')
plt.show()
data_X_scale = scale(boston.data)
scores = cross_val_score(regr,data_X_scale, boston.target,\
cv = 7)
scores.mean()
0.45059442471362826
import pandas as pd
df = pd.read_csv('../data/tianya_bbs_threads_list.txt', sep = "\t", header=None)
df=df.rename(columns = {0:'title', 1:'link', 2:'author',3:'author_page', 4:'click', 5:'reply', 6:'time'})
df[:2]
|   | title | link | author | author_page | click | reply | time |
|---|-------|------|--------|-------------|-------|-------|------|
| 0 | 【民间语文第161期】宁波px启示:船进港湾人应上岸 | /post-free-2849477-1.shtml | 贾也 | http://www.tianya.cn/50499450 | 194675 | 2703 | 2012-10-29 07:59 |
| 1 | 宁波镇海PX项目引发群体上访 当地政府发布说明(转载) | /post-free-2839539-1.shtml | 无上卫士ABC | http://www.tianya.cn/74341835 | 88244 | 1041 | 2012-10-24 12:41 |
# The purpose of this function is to let the reader see that
# different random samples can lead to quite different results.
def randomSplit(dataX, dataY, num):
    # Randomly hold out `num` observations as a test set;
    # each x value is wrapped in a list so sklearn sees a 2-D feature matrix.
    import random
    dataX_train = []
    dataX_test = []
    dataY_train = []
    dataY_test = []
    test_index = random.sample(range(len(dataX)), num)
    for k in range(len(dataX)):
        if k in test_index:
            dataX_test.append([dataX[k]])
            dataY_test.append(dataY[k])
        else:
            dataX_train.append([dataX[k]])
            dataY_train.append(dataY[k])
    return dataX_train, dataX_test, dataY_train, dataY_test
import numpy as np
# Use only one feature: the log-transformed click count predicts the log-transformed reply count
# Split the data into training/testing sets
data_X_train, data_X_test, data_y_train, data_y_test = randomSplit(np.log(df.click+1),
np.log(df.reply+1), 20)
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(data_X_train, data_y_train)
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(data_X_test, data_y_test))
Variance score: 0.42
data_X_train[:3]
[[12.179091917198399], [11.387872315966666], [11.323941765302724]]
y_true, y_pred = data_y_test, regr.predict(data_X_test)
plt.scatter(y_pred, y_true, color='black')
plt.show()
# Plot outputs
plt.scatter(data_X_test, data_y_test, color='black')
plt.plot(data_X_test, regr.predict(data_X_test), color='blue', linewidth=3)
plt.show()
# The coefficients
'Coefficients: \n', regr.coef_
('Coefficients: \n', array([0.68623605]))
# The mean squared error on the test set
"Mean squared error: %.2f" % np.mean((regr.predict(data_X_test) - data_y_test) ** 2)
'Mean squared error: 0.98'
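To see the point made in the comment above randomSplit (different random samples can give quite different results), the split-fit-score step can be repeated a few times; a small sketch reusing the objects defined above:
# Repeat the random split several times; the test-set R^2 moves around noticeably.
for run in range(5):
    Xtr, Xte, ytr, yte = randomSplit(np.log(df.click + 1), np.log(df.reply + 1), 20)
    m = linear_model.LinearRegression().fit(Xtr, ytr)
    print('run %d, test R^2 = %.2f' % (run, m.score(Xte, yte)))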
df.click_log = [[np.log(df.click[i]+1)] for i in range(len(df))]
df.reply_log = [[np.log(df.reply[i]+1)] for i in range(len(df))]
/Users/datalab/Applications/anaconda/lib/python3.5/site-packages/ipykernel/__main__.py:1: UserWarning: Pandas doesn't allow columns to be created via a new attribute name - see https://pandas.pydata.org/pandas-docs/stable/indexing.html#attribute-access
/Users/datalab/Applications/anaconda/lib/python3.5/site-packages/ipykernel/__main__.py:2: UserWarning: Pandas doesn't allow columns to be created via a new attribute name - see https://pandas.pydata.org/pandas-docs/stable/indexing.html#attribute-access
from sklearn.model_selection import train_test_split
Xs_train, Xs_test, y_train, y_test = train_test_split(df.click_log, df.reply_log,test_size=0.2, random_state=0)
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(Xs_train, y_train)
# Explained variance score: 1 is perfect prediction
'Variance score: %.2f' % regr.score(Xs_test, y_test)
'Variance score: 0.62'
# Plot outputs
plt.scatter(Xs_test, y_test, color='black')
plt.plot(Xs_test, regr.predict(Xs_test), color='blue', linewidth=3)
plt.show()
from sklearn.model_selection import cross_val_score
regr = linear_model.LinearRegression()
scores = cross_val_score(regr, df.click_log, \
df.reply_log, cv = 3)
scores.mean()
-0.6837007391943056
regr = linear_model.LinearRegression()
scores = cross_val_score(regr, df.click_log,
df.reply_log, cv =5)
scores.mean()
-0.7188149722820985
# Mark a thread as a repost (1) if its title contains '转载', otherwise 0
repost = []
for i in df.title:
    if u'转载' in i:
        repost.append(1)
    else:
        repost.append(0)
data_X = [[df.click[i], df.reply[i]] for i in range(len(df))]
data_X[:3]
[[194675, 2703], [88244, 1041], [82779, 625]]
from sklearn.linear_model import LogisticRegression
df['repost'] = repost
model = LogisticRegression()
model.fit(data_X,df.repost)
model.score(data_X,df.repost)
0.61241970021413272
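Besides the mean accuracy returned by score, the fitted logistic model also exposes per-class probabilities; a quick sketch for the first few threads:
# Predicted class probabilities; columns follow model.classes_ (0 = original, 1 = repost).
print(model.classes_)
print(model.predict_proba(data_X[:5]))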
def randomSplitLogistic(dataX, dataY, num):
    # Same idea as randomSplit, but each x is already a feature list,
    # so it is appended as-is.
    import random
    dataX_train = []
    dataX_test = []
    dataY_train = []
    dataY_test = []
    test_index = random.sample(range(len(dataX)), num)
    for k in range(len(dataX)):
        if k in test_index:
            dataX_test.append(dataX[k])
            dataY_test.append(dataY[k])
        else:
            dataX_train.append(dataX[k])
            dataY_train.append(dataY[k])
    return dataX_train, dataX_test, dataY_train, dataY_test
# Split the data into training/testing sets
data_X_train, data_X_test, data_y_train, data_y_test = randomSplitLogistic(data_X, df.repost, 20)
# Create logistic regression object
log_regr = LogisticRegression()
# Train the model using the training sets
log_regr.fit(data_X_train, data_y_train)
# Mean accuracy on the test set: 1 is perfect prediction
'Accuracy: %.2f' % log_regr.score(data_X_test, data_y_test)
'Accuracy: 0.45'
y_true, y_pred = data_y_test, log_regr.predict(data_X_test)
y_true, y_pred
([1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], array([0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]))
print(classification_report(y_true, y_pred))
             precision    recall  f1-score   support

          0       0.50      0.17      0.25         6
          1       0.72      0.93      0.81        14

avg / total       0.66      0.70      0.64        20
from sklearn.model_selection import train_test_split
Xs_train, Xs_test, y_train, y_test = train_test_split(data_X, df.repost, test_size=0.2, random_state=42)
# Create logistic regression object
log_regr = LogisticRegression()
# Train the model using the training sets
log_regr.fit(Xs_train, y_train)
# Mean accuracy on the test set: 1 is perfect prediction
'Accuracy: %.2f' % log_regr.score(Xs_test, y_test)
'Accuracy: 0.60'
print('Logistic score for test set: %f' % log_regr.score(Xs_test, y_test))
print('Logistic score for training set: %f' % log_regr.score(Xs_train, y_train))
y_true, y_pred = y_test, log_regr.predict(Xs_test)
print(classification_report(y_true, y_pred))
Logistic score for test set: 0.595745
Logistic score for training set: 0.613941

             precision    recall  f1-score   support

          0       1.00      0.03      0.05        39
          1       0.59      1.00      0.74        55

avg / total       0.76      0.60      0.46        94
logre = LogisticRegression()
scores = cross_val_score(logre, data_X, df.repost, cv = 3)
scores.mean()
0.53333333333333333
logre = LogisticRegression()
data_X_scale = scale(data_X)
# Preprocessing matters: standardizing the features improves the cross-validated score.
scores = cross_val_score(logre, data_X_scale, df.repost, cv = 3)
scores.mean()
0.62948717948717947
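scale standardizes each column to zero mean and unit variance, which puts the click and reply counts on a comparable footing; a quick check of what it did:
# Each column of the scaled matrix now has mean ~0 and standard deviation ~1.
print(np.mean(data_X_scale, axis=0))
print(np.std(data_X_scale, axis=0))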
Naive Bayes is a classification technique based on Bayes' theorem, with an assumption of independence among predictors.
In simple terms, a Naive Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature.
Why is it called "naive"? Consider a fruit that may be classified as an apple if it is red, round, and about 3 inches in diameter. Even if these features depend on each other or on the existence of the other features, a Naive Bayes classifier treats each of them as contributing independently to the probability that the fruit is an apple.
Bayes' theorem provides a way of computing the posterior probability $P(c|x)$ from $P(c)$, $P(x)$, and $P(x|c)$:

$$P(c|x) = \frac{P(x|c)\,P(c)}{P(x)}$$
Step 1: Convert the data set into a frequency table
Step 2: Create a likelihood table by computing the probabilities of each feature value given each class, e.g. $P(Sunny | Yes)$.
Step 3: Use the Naive Bayes equation to calculate the posterior probability for each class. The class with the highest posterior probability is the predicted outcome.
For example, suppose we want to know whether players will play when the weather is sunny. We can answer this with the posterior probabilities:
$P(Yes | Sunny) = \frac{P(Sunny | Yes) \, P(Yes)}{P(Sunny)}$
Here we have $P(Sunny | Yes) = 3/9 = 0.33$, $P(Sunny) = 5/14 = 0.36$, and $P(Yes) = 9/14 = 0.64$.
So $P(Yes | Sunny) = \frac{0.33 \times 0.64}{0.36} = 0.60$, which is the higher probability.
$P(No | Sunny) = \frac{P(Sunny | No) \, P(No)}{P(Sunny)}$
Here we have $P(Sunny | No) = 2/5 = 0.40$, $P(Sunny) = 5/14 = 0.36$, and $P(No) = 5/14 = 0.36$.
So $P(No | Sunny) = \frac{0.40 \times 0.36}{0.36} = 0.40$, which is the lower probability; the prediction is therefore Yes.
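The same arithmetic in code; the counts (3/9, 2/5, 9/14, 5/14) come from the frequency table of the standard weather example above:
# Posterior probabilities for the weather example via Bayes' rule.
p_sunny_given_yes, p_yes = 3 / 9, 9 / 14
p_sunny_given_no, p_no = 2 / 5, 5 / 14
p_sunny = 5 / 14
print(p_sunny_given_yes * p_yes / p_sunny)   # P(Yes | Sunny) ~= 0.60
print(p_sunny_given_no * p_no / p_sunny)     # P(No | Sunny)  ~= 0.40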
from sklearn import naive_bayes
' '.join(dir(naive_bayes))
'ABCMeta BaseDiscreteNB BaseEstimator BaseNB BernoulliNB ClassifierMixin GaussianNB LabelBinarizer MultinomialNB __all__ __builtins__ __doc__ __file__ __name__ __package__ _check_partial_fit_first_call abstractmethod binarize check_X_y check_array check_is_fitted in1d issparse label_binarize logsumexp np safe_sparse_dot six'
#Import Library of Gaussian Naive Bayes model
from sklearn.naive_bayes import GaussianNB
import numpy as np
#assigning predictor and target variables
x= np.array([[-3,7],[1,5], [1,2], [-2,0], [2,3], [-4,0], [-1,1], [1,1], [-2,2], [2,7], [-4,1], [-2,7]])
Y = np.array([3, 3, 3, 3, 4, 3, 3, 4, 3, 4, 4, 4])
#Create a Gaussian Classifier
model = GaussianNB()
# Train the model using the training sets
model.fit(x[:8], Y[:8])
#Predict Output
predicted= model.predict([[1,2],[3,4]])
predicted
array([4, 3])
We can evaluate the Naive Bayes model in the same way: first with a random train/test split, then with cross-validation.
data_X_train, data_X_test, data_y_train, data_y_test = randomSplit(df.click, df.reply, 20)
# Train the model using the training sets
model.fit(data_X_train, data_y_train)
#Predict Output
predicted= model.predict(data_X_test)
predicted
array([41, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
model.score(data_X_test, data_y_test)
0.65000000000000002
from sklearn.model_selection import cross_val_score
model = GaussianNB()
scores = cross_val_score(model, [[c] for c in df.click],\
df.reply, cv = 7)
scores.mean()
Warning: The least populated class in y has only 1 members, which is too few. The minimum number of labels for any class cannot be less than n_folds=7.
0.53413410073295453
from sklearn import tree
model = tree.DecisionTreeClassifier(criterion='gini')
data_X_train, data_X_test, data_y_train, data_y_test = randomSplitLogistic(data_X, df.repost, 20)
model.fit(data_X_train,data_y_train)
model.score(data_X_train,data_y_train)
0.91275167785234901
# Predict
model.predict(data_X_test)
array([0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0])
# cross-validation
scores = cross_val_score(model, data_X, df.repost, cv = 3)
scores.mean()
0.33461538461538459
from sklearn import svm
# Create SVM classification object
model=svm.SVC()
' '.join(dir(svm))
'LinearSVC LinearSVR NuSVC NuSVR OneClassSVM SVC SVR __all__ __builtins__ __cached__ __doc__ __file__ __loader__ __name__ __package__ __path__ __spec__ base bounds classes l1_min_c liblinear libsvm libsvm_sparse'
data_X_train, data_X_test, data_y_train, data_y_test = randomSplitLogistic(data_X, df.repost, 20)
model.fit(data_X_train,data_y_train)
model.score(data_X_train,data_y_train)
0.90380313199105144
# Predict
model.predict(data_X_test)
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1])
# cross-validation
scores = []
cvs = [3, 5, 10, 25, 50, 75, 100]
for i in cvs:
    score = cross_val_score(model, data_X, df.repost, cv=i)
    scores.append(score.mean())  # try to tune cv
plt.plot(cvs, scores, 'b-o')
plt.xlabel('$cv$', fontsize = 20)
plt.ylabel('$Score$', fontsize = 20)
plt.show()
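SVC is sensitive to the scale of its inputs, and click and reply are raw counts on very different ranges. As a quick comparison (a sketch; the exact score depends on the data), the same cross-validation can be run on the standardized features data_X_scale computed earlier:
# Cross-validate the SVM on standardized features; scaling often changes SVC results markedly.
scores_scaled = cross_val_score(svm.SVC(), data_X_scale, df.repost, cv=5)
print(scores_scaled.mean())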
import numpy as np
from sklearn import tree
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
train = pd.read_csv('../data/tatanic_train.csv',
sep = ",")
train.head()
|   | Unnamed: 0 | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S |
| 1 | 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
| 2 | 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S |
| 3 | 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S |
| 4 | 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S |
train["Age"] = train["Age"].fillna(train["Age"].median())
train["Fare"] = train["Fare"].fillna(train["Fare"].median())
#Convert the male and female groups to integer form
train["Sex"][train["Sex"] == "male"] = 0
train["Sex"][train["Sex"] == "female"] = 1
#Impute the Embarked variable
train["Embarked"] = train["Embarked"].fillna('S')
#Convert the Embarked classes to integer form
train["Embarked"][train["Embarked"] == "S"] = 0
train["Embarked"][train["Embarked"] == "C"] = 1
train["Embarked"][train["Embarked"] == "Q"] = 2
#Create the target and features numpy arrays: target, features_one
target = train['Survived'].values
features_one = train[["Pclass", "Sex", "Age", "Fare"]].values
#Fit your first decision tree: my_tree_one
my_tree_one = tree.DecisionTreeClassifier()
my_tree_one = my_tree_one.fit(features_one, target)
#Look at the importance of the included features and print the score
print(my_tree_one.feature_importances_)
print(my_tree_one.score(features_one, target))
[ 0.12294397  0.31274009  0.23680307  0.32751287]
0.977553310887
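To make the importances readable, they can be paired with the feature names used to build features_one (same column order); a small sketch:
# Pair each importance with its feature name (same order as features_one).
for name, importance in zip(["Pclass", "Sex", "Age", "Fare"], my_tree_one.feature_importances_):
    print('%s: %.3f' % (name, importance))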
test = pd.read_csv('../data/tatanic_test.csv', sep = ",")
# Impute the missing value with the median
test.Fare[152] = test.Fare.median()
test["Age"] = test["Age"].fillna(test["Age"].median())
#Convert the male and female groups to integer form
test["Sex"][test["Sex"] == "male"] = 0
test["Sex"][test["Sex"] == "female"] = 1
#Impute the Embarked variable
test["Embarked"] = test["Embarked"].fillna('S')
#Convert the Embarked classes to integer form
test["Embarked"][test["Embarked"] == "S"] = 0
test["Embarked"][test["Embarked"] == "C"] = 1
test["Embarked"][test["Embarked"] == "Q"] = 2
# Extract the features from the test set: Pclass, Sex, Age, and Fare.
test_features = test[["Pclass","Sex", "Age", "Fare"]].values
# Make your prediction using the test set
my_prediction = my_tree_one.predict(test_features)
# Create a data frame with two columns: PassengerId & Survived. Survived contains your predictions
PassengerId =np.array(test['PassengerId']).astype(int)
my_solution = pd.DataFrame(my_prediction, PassengerId, columns = ["Survived"])
my_solution[:3]
|     | Survived |
|-----|----------|
| 892 | 0 |
| 893 | 0 |
| 894 | 1 |
# Check that your data frame has 418 entries
my_solution.shape
(418, 1)
# Write your solution to a csv file with the name my_solution.csv
my_solution.to_csv("../data/tatanic_solution_one.csv",
index_label = ["PassengerId"])
# Create a new array with the added features: features_two
features_two = train[["Pclass","Age","Sex","Fare",\
"SibSp", "Parch", "Embarked"]].values
#Control overfitting by setting "max_depth" to 10 and "min_samples_split" to 5 : my_tree_two
max_depth = 10
min_samples_split = 5
my_tree_two = tree.DecisionTreeClassifier(max_depth = max_depth,
min_samples_split = min_samples_split,
random_state = 1)
my_tree_two = my_tree_two.fit(features_two, target)
#Print the score of the new decision tree
print(my_tree_two.score(features_two, target))
0.905723905724
# create a new train set with the new variable
train_two = train
train_two['family_size'] = train.SibSp + train.Parch + 1
# Create a new decision tree my_tree_three
features_three = train[["Pclass", "Sex", "Age", \
"Fare", "SibSp", "Parch", "family_size"]].values
my_tree_three = tree.DecisionTreeClassifier()
my_tree_three = my_tree_three.fit(features_three, target)
# Print the score of this decision tree
print(my_tree_three.score(features_three, target))
0.979797979798
#Import the `RandomForestClassifier`
from sklearn.ensemble import RandomForestClassifier
#We want the Pclass, Age, Sex, Fare,SibSp, Parch, and Embarked variables
features_forest = train[["Pclass", "Age", "Sex", "Fare", "SibSp", "Parch", "Embarked"]].values
#Building the Forest: my_forest
n_estimators = 100
forest = RandomForestClassifier(max_depth = 10, min_samples_split=2,
n_estimators = n_estimators, random_state = 1)
my_forest = forest.fit(features_forest, target)
#Print the score of the random forest
print(my_forest.score(features_forest, target))
#Compute predictions and print the length of the prediction vector:test_features, pred_forest
test_features = test[["Pclass", "Age", "Sex", "Fare", "SibSp", "Parch", "Embarked"]].values
pred_forest = my_forest.predict(test_features)
print(len(test_features))
print(pred_forest[:3])
0.939393939394
418
[0 0 0]
#Request and print the `.feature_importances_` attribute
print(my_tree_two.feature_importances_)
print(my_forest.feature_importances_)
#Compute and print the mean accuracy score for both models
print(my_tree_two.score(features_two, target))
print(my_forest.score(features_two, target))
[ 0.14130255  0.17906027  0.41616727  0.17938711  0.05039699  0.01923751  0.0144483 ]
[ 0.10384741  0.20139027  0.31989322  0.24602858  0.05272693  0.04159232  0.03452128]
0.905723905724
0.939393939394
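Pairing the forest's importances with the feature names used in features_forest (same column order) makes the two models easier to compare; a small sketch:
# Sort the random forest's feature importances, labelled with their feature names.
names = ["Pclass", "Age", "Sex", "Fare", "SibSp", "Parch", "Embarked"]
for name, importance in sorted(zip(names, my_forest.feature_importances_),
                               key=lambda pair: pair[1], reverse=True):
    print('%s: %.3f' % (name, importance))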
Essentials of machine learning algorithms (with Python and R code): http://blog.csdn.net/a6225301/article/details/50479672
The "Python Machine Learning" book code repository and info resource https://github.com/rasbt/python-machine-learning-book
An Introduction to Statistical Learning (James, Witten, Hastie, Tibshirani, 2013) : Python code https://github.com/JWarmenhoven/ISLR-python
BuildingMachineLearningSystemsWithPython https://github.com/luispedro/BuildingMachineLearningSystemsWithPython