Finally, in this chapter, you will work with unstructured text data and learn ways to engineer columnar features out of a text corpus. You will compare how different approaches affect how much context is extracted from a text, and how to balance the need for context against creating too many features. This is the summary of the lecture "Feature Engineering for Machine Learning in Python", via DataCamp.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
plt.rcParams['figure.figsize'] = (8, 8)
Unstructured text data cannot be directly used in most analyses. Multiple steps need to be taken to go from a long free form string to a set of numeric columns in the right format that can be ingested by a machine learning model. The first step of this process is to standardize the data and eliminate any characters that could cause problems later on in your analytic pipeline.
In this chapter you will be working with a new dataset containing the inaugural speeches of the presidents of the United States, loaded as `speech_df`, with the speeches stored in the `text` column.
speech_df = pd.read_csv('./dataset/inaugural_speeches.csv')
# Print the first 5 rows of the text column
speech_df['text'].head()
0    Fellow-Citizens of the Senate and of the House...
1    Fellow Citizens: I AM again called upon by th...
2    WHEN it was first perceived, in early times, t...
3    Friends and Fellow-Citizens: CALLED upon to u...
4    PROCEEDING, fellow-citizens, to that qualifica...
Name: text, dtype: object
# Replace all non-letter characters with a whitespace
speech_df['text_clean'] = speech_df['text'].str.replace('[^a-zA-Z]', ' ', regex=True)
# Change to lower case
speech_df['text_clean'] = speech_df['text_clean'].str.lower()
# Print the first 5 rows of text_clean column
print(speech_df['text_clean'].head())
0    fellow citizens of the senate and of the house...
1    fellow citizens i am again called upon by th...
2    when it was first perceived in early times t...
3    friends and fellow citizens called upon to u...
4    proceeding fellow citizens to that qualifica...
Name: text_clean, dtype: object
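As a small optional aside (not part of the course exercise), you could also collapse the runs of whitespace left behind by the replacement; keeping the result in a separate, hypothetical column leaves the counts computed below unchanged:

# Optional: collapse repeated whitespace and trim the ends (hypothetical extra column)
speech_df['text_compact'] = (speech_df['text_clean']
                             .str.replace(r'\s+', ' ', regex=True)
                             .str.strip())
print(speech_df['text_compact'].head())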
Once the text has been cleaned and standardized you can begin creating features from the data. The most fundamental information you can calculate about free-form text is its size, such as its length and number of words. In this exercise (and the rest of this chapter), you will focus on the cleaned/transformed text column (`text_clean`) you created in the last exercise.
# Find the length of each text
speech_df['char_cnt'] = speech_df['text_clean'].str.len()
# Count the number of words in each text
speech_df['word_cnt'] = speech_df['text_clean'].str.split().str.len()
# Find the average length of word
speech_df['avg_word_length'] = speech_df['char_cnt'] / speech_df['word_cnt']
# Print the first 5 rows of these columns
speech_df[['text_clean', 'char_cnt', 'word_cnt', 'avg_word_length']].head()
|   | text_clean | char_cnt | word_cnt | avg_word_length |
|---|---|---|---|---|
| 0 | fellow citizens of the senate and of the house... | 8616 | 1432 | 6.016760 |
| 1 | fellow citizens i am again called upon by th... | 787 | 135 | 5.829630 |
| 2 | when it was first perceived in early times t... | 13871 | 2323 | 5.971158 |
| 3 | friends and fellow citizens called upon to u... | 10144 | 1736 | 5.843318 |
| 4 | proceeding fellow citizens to that qualifica... | 12902 | 2169 | 5.948363 |
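Since matplotlib was imported above, a quick optional look at how speech lengths are distributed can help sanity-check these new columns (this plot is not part of the original exercise):

# Optional: visualize the distribution of speech lengths
plt.hist(speech_df['word_cnt'], bins=20)
plt.xlabel('Words per speech')
plt.ylabel('Number of speeches')
plt.show()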
Once high-level information has been recorded you can begin creating features based on the actual content of each text. One way to do this is to approach it in a similar way to how you worked with categorical variables in the earlier lessons: a column is created for each unique word in the dataset, and for each text the number of times the word occurs is counted. These "count" columns can then be used to train machine learning models.
from sklearn.feature_extraction.text import CountVectorizer
# Instantiate CountVectorizer
cv = CountVectorizer()
# Fit the vectorizer
cv.fit(speech_df['text_clean'])
# Print feature names
print(cv.get_feature_names()[:10])
['abandon', 'abandoned', 'abandonment', 'abate', 'abdicated', 'abeyance', 'abhorring', 'abide', 'abiding', 'abilities']
Once the vectorizer has been fit to the data, it can be used to transform the text to an array representing the word counts. This array will have a row per block of text and a column for each of the features generated by the vectorizer that you observed in the last exercise.
# Apply the vectorizer
cv_transformed = cv.transform(speech_df['text_clean'])
# Print the full array
cv_array = cv_transformed.toarray()
print(cv_array)
[[0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]
 [0 1 0 ... 0 0 0]
 ...
 [0 1 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]]
# Print the shape of cv_array
print(cv_array.shape)
(58, 9043)
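Note that the transform returns a SciPy sparse matrix and `.toarray()` densifies it. That is harmless for 58 speeches, but it is worth seeing how sparse these count features really are; a quick sketch:

# Most word/speech combinations never occur, so the matrix is mostly zeros
sparsity = cv_transformed.nnz / float(cv_array.shape[0] * cv_array.shape[1])
print(f'{sparsity:.1%} of the entries are non-zero')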
As you have seen, using the `CountVectorizer` with its default settings creates a feature for every single word in your corpus. This can create far too many features, often including ones that will provide very little analytical value.

For this purpose `CountVectorizer` has parameters that you can set to reduce the number of features:

- `min_df`: Use only words that occur in more than this percentage of documents. This can be used to remove outlier words that will not generalize across texts.
- `max_df`: Use only words that occur in less than this percentage of documents. This is useful to eliminate very common words that occur in every document without adding value, such as "and" or "the".

from sklearn.feature_extraction.text import CountVectorizer
# Specify arguments to limit the number of features generated
cv = CountVectorizer(min_df=0.2, max_df=0.8)
# Fit, transform, and convert into array
cv_transformed = cv.fit_transform(speech_df['text_clean'])
cv_array = cv_transformed.toarray()
# Print the array shape
print(cv_array.shape)
(58, 818)
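Note that `min_df` and `max_df` can also be given as integers, in which case scikit-learn interprets them as absolute document counts rather than proportions; a small sketch:

# Ints are treated as absolute document counts: keep words that appear
# in at least 5 speeches but in no more than 50 of them
cv_counts = CountVectorizer(min_df=5, max_df=50)
print(cv_counts.fit_transform(speech_df['text_clean']).toarray().shape)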
Now that you have generated these count-based features in an array, you will need to reformat them so that they can be combined with the rest of the dataset. This can be achieved by converting the array into a pandas DataFrame, with the feature names you found earlier as the column names, and then concatenating it with the original DataFrame.
# Create a DataFrame with these features
cv_df = pd.DataFrame(cv_array, columns = cv.get_feature_names()).add_prefix('Counts_')
# Add the new columns to the original DataFrame
speech_df_new = pd.concat([speech_df, cv_df], axis=1, sort=False)
speech_df_new.head()
|   | Name | Inaugural Address | Date | text | text_clean | char_cnt | word_cnt | avg_word_length | Counts_abiding | Counts_ability | ... | Counts_women | Counts_words | Counts_work | Counts_wrong | Counts_year | Counts_years | Counts_yet | Counts_you | Counts_young | Counts_your |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | George Washington | First Inaugural Address | Thursday, April 30, 1789 | Fellow-Citizens of the Senate and of the House... | fellow citizens of the senate and of the house... | 8616 | 1432 | 6.016760 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 | 0 | 9 |
| 1 | George Washington | Second Inaugural Address | Monday, March 4, 1793 | Fellow Citizens: I AM again called upon by th... | fellow citizens i am again called upon by th... | 787 | 135 | 5.829630 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 2 | John Adams | Inaugural Address | Saturday, March 4, 1797 | WHEN it was first perceived, in early times, t... | when it was first perceived in early times t... | 13871 | 2323 | 5.971158 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 2 | 3 | 0 | 0 | 0 | 1 |
| 3 | Thomas Jefferson | First Inaugural Address | Wednesday, March 4, 1801 | Friends and Fellow-Citizens: CALLED upon to u... | friends and fellow citizens called upon to u... | 10144 | 1736 | 5.843318 | 0 | 0 | ... | 0 | 0 | 1 | 2 | 0 | 0 | 2 | 7 | 0 | 7 |
| 4 | Thomas Jefferson | Second Inaugural Address | Monday, March 4, 1805 | PROCEEDING, fellow-citizens, to that qualifica... | proceeding fellow citizens to that qualifica... | 12902 | 2169 | 5.948363 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 2 | 2 | 2 | 4 | 0 | 4 |
5 rows × 826 columns
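One thing to keep in mind is that `pd.concat(axis=1)` aligns on the index, not on position. That works here because `speech_df` has a default RangeIndex, but if you had filtered or re-ordered the DataFrame you would want to reset both indices first; a minimal defensive sketch:

# Resetting the indices makes the column-wise join purely positional
speech_df_new = pd.concat([speech_df.reset_index(drop=True),
                           cv_df.reset_index(drop=True)], axis=1, sort=False)
print(speech_df_new.shape)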
While counts of occurrences of words can be useful for building models, words that occur many times may skew the results undesirably. To prevent these common words from overpowering your model, a form of normalization can be used. In this lesson you will be using Term frequency-inverse document frequency (Tf-idf), as was discussed in the video. Tf-idf has the effect of reducing the value of common words, while increasing the weight of words that do not occur in many documents.
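To make the weighting concrete, here is a minimal numpy sketch of the default scheme scikit-learn uses (raw counts multiplied by a smoothed inverse document frequency, then each row L2-normalized); the toy count matrix is just an illustration, not data from the speeches:

import numpy as np

# Toy counts: 2 documents, 3 terms
counts = np.array([[3, 0, 1],
                   [1, 2, 0]], dtype=float)
n_docs = counts.shape[0]
df = (counts > 0).sum(axis=0)                  # documents each term appears in
idf = np.log((1 + n_docs) / (1 + df)) + 1      # smoothed inverse document frequency
tfidf = counts * idf                           # weight raw counts by idf
tfidf /= np.linalg.norm(tfidf, axis=1, keepdims=True)  # L2-normalize each document
print(tfidf.round(3))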
from sklearn.feature_extraction.text import TfidfVectorizer
# Instantiate TfidfVectorizer
tv = TfidfVectorizer(max_features=100, stop_words='english')
# Fit the vectorizer and transform the data
tv_transformed = tv.fit_transform(speech_df['text_clean'])
# Create a DataFrame with these features
tv_df = pd.DataFrame(tv_transformed.toarray(),
columns=tv.get_feature_names()).add_prefix('TFIDF_')
tv_df.head()
|   | TFIDF_action | TFIDF_administration | TFIDF_america | TFIDF_american | TFIDF_americans | TFIDF_believe | TFIDF_best | TFIDF_better | TFIDF_change | TFIDF_citizens | ... | TFIDF_things | TFIDF_time | TFIDF_today | TFIDF_union | TFIDF_united | TFIDF_war | TFIDF_way | TFIDF_work | TFIDF_world | TFIDF_years |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.000000 | 0.133415 | 0.000000 | 0.105388 | 0.0 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.229644 | ... | 0.000000 | 0.045929 | 0.0 | 0.136012 | 0.203593 | 0.000000 | 0.060755 | 0.000000 | 0.045929 | 0.052694 |
| 1 | 0.000000 | 0.261016 | 0.266097 | 0.000000 | 0.0 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.179712 | ... | 0.000000 | 0.000000 | 0.0 | 0.000000 | 0.199157 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| 2 | 0.000000 | 0.092436 | 0.157058 | 0.073018 | 0.0 | 0.000000 | 0.026112 | 0.060460 | 0.000000 | 0.106072 | ... | 0.032030 | 0.021214 | 0.0 | 0.062823 | 0.070529 | 0.024339 | 0.000000 | 0.000000 | 0.063643 | 0.073018 |
| 3 | 0.000000 | 0.092693 | 0.000000 | 0.000000 | 0.0 | 0.090942 | 0.117831 | 0.045471 | 0.053335 | 0.223369 | ... | 0.048179 | 0.000000 | 0.0 | 0.094497 | 0.000000 | 0.036610 | 0.000000 | 0.039277 | 0.095729 | 0.000000 |
| 4 | 0.041334 | 0.039761 | 0.000000 | 0.031408 | 0.0 | 0.000000 | 0.067393 | 0.039011 | 0.091514 | 0.273760 | ... | 0.082667 | 0.164256 | 0.0 | 0.121605 | 0.030338 | 0.094225 | 0.000000 | 0.000000 | 0.054752 | 0.062817 |
5 rows × 100 columns
After creating Tf-idf features you will often want to understand which words scored highest for each corpus. This can be achieved by isolating the row you want to examine and then sorting the scores from high to low.
# Isolate the row to be examined
sample_row = tv_df.iloc[0]
# Print the top 5 words of the sorted output
print(sample_row.sort_values(ascending=False).head())
TFIDF_government    0.367430
TFIDF_public        0.333237
TFIDF_present       0.315182
TFIDF_duty          0.238637
TFIDF_citizens      0.229644
Name: 0, dtype: float64
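If you want the top terms for every speech rather than a single row, the same idea can be applied across the whole DataFrame; a small sketch using `nlargest` (this helper is an extension, not part of the original exercise):

# Top 5 TF-IDF terms per speech
top_terms = tv_df.apply(lambda row: row.nlargest(5).index.tolist(), axis=1)
print(top_terms.head())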
When creating vectors from text, any transformations you perform on the training data before fitting a machine learning model also need to be applied to the new, unseen (test) data. To achieve this, follow the same approach as in the last chapter: fit the vectorizer only on the training data, and then apply it to the test data.
For this exercise the `speech_df` DataFrame has been split in two:

- `train_speech_df`: the training set, consisting of the first 45 speeches.
- `test_speech_df`: the test set, consisting of the remaining speeches.

train_speech_df = speech_df.iloc[:45]
test_speech_df = speech_df.iloc[45:]
# Instantiate TfidfVectorizer
tv = TfidfVectorizer(max_features=100, stop_words='english')
# Fit the vectorizer and transform the data
tv_transformed = tv.fit_transform(train_speech_df['text_clean'])
# Transform test data
test_tv_transformed = tv.transform(test_speech_df['text_clean'])
# Create new features for the test set
test_tv_df = pd.DataFrame(test_tv_transformed.toarray(),
columns=tv.get_feature_names()).add_prefix('TFIDF_')
test_tv_df.head()
|   | TFIDF_action | TFIDF_administration | TFIDF_america | TFIDF_american | TFIDF_authority | TFIDF_best | TFIDF_business | TFIDF_citizens | TFIDF_commerce | TFIDF_common | ... | TFIDF_subject | TFIDF_support | TFIDF_time | TFIDF_union | TFIDF_united | TFIDF_war | TFIDF_way | TFIDF_work | TFIDF_world | TFIDF_years |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.000000 | 0.029540 | 0.233954 | 0.082703 | 0.000000 | 0.000000 | 0.000000 | 0.022577 | 0.0 | 0.000000 | ... | 0.0 | 0.000000 | 0.115378 | 0.000000 | 0.024648 | 0.079050 | 0.033313 | 0.000000 | 0.299983 | 0.134749 |
| 1 | 0.000000 | 0.000000 | 0.547457 | 0.036862 | 0.000000 | 0.036036 | 0.000000 | 0.015094 | 0.0 | 0.000000 | ... | 0.0 | 0.019296 | 0.092567 | 0.000000 | 0.000000 | 0.052851 | 0.066817 | 0.078999 | 0.277701 | 0.126126 |
| 2 | 0.000000 | 0.000000 | 0.126987 | 0.134669 | 0.000000 | 0.131652 | 0.000000 | 0.000000 | 0.0 | 0.046997 | ... | 0.0 | 0.000000 | 0.075151 | 0.000000 | 0.080272 | 0.042907 | 0.054245 | 0.096203 | 0.225452 | 0.043884 |
| 3 | 0.037094 | 0.067428 | 0.267012 | 0.031463 | 0.039990 | 0.061516 | 0.050085 | 0.077301 | 0.0 | 0.000000 | ... | 0.0 | 0.098819 | 0.210690 | 0.000000 | 0.056262 | 0.030073 | 0.038020 | 0.235998 | 0.237026 | 0.061516 |
| 4 | 0.000000 | 0.000000 | 0.221561 | 0.156644 | 0.028442 | 0.087505 | 0.000000 | 0.109959 | 0.0 | 0.023428 | ... | 0.0 | 0.023428 | 0.187313 | 0.131913 | 0.040016 | 0.021389 | 0.081124 | 0.119894 | 0.299701 | 0.153133 |
5 rows × 100 columns
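Because the vocabulary was locked in when the vectorizer was fit on the training speeches, any word that appears only in the test speeches is simply ignored, and the test features line up column-for-column with the training features; a quick sanity-check sketch:

# The test-set columns match the training-set columns exactly
train_tv_df = pd.DataFrame(tv_transformed.toarray(),
                           columns=tv.get_feature_names()).add_prefix('TFIDF_')
print(train_tv_df.columns.equals(test_tv_df.columns))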
So far you have created features based on individual words in each of the texts. This can be quite powerful when used in a machine learning model, but you may be concerned that by looking at words individually a lot of the context is being ignored. To deal with this when creating models you can use n-grams, which are sequences of n consecutive words grouped together. For example:

- bigrams: sequences of two consecutive words
- trigrams: sequences of three consecutive words
These can be automatically created in your dataset by specifying the `ngram_range` argument as a tuple `(n1, n2)`, where all n-grams in the `n1` to `n2` range are included.
# Instantiate a trigram vectorizer
cv_trigram_vec = CountVectorizer(max_features=100,
stop_words='english',
ngram_range=(3, 3))
# Fit and apply trigram vectorizer
cv_trigram = cv_trigram_vec.fit_transform(speech_df['text_clean'])
# Print the trigram features
cv_trigram_vec.get_feature_names()[:10]
['ability preserve protect', 'agriculture commerce manufactures', 'america ideal freedom', 'amity mutual concession', 'anchor peace home', 'ask bow heads', 'best ability preserve', 'best interests country', 'bless god bless', 'bless united states']
It's always advisable, once you have created your features, to inspect them and ensure they are as you would expect. This will allow you to catch errors early, and perhaps influence what further feature engineering you need to do.
# Create a DataFrame of the features
cv_tri_df = pd.DataFrame(cv_trigram.toarray(),
columns = cv_trigram_vec.get_feature_names()).add_prefix('Counts_')
# Print the top 5 words in the sorted output
cv_tri_df.sum().sort_values(ascending=False).head()
Counts_constitution united states    20
Counts_people united states          13
Counts_preserve protect defend       10
Counts_mr chief justice              10
Counts_president united states        8
dtype: int64