In [1]:
%reload_ext autoreload
%autoreload 2
In [2]:
from fastai.basics import *

Rossmann

Data preparation / Feature engineering

In addition to the provided data, we will be using external datasets put together by participants in the Kaggle competition. You can download all of them here. Then you should untar them in the directory to which PATH is pointing below.

For completeness, the implementation used to put them together is included below.

In [7]:
!mkdir data/rossmann
In [8]:
!wget http://files.fast.ai/part2/lesson14/rossmann.tgz -O data/rossmann/rossmann.tgz
--2019-01-07 08:37:32--  http://files.fast.ai/part2/lesson14/rossmann.tgz
Resolving files.fast.ai (files.fast.ai)... 67.205.15.147
Connecting to files.fast.ai (files.fast.ai)|67.205.15.147|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7730448 (7.4M) [text/plain]
Saving to: ‘data/rossmann/rossmann.tgz’

data/rossmann/rossm 100%[===================>]   7.37M  3.53MB/s    in 2.1s    

2019-01-07 08:37:34 (3.53 MB/s) - ‘data/rossmann/rossmann.tgz’ saved [7730448/7730448]

In [13]:
!tar -xzf data/rossmann/rossmann.tgz -C data/rossmann
In [16]:
!rm data/rossmann/rossmann.tgz
In [3]:
PATH = Path('data/rossmann/')
PATH.ls()
Out[3]:
[PosixPath('data/rossmann/joined'),
 PosixPath('data/rossmann/state_names.csv'),
 PosixPath('data/rossmann/googletrend.csv'),
 PosixPath('data/rossmann/sample_submission.csv'),
 PosixPath('data/rossmann/test.csv'),
 PosixPath('data/rossmann/df'),
 PosixPath('data/rossmann/store.csv'),
 PosixPath('data/rossmann/train.csv'),
 PosixPath('data/rossmann/joined_test'),
 PosixPath('data/rossmann/weather.csv'),
 PosixPath('data/rossmann/store_states.csv')]
In [18]:
table_names = ['train', 'store', 'store_states', 'state_names', 'googletrend', 'weather', 'test']
tables = [pd.read_csv(PATH / f'{fname}.csv', low_memory=False) for fname in table_names]
train, store, store_states, state_names, googletrend, weather, test = tables
In [19]:
len(train), len(test)
Out[19]:
(1017209, 41088)

We turn state holidays into booleans to make them more convenient for modeling. We can do calculations on pandas fields using notation very similar (often identical) to numpy.

In [20]:
train.head()
Out[20]:
Store DayOfWeek Date Sales Customers Open Promo StateHoliday SchoolHoliday
0 1 5 2015-07-31 5263 555 1 1 0 1
1 2 5 2015-07-31 6064 625 1 1 0 1
2 3 5 2015-07-31 8314 821 1 1 0 1
3 4 5 2015-07-31 13995 1498 1 1 0 1
4 5 5 2015-07-31 4822 559 1 1 0 1
In [21]:
test.head()
Out[21]:
Id Store DayOfWeek Date Open Promo StateHoliday SchoolHoliday
0 1 1 4 2015-09-17 1.0 1 0 0
1 2 3 4 2015-09-17 1.0 1 0 0
2 3 7 4 2015-09-17 1.0 1 0 0
3 4 8 4 2015-09-17 1.0 1 0 0
4 5 9 4 2015-09-17 1.0 1 0 0
In [22]:
train.StateHoliday = train.StateHoliday != '0'
test.StateHoliday = test.StateHoliday != '0'
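
As a minimal illustration of the element-wise comparison used here (toy values, not taken from the dataset): comparing a Series against a scalar broadcasts the comparison across every element, just as it would in numpy.

In [ ]:
# Hypothetical toy Series: '0' means "no holiday", any other code means a holiday.
pd.Series(['0', 'a', 'b', '0']) != '0'   # expected: False, True, True, False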

join_df is a function for joining tables on specific fields. By default, it does a left outer join of right onto left using the given fields for each table.

Pandas does joins using the merge method. The suffixes argument describes the naming convention for duplicate fields. We've elected to leave the duplicate field names on the left untouched, and append a "_y" to those on the right.

In [12]:
def join_df(left, right, left_on, right_on=None, suffix='_y'):
    if right_on is None: right_on = left_on
    return left.merge(right, how='left', left_on=left_on, right_on=right_on,
                      suffixes=('', suffix))
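
To see how the suffixes argument behaves, here is a small sketch (hypothetical toy frames, not part of the dataset): a column that appears in both frames keeps its name on the left and picks up "_y" on the right, and unmatched keys get NaN because the join is a left join.

In [ ]:
# Hypothetical frames illustrating join_df's suffix handling.
left_demo  = pd.DataFrame({'key': [1, 2], 'value': ['a', 'b']})
right_demo = pd.DataFrame({'key': [1, 3], 'value': ['c', 'd']})
join_df(left_demo, right_demo, 'key')
# expected columns: key, value, value_y; the key==2 row gets NaN in value_y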

Join weather/state names.

In [28]:
weather = join_df(weather, state_names, 'file', 'StateName')

In Pandas you can add new columns to a dataframe simply by assigning to them. We'll do this for googletrend by extracting dates and state names from the given data and adding those columns.

We're also going to replace all instances of state name 'NI' with 'HB,NI' to match the usage in the rest of the data. This is a good opportunity to highlight Pandas indexing. We can use .loc[rows, cols] to select rows and columns from a dataframe. In this case, we select the rows with state name 'NI' using the boolean mask googletrend.State=='NI', and the column "State".

In [31]:
googletrend.head()
Out[31]:
file week trend
0 Rossmann_DE_SN 2012-12-02 - 2012-12-08 96
1 Rossmann_DE_SN 2012-12-09 - 2012-12-15 95
2 Rossmann_DE_SN 2012-12-16 - 2012-12-22 91
3 Rossmann_DE_SN 2012-12-23 - 2012-12-29 48
4 Rossmann_DE_SN 2012-12-30 - 2013-01-05 67
In [32]:
googletrend['Date'] = googletrend.week.str.split(' - ', expand=True)[0]
googletrend['State'] = googletrend.file.str.split('_', expand=True)[2]
googletrend.loc[googletrend.State == 'NI', 'State'] = 'HB,NI'
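
The same .loc pattern on a toy frame (hypothetical data), for reference: a boolean mask selects the rows, a column label selects the column, and assignment writes in place.

In [ ]:
# Hypothetical frame illustrating boolean-mask assignment with .loc.
demo = pd.DataFrame({'State': ['NI', 'BE', 'NI']})
demo.loc[demo.State == 'NI', 'State'] = 'HB,NI'
demo.State.tolist()   # expected: ['HB,NI', 'BE', 'HB,NI']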

The following extracts particular date fields from a complete datetime for the purpose of constructing categoricals.

You should always consider this feature extraction step when working with date-times. Without expanding your date-time into these additional fields, you can't capture any trend/cyclical behavior as a function of time at any of these granularities. We'll apply this to every table with a date field.

In [36]:
def add_datepart(df, fldname, drop=True, time=False):
    "Helper function that adds columns relevant to a date."
    fld = df[fldname]
    fld_dtype = fld.dtype
    if isinstance(fld_dtype, pd.core.dtypes.dtypes.DatetimeTZDtype):
        fld_dtype = np.datetime64
        
    if not np.issubdtype(fld_dtype, np.datetime64):
        df[fldname] = fld = pd.to_datetime(fld, infer_datetime_format=True)
    targ_pre = re.sub('[Dd]ate$', '', fldname)
    attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear',
            'Is_month_end', 'Is_month_start', 'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
    if time: attr = attr + ['Hour', 'Minute', 'Second']
    for n in attr: df[targ_pre + n] = getattr(fld.dt, n.lower())
    df[targ_pre + 'Elapsed'] = fld.astype(np.int64) // 10 ** 9
    if drop: df.drop(fldname, axis=1, inplace=True)
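
As a quick sketch of what add_datepart produces (toy one-column frame, hypothetical data):

In [ ]:
# Hypothetical frame; add_datepart should append Year, Month, Week, Day, Dayofweek,
# Dayofyear, the Is_* flags and Elapsed (seconds since the Unix epoch).
demo = pd.DataFrame({'Date': pd.to_datetime(['2015-07-31', '2015-08-01'])})
add_datepart(demo, 'Date', drop=False)
demo.columns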
In [40]:
add_datepart(weather, 'Date', drop=False)
add_datepart(googletrend, 'Date', drop=False)
add_datepart(train, "Date", drop=False)
add_datepart(test, "Date", drop=False)

The Google trends data has a special category for the whole of Germany - we'll pull that out so we can use it explicitly.

In [45]:
trend_de = googletrend[googletrend.file == 'Rossmann_DE']

Now we can outer join all of our data into a single dataframe. Recall that in outer joins, every time a value in the joining field on the left table does not have a corresponding value on the right table, the corresponding row in the new table has Null values for all right table fields. One way to check that all records are consistent and complete is to check for Null values post-join, as we do here.

Aside: Why not just do an inner join? If you are assuming that all records are complete and match on the field you desire, an inner join will do the same thing as an outer join. However, in the event you are wrong or a mistake is made, an outer join followed by a null-check will catch it. (Comparing before/after # of rows for inner join is equivalent, but requires keeping track of before/after row #'s. Outer join is easier.)
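
A tiny sketch of that aside (hypothetical frames): an inner join silently drops the unmatched row, while a left join surfaces it as a NaN you can check for.

In [ ]:
# Hypothetical frames: store 3 has no entry in the right-hand table.
left_demo  = pd.DataFrame({'Store': [1, 2, 3]})
right_demo = pd.DataFrame({'Store': [1, 2], 'State': ['HE', 'TH']})
inner = left_demo.merge(right_demo, how='inner', on='Store')   # 2 rows, mistake hidden
outer = join_df(left_demo, right_demo, 'Store')                # 3 rows, NaN flags the gap
len(inner), len(outer[outer.State.isnull()])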

In [49]:
store = join_df(store, store_states, 'Store')
len(store[store.State.isnull()])
Out[49]:
0
In [52]:
joined = join_df(train, store, 'Store')
joined_test = join_df(test, store, 'Store')
len(joined[joined.StoreType.isnull()]), len(joined_test[joined_test.StoreType.isnull()])
Out[52]:
(0, 0)
In [53]:
joined = join_df(joined, googletrend, ['State', 'Year', 'Week'])
joined_test = join_df(joined_test, googletrend, ['State', 'Year', 'Week'])
len(joined[joined.trend.isnull()]), len(joined_test[joined_test.trend.isnull()])
Out[53]:
(0, 0)
In [54]:
joined = joined.merge(trend_de, 'left', ['Year', 'Week'], suffixes=('', '_DE'))
joined_test = joined_test.merge(trend_de, 'left', ['Year', 'Week'], suffixes=('', '_DE'))
len(joined[joined.trend_DE.isnull()]), len(joined_test[joined_test.trend_DE.isnull()])
Out[54]:
(0, 0)
In [55]:
joined = join_df(joined, weather, ["State","Date"])
joined_test = join_df(joined_test, weather, ["State","Date"])
len(joined[joined.Mean_TemperatureC.isnull()]),len(joined_test[joined_test.Mean_TemperatureC.isnull()])
Out[55]:
(0, 0)
In [56]:
for df in (joined, joined_test):
    for c in df.columns:
        if c.endswith('_y'):
            if c in df.columns: df.drop(c, inplace=True, axis=1)

Next we'll fill in missing values to avoid complications with NA's. NA (not available) is how Pandas indicates missing values; many models have problems when missing values are present, so it's always important to think about how to deal with them. In these cases, we are picking an arbitrary signal value that doesn't otherwise appear in the data.

In [62]:
# environment settings for Pandas to widen output display to see more columns
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows',None)
pd.set_option('display.max_seq_items',None)
pd.set_option('display.max_colwidth', 500)
pd.set_option('expand_frame_repr', True)
In [63]:
joined.head()
Out[63]:
Store DayOfWeek Date Sales Customers Open Promo StateHoliday SchoolHoliday Year Month Week Day Dayofweek Dayofyear Is_month_end Is_month_start Is_quarter_end Is_quarter_start Is_year_end Is_year_start Elapsed StoreType Assortment CompetitionDistance CompetitionOpenSinceMonth CompetitionOpenSinceYear Promo2 Promo2SinceWeek Promo2SinceYear PromoInterval State file week trend file_DE week_DE trend_DE Date_DE State_DE Month_DE Day_DE Dayofweek_DE Dayofyear_DE Is_month_end_DE Is_month_start_DE Is_quarter_end_DE Is_quarter_start_DE Is_year_end_DE Is_year_start_DE Elapsed_DE Max_TemperatureC Mean_TemperatureC Min_TemperatureC Dew_PointC MeanDew_PointC Min_DewpointC Max_Humidity Mean_Humidity Min_Humidity Max_Sea_Level_PressurehPa Mean_Sea_Level_PressurehPa Min_Sea_Level_PressurehPa Max_VisibilityKm Mean_VisibilityKm Min_VisibilitykM Max_Wind_SpeedKm_h Mean_Wind_SpeedKm_h Max_Gust_SpeedKm_h Precipitationmm CloudCover Events WindDirDegrees StateName
0 1 5 2015-07-31 5263 555 1 1 False 1 2015 7 31 31 4 212 True False False False False False 1438300800 c a 1270.0 9.0 2008.0 0 NaN NaN NaN HE Rossmann_DE_HE 2015-08-02 - 2015-08-08 85 Rossmann_DE 2015-08-02 - 2015-08-08 83 2015-08-02 None 8 2 6 214 False False False False False False 1438473600 23 16 8 9 6 3 98 54 18 1021 1018 1015 31.0 15.0 10.0 24 11 NaN 0.0 1.0 Fog 13 Hessen
1 2 5 2015-07-31 6064 625 1 1 False 1 2015 7 31 31 4 212 True False False False False False 1438300800 a a 570.0 11.0 2007.0 1 13.0 2010.0 Jan,Apr,Jul,Oct TH Rossmann_DE_TH 2015-08-02 - 2015-08-08 80 Rossmann_DE 2015-08-02 - 2015-08-08 83 2015-08-02 None 8 2 6 214 False False False False False False 1438473600 19 13 7 9 6 3 100 62 25 1021 1019 1017 10.0 10.0 10.0 14 11 NaN 0.0 4.0 Fog 309 Thueringen
2 3 5 2015-07-31 8314 821 1 1 False 1 2015 7 31 31 4 212 True False False False False False 1438300800 a a 14130.0 12.0 2006.0 1 14.0 2011.0 Jan,Apr,Jul,Oct NW Rossmann_DE_NW 2015-08-02 - 2015-08-08 86 Rossmann_DE 2015-08-02 - 2015-08-08 83 2015-08-02 None 8 2 6 214 False False False False False False 1438473600 21 13 6 10 7 4 100 61 24 1022 1019 1017 31.0 14.0 10.0 14 5 NaN 0.0 2.0 Fog 354 NordrheinWestfalen
3 4 5 2015-07-31 13995 1498 1 1 False 1 2015 7 31 31 4 212 True False False False False False 1438300800 c c 620.0 9.0 2009.0 0 NaN NaN NaN BE Rossmann_DE_BE 2015-08-02 - 2015-08-08 74 Rossmann_DE 2015-08-02 - 2015-08-08 83 2015-08-02 None 8 2 6 214 False False False False False False 1438473600 19 14 9 9 7 4 94 61 30 1019 1017 1014 10.0 10.0 10.0 23 16 NaN 0.0 6.0 NaN 282 Berlin
4 5 5 2015-07-31 4822 559 1 1 False 1 2015 7 31 31 4 212 True False False False False False 1438300800 a a 29910.0 4.0 2015.0 0 NaN NaN NaN SN Rossmann_DE_SN 2015-08-02 - 2015-08-08 82 Rossmann_DE 2015-08-02 - 2015-08-08 83 2015-08-02 None 8 2 6 214 False False False False False False 1438473600 20 15 10 8 6 5 82 55 26 1020 1018 1016 10.0 10.0 10.0 14 11 NaN 0.0 4.0 NaN 290 Sachsen
In [64]:
for df in (joined, joined_test):
    df['CompetitionOpenSinceYear'] = df.CompetitionOpenSinceYear.fillna(1900).astype(np.int32)
    df['CompetitionOpenSinceMonth'] = df.CompetitionOpenSinceMonth.fillna(1).astype(np.int32)
    df['Promo2SinceYear'] = df.Promo2SinceYear.fillna(1900).astype(np.int32)
    df['Promo2SinceWeek'] = df.Promo2SinceWeek.fillna(1).astype(np.int32)

Next we'll extract the features "CompetitionOpenSince" and "CompetitionDaysOpen". We build the open-since date directly from the year and month columns with pd.to_datetime, with the day arbitrarily fixed to the 15th.

In [65]:
for df in (joined, joined_test):
    df['CompetitionOpenSince'] = pd.to_datetime(dict(year=df.CompetitionOpenSinceYear,
                                                     month=df.CompetitionOpenSinceMonth, day=15))
    df['CompetitionDaysOpen'] = df.Date.subtract(df.CompetitionOpenSince).dt.days

We'll replace some erroneous / outlying data.

In [66]:
for df in (joined, joined_test):
    df.loc[df.CompetitionDaysOpen < 0, 'CompetitionDaysOpen'] = 0
    df.loc[df.CompetitionOpenSinceYear < 1900, 'CompetitionDaysOpen'] = 0

We add "CompetitionMonthsOpen" field, limiting the maximum to 2 years to limit number of unique categories.

In [67]:
for df in (joined, joined_test):
    df['CompetitionMonthsOpen'] = df['CompetitionDaysOpen']//30
    df.loc[df.CompetitionMonthsOpen > 24, 'CompetitionMonthsOpen'] = 24
joined.CompetitionMonthsOpen.unique()
Out[67]:
array([24,  3, 19,  9, 16, 17,  7, 15, 22, 11, 13,  2, 23,  0, 12,  4, 10,  1, 14, 20,  8, 18,  6, 21,  5])

Same process for Promo dates. You may need to install the isoweek package first.

In [69]:
# Install isoweek if it isn't already available:
! pip install isoweek
Collecting isoweek
  Using cached https://files.pythonhosted.org/packages/c2/d4/fe7e2637975c476734fcbf53776e650a29680194eb0dd21dbdc020ca92de/isoweek-1.3.3-py2.py3-none-any.whl
Installing collected packages: isoweek
Successfully installed isoweek-1.3.3
In [70]:
from isoweek import Week
In [71]:
for df in (joined, joined_test):
    # Monday of the ISO week in which Promo2 started, then days elapsed since then.
    df['Promo2Since'] = pd.to_datetime(df.apply(lambda x: Week(
        x.Promo2SinceYear, x.Promo2SinceWeek).monday(), axis=1))
    df['Promo2Days'] = df.Date.subtract(df['Promo2Since']).dt.days
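
As a quick sanity check of isoweek's behavior: Week(year, week).monday() returns the calendar date of the Monday that starts the given ISO week. Store 2 above, for example, has Promo2SinceYear 2010 and Promo2SinceWeek 13.

In [ ]:
# ISO week 13 of 2010 should begin on Monday 2010-03-29.
Week(2010, 13).monday()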
In [72]:
%%time

for df in (joined, joined_test):
    df.loc[df.Promo2Days < 0, 'Promo2Days'] = 0
    df.loc[df.Promo2SinceYear < 1990, 'Promo2Days'] = 0
    df['Promo2Weeks'] = df['Promo2Days']//7
    df.loc[df.Promo2Weeks < 0, 'Promo2Weeks'] = 0
    df.loc[df.Promo2Weeks > 25, 'Promo2Weeks'] = 25
    df.Promo2Weeks.unique()
CPU times: user 54.3 s, sys: 1.54 s, total: 55.8 s
Wall time: 50.8 s
In [73]:
joined.to_pickle(PATH / 'joined')
joined_test.to_pickle(PATH / 'joined_test')

Durations

It is common when working with time series data to extract data that explains relationships across rows as opposed to columns, e.g.:

  • Running averages
  • Time until next event
  • Time since last event

This is often difficult to do with most table manipulation frameworks, since they are designed to work with relationships across columns. As such, we've written a helper function to handle this type of data.

We'll define a function get_elapsed for cumulative counting across a sorted dataframe. Given a particular field fld to monitor, this function will start tracking time since the last occurrence of that field. When the field is seen again, the counter is set to zero.

Upon initialization, this will produce NaN values for the elapsed time until the field is first encountered. The tracking is reset every time a new store is seen. We'll see how to use this shortly.

In [74]:
def get_elapsed(fld, pre):
    # Walk the (Store, Date)-sorted global `df` and record, for each row, the number
    # of days since `fld` was last truthy; reset whenever a new store starts.
    day1 = np.timedelta64(1, 'D')
    last_date = np.datetime64()   # NaT until the field is first seen
    last_store = 0
    res = []

    for s, v, d in zip(df.Store.values, df[fld].values, df.Date.values):
        if s != last_store:
            # New store: forget the previous store's last occurrence.
            last_date = np.datetime64()
            last_store = s
        if v: last_date = d
        res.append(((d - last_date).astype('timedelta64[D]') / day1))
    df[pre + fld] = res

We'll be applying this to a subset of columns:

In [75]:
columns = ['Date', 'Store', 'Promo', 'StateHoliday', 'SchoolHoliday']
In [78]:
#df = train[columns]
df = train[columns].append(test[columns])

Let's walk through an example.

Say we're looking at SchoolHoliday. We'll first sort by Store, then Date, and then call get_elapsed('SchoolHoliday', 'After'). This will:

  • Be applied to every row of the dataframe, in order of store and date
  • Add a column with the number of days since the last school holiday was seen
  • Count the days until the next school holiday instead, if we sort in the other direction
In [80]:
fld = 'SchoolHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')

We'll do this for two more fields.

In [81]:
%%time

fld = 'StateHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
CPU times: user 24.9 s, sys: 132 ms, total: 25.1 s
Wall time: 22.8 s
In [82]:
%%time

fld = 'Promo'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
CPU times: user 25.2 s, sys: 112 ms, total: 25.3 s
Wall time: 23 s

We're going to set the active index to Date.

In [86]:
df = df.set_index('Date')

Then set null values from elapsed field calculations to 0.

In [87]:
columns = ['SchoolHoliday', 'StateHoliday', 'Promo']
In [88]:
for o in ['Before', 'After']:
    for p in columns:
        a = o + p
        df[a] = df[a].fillna(0).astype(int)

Next we'll demonstrate window functions in Pandas to calculate rolling quantities.

Here we're sorting by date (sort_index()), grouping by Store (groupby()), and summing the events of interest defined in columns (sum()) over a 7-row rolling window (rolling()). With the ascending sort this counts events over the preceding week; repeating it on the descending sort counts events over the following week.

In [89]:
%%time

bwd = df[['Store'] + columns].sort_index().groupby('Store').rolling(7, min_periods=1).sum()
CPU times: user 3.54 s, sys: 48 ms, total: 3.59 s
Wall time: 2.66 s
In [90]:
fwd = df[['Store'] + columns].sort_index(ascending=False
                                        ).groupby('Store').rolling(7, min_periods=1).sum()
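
For a concrete picture of what the groupby/rolling combination produces, here is a toy sketch (hypothetical data, kept separate from df so it doesn't disturb the pipeline):

In [ ]:
# Hypothetical frame: two stores, a 0/1 Promo flag, one row per day per store.
toy = pd.DataFrame({'Store': [1, 1, 1, 2, 2],
                    'Promo': [1, 0, 1, 1, 1]},
                   index=pd.to_datetime(['2015-01-01', '2015-01-02', '2015-01-03',
                                         '2015-01-01', '2015-01-02']))
# 3-row rolling sum of Promo within each store (analogous to the 7-row windows above):
# store 1 -> 1.0, 1.0, 2.0; store 2 -> 1.0, 2.0
toy.groupby('Store').Promo.rolling(3, min_periods=1).sum()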

Next we want to drop the redundant Store column carried along by the window function (Store already appears in the group index), then move the index levels back into regular columns with reset_index.

Often in Pandas there is an option to do this in place, which is time- and memory-efficient when working with large datasets.

In [91]:
bwd.drop('Store', 1, inplace=True)
bwd.reset_index(inplace=True)
In [92]:
%%time

fwd.drop('Store', 1, inplace=True)
fwd.reset_index(inplace=True)
CPU times: user 48 ms, sys: 4 ms, total: 52 ms
Wall time: 20.7 ms
In [93]:
df.reset_index(inplace=True)

Now we'll merge these values onto the df.

In [94]:
df = df.merge(bwd, 'left', ['Date', 'Store'], suffixes=['', '_bw'])
df = df.merge(fwd, 'left', ['Date', 'Store'], suffixes=['', '_fw'])
In [96]:
%%time

df.drop(columns, 1, inplace=True)
CPU times: user 324 ms, sys: 92 ms, total: 416 ms
Wall time: 105 ms
In [97]:
df.head()
Out[97]:
Date Store AfterSchoolHoliday BeforeSchoolHoliday AfterStateHoliday BeforeStateHoliday AfterPromo BeforePromo SchoolHoliday_bw StateHoliday_bw Promo_bw SchoolHoliday_fw StateHoliday_fw Promo_fw
0 2015-09-17 1 13 0 105 0 0 0 0.0 0.0 4.0 0.0 0.0 1.0
1 2015-09-16 1 12 0 104 0 0 0 0.0 0.0 3.0 0.0 0.0 2.0
2 2015-09-15 1 11 0 103 0 0 0 0.0 0.0 2.0 0.0 0.0 3.0
3 2015-09-14 1 10 0 102 0 0 0 0.0 0.0 1.0 0.0 0.0 4.0
4 2015-09-13 1 9 0 101 0 9 -1 0.0 0.0 0.0 0.0 0.0 4.0

It's usually a good idea to back up large tables of extracted / wrangled features before you join them onto another one; that way you can easily go back to them if you need to make changes.

In [98]:
df.to_pickle(PATH / 'df')
In [4]:
# df = pd.read_pickle(PATH / 'df')
In [5]:
df['Date'] = pd.to_datetime(df.Date)
In [6]:
df.columns
Out[6]:
Index(['Date', 'Store', 'AfterSchoolHoliday', 'BeforeSchoolHoliday',
       'AfterStateHoliday', 'BeforeStateHoliday', 'AfterPromo', 'BeforePromo',
       'SchoolHoliday_bw', 'StateHoliday_bw', 'Promo_bw', 'SchoolHoliday_fw',
       'StateHoliday_fw', 'Promo_fw'],
      dtype='object')
In [7]:
joined = pd.read_pickle(PATH / 'joined')
joined_test = pd.read_pickle(PATH / 'joined_test')
In [10]:
# Sanity check
len(joined), len(joined_test), len(df)
Out[10]:
(1017209, 41088, 1058297)
In [13]:
joined = join_df(joined, df, ['Store', 'Date'])
In [14]:
joined_test = join_df(joined_test, df, ['Store', 'Date'])

The authors also removed all instances where the store had zero sales / was closed. We speculate that this may have cost them a higher standing in the competition. One reason this may be the case is that a little exploratory data analysis reveals that there are often periods when stores are closed, typically for refurbishment. Before and after these periods there are spikes in sales, as one might naturally expect. By omitting this data from their training, the authors gave up the ability to leverage information about these periods to predict this otherwise volatile behavior.

In [15]:
joined = joined[joined.Sales != 0]

We'll back this up as well.

In [16]:
%%time

joined.reset_index(inplace=True)
CPU times: user 8 ms, sys: 0 ns, total: 8 ms
Wall time: 2.23 ms
In [17]:
%%time

joined_test.reset_index(inplace=True)
CPU times: user 0 ns, sys: 4 ms, total: 4 ms
Wall time: 858 µs
In [18]:
# Sanity check
len(joined), len(joined_test)
Out[18]:
(844338, 41088)
In [19]:
joined.to_pickle(PATH / 'train_clean')
joined_test.to_pickle(PATH / 'test_clean')
In [ ]: