This module defines the main class to handle tabular data in the fastai library: TabularDataBunch. As always, there is also a helper function to quickly get your data. To allow you to easily create a Learner for your data, it provides tabular_learner.
from fastai.gen_doc.nbdoc import *
from fastai.tabular import *
from fastai import *
show_doc(TabularDataBunch, doc_string=False)
class TabularDataBunch [source]

TabularDataBunch(train_dl:DataLoader, valid_dl:DataLoader, test_dl:Optional[DataLoader]=None, device:device=None, tfms:Optional[Collection[Callable]]=None, path:PathOrStr='.', collate_fn:Callable='data_collate') :: DataBunch
The best way to quickly get your data in a DataBunch suitable for tabular data is to organize it in two (or three) dataframes: one for training, one for validation, and if you have it, one for testing. Here we are interested in a subsample of the adult dataset.
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
valid_idx = range(len(df)-2000, len(df))
df.head()
| | age | workclass | fnlwgt | education | education-num | marital-status | occupation | relationship | race | sex | capital-gain | capital-loss | hours-per-week | native-country | >=50k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 49 | Private | 101320 | Assoc-acdm | 12.0 | Married-civ-spouse | NaN | Wife | White | Female | 0 | 1902 | 40 | United-States | 1 |
| 1 | 44 | Private | 236746 | Masters | 14.0 | Divorced | Exec-managerial | Not-in-family | White | Male | 10520 | 0 | 45 | United-States | 1 |
| 2 | 38 | Private | 96185 | HS-grad | NaN | Divorced | NaN | Unmarried | Black | Female | 0 | 0 | 32 | United-States | 0 |
| 3 | 38 | Self-emp-inc | 112847 | Prof-school | 15.0 | Married-civ-spouse | Prof-specialty | Husband | Asian-Pac-Islander | Male | 0 | 0 | 40 | United-States | 1 |
| 4 | 42 | Self-emp-not-inc | 82297 | 7th-8th | NaN | Married-civ-spouse | Other-service | Wife | Black | Female | 0 | 0 | 50 | United-States | 0 |
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country']
dep_var = '>=50k'
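When cont_names isn't passed, the continuous columns are taken to be everything that is neither the dependent variable nor categorical. A minimal pure-Python sketch of that rule (the helper name infer_cont_names is ours, not a fastai function; column names come from the dataframe above):

```python
def infer_cont_names(all_columns, dep_var, cat_names):
    # Continuous columns: everything that is neither the dependent
    # variable nor listed as categorical, in original column order.
    return [c for c in all_columns if c != dep_var and c not in cat_names]

columns = ['age', 'workclass', 'fnlwgt', 'education', 'education-num',
           'marital-status', 'occupation', 'relationship', 'race', 'sex',
           'capital-gain', 'capital-loss', 'hours-per-week',
           'native-country', '>=50k']
cat_names = ['workclass', 'education', 'marital-status', 'occupation',
             'relationship', 'race', 'sex', 'native-country']

print(infer_cont_names(columns, '>=50k', cat_names))
# ['age', 'fnlwgt', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
```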
show_doc(TabularDataBunch.from_df, doc_string=False)
from_df [source]

from_df(path, df:DataFrame, dep_var:str, valid_idx:Collection[int], procs:Optional[Collection[TabularProc]]=None, cat_names:OptStrList=None, cont_names:OptStrList=None, classes:Collection=None, **kwargs) → DataBunch
Creates a DataBunch in path from df: the rows whose indices are in valid_idx form the validation set, the rest the training set. The dependent variable is dep_var, while the categorical and continuous variables are in the cat_names columns and cont_names columns respectively. If cont_names is None, we assume all variables that aren't dependent or categorical are continuous. The TabularProcs in procs are applied to the dataframes as preprocessing, then the categories are replaced by their codes+1 (leaving 0 for nan) and the continuous variables are normalized. You can pass the stats to use for that step. If log_output is True, the dependent variable is replaced by its log.

Note that the procs should be passed as Callables (uninstantiated): the actual initialization with cat_names and cont_names is done inside.
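The codes+1 convention can be illustrated without fastai at all. Here is a minimal sketch (encode_categories is a hypothetical helper, not a fastai function) where missing values map to 0 and each category gets its code shifted by one:

```python
def encode_categories(values):
    # Build a category -> code+1 mapping, reserving 0 for missing values.
    categories = sorted({v for v in values if v is not None})
    code_of = {c: i + 1 for i, c in enumerate(categories)}
    return [code_of.get(v, 0) for v in values]

print(encode_categories(['Private', None, 'Self-emp-inc', 'Private']))
# [1, 0, 2, 1]
```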
procs = [FillMissing, Categorify, Normalize]
data = TabularDataBunch.from_df(path, df, dep_var, valid_idx=valid_idx, procs=procs, cat_names=cat_names)
You can then easily create a Learner for this data with tabular_learner.
show_doc(tabular_learner)
tabular_learner [source]

tabular_learner(data:DataBunch, layers:Collection[int], emb_szs:Dict[str,int]=None, metrics=None, ps:Collection[float]=None, emb_drop:float=0.0, y_range:OptRange=None, use_bn:bool=True, **kwargs)
Get a Learner using data, with metrics, including a TabularModel created using the remaining params. emb_szs is a dict mapping categorical column names to embedding sizes; you only need to pass sizes for the columns where you want to override the default behaviour of the model.
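To make the override behaviour concrete, here is a pure-Python sketch of merging per-column defaults with a user-supplied emb_szs dict. The default rule min(50, (cardinality + 1) // 2) is an illustrative heuristic only, not necessarily fastai's exact formula, and merge_emb_szs is our own helper name:

```python
def merge_emb_szs(cardinalities, emb_szs=None):
    # cardinalities: column name -> number of categories (including the
    # extra slot for missing values). emb_szs overrides per column.
    emb_szs = emb_szs or {}
    default = lambda n: min(50, (n + 1) // 2)  # illustrative heuristic only
    return {col: emb_szs.get(col, default(n))
            for col, n in cardinalities.items()}

szs = merge_emb_szs({'workclass': 10, 'native-country': 43},
                    emb_szs={'native-country': 10})
print(szs)
# {'workclass': 5, 'native-country': 10}
```

Only 'native-country' is overridden; 'workclass' keeps its computed default.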
show_doc(TabularList)
Basic class to create a list of inputs in items for tabular data. cat_names and cont_names are the names of the categorical and the continuous variables respectively. processor will be applied to the inputs, or one will be created from the transforms in procs.
show_doc(TabularList.from_df)
from_df [source]

from_df(df:DataFrame, cat_names:OptStrList=None, cont_names:OptStrList=None, procs=None, **kwargs) → ItemList
Get the list of inputs from df.
show_doc(TabularList.get_emb_szs)
get_emb_szs [source]

get_emb_szs(sz_dict)

Return the default embedding sizes suitable for this data, or take the ones given in sz_dict.
show_doc(TabularList.show_xys)
show_doc(TabularList.show_xyzs)
show_doc(TabularLine, doc_string=False)
An object that will contain the encoded cats, the continuous variables conts, the classes and the names of the columns. This is the basic input for a dataset dealing with tabular data.
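A rough sketch of the kind of container described above. The field names mirror the description; this is an illustrative stand-in, not fastai's actual TabularLine implementation:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class TabularLineSketch:
    cats: List[int]               # encoded categorical values (codes+1, 0 for nan)
    conts: List[float]            # normalized continuous values
    classes: Dict[str, List[str]] # column name -> possible categories
    names: List[str]              # column names, categorical then continuous

line = TabularLineSketch(cats=[1, 2], conts=[0.3],
                         classes={'sex': ['Female', 'Male']},
                         names=['workclass', 'sex', 'age'])
print(line.names)
# ['workclass', 'sex', 'age']
```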
show_doc(TabularProcessor)
class TabularProcessor [source]

TabularProcessor(ds:ItemBase=None, procs=None) :: PreProcessor

Create a PreProcessor from procs.
show_doc(TabularProcessor.process_one)
process_one [source]

process_one(item)
show_doc(TabularList.new)
new [source]

new(items:Iterator, **kwargs) → TabularList
show_doc(TabularList.get)
get [source]

get(o)
show_doc(TabularProcessor.process)
process [source]

process(ds)
show_doc(TabularList.reconstruct)