from fastai import *
from fastai.gen_doc.nbdoc import *
from fastai.tabular import *
from fastai.text import *
from fastai.vision import *
np.random.seed(42)
The data block API lets you customize how to create a DataBunch by isolating the underlying parts of that process in separate blocks, mainly:

- Where are the inputs and how to create them?
- How to split the data into a training and validation set?
- How to label the inputs?
- What transforms to apply?
- How to add a test set?
- How to wrap in dataloaders and create the DataBunch?

For each of those questions, you can have multiple possible blocks: your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices, or depending on the folder they are in. Your labels might be in that csv file or dataframe, but they may come from folders or from a specific function of the input. You may or may not have data augmentation to deal with, or a test set. Finally, you have to set the arguments to put the data together in a DataBunch (batch size, collate function...).
The data block API is called as such because you can mix and match each one of those blocks with the others, giving you total flexibility to create your customized DataBunch for training. The factory methods of the various DataBunch subclasses are great for beginners, but you can't always make your data fit the tracks they require.
As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts.
Let's begin with our traditional MNIST example.
path = untar_data(URLs.MNIST_TINY)
tfms = get_transforms(do_flip=False)
path.ls()
[PosixPath('/home/ubuntu/.fastai/data/mnist_tiny/train'), PosixPath('/home/ubuntu/.fastai/data/mnist_tiny/test'), PosixPath('/home/ubuntu/.fastai/data/mnist_tiny/export.pkl'), PosixPath('/home/ubuntu/.fastai/data/mnist_tiny/labels.csv'), PosixPath('/home/ubuntu/.fastai/data/mnist_tiny/valid'), PosixPath('/home/ubuntu/.fastai/data/mnist_tiny/history.csv'), PosixPath('/home/ubuntu/.fastai/data/mnist_tiny/models')]
(path/'train').ls()
[PosixPath('/home/ubuntu/.fastai/data/mnist_tiny/train/7'), PosixPath('/home/ubuntu/.fastai/data/mnist_tiny/train/3')]
In vision.data, we create an easy DataBunch suitable for classification by simply typing:
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24)
This is aimed at data that is in folders following an ImageNet style, with a train and a valid directory, each containing one subdirectory per class where all the pictures are. There is also a test set containing unlabelled pictures. With the data block API, we can group everything together like this:
data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders
.split_by_folder() #How to split in train/valid? -> use the folders
.label_from_folder() #How to label? -> depending on the folder of the filenames
.add_test_folder() #Optionally add a test set (here default name is test)
.transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64
.databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch
data.show_batch(3, figsize=(6,6), hide_axis=False)
data.train_ds[0], data.test_ds.classes
((Image (3, 64, 64), Category 7), ['7', '3'])
Let's look at another example from vision.data with the planet dataset. This time, it's a multi-label classification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is:
planet = untar_data(URLs.PLANET_TINY)
planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)
data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep=' ', ds_tfms=planet_tfms)
With the data block API, we can rewrite this like so:
data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg')
#Where to find the data? -> in planet 'train' folder
.random_split_by_pct()
#How to split in train/valid? -> randomly with the default 20% in valid
.label_from_df(sep=' ')
#How to label? -> use the csv file
.transform(planet_tfms, size=128)
#Data augmentation? -> use tfms with a size of 128
.databunch())
#Finally -> use the defaults for conversion to databunch
data.show_batch(rows=2, figsize=(9,7))
The data block API also allows you to get your data together for problems for which there is no direct ImageDataBunch factory method. For a segmentation task, for instance, we can use it to quickly get a DataBunch. Let's take the example of the camvid dataset. The images are in an 'images' folder and their corresponding masks are in a 'labels' folder.
camvid = untar_data(URLs.CAMVID_TINY)
path_lbl = camvid/'labels'
path_img = camvid/'images'
We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...):
codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes
array(['Animal', 'Archway', 'Bicyclist', 'Bridge', 'Building', 'Car', 'CartLuggagePram', 'Child', 'Column_Pole', 'Fence', 'LaneMkgsDriv', 'LaneMkgsNonDriv', 'Misc_Text', 'MotorcycleScooter', 'OtherMoving', 'ParkingBlock', 'Pedestrian', 'Road', 'RoadShoulder', 'Sidewalk', 'SignSymbol', 'Sky', 'SUVPickupTruck', 'TrafficCone', 'TrafficLight', 'Train', 'Tree', 'Truck_Bus', 'Tunnel', 'VegetationMisc', 'Void', 'Wall'], dtype='<U17')
And we define the following function that infers the mask filename from the image filename.
get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}'
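For instance (with a hypothetical filename, purely for illustration), an image images/0016E5_00390.png would be paired with the mask labels/0016E5_00390_P.png:
get_y_fn(path_img/'0016E5_00390.png')  # hypothetical file -> PosixPath('.../labels/0016E5_00390_P.png')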
Then we can easily define a DataBunch using the data block API. Here we need to use tfm_y=True in the transform call, because we need the same transforms to be applied to the target mask as were applied to the image.
data = (SegmentationItemList.from_folder(path_img)
.random_split_by_pct()
.label_from_func(get_y_fn, classes=codes)
.transform(get_transforms(), tfm_y=True, size=128)
.databunch())
data.show_batch(rows=2, figsize=(7,5))
Another example, this time for object detection. We use our tiny sample of the COCO dataset here. There is a helper function in the library that reads the annotation file and returns the list of image names with the list of labelled bboxes associated with each one. We convert it to a dictionary that maps image names to their bboxes, and then write the function that will give us the target for each image filename.
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
img2bbox = dict(zip(images, lbl_bbox))
get_y_func = lambda o:img2bbox[o.name]
The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This is needed because our images may have different numbers of bounding boxes, so we pad each sample to the largest number of bounding boxes in the batch.
data = (ObjectItemList.from_folder(coco)
#Where are the images? -> in coco
.random_split_by_pct()
#How to split in train/valid? -> randomly with the default 20% in valid
.label_from_func(get_y_func)
#How to find the labels? -> use get_y_func
.transform(get_transforms(), tfm_y=True)
#Data augmentation? -> Standard transforms with tfm_y=True
.databunch(bs=16, collate_fn=bb_pad_collate))
#Finally we convert to a DataBunch and we use bb_pad_collate
data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6))
But vision isn't the only application where the data block API works; it can also be used for text or tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model.
imdb = untar_data(URLs.IMDB_SAMPLE)
data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text')
#Where are the inputs? Column 'text' of this csv
.random_split_by_pct()
#How to split it? Randomly with the default 20%
.label_for_lm()
#Label it for a language model
.databunch())
data_lm.show_batch()
idx | text |
---|---|
0 | xxbos xxmaj old xxmaj jane 's mannered tale seems very popular these days . i have lost count of the number of versions going around . xxmaj probably the reason is that her " xxunk " are our " xxunk " even at this late date . xxmaj this xxup tv mini - series gives it a mannered telling suitable to the novel . xxmaj xxunk , xxunk xxmaj emma |
1 | directed the chilling and disturbing xxmaj capote 's book about the reasons that xxunk these kids to the crime ( xxmaj are they xxmaj natural xxmaj born xxmaj killers ? ) . xxmaj the crime scenes are very brutal and haunting because of the lack of senses and reasons for what we witnessed . xxmaj stunning black & white cinematography from xxmaj xxunk xxmaj hall , excellent country - road |
2 | sisters get the idea of pushing xxmaj precious into the path of a drunken xxmaj hungarian count , xxunk the two gold - xxunk women into thinking he is one of the xxunk men in xxmaj europe . xxmaj but a case of mistaken identity makes the girls think the count is good - looking xxmaj ray xxmaj xxunk , who goes along with the scheme xxunk he has a |
3 | no xxunk the first xxmaj azumi film was a commercial product ; it was an adaptation of a popular manga and had cast of young , attractive actors and certainly was n't lacking in the budget department . xxmaj yet it more than entertained for what it was , and i ca n't xxunk i enjoyed it immensely . \n\n " xxmaj azumi 2 " lacks just about everything that |
4 | long flashback . xxmaj the xxunk of the brother and the sister , from a family of rich xxmaj xxunk oil owners , is brought to the xxunk by xxunk clothes , and xxunk cars that go at top speed in a xxunk landscape . xxmaj malone 's xxunk at the end of the movie is stunning : suit and xxunk , xxunk with a small xxunk : she 's |
For a classification problem, we just have to change the way labelling is done. Here we use the column 'label' of our csv.
data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text')
.split_from_df(col='is_valid')
.label_from_df(cols='label')
.databunch())
data_clas.show_batch()
text | target |
---|---|
xxbos xxmaj raising xxmaj victor xxmaj vargas : a xxmaj review \n\n xxmaj you know , xxmaj raising xxmaj victor xxmaj vargas is like sticking your hands into a big , xxunk bowl of xxunk . xxmaj it 's warm and gooey , but you 're not sure if it feels right . xxmaj try as i might , no matter how warm and gooey xxmaj raising xxmaj victor xxmaj | negative |
xxbos xxup the xxup shop xxup around xxup the xxup corner is one of the xxunk and most feel - good romantic comedies ever made . xxmaj there 's just no getting around that , and it 's hard to actually put one 's feeling for this film into words . xxmaj it 's not one of those films that tries too hard , nor does it come up with | positive |
xxbos xxmaj now that xxmaj che(2008 ) has finished its relatively short xxmaj australian cinema run ( extremely limited xxunk screen in xxmaj xxunk , after xxunk ) , i can xxunk join both xxunk of " xxmaj at xxmaj the xxmaj movies " in taking xxmaj steven xxmaj soderbergh to task . \n\n xxmaj it 's usually satisfying to watch a film director change his style / subject , | negative |
xxbos xxmaj this film sat on my xxmaj xxunk for weeks before i watched it . i xxunk a self - indulgent xxunk flick about relationships gone bad . i was wrong ; this was an xxunk xxunk into the screwed - up xxunk of xxmaj new xxmaj xxunk . \n\n xxmaj the format is the same as xxmaj max xxmaj xxunk ' " xxmaj la xxmaj xxunk , " | positive |
xxbos xxmaj many neglect that this is n't just a classic due to the fact that it 's the first xxup 3d game , or even the first xxunk - up . xxmaj it 's also one of the first xxunk games , one of the xxunk definitely the first ) truly claustrophobic games , and just a pretty well - xxunk gaming experience in general . xxmaj with graphics | positive |
Lastly, for tabular data, we just have to pass the names of our categorical and continuous variables as an extra argument. We also add some PreProcessors that are going to be applied to our data once the splitting and the labelling are done.
adult = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(adult/'adult.csv')
dep_var = '>=50k'
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country']
cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain']
procs = [FillMissing, Categorify, Normalize]
data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs)
.split_by_idx(valid_idx=range(800,1000))
.label_from_df(cols=dep_var)
.databunch())
data.show_batch()
workclass | education | marital-status | occupation | relationship | race | sex | native-country | education-num_na | education-num | hours-per-week | age | capital-loss | fnlwgt | capital-gain | target |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Local-gov | HS-grad | Divorced | Craft-repair | Not-in-family | White | Male | United-States | False | -0.4224 | -0.0356 | 0.0303 | 4.4430 | -0.9781 | -0.1459 | 0 |
Self-emp-not-inc | 10th | Married-civ-spouse | Craft-repair | Husband | White | Male | United-States | False | -1.5958 | -2.6276 | 2.3758 | -0.2164 | 0.4623 | -0.1459 | 0 |
Private | HS-grad | Divorced | Transport-moving | Not-in-family | White | Male | United-States | False | -0.4224 | -0.0356 | 0.6899 | -0.2164 | -0.4378 | -0.1459 | 0 |
Private | Bachelors | Married-civ-spouse | Prof-specialty | Husband | White | Male | United-States | False | 1.1422 | 0.2884 | -1.0692 | -0.2164 | 1.6128 | -0.1459 | 1 |
Self-emp-not-inc | Some-college | Never-married | Other-service | Own-child | White | Male | United-States | False | -0.0312 | -0.8456 | -1.2891 | -0.2164 | 1.2244 | -0.1459 | 0 |
The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name ItemList).
show_doc(ItemList, title_level=3, doc_string=False)
class ItemList [source]
ItemList(items:Iterator, path:PathOrStr='.', label_cls:Callable=None, xtra:Any=None, processor:PreProcessor=None, x:ItemList=None, **kwargs)
This class regroups the inputs for our model in items and saves a path attribute which is where it will look for any files (image files, csv file with labels...). create_func is applied to items to get the final output. label_cls will be called to create the labels from the result of the label function, xtra contains additional information (usually an underlying dataframe), and processor is to be applied to the inputs after the splitting and labelling.
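As a minimal sketch (using plain integers instead of files, purely for illustration), an ItemList behaves like an indexable wrapper around items:
il = ItemList(items=range(10))  # items can be any iterable; path defaults to '.'
il[0], len(il)                  # indexing goes through `get`, so this returns (0, 10)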
It has multiple subclasses depending on the type of data you're handling. Here is a quick list:

- CategoryList for labels in classification
- MultiCategoryList for labels in a multi classification problem
- FloatList for float labels in a regression problem
- ImageItemList for data that are images
- SegmentationItemList like ImageItemList but will default labels to SegmentationLabelList
- SegmentationLabelList for segmentation masks
- ObjectItemList like ImageItemList but will default labels to ObjectLabelList
- ObjectLabelList for object detection
- PointsItemList for points (of the type ImagePoints)
- TextList for text data
- TextFilesList for text data stored in files
- TabularList for tabular data
- CollabList for collaborative filtering

Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods.
show_doc(ItemList.from_folder)
from_folder [source]
from_folder(path:PathOrStr, extensions:StrList=None, recurse=True, **kwargs) → ItemList
Get the list of files in path that have a suffix in extensions. recurse determines if we search subfolders.
show_doc(ItemList.from_df)
show_doc(ItemList.from_csv)
The factory method may have grabbed too many items. For instance, if you were searching subfolders with the from_folder method, you may have gotten files you don't want. To remove those, you can use one of the following methods.
show_doc(ItemList.filter_by_func)
filter_by_func [source]
filter_by_func(func:Callable) → ItemList
Only keeps elements for which func returns True.
show_doc(ItemList.filter_by_folder)
filter_by_folder [source]
filter_by_folder(include=None, exclude=None)
Only keep filenames in the include folders, or reject the ones in exclude.
show_doc(ItemList.filter_by_rand)
filter_by_rand [source]
filter_by_rand(p:float, seed:int=None)
Keep a random sample of items with probability p.
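For instance, a hypothetical cleanup of our MNIST list could chain these filters to keep only the png files and ignore the models folder:
il = (ImageItemList.from_folder(path)
      .filter_by_func(lambda fn: fn.suffix == '.png')  # keep items whose filename ends in .png
      .filter_by_folder(exclude=['models']))           # drop anything under the 'models' folder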
First check if you can't easily customize one of the existing subclasses by:

- customizing the get method (or the open method if you're dealing with images)
- customizing the processor (see step 4)
- changing the label_cls for the label creation
- adding a PreProcessor with the _processor class variable

If this isn't the case and you really need to write your own class, there is a full tutorial that explains how to proceed.
show_doc(ItemList.analyze_pred)
analyze_pred [source]
analyze_pred(pred:Tensor)
Called on pred before reconstruct for additional preprocessing.
show_doc(ItemList.reconstruct)
reconstruct [source]
reconstruct(t:Tensor, x:Tensor=None)
Reconstruct one of the underlying items from its data t.
This step is normally straightforward: you just have to pick one of the following functions depending on what you need.
show_doc(ItemList.random_split_by_pct)
random_split_by_pct [source]
random_split_by_pct(valid_pct:float=0.2, seed:int=None) → ItemLists
Split the items randomly by putting valid_pct in the validation set. Set the seed in numpy if passed.
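For example, to reserve 10% of our MNIST items for validation with a reproducible split (a minimal sketch):
ils = ImageItemList.from_folder(path).random_split_by_pct(valid_pct=0.1, seed=42)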
show_doc(ItemList.split_by_files)
split_by_files [source]
split_by_files(valid_names:ItemList) → ItemLists
Split the data by using the names in valid_names for validation.
show_doc(ItemList.split_by_fname_file)
split_by_fname_file [source]
split_by_fname_file(fname:PathOrStr, path:PathOrStr=None) → ItemLists
Split the data by using the file names in fname for the validation set. path will override self.path.
show_doc(ItemList.split_by_folder)
split_by_folder [source]
split_by_folder(train:str='train', valid:str='valid') → ItemLists
Split the data depending on the folder (train or valid) in which the filenames are.
jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.")
show_doc(ItemList.split_by_idx)
split_by_idx [source]
split_by_idx(valid_idx:Collection[int]) → ItemLists
Split the data according to the indexes in valid_idx.
show_doc(ItemList.split_by_idxs)
split_by_idxs [source]
split_by_idxs(train_idx, valid_idx)
Split the data between train_idx and valid_idx.
show_doc(ItemList.split_by_list)
show_doc(ItemList.split_by_valid_func)
split_by_valid_func [source]
split_by_valid_func(func:Callable) → ItemLists
Split the data by the result of func (which returns True for the validation set).
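As a sketch, assuming (hypothetically) that the validation files are the ones whose names start with 'valid_':
ils = (ImageItemList.from_folder(path)
       .split_by_valid_func(lambda fn: fn.name.startswith('valid_')))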
show_doc(ItemList.split_from_df)
split_from_df [source]
split_from_df(col:Union[int, Collection[int], str, StrList]=2)
Split the data from the col in the dataframe in self.xtra.
jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.")
To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a label_cls that will be used to create those labels (the default is the one from your input ItemList, and if there is none, it will go to CategoryList, MultiCategoryList or FloatList depending on the type of the labels).
The first example in these docs created labels as follows:
path = untar_data(URLs.MNIST_TINY)
ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train
If you want to save the data necessary to recreate your LabelList (not including saving the actual image/text/etc. files), you can use to_df or to_csv:
ll.to_csv('tmp.csv')
Or just grab a pd.DataFrame directly:
ll.to_df().head()
 | x | y |
---|---|---|
0 | train/7/7994.png | 7 |
1 | train/7/8437.png | 7 |
2 | train/7/9767.png | 7 |
3 | train/7/7236.png | 7 |
4 | train/7/9445.png | 7 |
show_doc(ItemList.label_from_list)
label_from_list [source]
label_from_list(labels:Iterator, **kwargs) → LabelList
Label self.items with labels using label_cls.
show_doc(ItemList.label_from_df)
label_from_df [source]
label_from_df(cols:Union[int, Collection[int], str, StrList]=1, **kwargs)
Label self.items from the values in cols in self.xtra.
jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.")
show_doc(ItemList.label_const)
show_doc(ItemList.label_from_folder)
label_from_folder [source]
label_from_folder(**kwargs) → LabelList
Give a label to each filename depending on its folder.
jekyll_note("This method looks at the last subfolder in the path to determine the classes.")
show_doc(ItemList.label_from_func)
label_from_func [source]
label_from_func(func:Callable, **kwargs) → LabelList
Apply func to every input to get its label.
show_doc(ItemList.label_from_re)
label_from_re [source]
label_from_re(pat:str, full_path:bool=False, **kwargs) → LabelList
Apply the re in pat to determine the label of every filename. If full_path, search in the full name.
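A common use is extracting the class from the filename itself. As a sketch, assuming a hypothetical pets_path folder with files named like 'german_shorthaired_105.jpg':
ll = (ImageItemList.from_folder(pets_path)   # pets_path is hypothetical
      .random_split_by_pct()
      .label_from_re(r'/([^/]+)_\d+.jpg$'))  # the label is the part before the trailing digits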
show_doc(CategoryList, title_level=3)
class CategoryList [source]
CategoryList(items:Iterator, classes:Collection=None, **kwargs) :: CategoryListBase
ItemList suitable for storing labels in items belonging to classes. If None is passed, classes will be determined from the unique labels. processor will default to CategoryProcessor.
show_doc(MultiCategoryList, title_level=3)
class MultiCategoryList [source]
MultiCategoryList(items:Iterator, classes:Collection=None, sep:str=None, **kwargs) :: CategoryListBase
ItemList suitable for storing a list of labels in items belonging to classes. If None is passed, classes will be determined from the unique labels. sep is used to split the content of items into a list of labels.
show_doc(FloatList, title_level=3)
ItemList suitable for storing the floats in items for regression. Will apply a log to the values if the log flag is True.
This isn't seen here in the API, but if you passed a processor (or a list of them) in your initial ItemList during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the _processor variable of your class of items (this can be a list of PreProcessor classes).
A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.
Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the PreProcessor and applied on the validation set.
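As a minimal sketch (not a class from the library), such a median-filling processor could compute its state on the first dataset it processes, the training set, and reuse it afterwards:
class MedianFillProcessor(PreProcessor):
    "Hypothetical processor: replace None items by the median computed on the training set."
    def process_one(self, item):
        return self.median if item is None else item
    def process(self, ds):
        if not hasattr(self, 'median'):  # first call is on the training set
            self.median = float(np.median([o for o in ds.items if o is not None]))
        super().process(ds)              # the parent applies process_one to every item of ds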
This is the generic class for all processors.
show_doc(PreProcessor, title_level=3)
class
PreProcessor
[source]
PreProcessor
(ds
:Collection
=None
)
show_doc(PreProcessor.process_one)
process_one [source]
process_one(item:Any)
Process one item. This method needs to be written in any subclass.
show_doc(PreProcessor.process)
process [source]
process(ds:Collection)
Process a dataset. This defaults to applying process_one on every item of ds.
show_doc(CategoryProcessor, title_level=3)
class CategoryProcessor [source]
CategoryProcessor(ds:ItemList) :: PreProcessor
PreProcessor that will convert labels to codes using classes (if passed) in a single classification problem.
show_doc(MultiCategoryProcessor, title_level=3)
class MultiCategoryProcessor [source]
MultiCategoryProcessor(ds:ItemList) :: CategoryProcessor
PreProcessor that will convert labels to codes using classes (if passed) in a multi-classification problem.
Transforms differ from processors in the sense that they are applied on the fly when we grab one item; they may also change each time we ask for the same item, in the case of random transforms.
show_doc(LabelLists.transform)
transform [source]
transform(tfms:Optional[Tuple[Union[Callable,Collection[Callable]],Union[Callable,Collection[Callable]]]]=(None, None), **kwargs)
Set tfms to be applied to the train and validation set.
This is primarily for the vision application. The kwargs are the ones expected by the type of transforms you pass. tfm_y is among them, and if set to True, the transforms will be applied to the input and the target.
To add a test set, you can use one of the two following methods.
show_doc(LabelLists.add_test)
add_test [source]
add_test(items:Iterator, label:Any=None)
Add a test set containing items from items and an arbitrary label.
jekyll_note("Here `items` can be an `ItemList` or a collection.")
show_doc(LabelLists.add_test_folder)
add_test_folder [source]
add_test_folder(test_folder:str='test', label:Any=None)
Add a test set containing items from folder test_folder and an arbitrary label.
This last step is usually pretty straightforward. You just have to include all the arguments we pass to DataBunch.create (bs, num_workers, collate_fn). The class called to create a DataBunch is set in the _bunch attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you.
show_doc(LabelLists.databunch)
databunch [source]
databunch(path:PathOrStr=None, **kwargs) → ImageDataBunch
Create a DataBunch from self; path will override self.path, and kwargs are passed to DataBunch.create.
show_doc(LabelList, title_level=3, doc_string=False)
The basic dataset in fastai. Inputs are in x, targets in y. Optionally apply tfms to x, and also to y if tfm_y is True.
show_doc(LabelList.from_lists)
show_doc(ItemLists, doc_string=False, title_level=3)
Data in path split between several streams of inputs: train, valid and maybe test.
show_doc(ItemLists.label_from_lists)
label_from_lists [source]
label_from_lists(train_labels:Iterator, valid_labels:Iterator, label_cls:Callable=None, **kwargs) → LabelList
Use the labels in train_labels and valid_labels to label the data. label_cls will overwrite the default.
show_doc(LabelLists, title_level=3, doc_string=False)
show_doc(get_files)
get_files [source]
get_files(path:PathOrStr, extensions:StrList=None, recurse:bool=False) → FilePathList
Return the list of files in path that have a suffix in extensions. recurse determines if we search subfolders.
show_doc(ItemList.get)
get [source]
get(i) → Any
show_doc(CategoryList.new)
new [source]
new(items, classes=None, **kwargs)
show_doc(LabelLists.get_processors)
get_processors [source]
get_processors()
show_doc(LabelList.from_lists)
show_doc(LabelList.set_item)
set_item [source]
set_item(item)
show_doc(LabelList.new)
new [source]
new(x, y, **kwargs) → LabelList
show_doc(CategoryList.get)
get [source]
get(i)
show_doc(LabelList.predict)
predict [source]
predict(res)
show_doc(ItemList.new)
new [source]
new(items:Iterator, processor:PreProcessor=None, **kwargs) → ItemList
show_doc(LabelList.clear_item)
clear_item [source]
clear_item()
show_doc(ItemList.process_one)
process_one [source]
process_one(item, processor=None)
show_doc(ItemList.process)
process [source]
process(processor=None)
show_doc(LabelLists.process)
process [source]
process()
show_doc(ItemLists.transform)
transform [source]
transform(tfms:Optional[Tuple[Union[Callable,Collection[Callable]],Union[Callable,Collection[Callable]]]]=(None, None), **kwargs)
Set tfms to be applied to the train and validation set.
show_doc(LabelList.process)
process [source]
process(xp=None, yp=None, filter_missing_y:bool=False)
Launch the preprocessing on xp and yp.
show_doc(LabelList.transform)
transform [source]
transform(tfms:Union[Callable,Collection[Callable]], tfm_y:bool=None, **kwargs)
Set the tfms and tfm_y value to be applied to the inputs and targets.
show_doc(MultiCategoryProcessor.process_one)
process_one [source]
process_one(item)
show_doc(FloatList.get)
get [source]
get(i)
show_doc(CategoryProcessor.process_one)
process_one [source]
process_one(item)
show_doc(CategoryProcessor.create_classes)
create_classes [source]
create_classes(classes)
show_doc(CategoryProcessor.process)
process [source]
process(ds)
show_doc(MultiCategoryList.get)
get [source]
get(i)
show_doc(FloatList.new)
new [source]
new(items, **kwargs)
show_doc(MultiCategoryProcessor.generate_classes)
generate_classes [source]
generate_classes(items)
show_doc(CategoryProcessor.generate_classes)
generate_classes [source]
generate_classes(items)
show_doc(ItemList.get_label_cls)
get_label_cls [source]
get_label_cls(labels, label_cls:Callable=None, sep:str=None, **kwargs)
show_doc(ItemLists.transform_y)
transform_y [source]
transform_y(tfms:Optional[Tuple[Union[Callable,Collection[Callable]],Union[Callable,Collection[Callable]]]]=(None, None), **kwargs)
show_doc(LabelList.to_df)
show_doc(FloatList.reconstruct)
show_doc(LabelList.to_csv)
show_doc(LabelList.export)
export [source]
export(fn:PathOrStr)
Export the minimal state and save it in fn to load an empty version for inference.
show_doc(MultiCategoryList.analyze_pred)
analyze_pred [source]
analyze_pred(pred, thresh:float=0.5)
Called on pred before reconstruct for additional preprocessing.
show_doc(MultiCategoryList.reconstruct)
show_doc(LabelList.load_empty)
show_doc(CategoryList.reconstruct)
show_doc(LabelList.transform_y)
transform_y [source]
transform_y(tfms:Union[Callable,Collection[Callable]]=None, **kwargs)
show_doc(ItemList.to_text)
show_doc(CategoryList.analyze_pred)
analyze_pred [source]
analyze_pred(pred, thresh:float=0.5)
Called on pred before reconstruct for additional preprocessing.