from fastai.gen_doc.nbdoc import *
from fastai.text import *
from fastai import *
The main thing here is RNNLearner. There are also some utility functions to help create and update text models.
show_doc(language_model_learner, doc_string=False)
language_model_learner
[source]
language_model_learner(data:DataBunch, bptt:int=70, emb_sz:int=400, nh:int=1150, nl:int=3, pad_token:int=1, drop_mult:float=1.0, tie_weights:bool=True, bias:bool=True, qrnn:bool=False, pretrained_model=None, pretrained_fnames:OptStrTuple=None, **kwargs) → LanguageLearner
Create an RNNLearner with a language model from data of a certain bptt. The model used is an AWD-LSTM that is built with embeddings of size emb_sz, a hidden size of nh, and nl layers (the vocab_size is inferred from the data). All the dropouts are set to values that we found work pretty well, and you can control their strength by adjusting drop_mult. If qrnn is True, the model uses QRNN cells instead of LSTMs. The flag tie_weights controls whether the encoder and the decoder share the same weights; the flag bias controls whether the last linear layer (the decoder) has a bias or not.
You can specify pretrained_model if you want to use the weights of a pretrained model. If you have your own set of weights and the corresponding dictionary, you can pass them in pretrained_fnames. This should be a list of the name of the weight file and the name of the corresponding dictionary. The dictionary is needed because the function will internally convert the embeddings of the pretrained model to match the dictionary of the data passed (a word may have a different id in the pretrained model). Those two files should be in the models directory of data.path.
path = untar_data(URLs.IMDB_SAMPLE)
data = TextLMDataBunch.from_csv(path, 'texts.csv')
learn = language_model_learner(data, pretrained_model=URLs.WT103, drop_mult=0.5)
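If you have your own pretrained weights instead, the call looks like this. A minimal sketch, assuming a weight file and a dictionary saved beforehand; the names 'lm_wgts' and 'lm_itos' are hypothetical:

# Assumes data.path/'models' already contains 'lm_wgts.pth' and 'lm_itos.pkl'
# (hypothetical names for the weight file and its dictionary).
learn = language_model_learner(data, pretrained_fnames=['lm_wgts', 'lm_itos'], drop_mult=0.5)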
show_doc(text_classifier_learner, doc_string=False)
Create an RNNLearner with a classifier model from data. The model used is the encoder of an AWD-LSTM that is built with embeddings of size emb_sz, a hidden size of nh, and nl layers (the vocab_size is inferred from the data). All the dropouts are set to values that we found work pretty well, and you can control their strength by adjusting drop_mult. If qrnn is True, the model uses QRNN cells instead of LSTMs.
The input texts are fed into that model in chunks of bptt, and only the last max_len activations are considered. This gives us the backbone of our model. The head then consists of blocks of (nn.BatchNorm1d, nn.Dropout, nn.Linear, nn.ReLU) layers. The blocks are defined by the lin_ftrs and drops arguments. Specifically, the first block has a number of inputs inferred from the backbone architecture, and the last one has a number of outputs equal to data.c (which contains the number of classes of the data); the intermediate blocks have a number of inputs/outputs determined by lin_ftrs (of course, a block has a number of inputs equal to the number of outputs of the previous block). The dropouts all have the same value ps if you pass a float, or the corresponding values if you pass a list. The default is an intermediate hidden size of 50 (which makes two blocks: model_activation -> 50 -> n_classes) with a dropout of 0.1.
path = untar_data(URLs.IMDB_SAMPLE)
data = TextClasDataBunch.from_csv(path, 'texts.csv')
learn = text_classifier_learner(data, drop_mult=0.5)
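To customize the head, you can pass lin_ftrs and ps yourself. A minimal sketch, assuming (as described above) that lin_ftrs lists the intermediate sizes and ps takes one dropout value per intermediate block; the sizes below are illustrative, not defaults:

# Two intermediate blocks (model_activation -> 256 -> 50 -> n_classes),
# with one dropout value per intermediate block; all sizes are illustrative.
learn = text_classifier_learner(data, lin_ftrs=[256, 50], ps=[0.3, 0.1], drop_mult=0.5)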
show_doc(RNNLearner, doc_string=False)
Handles the whole creation of a Learner from data and a model for text data, using a certain bptt. The split_func is used to properly split the model into different groups for gradual unfreezing and differential learning rates. Gradient clipping of clip is optionally applied. adjust, alpha and beta are all passed to create an instance of RNNTrainer. Can be used for a language model or an RNN classifier. It also handles the conversion of weights from a pretrained model, as well as saving or loading the encoder.
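A common pattern this enables: fine-tune a language model, save its encoder, then load that encoder into a classifier built on the same vocabulary. A minimal sketch; learn_lm and data_clas are hypothetical stand-ins for a language model learner and a TextClasDataBunch sharing its vocabulary:

# Save the fine-tuned language model's encoder under an illustrative name...
learn_lm.save_encoder('ft_enc')
# ...then reuse it as the backbone of a classifier.
learn_clas = text_classifier_learner(data_clas, drop_mult=0.5)
learn_clas.load_encoder('ft_enc')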
show_doc(RNNLearner.get_preds)
get_preds
[source]
get_preds(ds_type:DatasetType=<DatasetType.Valid: 2>, with_loss:bool=False, n_batch:Optional[int]=None, pbar:Union[MasterBar,ProgressBar,NoneType]=None, ordered:bool=False) → List[Tensor]
Return predictions and targets on the valid, train, or test set, depending on ds_type.
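For example, a minimal sketch of collecting validation predictions; passing ordered=True is meant to return them in the order of the dataset rather than the order of the internal sampler (an assumption about the flag's intent, based on its name):

# Predictions and targets on the validation set, kept in dataset order
preds, targets = learn.get_preds(ds_type=DatasetType.Valid, ordered=True)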
show_doc(RNNLearner.load_encoder)
show_doc(RNNLearner.save_encoder)
show_doc(RNNLearner.load_pretrained, doc_string=False)
load_pretrained
[source]
load_pretrained(wgts_fname:str, itos_fname:str)
Opens the weights in the wgts_fname of self.model_dir and the dictionary in itos_fname, then adapts the pretrained weights to the vocabulary of the data. The two files should be in the models directory of learner.path.
show_doc(lm_split)
show_doc(rnn_classifier_split)
show_doc(convert_weights, doc_string=False)
convert_weights
[source]
convert_weights(wgts:Weights, stoi_wgts:Dict[str,int], itos_new:StrList) → Weights
Convert the wgts from a dictionary stoi_wgts (mapping word to id) to a new dictionary itos_new (mapping id to word).
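For the embedding matrix, the core of such a conversion is to copy the rows of words present in both vocabularies and fall back to the mean embedding for new words. A minimal illustrative sketch of that re-mapping (convert_emb is a hypothetical helper, not the library function):

import torch

def convert_emb(old_emb, stoi_wgts, itos_new):
    # Mean embedding serves as the fallback for words absent from the old vocab.
    mean = old_emb.mean(0)
    new_emb = old_emb.new_zeros((len(itos_new), old_emb.size(1)))
    for i, w in enumerate(itos_new):
        idx = stoi_wgts.get(w, -1)  # old id of the word, or -1 if unseen
        new_emb[i] = old_emb[idx] if idx >= 0 else mean
    return new_emb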
show_doc(LanguageLearner, doc_string=False, title_level=3)
class LanguageLearner
[source]
LanguageLearner(data:DataBunch, model:Module, bptt:int=70, split_func:OptSplitFunc=None, clip:float=None, adjust:bool=False, alpha:float=2.0, beta:float=1.0, **kwargs) :: RNNLearner
Subclass of RNNLearner with a custom predict method.
show_doc(LanguageLearner.predict)
predict
[source]
predict(text:str, n_words:int=1, no_unk:bool=True, temperature:float=1.0, min_p:float=None)
Return the n_words that come after text.
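For example, to generate a short continuation of a prompt (the prompt and settings here are arbitrary):

# Generate ten words after the prompt; a temperature below 1.0 favors
# higher-probability tokens.
learn.predict("This movie is", n_words=10, temperature=0.8)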