from fastai.gen_doc.nbdoc import *
from fastai.text import *
from fastai.text.models import *
text.models
This module fully implements the encoder for an AWD-LSTM, the transformer model and the transformer XL model. They can then be plugged in with a decoder to make a language model, or with some classifying layers to make a text classifier.
show_doc(AWD_LSTM, title_level=3)
class AWD_LSTM
AWD_LSTM(vocab_sz:int, emb_sz:int, n_hid:int, n_layers:int, pad_token:int=1, hidden_p:float=0.2, input_p:float=0.6, embed_p:float=0.1, weight_p:float=0.5, qrnn:bool=False, bidir:bool=False) :: Module
AWD-LSTM/QRNN inspired by https://arxiv.org/abs/1708.02182.
The main idea of the article is to use an RNN with dropout everywhere, but applied in an intelligent way. There is a difference with the usual dropout, which is why you'll see a RNNDropout module: we zero elements, as is usual in dropout, but we always zero the same ones along the sequence dimension (which is the first dimension in pytorch). This ensures consistency when updating the hidden state through whole sentences/articles.
Given this, there are a total of four different dropouts in the encoder of the AWD-LSTM:

- the embedding dropout, applied when looking up the tokens in the embedding matrix, controlled by the embed_p parameter.
- the input dropout, applied to the result of the embedding, controlled by the input_p parameter.
- the weight dropout, applied to the hidden-to-hidden weights of the inner LSTMs, controlled by the weight_p parameter.
- the hidden dropout, applied to the output of each intermediate layer before it is fed to the next one, controlled by the hidden_p parameter.

The other attributes are vocab_sz for the number of tokens in your vocabulary, emb_sz for the embedding size, n_hid for the hidden size of your inner LSTMs (or QRNNs), n_layers for the number of layers and pad_token for the index of an eventual padding token (1 by default in fastai).

The flag qrnn=True replaces the inner LSTMs by QRNNs.
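As a quick sketch of how the encoder can be used on its own (the sizes here are made up for illustration, and the exact return format may vary between fastai versions): it takes a batch of token ids of size batch_size by sequence length and returns, for each layer, the raw outputs and the outputs after hidden dropout.

enc = AWD_LSTM(vocab_sz=100, emb_sz=10, n_hid=16, n_layers=2, pad_token=1)
enc.reset()  # initialize the hidden states
tst_input = torch.randint(0, 100, (8, 20))  # a batch of 8 sequences of 20 token ids
raw_outputs, outputs = enc(tst_input)  # two lists of tensors, one element per layer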
show_doc(AWD_LSTM.reset)
reset()
Reset the hidden states.
show_doc(Transformer, title_level=3)
class Transformer
Transformer(vocab_sz:int, ctx_len:int, n_layers:int, n_heads:int, d_model:int, d_head:int, d_inner:int, resid_p:float=0.0, attn_p:float=0.0, ff_p:float=0.0, embed_p:float=0.0, bias:bool=True, scale:bool=True, act:Activation=<Activation.ReLU: 1>, double_drop:bool=True, attn_cls:Callable='MultiHeadAttention', learned_pos_enc:bool=True, mask:bool=True) :: Module
Transformer model: https://arxiv.org/abs/1706.03762.
The main idea of this article is to use a regular neural net for NLP instead of an RNN, but with lots of attention layers. Intuitively, those attention layers tell the model to pay more attention to this or that word when trying to predict its output.
It starts from embeddings going from vocab_sz (number of tokens) to d_model (which is basically the hidden size throughout the model), and it will look at inputs of size batch_size by ctx_len (for context length). We add a positional encoding to the embeddings (since a regular neural net has no idea of the order of words), either learned or coming from PositionalEncoding depending on learned_pos_enc. We then have a dropout of embed_p followed by n_layers blocks of MultiHeadAttention followed by feed_forward.
In the attention we use n_heads heads, each with a hidden state of size d_head (which defaults to d_model//n_heads). If mask=True, a mask makes sure no attention is paid to future tokens (which would be cheating when training a language model). If scale=True, the attention scores are scaled by a factor of 1 / math.sqrt(d_head). A dropout of attn_p is applied to the attention scores, then a dropout of resid_p is applied to the final result before it is summed with the original input (residual connection before the layer norm).
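For instance, a causal mask for a sequence of length 5 can be sketched as an upper-triangular matrix: a 1 at row i and column j marks a future position j that token i is not allowed to attend to.

torch.triu(torch.ones(5, 5), diagonal=1)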
In the feed-forward block, we have two linear layers from d_model to d_inner and then back. Those have a bias if that flag is True, and a dropout of ff_p is applied, after each one if double_drop=True, or just at the end otherwise. act is used in the middle as a non-linearity.
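A minimal usage sketch with made-up sizes (the model is an encoder, so it returns its final hidden states in the same two-list format as the AWD-LSTM, ready for the decoders below):

model = Transformer(vocab_sz=100, ctx_len=20, n_layers=2, n_heads=4, d_model=32, d_head=8, d_inner=64)
tst_input = torch.randint(0, 100, (8, 20))  # batch_size x ctx_len token ids
raw_outputs, outputs = model(tst_input)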
show_doc(TransformerXL, title_level=3)
class TransformerXL
TransformerXL(vocab_sz:int, ctx_len:int, n_layers:int, n_heads:int, d_model:int, d_head:int, d_inner:int, resid_p:float=0.0, attn_p:float=0.0, ff_p:float=0.0, embed_p:float=0.0, bias:bool=False, scale:bool=True, act:Activation=<Activation.ReLU: 1>, double_drop:bool=True, attn_cls:Callable='MultiHeadRelativeAttention', learned_pos_enc:bool=False, mask:bool=True, mem_len:int=0) :: Module
TransformerXL model: https://arxiv.org/abs/1901.02860.
TransformerXL is a transformer architecture with a sort of hidden state formed by the results of the intermediate layers on previous tokens. Its size is determined by mem_len. By using this context, those models are capable of learning longer dependencies and can also be used for faster text generation at inference: a regular transformer model would have to reexamine the whole sequence of indexes generated so far, whereas we can feed the new tokens one by one to a transformer XL (like we do with a regular RNN).
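A rough sketch of how the memory works (made-up sizes): after a reset, each forward pass stores the hidden states of its chunk, and the next chunk can attend to up to mem_len of those stored positions.

model = TransformerXL(vocab_sz=100, ctx_len=20, n_layers=2, n_heads=4, d_model=32, d_head=8, d_inner=64, mem_len=40)
model.reset()  # clear the internal memory before starting a new sequence
out1 = model(torch.randint(0, 100, (8, 20)))  # the hidden states of this chunk go into the memory
out2 = model(torch.randint(0, 100, (8, 20)))  # this chunk also attends to the memory of the previous one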
show_doc(TransformerXL.reset)
reset()
Reset the internal memory.
show_doc(LinearDecoder, title_level=3)
class LinearDecoder
LinearDecoder(n_out:int, n_hid:int, output_p:float, tie_encoder:Module=None, bias:bool=True) :: Module
To go on top of an RNNCore module and create a language model.

Create the decoder to go on top of an RNNCore encoder and create a language model. n_hid is the dimension of the last hidden state of the encoder, n_out the size of the output. Dropout of output_p is applied. If tie_encoder is passed, its weights are used for the linear layer, which will have a bias or not depending on the bias flag.
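For instance, a language model can be sketched by stacking an AWD_LSTM encoder and a LinearDecoder inside a SequentialRNN (made-up sizes; n_hid of the decoder matches the emb_sz of the encoder since that is the size of its last hidden state, and passing enc.encoder as tie_encoder ties the output weights to the embedding):

enc = AWD_LSTM(vocab_sz=100, emb_sz=10, n_hid=16, n_layers=2, pad_token=1)
dec = LinearDecoder(n_out=100, n_hid=10, output_p=0.1, tie_encoder=enc.encoder)
lm = SequentialRNN(enc, dec)
lm.reset()
decoded, raw_outputs, outputs = lm(torch.randint(0, 100, (8, 20)))  # decoded: batch_size x seq_len x vocab_sz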
show_doc(PoolingLinearClassifier, title_level=3)
class PoolingLinearClassifier
PoolingLinearClassifier(layers:Collection[int], drops:Collection[float]) :: Module
Create a linear classifier with pooling.
The last output, MaxPooling of all the outputs and AvgPooling of all the outputs are concatenated, then blocks of bn_drop_lin are stacked, according to the values in layers and drops.
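For example, if the encoder's last layer outputs features of size 10, the concatenated representation has size 30, so a hypothetical head with one intermediate layer of 50 units and 2 output classes could be sketched as:

clas_head = PoolingLinearClassifier(layers=[10*3, 50, 2], drops=[0.2, 0.1])  # one dropout value per bn_drop_lin block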
On top of the pytorch or the fastai layers, the language models use some custom layers specific to NLP.
show_doc(EmbeddingDropout, title_level=3)
class EmbeddingDropout
EmbeddingDropout(emb:Module, embed_p:float) :: Module
Apply dropout with probability embed_p to an embedding layer emb.

Each row of the embedding matrix has a probability embed_p of being replaced by zeros while the others are rescaled accordingly.
enc = nn.Embedding(100, 7, padding_idx=1)
enc_dp = EmbeddingDropout(enc, 0.5)
tst_input = torch.randint(0,100,(8,))
enc_dp(tst_input)
tensor([[-0.7379, -1.3970, -0.4075, -0.1676, 2.0396, 3.2226, 0.7128], [-0.0000, 0.0000, 0.0000, -0.0000, -0.0000, 0.0000, 0.0000], [-3.2579, 2.2972, -1.8704, -0.4090, 2.6477, -1.5015, 0.7158], [ 2.1455, 1.0571, -0.6086, 3.5700, 2.6271, -3.1353, 0.7277], [-3.7003, -1.8846, 0.2029, -0.6839, 0.2968, -2.0199, 1.3127], [-0.0000, 0.0000, -0.0000, -0.0000, 0.0000, 0.0000, -0.0000], [-0.0051, 2.7428, 3.0068, 0.6242, 1.2747, 0.9262, 0.4070], [ 1.9312, 3.0524, -1.2806, 1.5910, -2.1789, -0.1636, -3.4924]], grad_fn=<EmbeddingBackward>)
show_doc(RNNDropout, title_level=3)
class RNNDropout
RNNDropout(p:float=0.5) :: Module
Dropout with probability p that is consistent on the seq_len dimension.
dp = RNNDropout(0.3)
tst_input = torch.randn(3,3,7)
tst_input, dp(tst_input)
(tensor([[[-2.1156, 0.9734, 0.2428, 0.9396, 0.4072, -0.8197, 0.3718], [ 0.4838, 1.3077, -0.8239, -0.6557, 1.3938, 0.6086, -0.2622], [ 0.2372, -0.1627, 0.3117, -0.4811, -1.0841, -0.5207, -0.5131]], [[-0.6924, 0.4122, 0.2517, -1.0120, 0.6808, 0.8800, -0.7463], [-0.9498, 0.7655, 0.7471, -0.2767, 1.2155, -0.1042, -2.1443], [-1.2342, 1.9187, -0.8481, -0.4115, -1.3223, 1.4266, -1.4150]], [[ 0.1539, 0.3142, 0.2158, 1.1411, 0.1316, 0.6158, -1.5078], [-1.0177, -0.9230, 0.9994, 0.1140, 0.7432, 0.4353, 0.0096], [-0.8231, 1.0086, 1.7685, 0.3304, -0.0896, -1.0513, -1.3017]]]), tensor([[[-3.0223, 1.3905, 0.0000, 0.0000, 0.5818, -0.0000, 0.5312], [ 0.6911, 1.8681, -0.0000, -0.0000, 1.9911, 0.0000, -0.3745], [ 0.3389, -0.2324, 0.0000, -0.0000, -1.5487, -0.0000, -0.7331]], [[-0.9892, 0.5889, 0.3596, -1.4458, 0.9725, 1.2571, -0.0000], [-1.3569, 1.0936, 1.0673, -0.3953, 1.7364, -0.1489, -0.0000], [-1.7631, 2.7410, -1.2116, -0.5879, -1.8889, 2.0380, -0.0000]], [[ 0.0000, 0.4489, 0.0000, 1.6301, 0.1880, 0.8797, -2.1539], [-0.0000, -1.3186, 0.0000, 0.1628, 1.0617, 0.6218, 0.0137], [-0.0000, 1.4408, 0.0000, 0.4720, -0.1280, -1.5019, -1.8595]]]))
show_doc(WeightDropout, title_level=3)
class WeightDropout
WeightDropout(module:Module, weight_p:float, layer_names:StrList=['weight_hh_l0']) :: Module
A module that wraps another layer in which some weights will be replaced by 0 during training.

Applies dropout with probability weight_p to the layers in layer_names of module in training mode. A copy of those weights is kept so that the dropout mask can change at every batch.
module = nn.LSTM(5, 2)
dp_module = WeightDropout(module, 0.4)
getattr(dp_module.module, 'weight_hh_l0')
Parameter containing: tensor([[-0.0702, 0.5725], [-0.3910, 0.6512], [-0.2203, -0.4315], [ 0.2750, -0.2917], [-0.4890, -0.3094], [ 0.4638, -0.3807], [-0.2290, -0.6964], [ 0.1224, 0.4043]], requires_grad=True)
It's at the beginning of a forward pass that the dropout is applied to the weights.
tst_input = torch.randn(4,20,5)
h = (torch.zeros(1,20,2), torch.zeros(1,20,2))
x,h = dp_module(tst_input,h)
getattr(dp_module.module, 'weight_hh_l0')
tensor([[-0.0000, 0.0000], [-0.6517, 0.0000], [-0.0000, -0.7191], [ 0.4583, -0.0000], [-0.0000, -0.0000], [ 0.7730, -0.6345], [-0.0000, -1.1607], [ 0.2040, 0.6739]], grad_fn=<MulBackward0>)
show_doc(PositionalEncoding, title_level=3)
class PositionalEncoding
PositionalEncoding(d:int) :: Module
Encode the position with a sinusoid.
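The idea, taken from the original Transformer paper, is to map each position to interleaved sines and cosines of geometrically decreasing frequencies. A rough standalone sketch of that scheme (not necessarily the exact implementation details of this module):

d = 8
pos = torch.arange(0., 20)[:, None]                     # positions 0..19
inv_freq = 1 / (10000 ** (torch.arange(0., d, 2) / d))  # one frequency per pair of dimensions
enc = torch.zeros(20, d)
enc[:, 0::2] = torch.sin(pos * inv_freq)                # even dimensions
enc[:, 1::2] = torch.cos(pos * inv_freq)                # odd dimensions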
show_doc(DecoderLayer, title_level=3)
class DecoderLayer
DecoderLayer(n_heads:int, d_model:int, d_head:int, d_inner:int, resid_p:float=0.0, attn_p:float=0.0, ff_p:float=0.0, bias:bool=True, scale:bool=True, act:Activation=<Activation.ReLU: 1>, double_drop:bool=True, attn_cls:Callable='MultiHeadAttention') :: Module
Basic block of a Transformer model.
show_doc(MultiHeadAttention, title_level=3)
class MultiHeadAttention
MultiHeadAttention(n_heads:int, d_model:int, d_head:int=None, resid_p:float=0.0, attn_p:float=0.0, bias:bool=True, scale:bool=True) :: Module
MultiHeadAttention.
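A quick shape-level sketch with made-up sizes (assuming the layer can be called directly on a batch_size x seq_len x d_model tensor): the output keeps the d_model dimension of its input.

attn = MultiHeadAttention(n_heads=4, d_model=32, d_head=8)
x = torch.randn(8, 20, 32)  # batch_size x seq_len x d_model
attn(x).shape               # expected: torch.Size([8, 20, 32])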
show_doc(MultiHeadRelativeAttention, title_level=3)
class MultiHeadRelativeAttention
MultiHeadRelativeAttention(n_heads:int, d_model:int, d_head:int, resid_p:float=0.0, attn_p:float=0.0, bias:bool=True, scale:bool=True) :: MultiHeadAttention
MultiHeadAttention with relative positional encoding.
show_doc(SequentialRNN, title_level=3)
class SequentialRNN
SequentialRNN(*args) :: Sequential
A sequential module that passes the reset call to its children.
show_doc(SequentialRNN.reset)
reset()
Call the reset function of self.children (if they have one).
show_doc(dropout_mask)
dropout_mask(x:Tensor, sz:Collection[int], p:float)
Return a dropout mask of the same type as x, size sz, with probability p to cancel an element.
tst_input = torch.randn(3,3,7)
dropout_mask(tst_input, (3,7), 0.3)
tensor([[0.0000, 1.4286, 1.4286, 0.0000, 1.4286, 1.4286, 0.0000], [1.4286, 1.4286, 1.4286, 0.0000, 1.4286, 0.0000, 0.0000], [1.4286, 0.0000, 1.4286, 0.0000, 0.0000, 0.0000, 1.4286]])
Such a mask is then expanded in the sequence length dimension and multiplied by the input to do an RNNDropout.
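Concretely, a rough sketch of what RNNDropout does: draw one mask value per batch element and feature, keep a singleton sequence dimension, and multiply, so the same features are zeroed at every position of the sequence.

tst_input = torch.randn(3, 3, 7)
mask = dropout_mask(tst_input, (3, 1, 7), 0.3)  # one value per batch element and feature
tst_input * mask                                # broadcast along the sequence dimension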
show_doc(feed_forward)
feed_forward(d_model:int, d_ff:int, ff_p:float=0.0, act:Activation=<Activation.ReLU: 1>, double_drop:bool=True)
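This helper builds the position-wise feed-forward block described above for the Transformer: two linear layers going from d_model to d_ff and back, with act as the non-linearity in the middle and dropout(s) of ff_p (after each layer if double_drop=True, otherwise just at the end). A rough usage sketch with made-up sizes:

ff = feed_forward(d_model=32, d_ff=64, ff_p=0.1)
x = torch.randn(8, 20, 32)
ff(x).shape  # expected: torch.Size([8, 20, 32]) -- the d_model dimension is preserved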
show_doc(WeightDropout.forward)
forward(*args:ArgStar)
Defines the computation performed at every call. Should be overridden by all subclasses.

Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
show_doc(EmbeddingDropout.forward)
forward(words:LongTensor, scale:Optional[float]=None) → Tensor
Defines the computation performed at every call. Should be overridden by all subclasses.

Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
show_doc(RNNDropout.forward)
forward(x:Tensor) → Tensor
Defines the computation performed at every call. Should be overridden by all subclasses.

Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
show_doc(WeightDropout.reset)
reset()
show_doc(PoolingLinearClassifier.forward)
forward(input:Tuple[Tensor, Tensor, Tensor]) → Tuple[Tensor, Tensor, Tensor]
Defines the computation performed at every call. Should be overridden by all subclasses.

Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
show_doc(LinearDecoder.forward)
forward(input:Tuple[Tensor, Tensor]) → Tuple[Tensor, Tensor, Tensor]
Defines the computation performed at every call. Should be overridden by all subclasses.

Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.