from fastai.gen_doc.nbdoc import *
from fastai.text import *
from fastai import *
text.transform contains the functions that deal behind the scenes with the two main tasks when preparing texts for modelling: tokenization and numericalization.
Tokenization splits the raw texts into tokens (which can be words, punctuation signs...). The most basic way to do this would be to separate on spaces, but it's possible to be more subtle; for instance, contractions like "isn't" or "don't" should be split into ["is","n't"] or ["do","n't"]. By default fastai uses the powerful spacy tokenizer.
Numericalization is easier, as it just consists of assigning a unique id to each token and mapping each of those tokens to its respective id.
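As a toy illustration of that mapping (plain Python, not the fastai API; the tokens here are made up):
# id -> token mapping, and its inverse token -> id
itos = ['xxunk', 'the', 'movie', 'was', 'good']
stoi = {tok:i for i,tok in enumerate(itos)}
# numericalizing a tokenized text gives [1, 2, 3, 4]
ids = [stoi[tok] for tok in ['the', 'movie', 'was', 'good']]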
The tokenization step is actually divided into two phases: first, we apply a certain list of rules to the raw texts as preprocessing, then we use the tokenizer to split them into lists of tokens. Combining those rules with the tok_func and the lang to process the texts is the role of the Tokenizer class.
show_doc(Tokenizer, doc_string=False)
class Tokenizer [source]

Tokenizer(tok_func:Callable='SpacyTokenizer', lang:str='en', pre_rules:ListRules=None, post_rules:ListRules=None, special_cases:StrList=None, n_cpus:int=None)
This class will process texts by applying the rules to them, then tokenizing them with tok_func(lang). special_cases is a list of tokens passed as special to the tokenizer, and n_cpus is the number of cpus to use for multi-processing (by default, half the cpus available). We don't directly pass a tokenizer for multi-processing purposes: each process needs to initiate a tokenizer of its own. The rules and special_cases default to
default_rules = [fix_html, replace_rep, replace_wrep, deal_caps, spec_add_spaces, rm_useless_spaces]
and
default_spec_tok = [BOS, FLD, UNK, PAD]
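For instance, to register an extra special token and limit multi-processing, one could instantiate the class like this (a sketch; xxmytok is a made-up token):
tokenizer = Tokenizer(tok_func=SpacyTokenizer, lang='en',
                      special_cases=[BOS, FLD, UNK, PAD, 'xxmytok'],
                      n_cpus=2)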
show_doc(Tokenizer.process_text)
process_text [source]

process_text(t:str, tok:BaseTokenizer) → List[str]

Process one text t with tokenizer tok.
show_doc(Tokenizer.process_all)
For an example, we're going to grab some IMDB reviews.
path = untar_data(URLs.IMDB_SAMPLE)
path
PosixPath('/home/ubuntu/.fastai/data/imdb_sample')
df = pd.read_csv(path/'texts.csv', header=None)
example_text = df.iloc[2][1]; example_text
'This is a extremely well-made film. The acting, script and camera-work are all first-rate. The music is good, too, though it is mostly early in the film, when things are still relatively cheery. There are no really superstars in the cast, though several faces will be familiar. The entire cast does an excellent job with the script.<br /><br />But it is hard to watch, because there is no good end to a situation like the one presented. It is now fashionable to blame the British for setting Hindus and Muslims against each other, and then cruelly separating them into two countries. There is some merit in this view, but it\'s also true that no one forced Hindus and Muslims in the region to mistreat each other as they did around the time of partition. It seems more likely that the British simply saw the tensions between the religions and were clever enough to exploit them to their own ends.<br /><br />The result is that there is much cruelty and inhumanity in the situation and this is very unpleasant to remember and to see on the screen. But it is never painted as a black-and-white case. There is baseness and nobility on both sides, and also the hope for change in the younger generation.<br /><br />There is redemption of a sort, in the end, when Puro has to make a hard choice between a man who has ruined her life, but also truly loved her, and her family which has disowned her, then later come looking for her. But by that point, she has no option that is without great pain for her.<br /><br />This film carries the message that both Muslims and Hindus have their grave faults, and also that both can be dignified and caring people. The reality of partition makes that realisation all the more wrenching, since there can never be real reconciliation across the India/Pakistan border. In that sense, it is similar to "Mr & Mrs Iyer".<br /><br />In the end, we were glad to have seen the film, even though the resolution was heartbreaking. If the UK and US could deal with their own histories of racism with this kind of frankness, they would certainly be better off.'
tokenizer = Tokenizer()
tok = SpacyTokenizer('en')
' '.join(tokenizer.process_text(example_text, tok))
'this is a extremely well - made film . the acting , script and camera - work are all first - rate . the music is good , too , though it is mostly early in the film , when things are still relatively cheery . there are no really superstars in the cast , though several faces will be familiar . the entire cast does an excellent job with the script . \n\n but it is hard to watch , because there is no good end to a situation like the one presented . it is now fashionable to blame the british for setting hindus and muslims against each other , and then cruelly separating them into two countries . there is some merit in this view , but it \'s also true that no one forced hindus and muslims in the region to mistreat each other as they did around the time of partition . it seems more likely that the british simply saw the tensions between the religions and were clever enough to exploit them to their own ends . \n\n the result is that there is much cruelty and inhumanity in the situation and this is very unpleasant to remember and to see on the screen . but it is never painted as a black - and - white case . there is baseness and nobility on both sides , and also the hope for change in the younger generation . \n\n there is redemption of a sort , in the end , when puro has to make a hard choice between a man who has ruined her life , but also truly loved her , and her family which has disowned her , then later come looking for her . but by that point , she has no option that is without great pain for her . \n\n this film carries the message that both muslims and hindus have their grave faults , and also that both can be dignified and caring people . the reality of partition makes that realisation all the more wrenching , since there can never be real reconciliation across the india / pakistan border . in that sense , it is similar to " mr & mrs iyer " . \n\n in the end , we were glad to have seen the film , even though the resolution was heartbreaking . if the uk and us could deal with their own histories of racism with this kind of frankness , they would certainly be better off .'
As explained before, the tokenizer splits the text on words/punctuation signs, but in a smart manner. The rules (see below) have also modified the text a little. We can also tokenize a list of texts all at the same time:
df = pd.read_csv(path/'texts.csv', header=None)
texts = df[1].values
tokenizer = Tokenizer()
tokens = tokenizer.process_all(texts)
' '.join(tokens[2])
'this is a extremely well - made film . the acting , script and camera - work are all first - rate . the music is good , too , though it is mostly early in the film , when things are still relatively cheery . there are no really superstars in the cast , though several faces will be familiar . the entire cast does an excellent job with the script . \n\n but it is hard to watch , because there is no good end to a situation like the one presented . it is now fashionable to blame the british for setting hindus and muslims against each other , and then cruelly separating them into two countries . there is some merit in this view , but it \'s also true that no one forced hindus and muslims in the region to mistreat each other as they did around the time of partition . it seems more likely that the british simply saw the tensions between the religions and were clever enough to exploit them to their own ends . \n\n the result is that there is much cruelty and inhumanity in the situation and this is very unpleasant to remember and to see on the screen . but it is never painted as a black - and - white case . there is baseness and nobility on both sides , and also the hope for change in the younger generation . \n\n there is redemption of a sort , in the end , when puro has to make a hard choice between a man who has ruined her life , but also truly loved her , and her family which has disowned her , then later come looking for her . but by that point , she has no option that is without great pain for her . \n\n this film carries the message that both muslims and hindus have their grave faults , and also that both can be dignified and caring people . the reality of partition makes that realisation all the more wrenching , since there can never be real reconciliation across the india / pakistan border . in that sense , it is similar to " mr & mrs iyer " . \n\n in the end , we were glad to have seen the film , even though the resolution was heartbreaking . if the uk and us could deal with their own histories of racism with this kind of frankness , they would certainly be better off .'
The tok_func must return an instance of BaseTokenizer:
show_doc(BaseTokenizer)
show_doc(BaseTokenizer.tokenizer)
tokenizer [source]

tokenizer(t:str) → List[str]

Take a text t and return the list of its tokens.
show_doc(BaseTokenizer.add_special_cases)
add_special_cases [source]

add_special_cases(toks:StrList)

Record a list of special tokens toks.
The fastai library uses the spacy tokenizer as its default. The following class wraps it as a BaseTokenizer.
show_doc(SpacyTokenizer)
class SpacyTokenizer [source]

SpacyTokenizer(lang:str) :: BaseTokenizer

Wrapper around a spacy tokenizer to make it a BaseTokenizer.
If you want to use your own custom tokenizer, just subclass BaseTokenizer and override its tokenizer and add_special_cases methods.
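For example, here is a minimal sketch of a custom tokenizer that simply splits on whitespace (WhitespaceTokenizer is a made-up name):
class WhitespaceTokenizer(BaseTokenizer):
    "A hypothetical tokenizer that just splits on whitespace."
    def __init__(self, lang:str): self.lang = lang
    def tokenizer(self, t:str) -> List[str]: return t.split()
    def add_special_cases(self, toks): pass  # no special cases to register

tokenizer = Tokenizer(tok_func=WhitespaceTokenizer, lang='en')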
Rules are just functions that take a string and return the modified string. This allows you to customize the list of default_rules as you please. Those default_rules are:
show_doc(deal_caps, doc_string=False)
deal_caps [source]

deal_caps(x:StrList) → StrList

In x, if a word is written in all caps, we put it in lower case and add a special token before it. This makes it easier for a model to learn the meaning of the sentence. The rest of the capitals are lowercased.
deal_caps("I'm suddenly SHOUTING FOR NO REASON!")
"i'm suddenly xxup shouting xxup for no xxup reason!"
show_doc(fix_html, doc_string=False)
fix_html
[source]
fix_html
(x
:str
) →str
This rule replaces a bunch of HTML characters or entities with plain-text ones. For instance, <br /> is replaced by \n and &nbsp; by a space, etc.
fix_html("Some HTML text<br />")
'Some HTML& text\n'
show_doc(replace_rep, doc_string=False)
replace_rep [source]

replace_rep(t:str) → str

Whenever a character is repeated more than three times in t, we replace the whole thing by 'TK_REP n char' where n is the number of occurrences and char the character.
replace_rep("I'm so excited!!!!!!!!")
"I'm so excited xxrep 8 ! "
show_doc(replace_wrep, doc_string=False)
replace_wrep [source]

replace_wrep(t:str) → str

Whenever a word is repeated more than four times in t, we replace the whole thing by 'TK_WREP n w' where n is the number of occurrences and w the word repeated.
replace_wrep("I've never ever ever ever ever ever ever ever done this.")
"I've never xxwrep 7 ever done this."
show_doc(rm_useless_spaces)
rm_useless_spaces("Inconsistent use of spaces.")
'Inconsistent use of spaces.'
show_doc(spec_add_spaces)
spec_add_spaces('I #like to #put #hashtags #everywhere!')
'I # like to # put # hashtags # everywhere!'
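Since a rule is just a function from string to string, you can write your own and pass it to a Tokenizer. A minimal sketch (replace_tabs is a made-up rule; default_rules is the list shown above):
import re

def replace_tabs(t:str) -> str:
    "A hypothetical extra rule: collapse runs of tabs into a single space."
    return re.sub('\t+', ' ', t)

tokenizer = Tokenizer(pre_rules=default_rules + [replace_tabs])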
To convert our set of tokens to unique ids (and be able to have them go through embeddings), we use the following class:
show_doc(Vocab, doc_string=False)
class Vocab [source]

Vocab(itos:Dict[int,str])

Contain the correspondence between numbers and tokens, and numericalize. itos contains the id-to-token correspondence.
show_doc(Vocab.create, doc_string=False)
create [source]

create(tokens:Tokens, max_vocab:int, min_freq:int) → Vocab

Create a Vocab dictionary from a set of tokens. Only keep at most max_vocab tokens, and only those that appear at least min_freq times; the rest are set to UNK.
show_doc(Vocab.numericalize)
show_doc(Vocab.textify)
textify [source]

textify(nums:Collection[int], sep=' ') → List[str]

Convert a list of nums to their tokens.
vocab = Vocab.create(tokens, max_vocab=1000, min_freq=2)
vocab.numericalize(tokens[2])[:10]
[14, 9, 6, 619, 85, 17, 110, 25, 4, 2]
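To go back from those ids to tokens, textify can be applied to the same list (a usage sketch):
vocab.textify(vocab.numericalize(tokens[2])[:10])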
show_doc(SpacyTokenizer.tokenizer)
tokenizer [source]

tokenizer(t:str) → List[str]
show_doc(SpacyTokenizer.add_special_cases)
add_special_cases [source]

add_special_cases(toks:StrList)