In this notebook, we'll see how to fine-tune one of the 🤗 Transformers models on a masked language modeling task.
Note: masked language modeling is the task of predicting tokens that have been masked in the input. The model still has access to the whole sentence, so it can use the tokens before and after the masked tokens to predict their value. For example, given "Paris is the [MASK] of France", the model should predict "capital".
We will see how to easily load and preprocess the dataset for this task, and how to use the Trainer
API to fine-tune a model on it.
A script version of this notebook that you can run directly in a distributed environment or on TPU is available in our examples folder.
model_checkpoint = "neuralmind/bert-base-portuguese-cased"
If you're opening this notebook on Colab, you will need to connect to your Google Drive and install 🤗 Transformers and 🤗 Datasets.
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
Mounted at /content/drive
%%capture
! pip install datasets transformers
If you're opening this notebook locally, make sure your environment has the latest version of those libraries installed.
To be able to share your model with the community and generate results via the inference API, there are a few more steps to follow.
First you have to store your authentication token from the Hugging Face website (sign up here if you haven't already!) then execute the following cell and input your username and password:
from huggingface_hub import notebook_login
notebook_login()
Login successful
Your token has been saved to /root/.huggingface/token
Authenticated through git-credential store but this isn't the helper defined on your machine.
You might have to re-authenticate when pushing to the Hugging Face Hub. Run the following command in your terminal in case you want to set this credential helper as the default
git config --global credential.helper store
Then you need to install Git-LFS. Run the following cell:
%%capture
!apt install git-lfs
Make sure your version of Transformers is at least 4.11.0 since the functionality was introduced in that version:
import transformers
print(transformers.__version__)
4.15.0
import datasets
print(datasets.__version__)
1.17.0
from pathlib import Path
import pandas as pd
from datasets import Dataset, DatasetDict
url_zip = "https://cic.unb.br/~teodecampos/LeNER-Br/LeNER-Br.zip"
!wget {url_zip}
--2021-12-22 08:43:02--  https://cic.unb.br/~teodecampos/LeNER-Br/LeNER-Br.zip
Resolving cic.unb.br (cic.unb.br)... 164.41.110.66
Connecting to cic.unb.br (cic.unb.br)|164.41.110.66|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 93637203 (89M) [application/zip]
Saving to: ‘LeNER-Br.zip’
LeNER-Br.zip 100%[===================>] 89.30M 4.12MB/s in 18s
2021-12-22 08:43:22 (4.86 MB/s) - ‘LeNER-Br.zip’ saved [93637203/93637203]
!ls -al
total 91460
drwxr-xr-x 1 root root     4096 Dec 22 08:43 .
drwxr-xr-x 1 root root     4096 Dec 22 08:26 ..
drwxr-xr-x 4 root root     4096 Dec  3 14:33 .config
-rw-r--r-- 1 root root 93637203 Aug 31  2018 LeNER-Br.zip
drwxr-xr-x 1 root root     4096 Dec  3 14:33 sample_data
!unzip LeNER-Br.zip
Archive:  LeNER-Br.zip
   creating: LeNER-Br/
  inflating: LeNER-Br/index.html
  inflating: LeNER-Br/luz_etal_propor2018.pdf
  inflating: LeNER-Br/README.md
   creating: LeNER-Br/model/
   creating: LeNER-Br/model/data/
   creating: LeNER-Br/model/results/
   creating: LeNER-Br/leNER-Br/
   creating: LeNER-Br/leNER-Br/train/
  inflating: LeNER-Br/leNER-Br/train/train.conll
   creating: LeNER-Br/leNER-Br/test/
  inflating: LeNER-Br/leNER-Br/test/test.conll
   creating: LeNER-Br/leNER-Br/dev/
  inflating: LeNER-Br/leNER-Br/dev/dev.conll
   creating: LeNER-Br/leNER-Br/scripts/
   creating: LeNER-Br/leNER-Br/raw_text/
   creating: LeNER-Br/leNER-Br/metadata/
  ...
(output truncated: the archive also contains one .conll file per document in train/, test/ and dev/, the corresponding raw .txt files in raw_text/, JSON metadata files in metadata/, and the original model code and results.)
path_to_text_files = '/content/LeNER-Br/leNER-Br/raw_text'
p = Path(path_to_text_files).glob('**/*')
files = [x for x in p if x.is_file() and x.suffix == '.txt']
files
[PosixPath('/content/LeNER-Br/leNER-Br/raw_text/AgAIRR11889820145030011.txt'),
 PosixPath('/content/LeNER-Br/leNER-Br/raw_text/AgRgTSE3.txt'),
 PosixPath('/content/LeNER-Br/leNER-Br/raw_text/AIRR10006691020135020322.txt'),
 ...
 PosixPath('/content/LeNER-Br/leNER-Br/raw_text/AIRR285001420095060020.txt'),
 PosixPath('/content/LeNER-Br/leNER-Br/raw_text/HC151914AgRES.txt')]
(output truncated: 70 PosixPath entries in total)
len(files)
70
paragraphs_list = list()
for file in files:
    paragraphs_by_file_list = list()
    with open(file, 'r') as f:
        data = f.read()
    # a blank line separates paragraphs in the raw text files
    paragraphs = data.split("\n\n")
    for paragraph in paragraphs:
        p = paragraph.strip()
        if p != '':
            # replace line breaks inside a paragraph by spaces
            paragraphs_by_file_list.append(p.replace('\n', ' '))
    paragraphs_list.extend(paragraphs_by_file_list)
len(paragraphs_list)
3324
df = pd.DataFrame(paragraphs_list)
df.rename(columns={0: 'text'}, inplace=True)
df.head()
| | text |
|---|---|
| 0 | A C Ó R D Ã O |
| 1 | (7ª Turma) |
| 2 | GMDAR/NB/LPLM |
| 3 | AGRAVO. AGRAVO DE INSTRUMENTO EM RECURSO DE RE... |
| 4 | Vistos, relatados e discutidos estes autos de ... |
df['text'].str.split().apply(len).value_counts()
1       178
2       158
3       133
4       101
6       100
       ...
980       1
934       1
2957      1
860       1
1968      1
Name: text, Length: 260, dtype: int64
Some files were not broken into small paragraphs, but the grouping code we will apply later to the tokenized datasets (train and validation) concatenates all texts and splits them into fixed-size chunks. Therefore, we don't need to deal with this problem here.
from sklearn.model_selection import train_test_split
train, validation = train_test_split(df, test_size=0.2)
train.reset_index(drop=True, inplace=True)
validation.reset_index(drop=True, inplace=True)
train.head()
| | text |
|---|---|
| 0 | Os honorários periciais inserem-se em um conte... |
| 1 | Com efeito, nos termos do art. 61, §1º, da Lei... |
| 2 | Tendo, com suporte nas razões já demonstradas,... |
| 3 | Neste sentido é o entendimento do Superior Tri... |
| 4 | E continua: "por isso é que se diz que a decis... |
train_dataset = Dataset.from_pandas(train)
validation_dataset = Dataset.from_pandas(validation)
datasets = DatasetDict()
datasets['train'] = train_dataset
datasets['validation'] = validation_dataset
datasets
DatasetDict({
    train: Dataset({
        features: ['text'],
        num_rows: 15252
    })
    validation: Dataset({
        features: ['text'],
        num_rows: 3813
    })
})
repo_id = "pierreguillou/lener_br_finetuning_language_model"
datasets.push_to_hub(repo_id)
Pushing split train to the Hub.
Pushing dataset shards to the dataset hub: 0%| | 0/1 [00:00<?, ?it/s]
Pushing split validation to the Hub. The repository already exists: the `private` keyword argument will be ignored.
Pushing dataset shards to the dataset hub: 0%| | 0/1 [00:00<?, ?it/s]
In what follows, we will use this dataset that we pushed to the Hugging Face Hub. You can load it very easily with the 🤗 Datasets library.
from datasets import load_dataset
repo_id = "pierreguillou/lener_br_finetuning_language_model"
datasets = load_dataset(repo_id)
Downloading: 0%| | 0.00/736 [00:00<?, ?B/s]
Using custom data configuration pierreguillou--lener_br_finetuning_language_model-d5d35d543fec7e31
Downloading and preparing dataset None/None (download: 1.11 MiB, generated: 1.79 MiB, post-processed: Unknown size, total: 2.90 MiB) to /root/.cache/huggingface/datasets/parquet/pierreguillou--lener_br_finetuning_language_model-d5d35d543fec7e31/0.0.0/1638526fd0e8d960534e2155dc54fdff8dce73851f21f031d2fb9c2cf757c121...
0%| | 0/2 [00:00<?, ?it/s]
Downloading: 0%| | 0.00/243k [00:00<?, ?B/s]
Downloading: 0%| | 0.00/925k [00:00<?, ?B/s]
0%| | 0/2 [00:00<?, ?it/s]
Dataset parquet downloaded and prepared to /root/.cache/huggingface/datasets/parquet/pierreguillou--lener_br_finetuning_language_model-d5d35d543fec7e31/0.0.0/1638526fd0e8d960534e2155dc54fdff8dce73851f21f031d2fb9c2cf757c121. Subsequent calls will reuse this data.
0%| | 0/2 [00:00<?, ?it/s]
datasets
DatasetDict({
    validation: Dataset({
        features: ['text'],
        num_rows: 3813
    })
    train: Dataset({
        features: ['text'],
        num_rows: 15252
    })
})
You can replace the dataset above with any dataset hosted on the hub or use your own files. Just uncomment the following cell and replace the paths with values that will lead to your files:
# datasets = load_dataset("text", data_files={"train": "path_to_train.txt", "validation": "path_to_validation.txt"})
You can also load datasets from a CSV or a JSON file; see the full documentation for more information.
To access an actual element, you need to select a split first, then give an index:
datasets["train"][10]
{'text': 'Branco-AC - Mod. 500258 - Autos n.º 1002199-81.2017.8.01.0000/50000'}
To get a sense of what the data looks like, the following function will show some examples picked randomly from the dataset.
from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=10):
    assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
    picks = []
    for _ in range(num_examples):
        pick = random.randint(0, len(dataset)-1)
        while pick in picks:
            pick = random.randint(0, len(dataset)-1)
        picks.append(pick)
    df = pd.DataFrame(dataset[picks])
    for column, typ in dataset.features.items():
        if isinstance(typ, ClassLabel):
            df[column] = df[column].transform(lambda i: typ.names[i])
    display(HTML(df.to_html()))
show_random_elements(datasets["train"])
| | text |
|---|---|
| 0 | documento pode ser acessado no endereço eletrônico http://www.stf.jus.br/portal/autenticacao/ sob o número 3373814. |
| 1 | por ele constituída, "(...) no caso em tela, esta Impetrante NÃO FOI INTIMADA, E DECLARA QUE NÃO |
| 2 | "eventual questão atinente à existência de coligação exige instrução, o que não |
| 3 | BELMIRO CONSELHO ESPECIAL, Data de Julgamento: |
| 4 | Documento assinado digitalmente conforme MP n° 2.200-2/2001 de 24/08/2001, que institui a Infraestrutura de Chaves Públicas Brasileira - ICP-Brasil. O |
| 5 | R$ 1.243.372,38 (9,09%) para R$ 161.340,49 (1,18%) o total |
| 6 | Endereço: Rua Tribunal de Justiça, s/n, Via Verde, CEP 69.915-631, Tel. 68 3302-0444/0445, Rio BrancoAC |
| 7 | [ACÓRDÃO] |
| 8 | Não procede o inconformismo, uma vez que não se pode fazer distinção, para aplicação do direito, entre servidor público estatutário e servidor com contrato regido pela CLT. |
| 9 | Social teriam extrapolado a previsão legal, criando novos |
As we can see, some of the texts are a full paragraph of a legal document while others are just titles or short fragments.
For masked language modeling (MLM) we are going to use the same preprocessing as before for our dataset, with one additional step: we will randomly mask some tokens (by replacing them with [MASK]
) and adjust the labels to only include the masked tokens (we don't have to predict the non-masked tokens).
We will use the neuralmind/bert-base-portuguese-cased
model for this example. You can pick any of the checkpoints listed here instead.
To tokenize all our texts with the same vocabulary that was used when training the model, we have to download a pretrained tokenizer. This is all done by the AutoTokenizer
class:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)
Downloading: 0%| | 0.00/43.0 [00:00<?, ?B/s]
Downloading: 0%| | 0.00/647 [00:00<?, ?B/s]
Downloading: 0%| | 0.00/205k [00:00<?, ?B/s]
Downloading: 0%| | 0.00/2.00 [00:00<?, ?B/s]
Downloading: 0%| | 0.00/112 [00:00<?, ?B/s]
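To get a feel for what masking looks like at the token level, here is a minimal sketch (the sentence is just a hypothetical example; the actual random masking will be done later by the data collator):

# illustrate the MLM objective: manually replace one token with the mask token
text = "O réu foi condenado pelo tribunal."  # hypothetical example sentence
tokens = tokenizer.tokenize(text)
tokens[1] = tokenizer.mask_token
print(tokens)

During fine-tuning, the model is trained to recover the original token at each masked position.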
We can now call the tokenizer on all our texts. This is very simple, using the map
method from the 🤗 Datasets library. First we define a function that calls the tokenizer on our texts:
def tokenize_function(examples):
    return tokenizer(examples["text"])
Then we apply it to all the splits in our datasets
object, using batched=True
and 4 processes to speed up the preprocessing. We won't need the text
column afterward, so we discard it.
tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"])
If we now look at an element of our datasets, we will see that the text has been replaced by the input_ids
the model will need:
tokenized_datasets["train"][1]
{'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
 'input_ids': [101, 3305, 293, 5576, 179, 117, 5776, 285, 123, 20165, 182, 125, 102],
 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}
Now for the harder part: we need to concatenate all our texts together, then split the result into small chunks of a certain block_size
. To do this, we will use the map
method again, with the option batched=True
. This option lets us change the number of examples in the datasets by returning a different number of examples than we got. This way, we can create our new samples from a batch of examples.
First, we grab the maximum length our model was pretrained with. This might be too big to fit in your GPU RAM, so here we take a bit less, at just 128.
# block_size = tokenizer.model_max_length
block_size = 128
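If you want to check the pretrained limit yourself, this quick sketch prints it (512 for the original BERT checkpoints; note that some tokenizer configs leave model_max_length unset, in which case a very large placeholder value is returned):

print(tokenizer.model_max_length)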
Then we write the preprocessing function that will group our texts:
def group_texts(examples):
    # Concatenate all texts.
    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # We drop the small remainder; we could add padding instead if the model
    # supported it. You can customize this part to your needs.
    total_length = (total_length // block_size) * block_size
    # Split into chunks of block_size.
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated_examples.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result
First note that we duplicate the inputs for our labels. The data collator we will define below takes care of the random masking and adjusts the labels accordingly (the label of every non-masked token is set to -100 so it is ignored in the loss), so we don't need to do it manually.
Also note that by default, the map
method will send a batch of 1,000 examples to be processed by the preprocessing function. So here, we will drop the remainder to make the concatenated tokenized texts a multiple of block_size
every 1,000 examples. You can adjust this behavior by passing a higher batch size (which will also be slower to process). You can also speed up the preprocessing by using multiprocessing:
lm_datasets = tokenized_datasets.map(
    group_texts,
    batched=True,
    batch_size=1000,
    num_proc=4,
)
And we can check our datasets have changed: now the samples contain chunks of block_size
contiguous tokens, potentially spanning over several of our original texts.
tokenizer.decode(lm_datasets["train"][1]["input_ids"])
'o eminente Relator, pedindo respeitosas vênias à divergência. [SEP] [CLS] 5. Instruído o feito, a Unidade Técnica apresentou proposta final de encaminhamento acorde, que, nos termos do inciso I, § [UNK] do art. [UNK] da Lei [UNK] 8. 443 / 92 transcrevo ( Peças 15 / 16 ) : [SEP] [CLS] qualquer outro cadastro de inadimplentes pelo mesmo motivo [SEP] [CLS] Presidência da República [SEP] [CLS] Branco - AC - Mod. 500258 - Autos n. [UNK] 1002199 - 81. 2017. 8. 01. 0000 / 50000 [SEP] [CLS]'
Now that the data has been cleaned, we're ready to instantiate our Trainer
. First we use a model suitable for masked LM:
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained(model_checkpoint)
Downloading: 0%| | 0.00/418M [00:00<?, ?B/s]
Some weights of the model checkpoint at neuralmind/bert-base-portuguese-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
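As a quick sanity check (a sketch, not a required step), we can count the trainable parameters of the loaded model; a BERT-base checkpoint is on the order of 110M:

# count the trainable parameters of the model we just loaded
num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"number of trainable parameters: {num_params:,}")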
And some TrainingArguments
:
from transformers import Trainer, TrainingArguments
# hyperparameters, which are passed into the training job
per_device_batch_size = 8
gradient_accumulation_steps = 1
learning_rate = 2e-5 # (AdamW) we started with 3e-4, then 1e-4, then 5e-5, but the model overfits quickly
num_train_epochs = 5 # we started with 10 epochs, but the model overfits quickly
weight_decay = 0.01
save_total_limit = 2
logging_steps = 100 # better to evaluate frequently (5000 seems too high)
eval_steps = logging_steps
evaluation_strategy = 'steps'
logging_strategy = 'steps'
save_strategy = 'steps'
save_steps = logging_steps
load_best_model_at_end = True
fp16 = True
# folders
model_name = model_checkpoint.split("/")[-1]
folder_model = 'e' + str(num_train_epochs) + '_lr' + str(learning_rate)
output_dir = '/content/drive/MyDrive/' + 'lm-lenerbr-' + str(model_name) + '/checkpoints/' + folder_model
logging_dir = '/content/drive/MyDrive/' + 'lm-lenerbr-' + str(model_name) + '/logs/' + folder_model
# get best model through a metric
metric_for_best_model = 'eval_loss'
if metric_for_best_model == 'eval_f1':
    greater_is_better = True
elif metric_for_best_model == 'eval_loss':
    greater_is_better = False
training_args = TrainingArguments(
    output_dir=output_dir,
    learning_rate=learning_rate,
    per_device_train_batch_size=per_device_batch_size,
    per_device_eval_batch_size=per_device_batch_size*2,
    gradient_accumulation_steps=gradient_accumulation_steps,
    num_train_epochs=num_train_epochs,
    weight_decay=weight_decay,
    save_total_limit=save_total_limit,
    logging_steps=logging_steps,
    eval_steps=eval_steps,
    load_best_model_at_end=load_best_model_at_end,
    metric_for_best_model=metric_for_best_model,
    greater_is_better=greater_is_better,
    gradient_checkpointing=False,
    do_train=True,
    do_eval=True,
    do_predict=True,
    evaluation_strategy=evaluation_strategy,
    logging_dir=logging_dir,
    logging_strategy=logging_strategy,
    save_strategy=save_strategy,
    save_steps=save_steps,
    fp16=fp16,
    push_to_hub=False,
)
Finally, we use a special data_collator
. The data_collator
is a function that is responsible of taking the samples and batching them in tensors. In the previous example, we had nothing special to do, so we just used the default for this argument. Here we want to do the random-masking. We could do it as a pre-processing step (like the tokenization) but then the tokens would always be masked the same way at each epoch. By doing this step inside the data_collator
, we ensure this random masking is done in a new way each time we go over the data.
To do this masking for us, the library provides a DataCollatorForLanguageModeling
. We can adjust the probability of the masking:
from transformers import DataCollatorForLanguageModeling
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
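To see the collator in action, here is a minimal sketch (assuming the lm_datasets created above): it masks about 15% of the tokens of a batch and sets the label of every non-masked position to -100 so that position is ignored by the loss:

# inspect how the collator masks a batch of two samples
samples = [lm_datasets["train"][i] for i in range(2)]
batch = data_collator(samples)
print(batch["input_ids"][0][:20])  # some ids replaced by the [MASK] token id
print(batch["labels"][0][:20])     # -100 everywhere except the masked positions

Because the masking happens here, each epoch sees a different random mask over the same data.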
Then we just have to pass everything to Trainer
and begin training:
from transformers.trainer_callback import EarlyStoppingCallback
# wait early_stopping_patience x eval_steps before stopping the training, in order to get a better model
early_stopping_patience = save_total_limit
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=lm_datasets["train"],
    eval_dataset=lm_datasets["validation"],
    data_collator=data_collator,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=early_stopping_patience)],
)
Using amp half precision backend
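Optionally, before launching the training, we can evaluate the pretrained model to get a baseline validation loss to compare against (a sketch; this runs a full pass over the validation set):

# baseline evaluation of the pretrained model, before any fine-tuning
baseline = trainer.evaluate()
print(f"baseline eval_loss: {baseline['eval_loss']:.4f}")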
trainer.train()
***** Running training *****
  Num examples = 3227
  Num Epochs = 5
  Instantaneous batch size per device = 8
  Total train batch size (w. parallel, distributed & accumulation) = 8
  Gradient Accumulation steps = 1
  Total optimization steps = 2020
| Step | Training Loss | Validation Loss |
|---|---|---|
| 100 | 1.988700 | 1.616412 |
| 200 | 1.724900 | 1.561100 |
| 300 | 1.713400 | 1.499991 |
| 400 | 1.687400 | 1.451414 |
| 500 | 1.579700 | 1.433665 |
| 600 | 1.556900 | 1.407338 |
| 700 | 1.591400 | 1.421942 |
| 800 | 1.546000 | 1.406395 |
| 900 | 1.510100 | 1.352389 |
| 1000 | 1.507100 | 1.394799 |
| 1100 | 1.462200 | 1.368093 |
***** Running Evaluation *****
  Num examples = 826
  Batch size = 16
Saving model checkpoint to /content/drive/MyDrive/lm-lenerbr-bert-base-portuguese-cased/checkpoints/e5_lr2e-05/checkpoint-100
Configuration saved in /content/drive/MyDrive/lm-lenerbr-bert-base-portuguese-cased/checkpoints/e5_lr2e-05/checkpoint-100/config.json
Model weights saved in /content/drive/MyDrive/lm-lenerbr-bert-base-portuguese-cased/checkpoints/e5_lr2e-05/checkpoint-100/pytorch_model.bin
...
(the same evaluation/checkpointing log repeats every 100 steps up to checkpoint-1100, with older checkpoints deleted due to args.save_total_limit)
Training completed. Do not forget to share your model on huggingface.co/models =)
Loading best model from /content/drive/MyDrive/lm-lenerbr-bert-base-portuguese-cased/checkpoints/e5_lr2e-05/checkpoint-900 (score: 1.3523892164230347).
TrainOutput(global_step=1100, training_loss=1.6243395579944957, metrics={'train_runtime': 1502.8827, 'train_samples_per_second': 10.736, 'train_steps_per_second': 1.344, 'total_flos': 578387661603840.0, 'train_loss': 1.6243395579944957, 'epoch': 2.72})
Like before, we can evaluate our model on the validation set. The perplexity is much lower than for the CLM objective because for the MLM objective, we only have to make predictions for the masked tokens (which represent 15% of the total here) while having access to the rest of the tokens. It's thus an easier task for the model.
import math
eval_results = trainer.evaluate()
print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
***** Running Evaluation *****
  Num examples = 826
  Batch size = 16
Perplexity: 4.11
# save best model
model_dir = '/content/drive/MyDrive/' + 'lm-lenerbr-' + str(model_name) + '/model/'
trainer.save_model(model_dir)
Saving model checkpoint to /content/drive/MyDrive/lm-lenerbr-bert-base-portuguese-cased/model/
Configuration saved in /content/drive/MyDrive/lm-lenerbr-bert-base-portuguese-cased/model/config.json
Model weights saved in /content/drive/MyDrive/lm-lenerbr-bert-base-portuguese-cased/model/pytorch_model.bin
# save tokenizer
tokenizer.save_pretrained(model_dir)
tokenizer config file saved in /content/drive/MyDrive/lm-lenerbr-bert-base-portuguese-cased/model/tokenizer_config.json
Special tokens file saved in /content/drive/MyDrive/lm-lenerbr-bert-base-portuguese-cased/model/special_tokens_map.json
('/content/drive/MyDrive/lm-lenerbr-bert-base-portuguese-cased/model/tokenizer_config.json',
 '/content/drive/MyDrive/lm-lenerbr-bert-base-portuguese-cased/model/special_tokens_map.json',
 '/content/drive/MyDrive/lm-lenerbr-bert-base-portuguese-cased/model/vocab.txt',
 '/content/drive/MyDrive/lm-lenerbr-bert-base-portuguese-cased/model/added_tokens.json',
 '/content/drive/MyDrive/lm-lenerbr-bert-base-portuguese-cased/model/tokenizer.json')
# trainer.push_to_hub()
# load best model
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained(model_dir)
loading configuration file /content/drive/MyDrive/lm-lenerbr-bert-base-portuguese-cased/model/config.json
Model config BertConfig {
  "_name_or_path": "/content/drive/MyDrive/lm-lenerbr-bert-base-portuguese-cased/model/",
  "architectures": [
    "BertForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "directionality": "bidi",
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "output_past": true,
  "pad_token_id": 0,
  "pooler_fc_size": 768,
  "pooler_num_attention_heads": 12,
  "pooler_num_fc_layers": 3,
  "pooler_size_per_head": 128,
  "pooler_type": "first_token_transform",
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.15.0",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 29794
}
loading weights file /content/drive/MyDrive/lm-lenerbr-bert-base-portuguese-cased/model/pytorch_model.bin
All model checkpoint weights were used when initializing BertForMaskedLM.
All the weights of BertForMaskedLM were initialized from the model checkpoint at /content/drive/MyDrive/lm-lenerbr-bert-base-portuguese-cased/model/.
If your task is similar to the task the model of the checkpoint was trained on, you can already use BertForMaskedLM for predictions without further training.
# push model and tokenizer to HF model hub
if model_checkpoint == "neuralmind/bert-base-portuguese-cased":
    model.push_to_hub('pierreguillou/bert-base-cased-pt-lenerbr')
    tokenizer.push_to_hub('pierreguillou/bert-base-cased-pt-lenerbr')
else:
    model.push_to_hub('pierreguillou/bert-large-cased-pt-lenerbr')
    tokenizer.push_to_hub('pierreguillou/bert-large-cased-pt-lenerbr')
/usr/local/lib/python3.7/dist-packages/huggingface_hub/hf_api.py:726: FutureWarning: `create_repo` now takes `token` as an optional positional argument. Be sure to adapt your code!
Cloning https://huggingface.co/pierreguillou/bert-base-cased-pt-lenerbr into local empty directory.
Configuration saved in pierreguillou/bert-base-cased-pt-lenerbr/config.json
Model weights saved in pierreguillou/bert-base-cased-pt-lenerbr/pytorch_model.bin
Upload file pytorch_model.bin: 0%| | 3.37k/416M [00:00<?, ?B/s]
To https://huggingface.co/pierreguillou/bert-base-cased-pt-lenerbr
   58f9e9a..d906a41  main -> main
tokenizer config file saved in pierreguillou/bert-base-cased-pt-lenerbr/tokenizer_config.json
Special tokens file saved in pierreguillou/bert-base-cased-pt-lenerbr/special_tokens_map.json
To https://huggingface.co/pierreguillou/bert-base-cased-pt-lenerbr
   d906a41..786c3cf  main -> main
You can now share this model with all your friends, family, favorite pets: they can all load it with the identifier "your-username/the-name-you-picked"
so for instance:
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained("sgugger/my-awesome-model")
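Finally, a common way to try out the fine-tuned MLM is the fill-mask pipeline. Here is a sketch using the model we pushed to the Hub (the example sentence is hypothetical):

from transformers import pipeline

fill_mask = pipeline("fill-mask", model="pierreguillou/bert-base-cased-pt-lenerbr")
fill_mask("Com base no art. 5º da [MASK] Federal.")  # hypothetical legal sentence

The pipeline returns the top candidate tokens for the masked position along with their scores.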