Model_Training_with_BERT.ipynb:

Use Text Extensions for Pandas to integrate BERT tokenization with model training for named entity recognition on Pandas DataFrames.

Introduction

This notebook shows how to use the open source library Text Extensions for Pandas to seamlessly integrate BERT tokenization and embeddings with model training for named entity recognition using Pandas DataFrames.

This example builds on the analysis of the CoNLL-2003 corpus done in Analyze_Model_Outputs to train a new model for named entity recognition (NER) using state-of-the-art natural language understanding with BERT tokenization and embeddings. While the model used is rather simple and will only achieve modest scoring results, the purpose is to demonstrate how Text Extensions for Pandas integrates BERT from Huggingface Transformers with the TensorArray extension for model training and scoring, all within Pandas DataFrames. See Text_Extension_for_Pandas_Overview for the TensorArray specification and more example usage.

The notebook is divided into the following steps:

  1. Retokenize the entire corpus using a "BERT-compatible" tokenizer, and map the token/entity labels from the original corpus on to the new tokenization.
  2. Generate BERT embeddings for every token in the entire corpus in one pass, and store those embeddings in a DataFrame column (of type TensorDtype) alongside the tokens and labels.
  3. Persist the DataFrame with computed BERT embeddings to disk as a checkpoint.
  4. Use the embeddings to train a multinomial logistic regression model to perform named entity recognition.
  5. Compute precision/recall for the model predictions on a test set.

Environment Setup

This notebook requires a Python 3.7 or later environment with NumPy, Pandas, scikit-learn, PyTorch and Huggingface transformers.

The notebook also requires the text_extensions_for_pandas library. You can satisfy this dependency in two ways:

  • Run pip install text_extensions_for_pandas before running this notebook. This command adds the library to your Python environment.
  • Run this notebook out of your local copy of the Text Extensions for Pandas project's source tree. In this case, the notebook will use the version of Text Extensions for Pandas in your local source tree if the package is not installed in your Python environment.
In [1]:
import gc
import os
import sys
from typing import *
import numpy as np
import pandas as pd
import sklearn.pipeline
import sklearn.linear_model
import torch
import transformers

# And of course we need the text_extensions_for_pandas library itself.
try:
    import text_extensions_for_pandas as tp
except ModuleNotFoundError as e:
    # If we're running from within the project source tree and the parent Python
    # environment doesn't have the text_extensions_for_pandas package, use the
    # version in the local source tree.
    if not os.getcwd().endswith("notebooks"):
        raise e
    if ".." not in sys.path:
        sys.path.insert(0, "..")
    import text_extensions_for_pandas as tp

Named Entity Recognition with BERT on CoNLL-2003

CoNLL, the SIGNLL Conference on Computational Natural Language Learning, is an annual academic conference for natural language processing researchers. Each year's conference features a competition involving a challenging NLP task. The task for the 2003 competition involved identifying mentions of named entities in English and German news articles from the late 1990s. The corpus for this 2003 competition is one of the most widely-used benchmarks for the performance of named entity recognition models. Current state-of-the-art results on this corpus produce an F1 score (harmonic mean of precision and recall) of 0.93. The best F1 score in the original competition was 0.89.

For more information about this data set, we recommend reading the conference paper about the competition results, "Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition".

Note that the data set is licensed for research use only. Be sure to adhere to the terms of the license when using this data set!

The developers of the CoNLL-2003 corpus defined a file format for the corpus, based on the file format used in the earlier Message Understanding Conference competition. This format is generally known as "CoNLL format" or "CoNLL-2003 format".
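
In this format, each non-blank line holds one token followed by its part-of-speech tag, syntactic chunk tag, and entity tag; blank lines separate sentences, and special -DOCSTART- tokens separate documents. A short illustrative fragment (the specific tags here are made up for illustration, not copied from the corpus):

PAKISTAN NNP B-NP B-LOC
won VBD B-VP O
the DT B-NP O
toss NN I-NP O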

In the following cells, we use the facilities of Text Extensions for Pandas to download a copy of the CoNLL-2003 data set, read the CoNLL-2003-format files, and translate each fold of the corpus into a collection of Pandas DataFrame objects, one DataFrame per document. We then display the DataFrame for an example document from the test fold of the corpus.

In [2]:
# Download and cache the data set.
# NOTE: This data set is licensed for research use only. Be sure to adhere
#  to the terms of the license when using this data set!
data_set_info = tp.io.conll.maybe_download_conll_data("outputs")
data_set_info
Out[2]:
{'train': 'outputs/eng.train',
 'dev': 'outputs/eng.testa',
 'test': 'outputs/eng.testb'}

Show how to retokenize with a BERT tokenizer

The BERT model is originally from the paper BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. The model is pre-trained with masked language modeling and next-sentence prediction objectives, which make it effective for masked token prediction and natural language understanding (NLU) tasks.

Once the CoNLL-2003 corpus is loaded, we need to retokenize it using a "BERT-compatible" tokenizer. Then we can map the token/entity labels from the original corpus onto the new tokenization.

We will start by showing the retokenizing process for a single document before doing the same on the entire corpus.

In [3]:
# Read in the corpus in its original tokenization.
corpus_raw = {}
for fold_name, file_name in data_set_info.items():
    df_list = tp.io.conll.conll_2003_to_dataframes(file_name, 
                                                   ["pos", "phrase", "ent"],
                                                   [False, True, True])
    corpus_raw[fold_name] = [
        df.drop(columns=["pos", "phrase_iob", "phrase_type"])
        for df in df_list
    ]

test_raw = corpus_raw["test"]

# Pick out the dataframe for a single example document.
example_df = test_raw[5]
example_df
Out[3]:
span ent_iob ent_type sentence line_num
0 [0, 10): '-DOCSTART-' O None [0, 10): '-DOCSTART-' 1469
1 [11, 18): 'CRICKET' O None [11, 62): 'CRICKET- PAKISTAN V NEW ZEALAND ONE... 1471
2 [18, 19): '-' O None [11, 62): 'CRICKET- PAKISTAN V NEW ZEALAND ONE... 1472
3 [20, 28): 'PAKISTAN' B LOC [11, 62): 'CRICKET- PAKISTAN V NEW ZEALAND ONE... 1473
4 [29, 30): 'V' O None [11, 62): 'CRICKET- PAKISTAN V NEW ZEALAND ONE... 1474
... ... ... ... ... ...
350 [1620, 1621): '8' O None [1590, 1634): 'Third one-day match: December 8... 1865
351 [1621, 1622): ',' O None [1590, 1634): 'Third one-day match: December 8... 1866
352 [1623, 1625): 'in' O None [1590, 1634): 'Third one-day match: December 8... 1867
353 [1626, 1633): 'Karachi' B LOC [1590, 1634): 'Third one-day match: December 8... 1868
354 [1633, 1634): '.' O None [1590, 1634): 'Third one-day match: December 8... 1869

355 rows × 5 columns

The example_df contains the columns span and sentence, with dtypes SpanDtype and TokenSpanDtype respectively. These columns hold spans over the document's text: each row's span covers a single token, and sentence covers the sentence that contains that token. See the notebook Text_Extension_for_Pandas_Overview for more on SpanArray and TokenSpanArray.

In [4]:
example_df.dtypes
Out[4]:
span             SpanDtype
ent_iob             object
ent_type            object
sentence    TokenSpanDtype
line_num             int64
dtype: object
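
Individual elements of these span columns are Span objects. As a quick sketch of how to inspect one (attribute names follow the Text Extensions for Pandas span API; the expected values come from row 3 of the table above):

# Each element of a span column is a Span with begin/end character offsets
# into the target text, plus the text that the span covers.
tok = example_df["span"].iloc[3]
tok.begin, tok.end, tok.covered_text  # (20, 28, 'PAKISTAN')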

Convert IOB-Tagged Data to Lists of Entity Mentions

The data we've looked at so far has been in IOB2 format. Each row of our DataFrame represents a token, and each token is tagged with an entity type (ent_type) and an IOB tag (ent_iob). The first token of each named entity mention is tagged B, while subsequent tokens are tagged I. Tokens that aren't part of any named entity are tagged O.

IOB2 format is a convenient way to represent a corpus, but it is a less useful representation for analyzing the result quality of named entity recognition models. Most tokens in a typical NER corpus are tagged O, so any error rate measured over tokens will over-emphasize the tokens that are not part of any entity. Token-level error rate also implicitly assigns higher weight to named entity mentions that consist of multiple tokens, further unbalancing error metrics. Most crucially, a naive comparison of IOB tags can mark an incorrect answer as correct. Consider a case where the correct sequence of labels is B, B, I but the model has output B, I, I. The last two tokens of the model output are both incorrect (the model has assigned them to the same entity as the first token), but a naive token-level comparison will consider the last token to be correct.
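
To make that last point concrete, here is a tiny illustrative snippet (plain Python, independent of any library) that performs the naive token-level comparison just described:

# Illustrative only: naive comparison of IOB2 tag sequences.
gold_tags = ["B", "B", "I"]  # gold standard: entities {token 0} and {tokens 1, 2}
pred_tags = ["B", "I", "I"]  # model output: one entity {tokens 0, 1, 2}
print([g == p for g, p in zip(gold_tags, pred_tags)])  # [True, False, True]
# The last token is marked "correct" even though the model attached it
# to the wrong entity.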

The CoNLL-2003 competition used the number of errors in extracting entire entity mentions to measure the result quality of the entries. We will use the same metric in this notebook. To compute entity-level errors, we convert the IOB-tagged tokens into pairs of <entity span, entity type>. Text Extensions for Pandas includes a function iob_to_spans() that handles this conversion for you.

In [5]:
# Convert the corpus IOB2 tagged DataFrame to one with entity span and type columns.
spans_df = tp.io.conll.iob_to_spans(example_df)
spans_df
Out[5]:
span ent_type
0 [20, 28): 'PAKISTAN' LOC
1 [31, 42): 'NEW ZEALAND' LOC
2 [80, 83): 'GMT' MISC
3 [85, 92): 'SIALKOT' LOC
4 [94, 102): 'Pakistan' LOC
... ... ...
69 [1488, 1501): 'Shahid Afridi' PER
70 [1512, 1523): 'Salim Malik' PER
71 [1535, 1545): 'Ijaz Ahmad' PER
72 [1565, 1573): 'Pakistan' LOC
73 [1626, 1633): 'Karachi' LOC

74 rows × 2 columns

Initialize our BERT Tokenizer and Model

Here we configure and initialize the Huggingface transformers BERT tokenizer and model. Text Extensions for Pandas provides a make_bert_tokens() function that uses the tokenizer to create BERT tokens as a span column in a DataFrame, suitable for computing BERT embeddings.

In [6]:
# Huggingface transformers BERT Configuration.
bert_model_name = "dslim/bert-base-NER"

tokenizer = transformers.BertTokenizerFast.from_pretrained(bert_model_name, 
                                                           add_special_tokens=True)

# Disable the warning about long sequences. We know what we're doing.
# Different versions of transformers disable this warning differently,
# so we need to do this twice.
tokenizer.deprecation_warnings[
    "sequence-length-is-longer-than-the-specified-maximum"] = True
tokenizer.model_max_length = 16384

# Retokenize the document's text with the BERT tokenizer as a DataFrame 
# with a span column.
bert_toks_df = tp.io.bert.make_bert_tokens(example_df["span"].values[0].target_text, tokenizer)
bert_toks_df
Out[6]:
token_id span input_id token_type_id attention_mask special_tokens_mask
0 0 [0, 0): '' 101 0 1 True
1 1 [0, 1): '-' 118 0 1 False
2 2 [1, 2): 'D' 141 0 1 False
3 3 [2, 4): 'OC' 9244 0 1 False
4 4 [4, 6): 'ST' 9272 0 1 False
... ... ... ... ... ... ...
684 684 [1621, 1622): ',' 117 0 1 False
685 685 [1623, 1625): 'in' 1107 0 1 False
686 686 [1626, 1633): 'Karachi' 16237 0 1 False
687 687 [1633, 1634): '.' 119 0 1 False
688 688 [0, 0): '' 102 0 1 True

689 rows × 6 columns

In [7]:
# BERT tokenization includes special zero-length tokens.
bert_toks_df[bert_toks_df["special_tokens_mask"]]
Out[7]:
token_id span input_id token_type_id attention_mask special_tokens_mask
0 0 [0, 0): '' 101 0 1 True
688 688 [0, 0): '' 102 0 1 True
In [8]:
# Align the BERT tokens with the original tokenization.
bert_spans = tp.TokenSpanArray.align_to_tokens(bert_toks_df["span"],
                                               spans_df["span"])
pd.DataFrame({
    "original_span": spans_df["span"],
    "bert_spans": bert_spans,
    "ent_type": spans_df["ent_type"]
})
Out[8]:
original_span bert_spans ent_type
0 [20, 28): 'PAKISTAN' [20, 28): 'PAKISTAN' LOC
1 [31, 42): 'NEW ZEALAND' [31, 42): 'NEW ZEALAND' LOC
2 [80, 83): 'GMT' [80, 83): 'GMT' MISC
3 [85, 92): 'SIALKOT' [85, 92): 'SIALKOT' LOC
4 [94, 102): 'Pakistan' [94, 102): 'Pakistan' LOC
... ... ... ...
69 [1488, 1501): 'Shahid Afridi' [1488, 1501): 'Shahid Afridi' PER
70 [1512, 1523): 'Salim Malik' [1512, 1523): 'Salim Malik' PER
71 [1535, 1545): 'Ijaz Ahmad' [1535, 1545): 'Ijaz Ahmad' PER
72 [1565, 1573): 'Pakistan' [1565, 1573): 'Pakistan' LOC
73 [1626, 1633): 'Karachi' [1626, 1633): 'Karachi' LOC

74 rows × 3 columns

In [9]:
# Generate IOB2 tags and entity labels that align with the BERT tokens.
# See https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)
bert_toks_df[["ent_iob", "ent_type"]] = tp.io.conll.spans_to_iob(bert_spans, 
                                                        spans_df["ent_type"])
bert_toks_df[10:20]
Out[9]:
token_id span input_id token_type_id attention_mask special_tokens_mask ent_iob ent_type
10 10 [15, 17): 'KE' 22441 0 1 False O <NA>
11 11 [17, 18): 'T' 1942 0 1 False O <NA>
12 12 [18, 19): '-' 118 0 1 False O <NA>
13 13 [20, 22): 'PA' 8544 0 1 False B LOC
14 14 [22, 23): 'K' 2428 0 1 False I LOC
15 15 [23, 25): 'IS' 6258 0 1 False I LOC
16 16 [25, 27): 'TA' 9159 0 1 False I LOC
17 17 [27, 28): 'N' 2249 0 1 False I LOC
18 18 [29, 30): 'V' 159 0 1 False O <NA>
19 19 [31, 33): 'NE' 26546 0 1 False B LOC
In [10]:
# Create a Pandas categorical type for consistent encoding of categories
# across all documents.
ENTITY_TYPES = ["LOC", "MISC", "ORG", "PER"]
token_class_dtype, int_to_label, label_to_int = tp.io.conll.make_iob_tag_categories(ENTITY_TYPES)
token_class_dtype
Out[10]:
CategoricalDtype(categories=['O', 'B-LOC', 'B-MISC', 'B-ORG', 'B-PER', 'I-LOC', 'I-MISC',
                             'I-ORG', 'I-PER'],
                 ordered=False)
In [11]:
# The traditional way to transform NER to token classification is to 
# treat each combination of {I,O,B} X {entity type} as a different
# class. Generate class labels in that format.
classes_df = tp.io.conll.add_token_classes(bert_toks_df, token_class_dtype)
classes_df
Out[11]:
token_id span input_id token_type_id attention_mask special_tokens_mask ent_iob ent_type token_class token_class_id
0 0 [0, 0): '' 101 0 1 True O <NA> O 0
1 1 [0, 1): '-' 118 0 1 False O <NA> O 0
2 2 [1, 2): 'D' 141 0 1 False O <NA> O 0
3 3 [2, 4): 'OC' 9244 0 1 False O <NA> O 0
4 4 [4, 6): 'ST' 9272 0 1 False O <NA> O 0
... ... ... ... ... ... ... ... ... ... ...
684 684 [1621, 1622): ',' 117 0 1 False O <NA> O 0
685 685 [1623, 1625): 'in' 1107 0 1 False O <NA> O 0
686 686 [1626, 1633): 'Karachi' 16237 0 1 False B LOC B-LOC 1
687 687 [1633, 1634): '.' 119 0 1 False O <NA> O 0
688 688 [0, 0): '' 102 0 1 True O <NA> O 0

689 rows × 10 columns

Show how to compute BERT embeddings

We are going to use the BERT embeddings as the feature vectors for training our model. First, we will show how they are computed.

In [12]:
# Initialize the BERT model that will be used to generate embeddings.
bert = transformers.BertModel.from_pretrained(bert_model_name)

# Force garbage collection in case this notebook is running in a low-RAM environment.
gc.collect()

# Compute BERT embeddings with the BERT model and add result to our example DataFrame.
embeddings_df = tp.io.bert.add_embeddings(classes_df, bert)
embeddings_df[["token_id", "span", "input_id", "ent_iob", "ent_type", "token_class", "embedding"]].iloc[10:20]
Some weights of the model checkpoint at dslim/bert-base-NER were not used when initializing BertModel: ['classifier.weight', 'classifier.bias']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Out[12]:
token_id span input_id ent_iob ent_type token_class embedding
10 10 [15, 17): 'KE' 22441 O <NA> O [ -0.19854125, -0.46898478, 0.7755599...
11 11 [17, 18): 'T' 1942 O <NA> O [ -0.24190304, -0.42399377, 0.955406...
12 12 [18, 19): '-' 118 O <NA> O [ -0.20076738, -0.7481939, 1.302213...
13 13 [20, 22): 'PA' 8544 B LOC B-LOC [ 0.2020257, -0.26199907, 0.3297634...
14 14 [22, 23): 'K' 2428 I LOC I-LOC [ -0.5462166, -0.90924495, -0.05836733...
15 15 [23, 25): 'IS' 6258 I LOC I-LOC [ -0.37400314, -0.6890743, -0.1446248...
16 16 [25, 27): 'TA' 9159 I LOC I-LOC [ -0.46548596, -0.8717423, 0.3557480...
17 17 [27, 28): 'N' 2249 I LOC I-LOC [ -0.18682732, -0.9008188, 0.3601504...
18 18 [29, 30): 'V' 159 O <NA> O [ -0.16640136, -0.8363809, 0.874061...
19 19 [31, 33): 'NE' 26546 B LOC B-LOC [ -0.3024105, -0.8382667, 1.105809...
In [13]:
embeddings_df[["span", "ent_iob", "ent_type", "embedding"]].iloc[70:75]
Out[13]:
span ent_iob ent_type embedding
70 [155, 168): 'international' O <NA> [ 0.23405041, -0.5534875, 0.9083985, ...
71 [169, 176): 'between' O <NA> [ 0.27792975, -0.6853796, 1.1050363, ...
72 [177, 185): 'Pakistan' B LOC [ 0.19718906, -0.46341094, 0.5182328, ...
73 [186, 189): 'and' O <NA> [ 0.20423545, -0.63758826, 0.82874423, ...
74 [190, 193): 'New' B LOC [ 0.28740737, -0.47174248, 0.77719426, ...
In [14]:
# The `embedding` column has the extension dtype `TensorDtype` and holds a
# `TensorArray` provided by Text Extensions for Pandas.
embeddings_df["embedding"].dtype
Out[14]:
<text_extensions_for_pandas.array.tensor.TensorDtype at 0x7fe7585d2af0>

A TensorArray can be constructed with a NumPy array of arbitrary dimensions, added to a DataFrame, then used with standard Pandas functionality. See the notebook Text_Extension_for_Pandas_Overview for more on TensorArray.
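
As a minimal sketch of that construction (using only the imports from the top of this notebook):

# Wrap a 2-D NumPy array as a TensorArray; each row of the DataFrame
# gets one slice along the array's first dimension.
arr = tp.TensorArray(np.arange(12, dtype=np.float32).reshape(4, 3))
demo_df = pd.DataFrame({"doc_id": range(4), "tensor": arr})
demo_df.dtypes  # the "tensor" column has dtype TensorDtype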

In [15]:
# Zero-copy conversion to NumPy can be done by first unwrapping the
# `TensorArray` with `.array` and calling `to_numpy()`.
embeddings_arr = embeddings_df["embedding"].array.to_numpy()
embeddings_arr.dtype, embeddings_arr.shape
Out[15]:
(dtype('float32'), (689, 768))

Generate BERT tokens and BERT embeddings for the entire corpus

Text Extensions for Pandas has a convenience function, conll_to_bert(), that combines the steps of the cells above to create BERT tokens and embeddings in one call. We will use it to add embeddings to the entire corpus.

In [16]:
# Example usage of the convenience function to create BERT tokens and embeddings.
tp.io.bert.conll_to_bert(example_df, tokenizer, bert, token_class_dtype)
Out[16]:
token_id span input_id token_type_id attention_mask special_tokens_mask ent_iob ent_type token_class token_class_id embedding
0 0 [0, 0): '' 101 0 1 True O <NA> O 0 [ -0.08307116, -0.35959044, 1.015067...
1 1 [0, 1): '-' 118 0 1 False O <NA> O 0 [ -0.22862588, -0.49313605, 1.284232...
2 2 [1, 2): 'D' 141 0 1 False O <NA> O 0 [ 0.028480446, -0.17874268, 1.54320...
3 3 [2, 4): 'OC' 9244 0 1 False O <NA> O 0 [ -0.46517605, -0.29836014, 1.073768...
4 4 [4, 6): 'ST' 9272 0 1 False O <NA> O 0 [ -0.10730826, -0.3372096, 1.226979...
... ... ... ... ... ... ... ... ... ... ... ...
684 684 [1621, 1622): ',' 117 0 1 False O <NA> O 0 [ -0.12806588, -0.002324244, 0.6781316...
685 685 [1623, 1625): 'in' 1107 0 1 False O <NA> O 0 [ 0.30534068, -0.52625746, 0.8281702...
686 686 [1626, 1633): 'Karachi' 16237 0 1 False B LOC B-LOC 1 [ -0.04873929, -0.3379735, -0.0583514...
687 687 [1633, 1634): '.' 119 0 1 False O <NA> O 0 [ -0.0052893925, -0.29743084, 0.716173...
688 688 [0, 0): '' 102 0 1 True O <NA> O 0 [ -0.5030238, 0.36253875, 0.731493...

689 rows × 11 columns

When this notebook is running in a resource-constrained environment like Binder, there may not be enough RAM available to hold all the embeddings in memory, so we use Gaussian random projection to reduce the size of the embeddings. The projection shrinks the embeddings by a factor of 3 (from 768 to 256 dimensions) at the expense of a small decrease in model accuracy.

Change the constant SHRINK_EMBEDDINGS in the following cell to False if you want to disable this behavior.

In [17]:
SHRINK_EMBEDDINGS = True
PROJECTION_DIMS = 256
RANDOM_SEED = 42

import sklearn.random_projection
projection = sklearn.random_projection.GaussianRandomProjection(
    n_components=PROJECTION_DIMS, random_state=RANDOM_SEED)

def maybe_shrink_embeddings(df):
    if SHRINK_EMBEDDINGS:
        df["embedding"] = tp.TensorArray(projection.fit_transform(df["embedding"]))
    return df
In [18]:
# Run the entire corpus through our processing pipeline.
bert_toks_by_fold = {}
for fold_name in corpus_raw.keys():
    print(f"Processing fold '{fold_name}'...")
    raw = corpus_raw[fold_name]
    with torch.inference_mode():  # This line cuts CPU usage by ~50%
        bert_toks_by_fold[fold_name] = tp.jupyter.run_with_progress_bar(
            len(raw), lambda i: maybe_shrink_embeddings(tp.io.bert.conll_to_bert(
                raw[i], tokenizer, bert, token_class_dtype)))
    
bert_toks_by_fold["dev"][20]
Processing fold 'train'...
Processing fold 'dev'...
Processing fold 'test'...
Out[18]:
token_id span input_id token_type_id attention_mask special_tokens_mask ent_iob ent_type token_class token_class_id embedding
0 0 [0, 0): '' 101 0 1 True O <NA> O 0 [ -0.06799730722665887, 2.664292496984028...
1 1 [0, 1): '-' 118 0 1 False O <NA> O 0 [ -0.7262477871614377, 2.600414199244437...
2 2 [1, 2): 'D' 141 0 1 False O <NA> O 0 [ -0.09688767345391286, 2.951251600481012...
3 3 [2, 4): 'OC' 9244 0 1 False O <NA> O 0 [ -0.15686700764492822, 2.585945891391126...
4 4 [4, 6): 'ST' 9272 0 1 False O <NA> O 0 [ -0.13613133440041497, 2.820193808843421...
... ... ... ... ... ... ... ... ... ... ... ...
2154 2154 [5704, 5705): ')' 114 0 1 False O <NA> O 0 [ -1.643701220752026, 1.257602895023083...
2155 2155 [5706, 5708): '39' 3614 0 1 False O <NA> O 0 [ -1.6270134925747186, 1.351350566308111...
2156 2156 [5708, 5709): '.' 119 0 1 False O <NA> O 0 [ -1.4468312387950375, 1.38293831378890...
2157 2157 [5709, 5711): '93' 5429 0 1 False O <NA> O 0 [ -1.6746394773845812, 1.611593948841774...
2158 2158 [0, 0): '' 102 0 1 True O <NA> O 0 [ -1.7103215591248637, 1.323178591000971...

2159 rows × 11 columns

Collate the data structures we've generated so far

In [19]:
# Create a single DataFrame with the entire corpus's embeddings.
corpus_df = tp.io.conll.combine_folds(bert_toks_by_fold)
corpus_df
Out[19]:
fold doc_num token_id span input_id token_type_id attention_mask special_tokens_mask ent_iob ent_type token_class token_class_id embedding
0 train 0 0 [0, 0): '' 101 0 1 True O <NA> O 0 [ -1.1311553691542877, 2.76648593421354...
1 train 0 1 [0, 1): '-' 118 0 1 False O <NA> O 0 [ -1.2222068473266146, 2.527425640627292...
2 train 0 2 [1, 2): 'D' 141 0 1 False O <NA> O 0 [ -0.7579851055799667, 2.73181597486195...
3 train 0 3 [2, 4): 'OC' 9244 0 1 False O <NA> O 0 [ -0.6730784947110267, 2.38562714803554...
4 train 0 4 [4, 6): 'ST' 9272 0 1 False O <NA> O 0 [ -0.5528018738380444, 2.76605626434104...
... ... ... ... ... ... ... ... ... ... ... ... ... ...
416536 test 230 314 [1386, 1393): 'brother' 1711 0 1 False O <NA> O 0 [ -1.7699805568359452, 1.740577378824614...
416537 test 230 315 [1393, 1394): ',' 117 0 1 False O <NA> O 0 [ -2.217042553207956, 1.188014284432918...
416538 test 230 316 [1395, 1400): 'Bobby' 5545 0 1 False B PER B-PER 4 [ 0.17265078748216925, 2.21287031816488...
416539 test 230 317 [1400, 1401): '.' 119 0 1 False O <NA> O 0 [ -2.022874581969901, 1.548629892512103...
416540 test 230 318 [0, 0): '' 102 0 1 True O <NA> O 0 [ -2.196537811154486, 2.14273333538158...

416541 rows × 13 columns

Checkpoint

With the TensorArray from Text Extensions for Pandas, the computed embeddings can be persisted as a tensor along with the rest of the DataFrame using standard Pandas input/output methods. Since computing the embeddings is costly and the result is deterministic, checkpointing the data to disk here can save a lot of time: we can return to the model training steps later without re-computing the BERT embeddings.

Save DataFrame with Embeddings Tensor

In [20]:
# Write the tokenized corpus with embeddings to a Feather file.
# We can't currently serialize span columns that cover multiple documents (see issue #73 https://github.com/CODAIT/text-extensions-for-pandas/issues/73),
# so drop span columns from the contents we write to the Feather file.
cols_to_drop = [c for c in corpus_df.columns if "span" in c]
corpus_df.drop(columns=cols_to_drop).to_feather("outputs/corpus.feather")

Load DataFrame with Previously Computed Embeddings

In [21]:
# Read the serialized embeddings back in so that you can rerun the model 
# training parts of this notebook (the cells from here onward) without 
# regenerating the embeddings.
corpus_df = pd.read_feather("outputs/corpus.feather")
corpus_df
Out[21]:
fold doc_num token_id input_id token_type_id attention_mask special_tokens_mask ent_iob ent_type token_class token_class_id embedding
0 train 0 0 101 0 1 True O <NA> O 0 [ -1.1311553691542877, 2.76648593421354...
1 train 0 1 118 0 1 False O <NA> O 0 [ -1.2222068473266146, 2.527425640627292...
2 train 0 2 141 0 1 False O <NA> O 0 [ -0.7579851055799667, 2.73181597486195...
3 train 0 3 9244 0 1 False O <NA> O 0 [ -0.6730784947110267, 2.38562714803554...
4 train 0 4 9272 0 1 False O <NA> O 0 [ -0.5528018738380444, 2.76605626434104...
... ... ... ... ... ... ... ... ... ... ... ... ...
416536 test 230 314 1711 0 1 False O <NA> O 0 [ -1.7699805568359452, 1.740577378824614...
416537 test 230 315 117 0 1 False O <NA> O 0 [ -2.217042553207956, 1.188014284432918...
416538 test 230 316 5545 0 1 False B PER B-PER 4 [ 0.17265078748216925, 2.21287031816488...
416539 test 230 317 119 0 1 False O <NA> O 0 [ -2.022874581969901, 1.548629892512103...
416540 test 230 318 102 0 1 True O <NA> O 0 [ -2.196537811154486, 2.14273333538158...

416541 rows × 12 columns

Training a model on the BERT embeddings

Now we will use the loaded BERT embeddings to train a multinomial logistic regression model that predicts each token's class from its embedding tensor.

In [22]:
# Extract the training set DataFrame.
train_df = corpus_df[corpus_df["fold"] == "train"]
train_df
Out[22]:
fold doc_num token_id input_id token_type_id attention_mask special_tokens_mask ent_iob ent_type token_class token_class_id embedding
0 train 0 0 101 0 1 True O <NA> O 0 [ -1.1311553691542877, 2.76648593421354...
1 train 0 1 118 0 1 False O <NA> O 0 [ -1.2222068473266146, 2.527425640627292...
2 train 0 2 141 0 1 False O <NA> O 0 [ -0.7579851055799667, 2.73181597486195...
3 train 0 3 9244 0 1 False O <NA> O 0 [ -0.6730784947110267, 2.38562714803554...
4 train 0 4 9272 0 1 False O <NA> O 0 [ -0.5528018738380444, 2.76605626434104...
... ... ... ... ... ... ... ... ... ... ... ... ...
281104 train 945 53 17057 0 1 False B ORG B-ORG 3 [ -1.3644899324386204, 0.1387769900935160...
281105 train 945 54 122 0 1 False O <NA> O 0 [ -1.4544672314078606, 1.4293731057006...
281106 train 945 55 4617 0 1 False B ORG B-ORG 3 [ -1.0318755443110903, 0.4064806114217064...
281107 train 945 56 123 0 1 False O <NA> O 0 [ -1.2597896004962865, 1.395942742925384...
281108 train 945 57 102 0 1 True O <NA> O 0 [ -1.6741569815858808, 1.901864765138888...

281109 rows × 12 columns

In [23]:
%%time

# Train a multinomial logistic regression model on the training set.
MULTI_CLASS = "multinomial"
    
# How many iterations to run the L-BFGS optimizer when fitting logistic
# regression models. 100 ==> Fast; 10000 ==> Full convergence
LBGFS_ITERATIONS = 10000
_REGULARIZATION_COEFF = 1e-1  # Smaller values ==> more regularization

base_pipeline = sklearn.pipeline.Pipeline([
    # Standard scaler. This only makes a difference for certain classes
    # of embeddings.
    #("scaler", sklearn.preprocessing.StandardScaler()),
    ("mlogreg", sklearn.linear_model.LogisticRegression(
        multi_class=MULTI_CLASS,
        verbose=1,
        max_iter=LBGFS_ITERATIONS,
        C=_REGULARIZATION_COEFF
    ))
])

X_train = train_df["embedding"].values
Y_train = train_df["token_class_id"]
base_model = base_pipeline.fit(X_train, Y_train)
base_model
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
CPU times: user 46min 3s, sys: 4min 23s, total: 50min 27s
Wall time: 6min 22s
[Parallel(n_jobs=1)]: Done   1 out of   1 | elapsed:  6.4min finished
Out[23]:
Pipeline(steps=[('mlogreg',
                 LogisticRegression(C=0.1, max_iter=10000,
                                    multi_class='multinomial', verbose=1))])

Make Predictions on Token Class from BERT Embeddings

Using our model, we can now predict the token class from the test set using the computed embeddings.

In [24]:
# Define a function that will let us make predictions on a fold of the corpus.
def predict_on_df(df: pd.DataFrame, id_to_class: Dict[int, str], predictor):
    """
    Run a trained model on a DataFrame of tokens with embeddings.

    :param df: DataFrame of tokens for a document, containing a TensorArray
     column called "embedding" with one embedding per token.
    :param id_to_class: Mapping from class ID to class name, as returned by
     :func:`text_extensions_for_pandas.make_iob_tag_categories`
    :param predictor: Python object with a `predict_proba` method that accepts
     a numpy array of embeddings.
    :returns: A copy of `df`, with the following additional columns:
     `predicted_id`, `predicted_class`, `predicted_iob`, `predicted_type`
     and `predicted_class_pr`.
    """
    result_df = df.copy()
    class_pr = tp.TensorArray(predictor.predict_proba(result_df["embedding"]))
    result_df["predicted_id"] = np.argmax(class_pr, axis=1)
    result_df["predicted_class"] = [id_to_class[i]
                                    for i in result_df["predicted_id"].values]
    iobs, types = tp.io.conll.decode_class_labels(result_df["predicted_class"].values)
    result_df["predicted_iob"] = iobs
    result_df["predicted_type"] = types
    result_df["predicted_class_pr"] = class_pr
    return result_df
In [25]:
# Make predictions on the test set.
test_results_df = predict_on_df(corpus_df[corpus_df["fold"] == "test"], int_to_label, base_model)
test_results_df.head()
Out[25]:
fold doc_num token_id input_id token_type_id attention_mask special_tokens_mask ent_iob ent_type token_class token_class_id embedding predicted_id predicted_class predicted_iob predicted_type predicted_class_pr
351001 test 0 0 101 0 1 True O <NA> O 0 [ 0.07419002371155237, 2.81491930509171... 0 O O None [ 0.9997307514975134, 5.294607015948672e-0...
351002 test 0 1 118 0 1 False O <NA> O 0 [ -0.7553124891222318, 2.712434591871051... 0 O O None [ 0.9980035154999108, 1.533050022629027e-0...
351003 test 0 2 141 0 1 False O <NA> O 0 [ 0.11465290957193339, 3.11397875179331... 0 O O None [ 0.9969301297651303, 0.000670705720761996...
351004 test 0 3 9244 0 1 False O <NA> O 0 [ -0.14387838512527962, 2.9257680850885... 0 O O None [ 0.9990384089044105, 8.475109949412816e-0...
351005 test 0 4 9272 0 1 False O <NA> O 0 [ 0.08375985078305932, 3.067161861783276... 0 O O None [ 0.9996995206821001, 6.044135027078061e-0...
In [26]:
# Take a slice to show a region with more entities.
test_results_df.iloc[40:50]
Out[26]:
fold doc_num token_id input_id token_type_id attention_mask special_tokens_mask ent_iob ent_type token_class token_class_id embedding predicted_id predicted_class predicted_iob predicted_type predicted_class_pr
351041 test 0 40 3309 0 1 False I PER I-PER 8 [ 0.06028430363940268, 2.833449942439... 5 I-LOC I LOC [ 0.05335241986368567, 0.01558548709678581...
351042 test 0 41 1306 0 1 False I PER I-PER 8 [ 0.011815326065059528, 2.4804891126405... 5 I-LOC I LOC [ 0.26071159739023836, 0.0810894424222212...
351043 test 0 42 2001 0 1 False I PER I-PER 8 [ 0.1896747233694964, 2.0841390182245... 5 I-LOC I LOC [ 0.0008087046569282995, 0.01409121178547858...
351044 test 0 43 1181 0 1 False I PER I-PER 8 [ -0.08919079934068028, 2.673042893674... 5 I-LOC I LOC [ 0.01641422864584388, 0.01922057245520043...
351045 test 0 44 2293 0 1 False I PER I-PER 8 [ -0.5675588015558329, 2.1915603140880... 5 I-LOC I LOC [ 0.06287713949432004, 0.05853431405140322...
351046 test 0 45 18589 0 1 False B LOC B-LOC 1 [ -0.025756110031628202, 2.4176568055402... 1 B-LOC B LOC [ 0.002164163302129627, 0.533655982403914...
351047 test 0 46 118 0 1 False I LOC I-LOC 5 [ -0.8143908150954474, 2.2432229840625... 5 I-LOC I LOC [ 0.40170332018342714, 0.01464842540432879...
351048 test 0 47 19016 0 1 False I LOC I-LOC 5 [ -0.7613811626814251, 2.1040792203968... 5 I-LOC I LOC [ 0.04547783920785417, 0.370807027150105...
351049 test 0 48 2249 0 1 False I LOC I-LOC 5 [ -0.5023455357742641, 2.467216928215... 5 I-LOC I LOC [ 0.0014782178334539389, 0.01311886606422394...
351050 test 0 49 117 0 1 False O <NA> O 0 [ -1.0898376005782766, 2.4839734026886... 0 O O None [ 0.9997009893806189, 3.928979951597114e-0...

Compute Precision and Recall

With our model predictions on the test set, we can now compute precision and recall. To do this, we will use the following steps:

  1. Split up test set predictions by document, so we can work on the document level.
  2. Join the test predictions with token information into one DataFrame per document.
  3. Convert each DataFrame from IOB2 format to span, entity type pairs as done before.
  4. Compute accuracy for each document as a DataFrame.
  5. Aggregate per-document accuracy to get overall precision/recall.
In [27]:
# Split model outputs for an entire fold back into documents and add
# token information.

# Get unique documents per fold.
fold_and_doc = test_results_df[["fold", "doc_num"]] \
        .drop_duplicates() \
        .to_records(index=False)

# Index by fold, doc and token id, then make sure sorted.
indexed_df = test_results_df \
        .set_index(["fold", "doc_num", "token_id"], verify_integrity=True) \
        .sort_index()

# Join predictions with token information, for each document.
test_results_by_doc = {}
for collection, doc_num in fold_and_doc:
    doc_slice = indexed_df.loc[collection, doc_num].reset_index()
    doc_toks = bert_toks_by_fold[collection][doc_num][
        ["token_id", "span", "ent_iob", "ent_type"]
    ].rename(columns={"id": "token_id"})
    joined_df = doc_toks.copy().merge(
        doc_slice[["token_id", "predicted_iob", "predicted_type"]])
    test_results_by_doc[(collection, doc_num)] = joined_df
    
# Test results are now in one DataFrame per document.
test_results_by_doc[("test", 0)].iloc[40:60]
Out[27]:
token_id span ent_iob ent_type predicted_iob predicted_type
40 40 [68, 70): 'di' I PER I LOC
41 41 [70, 71): 'm' I PER I LOC
42 42 [72, 74): 'La' I PER I LOC
43 43 [74, 75): 'd' I PER I LOC
44 44 [75, 77): 'ki' I PER I LOC
45 45 [78, 80): 'AL' B LOC B LOC
46 46 [80, 81): '-' I LOC I LOC
47 47 [81, 83): 'AI' I LOC I LOC
48 48 [83, 84): 'N' I LOC I LOC
49 49 [84, 85): ',' O <NA> O None
50 50 [86, 92): 'United' B LOC B LOC
51 51 [93, 97): 'Arab' I LOC I LOC
52 52 [98, 106): 'Emirates' I LOC I LOC
53 53 [107, 111): '1996' O <NA> O None
54 54 [111, 112): '-' O <NA> O None
55 55 [112, 114): '12' O <NA> O None
56 56 [114, 115): '-' O <NA> O None
57 57 [115, 117): '06' O <NA> O None
58 58 [118, 123): 'Japan' B LOC B LOC
59 59 [124, 129): 'began' O <NA> O None
In [28]:
# Convert IOB2 format to <span, entity type> pairs with `tp.io.conll.iob_to_spans()`.
test_actual_spans = {k: tp.io.conll.iob_to_spans(v) for k, v in test_results_by_doc.items()}
test_model_spans = {
    k: tp.io.conll.iob_to_spans(v, iob_col_name="predicted_iob",
                                entity_type_col_name="predicted_type")
         .rename(columns={"predicted_type": "ent_type"})
    for k, v in test_results_by_doc.items()}

test_model_spans[("test", 0)].head()
Out[28]:
span ent_type
0 [19, 24): 'JAPAN' PER
1 [29, 34): 'LUCKY' LOC
2 [40, 45): 'CHINA' LOC
3 [66, 77): 'Nadim Ladki' PER
4 [78, 84): 'AL-AIN' LOC
In [29]:
# Compute per-document statistics into a single DataFrame.
test_stats_by_doc = tp.io.conll.compute_accuracy_by_document(test_actual_spans, test_model_spans)
test_stats_by_doc
Out[29]:
fold doc_num num_true_positives num_extracted num_entities precision recall F1
0 test 0 42 46 45 0.913043 0.933333 0.923077
1 test 1 41 42 44 0.976190 0.931818 0.953488
2 test 2 52 54 54 0.962963 0.962963 0.962963
3 test 3 41 45 44 0.911111 0.931818 0.921348
4 test 4 17 19 19 0.894737 0.894737 0.894737
... ... ... ... ... ... ... ... ...
226 test 226 7 7 7 1.000000 1.000000 1.000000
227 test 227 18 19 21 0.947368 0.857143 0.900000
228 test 228 24 25 27 0.960000 0.888889 0.923077
229 test 229 26 27 27 0.962963 0.962963 0.962963
230 test 230 24 27 28 0.888889 0.857143 0.872727

231 rows × 8 columns

In [30]:
# Collection-wide precision and recall can be computed by aggregating
# our DataFrame.
tp.io.conll.compute_global_accuracy(test_stats_by_doc)
Out[30]:
{'num_true_positives': 4749,
 'num_entities': 5648,
 'num_extracted': 5591,
 'precision': 0.8494008227508496,
 'recall': 0.8408286118980169,
 'F1': 0.8450929798024736}
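
As a quick sanity check, the F1 value in this dictionary is just the harmonic mean of the precision and recall next to it:

precision, recall = 0.8494008227508496, 0.8408286118980169
print(2 * precision * recall / (precision + recall))  # ~0.8450929798024736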

Adjusting the BERT Model Output

The above results aren't bad for a first shot, but taking a look at some of the predictions shows that a single corpus token is sometimes split across multiple predicted entities. This happens because the BERT tokenizer uses WordPiece to produce subword tokens; see https://huggingface.co/transformers/tokenizer_summary.html and https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf for more information.

This causes a problem when computing precision/recall, because we compare exact spans: if an entity is split, it will be counted as a false negative and possibly one or more false positives. Luckily, we can fix this up with Text Extensions for Pandas.
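
To see the subword behavior in isolation, you can ask the tokenizer directly (a quick sketch; the exact word pieces depend on the model's vocabulary):

# WordPiece splits out-of-vocabulary words into subword tokens, with
# continuation pieces prefixed by "##".
tokenizer.tokenize("Widnes")  # e.g. ['W', '##idnes'], matching the split seen below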

Let's drill down to see an example of the issue and how to correct it.

In [31]:
# Every once in a while, the BERT model will split a token in the original data
# set into multiple entities. For example, look at document 202 of the test set:
test_model_spans[("test", 202)].head(10)
Out[31]:
span ent_type
0 [11, 22): 'RUGBY UNION' ORG
1 [24, 31): 'BRITISH' MISC
2 [41, 47): 'LONDON' LOC
3 [70, 77): 'British' MISC
4 [111, 125): 'Pilkington Cup' MISC
5 [139, 146): 'Reading' ORG
6 [150, 151): 'W' ORG
7 [151, 156): 'idnes' ORG
8 [159, 166): 'English' MISC
9 [180, 184): 'Bath' ORG

Notice [150, 151): 'W' and [151, 156): 'idnes'. These two outputs cover parts of the same original token, but the model has split them into separate entities.

In [32]:
# We can use spanner algebra in `tp.spanner.overlap_join()`
# to fix up these outputs.
spans_df = test_model_spans[("test", 202)]
toks_df = test_raw[202]

# First, find which tokens the spans overlap with:
overlaps_df = (
    tp.spanner.overlap_join(spans_df["span"], toks_df["span"],
                            "span", "corpus_token")
        .merge(spans_df)
)
overlaps_df.head(10)
Out[32]:
span corpus_token ent_type
0 [11, 22): 'RUGBY UNION' [11, 16): 'RUGBY' ORG
1 [11, 22): 'RUGBY UNION' [17, 22): 'UNION' ORG
2 [24, 31): 'BRITISH' [24, 31): 'BRITISH' MISC
3 [41, 47): 'LONDON' [41, 47): 'LONDON' LOC
4 [70, 77): 'British' [70, 77): 'British' MISC
5 [111, 125): 'Pilkington Cup' [111, 121): 'Pilkington' MISC
6 [111, 125): 'Pilkington Cup' [122, 125): 'Cup' MISC
7 [139, 146): 'Reading' [139, 146): 'Reading' ORG
8 [150, 151): 'W' [150, 156): 'Widnes' ORG
9 [151, 156): 'idnes' [150, 156): 'Widnes' ORG
In [33]:
# Next, compute the minimum span that covers all the corpus tokens
# that overlap with each entity span.
agg_df = (
    overlaps_df
    .groupby("span")
    .aggregate({"corpus_token": "sum", "ent_type": "first"})
    .reset_index()
)
agg_df.head(10)
Out[33]:
span corpus_token ent_type
0 [11, 22): 'RUGBY UNION' [11, 22): 'RUGBY UNION' ORG
1 [24, 31): 'BRITISH' [24, 31): 'BRITISH' MISC
2 [41, 47): 'LONDON' [41, 47): 'LONDON' LOC
3 [70, 77): 'British' [70, 77): 'British' MISC
4 [111, 125): 'Pilkington Cup' [111, 125): 'Pilkington Cup' MISC
5 [139, 146): 'Reading' [139, 146): 'Reading' ORG
6 [150, 151): 'W' [150, 156): 'Widnes' ORG
7 [151, 156): 'idnes' [150, 156): 'Widnes' ORG
8 [159, 166): 'English' [159, 166): 'English' MISC
9 [180, 184): 'Bath' [180, 184): 'Bath' ORG
In [34]:
# Finally, take unique values and convert character-based spans to token
# spans in the corpus tokenization (since the new offsets might not match a
# BERT tokenizer token boundary).
cons_df = (
    tp.spanner.consolidate(agg_df, "corpus_token")[["corpus_token", "ent_type"]]
        .rename(columns={"corpus_token": "span"})
)
cons_df["span"] = tp.TokenSpanArray.align_to_tokens(toks_df["span"],
                                                    cons_df["span"])
cons_df.head(10)
Out[34]:
span ent_type
0 [11, 22): 'RUGBY UNION' ORG
1 [24, 31): 'BRITISH' MISC
2 [41, 47): 'LONDON' LOC
3 [70, 77): 'British' MISC
4 [111, 125): 'Pilkington Cup' MISC
5 [139, 146): 'Reading' ORG
6 [150, 156): 'Widnes' ORG
8 [159, 166): 'English' MISC
9 [180, 184): 'Bath' ORG
10 [188, 198): 'Harlequins' ORG
In [35]:
# Text Extensions for Pandas contains a single function that repeats the actions of the 
# previous 3 cells.
tp.io.bert.align_bert_tokens_to_corpus_tokens(test_model_spans[("test", 202)], test_raw[202]).head(10)
Out[35]:
span ent_type
0 [11, 22): 'RUGBY UNION' ORG
1 [24, 31): 'BRITISH' MISC
2 [41, 47): 'LONDON' LOC
3 [70, 77): 'British' MISC
4 [111, 125): 'Pilkington Cup' MISC
5 [139, 146): 'Reading' ORG
6 [150, 156): 'Widnes' ORG
8 [159, 166): 'English' MISC
9 [180, 184): 'Bath' ORG
10 [188, 198): 'Harlequins' ORG
In [36]:
# Run all of our DataFrames through `align_bert_tokens_to_corpus_tokens()`.
keys = list(test_model_spans.keys())
new_values = tp.jupyter.run_with_progress_bar(
    len(keys), 
    lambda i: tp.io.bert.align_bert_tokens_to_corpus_tokens(test_model_spans[keys[i]], test_raw[keys[i][1]]))
test_model_spans = {k: v for k, v in zip(keys, new_values)}
test_model_spans[("test", 202)].head(10)
Out[36]:
span ent_type
0 [11, 22): 'RUGBY UNION' ORG
1 [24, 31): 'BRITISH' MISC
2 [41, 47): 'LONDON' LOC
3 [70, 77): 'British' MISC
4 [111, 125): 'Pilkington Cup' MISC
5 [139, 146): 'Reading' ORG
6 [150, 156): 'Widnes' ORG
8 [159, 166): 'English' MISC
9 [180, 184): 'Bath' ORG
10 [188, 198): 'Harlequins' ORG
In [37]:
# Compute per-document statistics into a single DataFrame.
test_stats_by_doc = tp.io.conll.compute_accuracy_by_document(test_actual_spans, test_model_spans)
test_stats_by_doc
Out[37]:
fold doc_num num_true_positives num_extracted num_entities precision recall F1
0 test 0 43 46 45 0.934783 0.955556 0.945055
1 test 1 41 42 44 0.976190 0.931818 0.953488
2 test 2 52 54 54 0.962963 0.962963 0.962963
3 test 3 42 44 44 0.954545 0.954545 0.954545
4 test 4 17 19 19 0.894737 0.894737 0.894737
... ... ... ... ... ... ... ... ...
226 test 226 7 7 7 1.000000 1.000000 1.000000
227 test 227 18 19 21 0.947368 0.857143 0.900000
228 test 228 24 25 27 0.960000 0.888889 0.923077
229 test 229 26 27 27 0.962963 0.962963 0.962963
230 test 230 25 27 28 0.925926 0.892857 0.909091

231 rows × 8 columns

In [38]:
# Collection-wide precision and recall can be computed by aggregating
# our DataFrame.
tp.io.conll.compute_global_accuracy(test_stats_by_doc)
Out[38]:
{'num_true_positives': 4893,
 'num_entities': 5648,
 'num_extracted': 5520,
 'precision': 0.8864130434782609,
 'recall': 0.8663243626062322,
 'F1': 0.8762535816618912}

These results are a bit better than before. While the F1 score is not high by today's standards, it is decent for such a simple model. More importantly, we showed that it is fairly easy to create a model for named entity recognition and analyze its output by leveraging the functionality of Pandas DataFrames along with the SpanArray and TensorArray types from Text Extensions for Pandas and its integration with BERT from Huggingface Transformers.