Licensed under the MIT License.
Tip: If you want to run through the notebook quickly, you can set the QUICK_RUN flag in the cell below to True. This will run the notebook on a small subset of the data and with a smaller number of epochs.
If you run into a CUDA out-of-memory error, or the Jupyter kernel dies constantly, try reducing BATCH_SIZE and MAX_LEN, but note that model performance will be compromised.
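As a rough guide for picking those values, you can check how much memory your GPU has before training. The snippet below is a minimal sketch using standard PyTorch calls; the printout is informational only, it does not choose a batch size for you.
import torch
# Optional sanity check: report total GPU memory before choosing
# BATCH_SIZE / MAX_LEN. Purely informational.
if torch.cuda.is_available():
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024 ** 3
    print("GPU memory: {:.1f} GB".format(total_gb))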
## Set QUICK_RUN = True to run the notebook on a small subset of data and a smaller number of epochs.
QUICK_RUN = True
import sys
sys.path.append("../../")
import os
import json
import pandas as pd
import numpy as np
import scrapbook as sb
from sklearn.metrics import classification_report, accuracy_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
import torch
import torch.nn as nn
from interpret_text.experimental.common.utils_bert import Language, Tokenizer, BERTSequenceClassifier
from interpret_text.experimental.common.timer import Timer
from notebooks.test_utils.utils_mnli import load_mnli_pandas_df
from interpret_text.experimental.unified_information import UnifiedInformationExplainer
In this notebook, we fine-tune and evaluate a pretrained BERT model on a subset of the MultiNLI dataset.
We use a sequence classifier that wraps Hugging Face's PyTorch implementation of Google's BERT.
Here we set some parameters that we use for our modeling task.
TRAIN_DATA_FRACTION = 1
TEST_DATA_FRACTION = 1
NUM_EPOCHS = 1
if QUICK_RUN:
    TRAIN_DATA_FRACTION = 0.001
    TEST_DATA_FRACTION = 0.001
    NUM_EPOCHS = 1
if torch.cuda.is_available():
    BATCH_SIZE = 32  # a GPU can handle a larger batch than the CPU
else:
    BATCH_SIZE = 8
DATA_FOLDER = "./temp"
BERT_CACHE_DIR = "./temp"
LANGUAGE = Language.ENGLISH
TO_LOWER = True
MAX_LEN = 150
BATCH_SIZE_PRED = 512
TRAIN_SIZE = 0.6
LABEL_COL = "genre"
TEXT_COL = "sentence1"
We start by loading a subset of the data. The following function also downloads and extracts the files, if they don't exist in the data folder.
The MultiNLI dataset is mainly used for natural language inference (NLI) tasks, where the inputs are sentence pairs and the labels are entailment indicators. The sentence pairs are also classified into genres that allow for more coverage and better evaluation of NLI models.
For our classification task, we use the first sentence only as the text input, and the corresponding genre as the label. We select the examples corresponding to one of the entailment labels (neutral in this case) to avoid duplicate rows, as the sentences are not unique, whereas the sentence pairs are.
df = load_mnli_pandas_df(DATA_FOLDER, "train")
df = df[df["gold_label"]=="neutral"] # get unique sentences
Let's examine the first few rows of the data:
df[[LABEL_COL, TEXT_COL]].head()
These are the five genres in the dataset:
df[LABEL_COL].value_counts()
We start by splitting the data for training and testing, and then we encode the class labels:
# split
df_train, df_test = train_test_split(df, train_size=TRAIN_SIZE, random_state=0)
df_train = df_train.reset_index(drop=True)
df_test = df_test.reset_index(drop=True)
if QUICK_RUN:
    df_train = df_train.sample(frac=TRAIN_DATA_FRACTION).reset_index(drop=True)
    df_test = df_test.sample(frac=TEST_DATA_FRACTION).reset_index(drop=True)
# encode labels
label_encoder = LabelEncoder()
labels_train = label_encoder.fit_transform(df_train[LABEL_COL])
labels_test = label_encoder.transform(df_test[LABEL_COL])
num_labels = len(np.unique(labels_train))
print("Number of unique labels: {}".format(num_labels))
print("Number of training examples: {}".format(df_train.shape[0]))
print("Number of testing examples: {}".format(df_test.shape[0]))
Before we start training, we tokenize the text documents and convert them to lists of tokens. The following steps instantiate a BERT tokenizer
given the language, and tokenize the text of the training and testing sets.
tokenizer = Tokenizer(LANGUAGE, to_lower=TO_LOWER, cache_dir=BERT_CACHE_DIR)
tokens_train = tokenizer.tokenize(list(df_train[TEXT_COL]))
tokens_test = tokenizer.tokenize(list(df_test[TEXT_COL]))
In addition, we perform the following preprocessing steps in the cell below:

- convert the tokens into token indices from the BERT tokenizer's vocabulary,
- add the special [CLS] and [SEP] tokens that mark the beginning and end of a sequence,
- pad or truncate each token list to MAX_LEN (150 here), and
- generate input masks that distinguish real tokens from padding.

See the original implementation for more information on BERT's input format.
tokens_train, mask_train, _ = tokenizer.preprocess_classification_tokens(tokens_train, MAX_LEN)
tokens_test, mask_test, _ = tokenizer.preprocess_classification_tokens(tokens_test, MAX_LEN)
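To see what the preprocessing produced, you can peek at the first example. This optional check assumes the preprocessed outputs are plain lists of integer ids and mask values, as the tuple unpacking above suggests.
# Inspect the first preprocessed training example: padded token ids and
# the mask marking which positions hold real tokens.
print("token ids:", tokens_train[0][:10])
print("mask:     ", mask_train[0][:10])
print("sequence length:", len(tokens_train[0]))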
Next, we use a sequence classifier that loads a pre-trained BERT model, given the language and number of labels.
classifier = BERTSequenceClassifier(language=LANGUAGE, num_labels=num_labels, cache_dir=BERT_CACHE_DIR)
We train the classifier using the training set. This involves fine-tuning the BERT Transformer and learning a linear classification layer on top of that:
with Timer() as t:
    classifier.fit(token_ids=tokens_train,
                   input_mask=mask_train,
                   labels=labels_train,
                   num_epochs=NUM_EPOCHS,
                   batch_size=BATCH_SIZE,
                   verbose=True)
print("[Training time: {:.3f} hrs]".format(t.interval / 3600))
We score the test set using the trained classifier:
preds = classifier.predict(token_ids=tokens_test,
input_mask=mask_test,
batch_size=BATCH_SIZE_PRED)
Finally, we compute the overall accuracy, precision, recall, and F1 metrics on the test set. We also look at the metrics for each of the genres in the dataset.
report = classification_report(labels_test, preds, target_names=label_encoder.classes_, output_dict=True)
accuracy = accuracy_score(labels_test, preds)
print("accuracy: {}".format(accuracy))
print(json.dumps(report, indent=4, sort_keys=True))
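A per-genre confusion matrix can also reveal which genres the model mixes up, which the aggregate report can hide. The sketch below uses scikit-learn and pandas (both already imported) and relies on LabelEncoder ids following the sorted order of classes_.
from sklearn.metrics import confusion_matrix
# Rows are true genres, columns are predicted genres.
cm = confusion_matrix(labels_test, preds)
print(pd.DataFrame(cm, index=label_encoder.classes_, columns=label_encoder.classes_))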
# record metrics for automated notebook testing with scrapbook
sb.glue("accuracy", accuracy)
sb.glue("precision", report["macro avg"]["precision"])
sb.glue("recall", report["macro avg"]["recall"])
sb.glue("f1", report["macro avg"]["f1-score"])
Before explaining the model, we move it to the available device, freeze its parameters, and put it in evaluation mode:
device = torch.device("cpu" if not torch.cuda.is_available() else "cuda")
classifier.model.to(device)
for param in classifier.model.parameters():
    param.requires_grad = False
classifier.model.eval()
We then create a UnifiedInformationExplainer for the fine-tuned model, passing the training sentences, the device, the transformer layer to explain, and the class names:
interpreter_unified = UnifiedInformationExplainer(model=classifier.model,
                                                  train_dataset=list(df_train[TEXT_COL]),
                                                  device=device,
                                                  target_layer=14,
                                                  classes=label_encoder.classes_)
Next, we pick an example from the test set and compare its true and predicted labels:
idx = 7
text = df_test[TEXT_COL][idx]
true_label = df_test[LABEL_COL][idx]
predicted_label = label_encoder.inverse_transform([preds[idx]])
print(text, true_label, predicted_label)
We generate a local explanation of the model's prediction for this example, passing the true label:
explanation_unified = interpreter_unified.explain_local(text, true_label)
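If you want to inspect the raw importance scores outside the dashboard, the explanation object typically exposes features and local_importance_values. These attribute names follow interpret-community conventions and are an assumption here; check your installed version.
# Assumed attributes (interpret-community style): features holds the tokens,
# local_importance_values their importance scores for this prediction.
for word, score in zip(explanation_unified.features, explanation_unified.local_importance_values):
    print("{:>15}  {: .4f}".format(word, score))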
Finally, we visualize the explanation with the ExplanationDashboard widget:
from interpret_text.experimental.widget import ExplanationDashboard
ExplanationDashboard(explanation_unified)