This notebook shows how to apply quantization aware training, using the Intel Neural Compressor (INC) library, to any task of the GLUE benchmark. This is made possible thanks to 🤗 Optimum Intel, an extension of 🤗 Transformers providing a set of performance optimization tools to accelerate end-to-end pipelines on a variety of Intel processors with maximum efficiency.
If you're opening this notebook on Colab, you will probably need to install 🤗 Transformers, 🤗 Datasets, 🤗 Evaluate and 🤗 Optimum. Uncomment the following cell and run it.
#! pip install datasets transformers evaluate optimum[neural-compressor]
Make sure your version of 🤗 Optimum Intel is at least 1.6.0, since the functionality was introduced in that version:
from optimum.intel.version import __version__
print(__version__)
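If you want the notebook to fail fast on an older install, you can turn this into an explicit check. Below is a minimal sketch using the packaging library (a dependency of pip and 🤗 Transformers):

from packaging import version

# Abort early if the installed optimum-intel is older than the required 1.6.0
assert version.parse(__version__) >= version.parse("1.6.0"), "Please upgrade optimum[neural-compressor] to >= 1.6.0"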
The GLUE Benchmark is a group of nine classification tasks on sentences or pairs of sentences, which are:
- CoLA (Corpus of Linguistic Acceptability): determine if a sentence is grammatically correct or not.
- MNLI (Multi-Genre Natural Language Inference): determine if a sentence entails, contradicts or is unrelated to a given hypothesis (the mismatched variant uses out-of-domain validation and test data).
- MRPC (Microsoft Research Paraphrase Corpus): determine if two sentences are paraphrases of one another or not.
- QNLI (Question-answering Natural Language Inference): determine if the answer to a question is contained in the second sentence or not.
- QQP (Quora Question Pairs): determine if two questions are semantically equivalent or not.
- RTE (Recognizing Textual Entailment): determine if a sentence entails a given hypothesis or not.
- SST-2 (Stanford Sentiment Treebank): determine if a sentence has a positive or negative sentiment.
- STS-B (Semantic Textual Similarity Benchmark): determine the similarity of two sentences with a score from 1 to 5.
- WNLI (Winograd Natural Language Inference): determine if a sentence with an anonymous pronoun is entailed by a sentence with this pronoun replaced.
We will see how to apply quantization aware training to a DistilBERT model fine-tuned on the SST-2 task:
GLUE_TASKS = ["cola", "mnli", "mnli-mm", "mrpc", "qnli", "qqp", "rte", "sst2", "stsb", "wnli"]
task = "sst2"
model_checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
batch_size = 16
max_train_samples = 200
max_eval_samples = 200
We will use the 🤗 Datasets and 🤗 Evaluate libraries to download the data and get the metric we need to use for evaluation. This can be easily done with the load_dataset and evaluate.load functions. Apart from mnli-mm being a special code, we can directly pass our task name to those functions. load_dataset will cache the dataset to avoid downloading it again the next time you run this cell.
import evaluate
from datasets import load_dataset
actual_task = "mnli" if task == "mnli-mm" else task
dataset = load_dataset("glue", actual_task)
metric = evaluate.load("glue", actual_task)
Note that evaluate.load has loaded the proper metric associated with your task, which is:
- for CoLA: Matthews Correlation Coefficient
- for MNLI (matched or mismatched): Accuracy
- for MRPC: Accuracy and F1 score
- for QNLI: Accuracy
- for QQP: Accuracy and F1 score
- for RTE: Accuracy
- for SST-2: Accuracy
- for STS-B: Pearson Correlation Coefficient and Spearman's Rank Correlation Coefficient
- for WNLI: Accuracy

so the metric object only computes the one(s) needed for your task.
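To get a feel for how the metric behaves, you can call its compute method on random predictions (dummy values, purely illustrative; for SST-2 this returns the accuracy):

import numpy as np

# Fake binary predictions and labels, just to inspect the metric's output format
fake_preds = np.random.randint(0, 2, size=(64,))
fake_labels = np.random.randint(0, 2, size=(64,))
metric.compute(predictions=fake_preds, references=fake_labels)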
We also quickly upload some telemetry - this tells us which examples and software versions are getting used so we know where to prioritize our maintenance efforts. We don't collect (or care about) any personally identifiable information, but if you'd prefer not to be counted, feel free to skip this step or delete this cell entirely.
from transformers.utils import send_example_telemetry
send_example_telemetry("text_classification_quantization_inc_notebook", framework="none")
Before we can feed those texts to our model, we need to preprocess them. This is done by a 🤗 Transformers Tokenizer, which will (as the name indicates) tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary) and put them in a format the model expects, as well as generate the other inputs the model requires.
To do all of this, we instantiate our tokenizer with the AutoTokenizer.from_pretrained method, which will ensure that:
- we get a tokenizer that corresponds to the model architecture we want to use,
- we download the vocabulary used when pretraining this specific checkpoint.
That vocabulary will be cached, so it's not downloaded again the next time we run the cell.
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
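You can try this tokenizer directly on one sentence or a pair of sentences, for instance:

tokenizer("Hello, this one sentence!", "And this sentence goes with it.")

It returns a dictionary containing the input_ids and attention_mask the model expects (the exact keys depend on the model).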
To preprocess our dataset, we will thus need the names of the columns containing the sentence(s). The following dictionary keeps track of the correspondence between tasks and column names:
task_to_keys = {
    "cola": ("sentence", None),
    "mnli": ("premise", "hypothesis"),
    "mnli-mm": ("premise", "hypothesis"),
    "mrpc": ("sentence1", "sentence2"),
    "qnli": ("question", "sentence"),
    "qqp": ("question1", "question2"),
    "rte": ("sentence1", "sentence2"),
    "sst2": ("sentence", None),
    "stsb": ("sentence1", "sentence2"),
    "wnli": ("sentence1", "sentence2"),
}
We can double-check that it works on our current dataset:
sentence1_key, sentence2_key = task_to_keys[task]
if sentence2_key is None:
    print(f"Sentence: {dataset['train'][0][sentence1_key]}")
else:
    print(f"Sentence 1: {dataset['train'][0][sentence1_key]}")
    print(f"Sentence 2: {dataset['train'][0][sentence2_key]}")
We can then write the function that will preprocess our samples. We just feed them to the tokenizer with the argument truncation=True. This will ensure that an input longer than what the selected model can handle will be truncated to the maximum length accepted by the model.
max_seq_length = min(128, tokenizer.model_max_length)
padding = "max_length"
def preprocess_function(examples):
    args = (
        (examples[sentence1_key],) if sentence2_key is None else (examples[sentence1_key], examples[sentence2_key])
    )
    return tokenizer(*args, padding=padding, max_length=max_seq_length, truncation=True)
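This function works with one or several examples; you can check its output on the first few training samples, for instance:

# Tokenize the first five training examples to inspect the produced features
preprocess_function(dataset["train"][:5])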
To apply this function to all the sentences (or pairs of sentences) in our dataset, we just use the map method of the dataset object we created earlier. This will apply the function to all the elements of all the splits in dataset, so our training, validation and testing data will be preprocessed in one single command.
encoded_dataset = dataset.map(preprocess_function, batched=True)
Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library is normally smart enough to detect when the function you pass to map has changed (and thus that the cached data should not be reused). For instance, it will properly detect if you change the task in the first cell and rerun the notebook. 🤗 Datasets warns you when it uses cached files; you can pass load_from_cache_file=False in the call to map to not use the cached files and force the preprocessing to be applied again.
Note that we passed batched=True to encode the texts in batches. This leverages the full benefit of the fast tokenizer we loaded earlier, which will use multi-threading to process the texts of a batch concurrently.
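You can verify that tokenization added the expected columns alongside the original ones, for instance:

# The tokenized features (e.g. input_ids, attention_mask) now appear next to the raw columns
encoded_dataset["train"].column_names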
Quantization aware training simulates the effects of quantization during training in order to alleviate its impact on the model's performance.
Now that our data is ready, we can download the pretrained model and fine-tune it. Since all our tasks are about sentence classification, we use the AutoModelForSequenceClassification class. Like with the tokenizer, the from_pretrained method will download and cache the model for us. The only thing we have to specify is the number of labels for our problem (which is always 2, except for STS-B, which is a regression problem, and MNLI, where we have 3 labels):
from transformers import AutoModelForSequenceClassification, TrainingArguments, default_data_collator
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)
The INCTrainer class provides an API to train your model while combining different compression techniques such as knowledge distillation, pruning and quantization. The INCTrainer is very similar to the 🤗 Transformers Trainer and can replace it with minimal changes to your code.
In addition to the usual Trainer arguments, we will need to define three more things to instantiate an INCTrainer. First, we need to create the quantization configuration describing the quantization process we wish to apply. Quantization will be applied on the embeddings and the linear layers, as well as on their corresponding input activations.
from neural_compressor import QuantizationAwareTrainingConfig
quantization_config = QuantizationAwareTrainingConfig()
TrainingArguments is a class that contains all the attributes to customize the training. It requires one folder name, which will be used to save the checkpoints of the model; all other arguments are optional:
metric_name = "pearson" if task == "stsb" else "matthews_correlation" if task == "cola" else "accuracy"
save_directory = f"{model_checkpoint.split('/')[-1]}-finetuned-{task}"
args = TrainingArguments(
    output_dir=save_directory,
    do_train=True,
    do_eval=False,
    eval_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=1,
    weight_decay=0.01,
    load_best_model_at_end=True,
    metric_for_best_model=metric_name,
)
The last thing to define for our INCTrainer is how to compute the metrics from the predictions. We need to define a function for this, which will just use the metric we loaded earlier; the only preprocessing we have to do is to take the argmax of our predicted logits (or just squeeze the last axis in the case of STS-B):
import numpy as np
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    if task != "stsb":
        predictions = np.argmax(predictions, axis=1)
    else:
        predictions = predictions[:, 0]
    return metric.compute(predictions=predictions, references=labels)
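As a quick sanity check, you can call the function on made-up logits (dummy values, purely illustrative) and verify it returns the metric expected for the task:

# Two fabricated logit rows and their labels; for sst2 this should give an accuracy of 1.0
dummy_logits = np.array([[0.1, 0.9], [0.8, 0.2]])
dummy_labels = np.array([1, 0])
compute_metrics((dummy_logits, dummy_labels))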
Then we just need to pass all of this along with our datasets to the INCTrainer:
import copy
from optimum.intel.neural_compressor import INCTrainer
validation_key = "validation_mismatched" if task == "mnli-mm" else "validation_matched" if task == "mnli" else "validation"
trainer = INCTrainer(
    model=model,
    quantization_config=quantization_config,
    task="sequence-classification",  # optional: only needed to export the model to the ONNX format
    args=args,
    train_dataset=encoded_dataset["train"].select(range(max_train_samples)),
    eval_dataset=encoded_dataset[validation_key].select(range(max_eval_samples)),
    compute_metrics=compute_metrics,
    tokenizer=tokenizer,
    data_collator=default_data_collator,
)
# Keep a copy of the full-precision model to compare its size with the quantized one later
fp_model = copy.deepcopy(model)
We can now fine-tune our model by just calling the train method:
trainer.train()
We can run evaluation by just calling the evaluate method:
trainer.evaluate()
import os
import torch
def get_model_size(model):
    # Serialize the model to a temporary file and measure its size in megabytes
    torch.save(model.state_dict(), "tmp.pt")
    model_size = os.path.getsize("tmp.pt") / (1024 * 1024)
    os.remove("tmp.pt")
    return round(model_size, 2)
fp_model_size = get_model_size(fp_model)
q_model_size = get_model_size(trainer.model)
print(f"The full-precision model size is {round(fp_model_size)} MB while the quantized model one is {round(q_model_size)} MB.")
print(f"The quantized model is {round(fp_model_size / q_model_size, 2)}x smaller than the full-precision one.")
To save the resulting quantized model, you can use the save_model method. By setting save_onnx_model to True, the model will additionally be exported to the ONNX format.
trainer.save_model(save_onnx_model=True)
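You can inspect what was written to the save directory; the exact file names may vary across versions, so treat the comment below as illustrative:

import os

# Typically contains the quantized PyTorch checkpoint, its configuration and the exported ONNX model
sorted(os.listdir(save_directory))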
You must instantiate your model using our INCModelForXxx (https://huggingface.co/docs/optimum/main/intel/reference_inc#optimum.intel.neural_compressor.INCModel) or ORTModelForXxx (https://huggingface.co/docs/optimum/onnxruntime/package_reference/modeling_ort) classes to load your quantized PyTorch or ONNX model respectively, hosted locally or on the 🤗 hub:
from optimum.intel.neural_compressor import INCModelForSequenceClassification
from optimum.onnxruntime import ORTModelForSequenceClassification
pytorch_model = INCModelForSequenceClassification.from_pretrained(save_directory)
onnx_model = ORTModelForSequenceClassification.from_pretrained(save_directory)
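Both classes are compatible with the 🤗 Transformers pipeline API, so you can, for instance, run sentiment classification with the quantized PyTorch model (a minimal sketch; the example sentence is made up):

from transformers import pipeline

# Wrap the quantized model in a standard text-classification pipeline
classifier = pipeline("text-classification", model=pytorch_model, tokenizer=tokenizer)
classifier("I really loved this movie!")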