This tutorial demonstrates how to apply INT8 quantization to the Wav2Vec2 speech recognition model, using NNCF (Neural Network Compression Framework) 8-bit quantization in post-training mode (without the fine-tuning pipeline). This notebook uses a fine-tuned Wav2Vec2-Base-960h PyTorch model trained on the LibriSpeech ASR corpus. The tutorial is designed to be extendable to custom models and datasets. It consists of the following steps:

1. Download and prepare the Wav2Vec2 model and the LibriSpeech dataset.
2. Convert the model to OpenVINO IR.
3. Quantize the model with NNCF.
4. Compare the accuracy (WER) of the FP16 and INT8 models.
5. Compare the performance of the FP16 and INT8 models.
%pip install -q "openvino>=2023.3.0" "nncf>=2.7"
%pip install -q datasets "torchmetrics>=0.11.0" "torch>=2.1.0" --extra-index-url https://download.pytorch.org/whl/cpu
%pip install -q soundfile librosa "transformers>=4.36.2" --extra-index-url https://download.pytorch.org/whl/cpu
import numpy as np
import openvino as ov
import torch
import IPython.display as ipd
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from pathlib import Path
# Set the model directory.
MODEL_DIR = Path("model")
MODEL_DIR.mkdir(exist_ok=True)
Download the pre-trained Wav2Vec2-Base-960h model and its processor from the Hugging Face Hub:
torch_model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h", ctc_loss_reduction="mean")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
Convert the PyTorch model to OpenVINO Intermediate Representation (IR), providing a dummy example input to trace the model:
BATCH_SIZE = 1
MAX_SEQ_LENGTH = 30480

ov_model = ov.convert_model(torch_model, example_input=torch.zeros([BATCH_SIZE, MAX_SEQ_LENGTH], dtype=torch.float))
ir_model_path = MODEL_DIR / "wav2vec2_base.xml"
ov.save_model(ov_model, ir_model_path)
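As an optional sanity check (not part of the original pipeline), the saved IR can be read back and its inputs and outputs inspected; a minimal sketch:
# Optional sanity check: read the saved IR back and inspect its I/O.
core = ov.Core()
loaded_model = core.read_model(ir_model_path)
print("Model inputs:", loaded_model.inputs)
print("Model outputs:", loaded_model.outputs)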
For demonstration purposes, we will use a short dummy version of the LibriSpeech dataset, patrickvonplaten/librispeech_asr_dummy, to speed up model evaluation. Model accuracy may differ from the value reported in the paper. To reproduce the original accuracy, use the librispeech_asr dataset.
from datasets import load_dataset
dataset = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
test_sample = dataset[0]["audio"]
# Define a preprocessing function that converts audio into input values for the model.
def map_to_input(batch):
    preprocessed_signal = processor(
        batch["audio"]["array"],
        return_tensors="pt",
        padding="longest",
        sampling_rate=batch["audio"]["sampling_rate"],
    )
    input_values = preprocessed_signal.input_values
    batch["input_values"] = input_values
    return batch

# Apply the preprocessing function to the dataset and remove the audio column to save memory, as it is no longer needed.
dataset = dataset.map(map_to_input, batched=False, remove_columns=["audio"])
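As a quick optional check (an illustration, not required by the pipeline), we can inspect the shape of one preprocessed sample to confirm the mapping worked:
# Optional: each sample now carries a [1, num_audio_samples] array of input values.
print(np.array(dataset[0]["input_values"]).shape)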
NNCF provides a suite of advanced algorithms for optimizing neural network inference in OpenVINO with minimal accuracy drop.
Create a quantized model from the pre-trained FP16 model and the calibration dataset. The optimization process contains the following steps:

1. Create a Dataset for quantization.
2. Run nncf.quantize to obtain an optimized model. The nncf.quantize function provides an interface for model quantization. It requires an instance of the OpenVINO Model and a quantization dataset. Optionally, some additional parameters for configuring the quantization process (number of samples for quantization, preset, ignored scope, etc.) can be provided. For more accurate results, we should keep operations in the postprocessing subgraph in floating-point precision, using the ignored_scope parameter. For more information, see Tune quantization parameters. For this model, the ignored scope was selected experimentally, based on the results of quantization with accuracy control; to understand how it works, please check the corresponding notebook.
3. Serialize the OpenVINO IR model, using the ov.save_model function.
import nncf
from nncf.parameters import ModelType
def transform_fn(data_item):
    """
    Extract the model's input from the data item.
    The data item here is the data item that is returned from the data source per iteration.
    This function should be passed when the data item cannot be used as model's input.
    """
    return np.array(data_item["input_values"])
calibration_dataset = nncf.Dataset(dataset, transform_fn)
quantized_model = nncf.quantize(
    ov_model,
    calibration_dataset,
    model_type=ModelType.TRANSFORMER,  # specify additional transformer patterns in the model
    ignored_scope=nncf.IgnoredScope(
        names=[
            "__module.wav2vec2.feature_extractor.conv_layers.1.conv/aten::_convolution/Convolution",
            "__module.wav2vec2.feature_extractor.conv_layers.2.conv/aten::_convolution/Convolution",
            "__module.wav2vec2.feature_extractor.conv_layers.3.conv/aten::_convolution/Convolution",
            "__module.wav2vec2.feature_extractor.conv_layers.0.conv/aten::_convolution/Convolution",
        ],
    ),
)
INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, openvino
INFO:nncf:4 ignored nodes were found by name in the NNCFGraph
INFO:nncf:36 ignored nodes were found by name in the NNCFGraph
INFO:nncf:Not adding activation input quantizer for operation: 3 __module.wav2vec2.feature_extractor.conv_layers.0.conv/aten::_convolution/Convolution
INFO:nncf:Not adding activation input quantizer for operation: 11 __module.wav2vec2.feature_extractor.conv_layers.1.conv/aten::_convolution/Convolution
INFO:nncf:Not adding activation input quantizer for operation: 13 __module.wav2vec2.feature_extractor.conv_layers.2.conv/aten::_convolution/Convolution
INFO:nncf:Not adding activation input quantizer for operation: 15 __module.wav2vec2.feature_extractor.conv_layers.3.conv/aten::_convolution/Convolution
MODEL_NAME = "quantized_wav2vec2_base"
quantized_model_path = Path(f"{MODEL_NAME}_openvino_model/{MODEL_NAME}_quantized.xml")
ov.save_model(quantized_model, quantized_model_path)
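As a rough, optional indicator of the compression achieved (a sketch; exact numbers depend on your run), we can compare the sizes of the FP16 and INT8 weight files that ov.save_model produced next to the .xml files:
# Compare the sizes of the FP16 and INT8 weight (.bin) files on disk.
fp16_size = ir_model_path.with_suffix(".bin").stat().st_size
int8_size = quantized_model_path.with_suffix(".bin").stat().st_size
print(f"FP16 model size: {fp16_size / 1024 ** 2:.2f} MB")
print(f"INT8 model size: {int8_size / 1024 ** 2:.2f} MB")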
Both the initial (FP16) and quantized (INT8) models are used in exactly the same way.
Start by taking one example from the dataset to show the inference steps for it.
ipd.Audio(test_sample["array"], rate=16000)
To run the model with OpenVINO, first select an inference device. Please select one of the available devices from the dropdown list:
import ipywidgets as widgets
core = ov.Core()
device = widgets.Dropdown(
    options=core.available_devices + ["AUTO"],
    value="AUTO",
    description="Device:",
    disabled=False,
)
device
Next, load the quantized model to the inference pipeline.
compiled_model = core.compile_model(model=quantized_model, device_name=device.value)
input_data = np.expand_dims(test_sample["array"], axis=0)
Next, make a prediction.
predictions = compiled_model(input_data)[0]
predicted_ids = np.argmax(predictions, axis=-1)
transcription = processor.batch_decode(torch.from_numpy(predicted_ids))
print(transcription)
['BECAUSE YOU ARE SLEEPING INSTEAD OF CONQUERING THE LOVELY ROSE PRINCESS HAS BECOME A FIDDLE WITHOUT A BEOW WHILE POOR SHAGGY SITS THERE A COOING DOVE']
For model accuracy evaluation, the Word Error Rate (WER) metric can be used. WER is the ratio of errors in a transcript to the total number of words spoken; a lower WER in speech-to-text means better accuracy in recognizing speech.
For the WER calculation, we will use the torchmetrics library.
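To illustrate the metric before applying it to the model, here is a toy example with made-up strings (not from the dataset): the prediction drops one of the six reference words, so WER = 1/6 ≈ 0.167.
from torchmetrics.text import WordErrorRate

# Toy example: one deleted word out of six reference words -> WER = 1/6 ~ 0.1667.
toy_wer = WordErrorRate()
print(toy_wer(["the cat sat on mat"], ["the cat sat on the mat"]))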
from torchmetrics.text import WordErrorRate
from tqdm.notebook import tqdm
# Inference function for the PyTorch model
def torch_infer(model, sample):
    logits = model(torch.Tensor(sample["input_values"])).logits
    # Take argmax and decode
    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    return transcription

# Inference function for OpenVINO
def ov_infer(model, sample):
    logits = model(np.array(sample["input_values"]))[0]
    predicted_ids = np.argmax(logits, axis=-1)
    transcription = processor.batch_decode(torch.from_numpy(predicted_ids))
    return transcription

def compute_wer(dataset, model, infer_fn):
    wer = WordErrorRate()
    for sample in tqdm(dataset):
        # Run the inference function on the sample
        transcription = infer_fn(model, sample)
        # Update the metric with the sample result
        wer.update(transcription, [sample["text"]])
    # Finalize the metric calculation
    result = wer.compute()
    return result
To decode the predicted probabilities to text, the inference functions above use the built-in tokenizer of Wav2Vec2Processor from the transformers package (processor.batch_decode).
Now, compute WER for the original PyTorch model, OpenVINO IR model and quantized model.
compiled_fp32_ov_model = core.compile_model(ov_model, device.value)
pt_result = compute_wer(dataset, torch_model, torch_infer)
ov_result = compute_wer(dataset, compiled_fp32_ov_model, ov_infer)
int8_ov_result = compute_wer(dataset, compiled_model, ov_infer)
print(f"[PyTorch] Word Error Rate: {pt_result:.4f}")
print(f"[OpenVINO FP16] Word Error Rate: {ov_result:.4f}")
print(f"[OpenVINO INT8] Word Error Rate: {int8_ov_result:.4f}")
[PyTorch] Word Error Rate: 0.0530
[OpenVINO FP16] Word Error Rate: 0.0530
[OpenVINO INT8] Word Error Rate: 0.0539
Finally, use the OpenVINO Benchmark Tool to measure the inference performance of the FP16 and INT8 models.

NOTE: For more accurate performance, it is recommended to run benchmark_app in a terminal/command prompt after closing other applications. Run benchmark_app -m model.xml -d CPU to benchmark async inference on CPU for one minute. Change CPU to GPU to benchmark on GPU. Run benchmark_app --help to see an overview of all command-line options.
# Inference FP16 model (OpenVINO IR)
! benchmark_app -m $ir_model_path -shape [1,30480] -d $device.value -api async
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2023.3.0-13649-bbddb891712
[ INFO ]
[ INFO ] Device info:
[ INFO ] CPU
[ INFO ] Build ................................. 2023.3.0-13649-bbddb891712
[ INFO ]
[ INFO ]
[Step 3/11] Setting device configuration
[ WARNING ] Performance hint was not explicitly specified in command line. Device(CPU) performance hint will be set to PerformanceMode.THROUGHPUT.
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 39.87 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Model inputs:
[ INFO ]     input_values (node: input_values) : f32 / [...] / [?,?]
[ INFO ] Model outputs:
[ INFO ]     logits , 1170 (node: __module.lm_head/aten::linear/Add) : f32 / [...] / [?,?,32]
[Step 5/11] Resizing model to match image sizes and given batch
[ INFO ] Model batch size: 1
[ INFO ] Reshaping model: 'input_values': [1,30480]
[ INFO ] Reshape model took 30.61 ms
[Step 6/11] Configuring input of the model
[ INFO ] Model inputs:
[ INFO ]     input_values (node: input_values) : f32 / [...] / [1,30480]
[ INFO ] Model outputs:
[ INFO ]     logits , 1170 (node: __module.lm_head/aten::linear/Add) : f32 / [...] / [1,95,32]
[Step 7/11] Loading the model to the device
[ INFO ] Compile model took 743.32 ms
[Step 8/11] Querying optimal runtime parameters
[ INFO ] Model:
[ INFO ]   NETWORK_NAME: Model0
[ INFO ]   OPTIMAL_NUMBER_OF_INFER_REQUESTS: 12
[ INFO ]   NUM_STREAMS: 12
[ INFO ]   AFFINITY: Affinity.CORE
[ INFO ]   INFERENCE_NUM_THREADS: 36
[ INFO ]   PERF_COUNT: NO
[ INFO ]   INFERENCE_PRECISION_HINT: <Type: 'float32'>
[ INFO ]   PERFORMANCE_HINT: THROUGHPUT
[ INFO ]   EXECUTION_MODE_HINT: ExecutionMode.PERFORMANCE
[ INFO ]   PERFORMANCE_HINT_NUM_REQUESTS: 0
[ INFO ]   ENABLE_CPU_PINNING: True
[ INFO ]   SCHEDULING_CORE_TYPE: SchedulingCoreType.ANY_CORE
[ INFO ]   ENABLE_HYPER_THREADING: True
[ INFO ]   EXECUTION_DEVICES: ['CPU']
[ INFO ]   CPU_DENORMALS_OPTIMIZATION: False
[ INFO ]   CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE: 1.0
[Step 9/11] Creating infer requests and preparing input tensors
[ WARNING ] No input files were given for input 'input_values'!. This input will be filled with random values!
[ INFO ] Fill input 'input_values' with random values
[Step 10/11] Measuring performance (Start inference asynchronously, 12 inference requests, limits: 60000 ms duration)
[ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop).
[ INFO ] First inference took 78.50 ms
[Step 11/11] Dumping statistics report
[ INFO ] Execution Devices:['CPU']
[ INFO ] Count:      3228 iterations
[ INFO ] Duration:   60314.79 ms
[ INFO ] Latency:
[ INFO ]    Median:  223.21 ms
[ INFO ]    Average: 223.81 ms
[ INFO ]    Min:     92.71 ms
[ INFO ]    Max:     261.87 ms
[ INFO ] Throughput: 53.52 FPS
# Inference INT8 model (OpenVINO IR)
! benchmark_app -m $quantized_model_path -shape [1,30480] -d $device.value -api async
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2023.3.0-13649-bbddb891712
[ INFO ]
[ INFO ] Device info:
[ INFO ] CPU
[ INFO ] Build ................................. 2023.3.0-13649-bbddb891712
[ INFO ]
[ INFO ]
[Step 3/11] Setting device configuration
[ WARNING ] Performance hint was not explicitly specified in command line. Device(CPU) performance hint will be set to PerformanceMode.THROUGHPUT.
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 55.69 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Model inputs:
[ INFO ]     input_values (node: input_values) : f32 / [...] / [?,?]
[ INFO ] Model outputs:
[ INFO ]     logits , 1170 (node: __module.lm_head/aten::linear/Add) : f32 / [...] / [?,?,32]
[Step 5/11] Resizing model to match image sizes and given batch
[ INFO ] Model batch size: 1
[ INFO ] Reshaping model: 'input_values': [1,30480]
[ INFO ] Reshape model took 36.51 ms
[Step 6/11] Configuring input of the model
[ INFO ] Model inputs:
[ INFO ]     input_values (node: input_values) : f32 / [...] / [1,30480]
[ INFO ] Model outputs:
[ INFO ]     logits , 1170 (node: __module.lm_head/aten::linear/Add) : f32 / [...] / [1,95,32]
[Step 7/11] Loading the model to the device
[ INFO ] Compile model took 1300.12 ms
[Step 8/11] Querying optimal runtime parameters
[ INFO ] Model:
[ INFO ]   NETWORK_NAME: Model0
[ INFO ]   OPTIMAL_NUMBER_OF_INFER_REQUESTS: 12
[ INFO ]   NUM_STREAMS: 12
[ INFO ]   AFFINITY: Affinity.CORE
[ INFO ]   INFERENCE_NUM_THREADS: 36
[ INFO ]   PERF_COUNT: NO
[ INFO ]   INFERENCE_PRECISION_HINT: <Type: 'float32'>
[ INFO ]   PERFORMANCE_HINT: THROUGHPUT
[ INFO ]   EXECUTION_MODE_HINT: ExecutionMode.PERFORMANCE
[ INFO ]   PERFORMANCE_HINT_NUM_REQUESTS: 0
[ INFO ]   ENABLE_CPU_PINNING: True
[ INFO ]   SCHEDULING_CORE_TYPE: SchedulingCoreType.ANY_CORE
[ INFO ]   ENABLE_HYPER_THREADING: True
[ INFO ]   EXECUTION_DEVICES: ['CPU']
[ INFO ]   CPU_DENORMALS_OPTIMIZATION: False
[ INFO ]   CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE: 1.0
[Step 9/11] Creating infer requests and preparing input tensors
[ WARNING ] No input files were given for input 'input_values'!. This input will be filled with random values!
[ INFO ] Fill input 'input_values' with random values
[Step 10/11] Measuring performance (Start inference asynchronously, 12 inference requests, limits: 60000 ms duration)
[ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop).
[ INFO ] First inference took 81.38 ms
[Step 11/11] Dumping statistics report
[ INFO ] Execution Devices:['CPU']
[ INFO ] Count:      4500 iterations
[ INFO ] Duration:   60142.86 ms
[ INFO ] Latency:
[ INFO ]    Median:  159.61 ms
[ INFO ]    Average: 160.08 ms
[ INFO ]    Min:     81.89 ms
[ INFO ]    Max:     204.87 ms
[ INFO ] Throughput: 74.82 FPS