In [1]:
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID";
os.environ["CUDA_VISIBLE_DEVICES"]=""  # Enforce CPU usage
from psutil import cpu_count
import tensorflow as tf
import numpy as np

# Environment variables for the performance optimizations available in onnxruntime.
# These must be set before onnxruntime is imported.
os.environ["OMP_NUM_THREADS"] = str(cpu_count(logical=True))
os.environ["OMP_WAIT_POLICY"] = 'ACTIVE'

ONNX and TensorFlow Lite Support in ktrain

As of v0.24.x, predictors in ktrain provide built-in support for exporting models to the ONNX and TensorFlow Lite formats. This makes it easier to take a ktrain-trained model and use it to make predictions outside of ktrain (or even TensorFlow) in deployment scenarios. In this notebook, we demonstrate this with a text classification example.

Let us begin by loading a previously trained Predictor instance, which consists of both the DistilBERT model and its associated Preprocessor instance.

In [2]:
import ktrain
predictor = ktrain.load_predictor('/tmp/my_distilbert_predictor')
print(predictor.model)
print(predictor.preproc)
<transformers.modeling_tf_distilbert.TFDistilBertForSequenceClassification object at 0x7fb1475b30f0>
<ktrain.text.preprocessor.Transformer object at 0x7fb299048cc0>

The cell above assumes that the model was previously trained on the 20 Newsgroups corpus using a GPU (e.g., on Google Colab). The files in question can easily be created with ktrain:

# install ktrain
!pip install ktrain

# load text data
categories = ['alt.atheism', 'soc.religion.christian','comp.graphics', 'sci.med']
from sklearn.datasets import fetch_20newsgroups
train_b = fetch_20newsgroups(subset='train', categories=categories, shuffle=True)
test_b = fetch_20newsgroups(subset='test',categories=categories, shuffle=True)
(x_train, y_train) = (train_b.data, train_b.target)
(x_test, y_test) = (test_b.data, test_b.target)

# build, train, and validate model (Transformer is a wrapper around the Hugging Face transformers library)
import ktrain
from ktrain import text
MODEL_NAME = 'distilbert-base-uncased'
t = text.Transformer(MODEL_NAME, maxlen=500, class_names=train_b.target_names)
trn = t.preprocess_train(x_train, y_train)
val = t.preprocess_test(x_test, y_test)
model = t.get_classifier()
learner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=6)
learner.fit_onecycle(5e-5, 1)

# save predictor
predictor = ktrain.get_predictor(learner.model, t)
predictor.save('/tmp/my_distilbert_predictor')

TensorFlow Lite Inference

Here, we export our model to TensorFlow Lite and use it to make predictions without ktrain.

In [3]:
# export TensorFlow Lite model
tflite_model_path = '/tmp/model.tflite'
tflite_model_path = predictor.export_model_to_tflite(tflite_model_path)

# load interpreter
interpreter = tf.lite.Interpreter(model_path=tflite_model_path)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# preprocess and predict outside of ktrain
doc = 'My computer monitor is blurry.'
maxlen = predictor.preproc.maxlen
tokenizer = predictor.preproc.get_tokenizer()
inputs = tokenizer(doc, max_length=maxlen, padding='max_length', return_tensors="tf")
interpreter.set_tensor(input_details[0]['index'], inputs['attention_mask'])
interpreter.set_tensor(input_details[1]['index'], inputs['input_ids'])
interpreter.invoke()
output_tflite = interpreter.get_tensor(output_details[0]['index'])
print()
print('text input: %s' % (doc))
print()
print('predicted logits: %s' % (output_tflite))
print()
print("predicted class: %s" % ( predictor.get_classes()[np.argmax(output_tflite[0])]) )
converting to TFLite format ... this may take a few moments...
INFO:absl:Using experimental converter: If you encountered a problem please file a bug. You can opt-out by setting experimental_new_converter=False
done.

text input: My computer monitor is blurry.

predicted logits: [[-1.137866    2.7797258  -0.87084955 -1.243239  ]]

predicted class: comp.graphics
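The model outputs raw logits. If you need class probabilities instead, you can apply a softmax to the logits yourself; a minimal sketch (the softmax helper below is illustrative, not part of ktrain):

# convert the raw logits to class probabilities (assumes output_tflite from the cell above)
def softmax(logits):
    exps = np.exp(logits - np.max(logits, axis=-1, keepdims=True))
    return exps / exps.sum(axis=-1, keepdims=True)

probs = softmax(output_tflite)
print('predicted probabilities: %s' % (probs))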

ONNX Inference

Here, we export our trained model to ONNX and make predictions outside of both ktrain and TensorFlow using ONNX Runtime.

In [4]:
# export ONNX model
onnx_model_path = '/tmp/model.onnx'
onnx_model_path = predictor.export_model_to_onnx(onnx_model_path)
print(onnx_model_path)

# create ONNX inference session (you can also do this manually instead of using create_onnx_session)
sess = predictor.create_onnx_session(onnx_model_path)

# preprocess and predict outside of ktrain and TensorFlow
doc = 'I received a chest x-ray at the hospital.'
maxlen = predictor.preproc.maxlen
tokenizer = predictor.preproc.get_tokenizer()
input_dict = tokenizer(doc, max_length=maxlen, padding='max_length')
feed = {}
feed['input_ids'] = np.array(input_dict['input_ids']).astype('int32')[None,:]
feed['attention_mask'] = np.array(input_dict['attention_mask']).astype('int32')[None,:]
output_onnx = sess.run(None, feed)
print()
print('text input: %s' % (doc))
print()
print('predicted logits: %s' % (output_onnx))
print()
print("predicted class: %s" % ( predictor.get_classes()[np.argmax(output_onnx[0][0])]) )
tf executing eager_mode: True
INFO:keras2onnx:tf executing eager_mode: True
tf.keras model eager_mode: False
INFO:keras2onnx:tf.keras model eager_mode: False
converting to ONNX format ... this may take a few moments...
The ONNX operator number change on the optimization: 1317 -> 844
INFO:keras2onnx:The ONNX operator number change on the optimization: 1317 -> 844
done.
/tmp/model.onnx

text input: I received a chest x-ray at the hospital.

predicted logits: [array([[-1.557031  , -0.78585184,  3.1943865 , -1.13119   ]],
      dtype=float32)]

predicted class: sci.med
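
As noted in the cell above, create_onnx_session is a convenience method; you can also build the session directly with the onnxruntime API. A minimal sketch, assuming onnxruntime is installed (the session options shown are illustrative starting points, not necessarily the ones ktrain uses internally):

import onnxruntime as ort

# configure the session: one intra-op thread per physical core is a common starting point,
# and ORT_ENABLE_ALL turns on all graph optimizations
options = ort.SessionOptions()
options.intra_op_num_threads = cpu_count(logical=False)
options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL

# create the inference session and run the same feed dictionary as above
sess = ort.InferenceSession(onnx_model_path, options)
output_onnx = sess.run(None, feed)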