Welcome to this getting started guide. We will use the new Hugging Face Inference DLCs and the Amazon SageMaker Python SDK to deploy two transformer models for inference. In the first example we deploy a trained Hugging Face Transformer model to SageMaker for inference. In the second example we deploy one of the 10,000+ Hugging Face Transformers directly from the Hub to Amazon SageMaker for inference.
_The following example is not included in the notebook._
After you train a model, you can use Amazon SageMaker Batch Transform to perform inference with the model. With Batch Transform you provide your inference data as an S3 URI, and SageMaker takes care of downloading the data, running the prediction, and uploading the results back to S3. You can find more documentation on Batch Transform here.
If you trained the model using the HuggingFace estimator, you can invoke the transformer() method to create a transform job for a model based on the training job.
batch_job = huggingface_estimator.transformer(
    instance_count=1,
    instance_type='ml.c5.2xlarge',
    strategy='SingleRecord')

batch_job.transform(
    data='s3://s3-uri-to-batch-data',
    content_type='application/json',
    split_type='Line')
For more details about what can be specified here, see the API docs.
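As a hedged sketch (the parameter values and the output path below are only placeholders, but the parameter names follow the SageMaker Python SDK), the same calls could for example also control how individual predictions are reassembled and where the results are stored:

batch_job = huggingface_estimator.transformer(
    instance_count=1,
    instance_type='ml.c5.2xlarge',
    strategy='SingleRecord',
    assemble_with='Line',                       # reassemble individual predictions line by line
    output_path='s3://my-bucket/batch-output',  # placeholder S3 location for the results
)
batch_job.transform(
    data='s3://s3-uri-to-batch-data',
    content_type='application/json',
    split_type='Line',
    wait=True,                                  # block until the transform job has finished
)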
!pip install "sagemaker>=2.48.0" "datasets==1.11" --upgrade
In this example we are using the provided tweet_data.csv as dataset. The csv contains ~1800 tweets about different airlines and has a single column "inputs" with the tweets. To use the csv we need to convert it into a jsonl file and upload it to S3. Due to the complex structure of text data, only jsonl files are supported for batch transform. As pre-processing we remove the @ at the beginning of the tweets so that the names/identities are recognized correctly.
_NOTE: While preprocessing you need to make sure that your inputs fit the max_length of the model._
import csv
import json
import sagemaker
from sagemaker.s3 import S3Uploader,s3_path_join
# get the s3 bucket
sess = sagemaker.Session()
role = sagemaker.get_execution_role()
sagemaker_session_bucket = sess.default_bucket()
# dataset files
dataset_csv_file="tweet_data.csv"
dataset_jsonl_file="tweet_data.jsonl"
with open(dataset_csv_file, "r+") as infile, open(dataset_jsonl_file, "w+") as outfile:
    reader = csv.DictReader(infile)
    for row in reader:
        # remove @
        row["inputs"] = row["inputs"].replace("@","")
        json.dump(row, outfile)
        outfile.write('\n')
# create the S3 paths and upload the jsonl file to S3
input_s3_path = s3_path_join("s3://",sagemaker_session_bucket,"batch_transform/input")
output_s3_path = s3_path_join("s3://",sagemaker_session_bucket,"batch_transform/output")
s3_file_uri = S3Uploader.upload(dataset_jsonl_file,input_s3_path)
print(f"{dataset_jsonl_file} uploaded to {s3_file_uri}")
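If some of your inputs could exceed the model's max_length, you could additionally truncate them with the model's tokenizer before writing the jsonl file. This is only a sketch and not part of the notebook; it assumes the tokenizer of the model used for the batch transform job below:

from transformers import AutoTokenizer

# sketch only: tokenizer of the model used later in this example
tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-sentiment")

def truncate_to_max_length(text, max_length=512):
    # tokenize with truncation and decode back to text so it fits the model's max_length
    ids = tokenizer(text, truncation=True, max_length=max_length)["input_ids"]
    return tokenizer.decode(ids, skip_special_tokens=True)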
The created file looks like this:
{"inputs": "VirginAmerica What dhepburn said."}
{"inputs": "VirginAmerica plus you've added commercials to the experience... tacky."}
{"inputs": "VirginAmerica I didn't today... Must mean I need to take another trip!"}
{"inputs": "VirginAmerica it's really aggressive to blast obnoxious \"entertainment\"...."}
{"inputs": "VirginAmerica and it's a really big bad thing about it"}
{"inputs": "VirginAmerica seriously would pay $30 a flight for seats that didn't h...."}
{"inputs": "VirginAmerica yes, nearly every time I fly VX this \u201cear worm\u201d won\u2019t go away :)"}
{"inputs": "VirginAmerica Really missed a prime opportunity for Men Without ..."}
{"inputs": "virginamerica Well, I didn't\u2026but NOW I DO! :-D"}
{"inputs": "VirginAmerica it was amazing, and arrived an hour early. You're too good to me."}
{"inputs": "VirginAmerica did you know that suicide is the second leading cause of death among teens 10-24"}
{"inputs": "VirginAmerica I <3 pretty graphics. so much better than minimal iconography. :D"}
{"inputs": "VirginAmerica This is such a great deal! Already thinking about my 2nd trip ..."}
....
We use the twitter-roberta-base-sentiment model to run our batch transform job. This is a RoBERTa-base model trained on ~58M tweets and fine-tuned for sentiment analysis with the TweetEval benchmark.
from sagemaker.huggingface.model import HuggingFaceModel

# Hub Model configuration. <https://huggingface.co/models>
hub = {
    'HF_MODEL_ID':'cardiffnlp/twitter-roberta-base-sentiment',
    'HF_TASK':'text-classification'
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    env=hub,                     # configuration for loading model from Hub
    role=role,                   # iam role with permissions to create an Endpoint
    transformers_version="4.6",  # transformers version used
    pytorch_version="1.7",       # pytorch version used
    py_version='py36',           # python version used
)

# create Transformer to run our batch job
batch_job = huggingface_model.transformer(
    instance_count=1,
    instance_type='ml.p3.2xlarge',
    output_path=output_s3_path,  # we save the output in the same s3 bucket as the input
    strategy='SingleRecord')

# starts batch transform job and uses s3 data as input
batch_job.transform(
    data=s3_file_uri,
    content_type='application/json',
    split_type='Line')
import json
from sagemaker.s3 import S3Downloader
from ast import literal_eval
# creating s3 uri for result file -> input file + .out
output_file = f"{dataset_jsonl_file}.out"
output_path = s3_path_join(output_s3_path,output_file)
# download file
S3Downloader.download(output_path,'.')
batch_transform_result = []
with open(output_file) as f:
    for line in f:
        # strip the per-record brackets and merge all predictions into one Python list
        line = "[" + line.replace("[","").replace("]",",") + "]"
        batch_transform_result += literal_eval(line)

# print results
print(batch_transform_result[:3])
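To relate the predictions back to the tweets, you could pair them with the original inputs. A minimal sketch, assuming the output order matches the order of the input jsonl file:

# pair each prediction with its input tweet (assumes output order == input order)
with open(dataset_jsonl_file) as f:
    inputs = [json.loads(line)["inputs"] for line in f]

for tweet, prediction in zip(inputs[:3], batch_transform_result[:3]):
    print(tweet, "->", prediction)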