Newly introduced in transformers v2.3.0, pipelines provide a high-level, easy-to-use API for doing inference over a variety of downstream tasks, including:

- Sentiment analysis: classify the overall polarity of a sentence as positive or negative.
- Named entity recognition: assign an entity label to each token of the input.
- Question answering: given a pair (question, context), the model should find the span of text in context answering the question.
- Mask filling: suggest possible word(s) to fill the masked token given the surrounding context.
- Summarization: condense a long input article into a shorter article.
- Translation: translate the input into another language.
- Text generation: continue an input prompt.
- Feature extraction: map the input to a vector representation.

Pipelines encapsulate the overall process of every NLP task: tokenizing the input, running model inference, and decoding the model outputs into the final answer.
The overall API is exposed to the end-user through the pipeline() function with the following structure:
from transformers import pipeline
# Using default model and tokenizer for the task
pipeline("<task-name>")
# Using a user-specified model
pipeline("<task-name>", model="<model_name>")
# Using custom model/tokenizer as str
pipeline('<task-name>', model='<model name>', tokenizer='<tokenizer_name>')
!pip install -q transformers
from __future__ import print_function
import ipywidgets as widgets
from transformers import pipeline
nlp_sentence_classif = pipeline('sentiment-analysis')
nlp_sentence_classif('Such a nice weather outside !')
[{'label': 'POSITIVE', 'score': 0.9997656}]
nlp_token_class = pipeline('ner')
nlp_token_class('Hugging Face is a French company based in New-York.')
[{'entity': 'I-ORG', 'score': 0.9970937967300415, 'word': 'Hu'}, {'entity': 'I-ORG', 'score': 0.9345749020576477, 'word': '##gging'}, {'entity': 'I-ORG', 'score': 0.9787060022354126, 'word': 'Face'}, {'entity': 'I-MISC', 'score': 0.9981995820999146, 'word': 'French'}, {'entity': 'I-LOC', 'score': 0.9983047246932983, 'word': 'New'}, {'entity': 'I-LOC', 'score': 0.8913459181785583, 'word': '-'}, {'entity': 'I-LOC', 'score': 0.9979523420333862, 'word': 'York'}]
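The model labels each WordPiece sub-token separately, which is why 'Hu' and '##gging' come back as two entries. A small post-processing helper (a sketch of our own, not part of the pipelines API; later transformers releases added built-in entity grouping) can stitch sub-tokens back into whole words:

```python
def group_subwords(tokens):
    """Merge consecutive '##' sub-word tokens that share an entity label
    into single words, averaging their scores."""
    groups = []
    for tok in tokens:
        word, entity, score = tok['word'], tok['entity'], tok['score']
        if groups and word.startswith('##') and groups[-1]['entity'] == entity:
            groups[-1]['word'] += word[2:]          # drop the '##' marker
            groups[-1]['scores'].append(score)
        else:
            groups.append({'word': word, 'entity': entity, 'scores': [score]})
    return [{'word': g['word'], 'entity': g['entity'],
             'score': sum(g['scores']) / len(g['scores'])} for g in groups]

# Trimmed version of the output shown above
ner_output = [
    {'entity': 'I-ORG', 'score': 0.997, 'word': 'Hu'},
    {'entity': 'I-ORG', 'score': 0.935, 'word': '##gging'},
    {'entity': 'I-ORG', 'score': 0.979, 'word': 'Face'},
]
print(group_subwords(ner_output))
# [{'word': 'Hugging', ...}, {'word': 'Face', ...}]
```

Note this only rejoins sub-words; combining adjacent words like "Hugging" and "Face" into one multi-word entity would need an extra pass over same-label neighbours.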
nlp_qa = pipeline('question-answering')
nlp_qa(context='Hugging Face is a French company based in New-York.', question='Where is based Hugging Face ?')
{'answer': 'New-York.', 'end': 50, 'score': 0.9632969241603995, 'start': 42}
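The start and end fields are character offsets into the context string, so the answer span can be recovered by slicing. A quick check (note that with the offsets shown above, the plain slice yields the span without the trailing period that the token-decoded answer field includes; this offset convention may vary across transformers versions):

```python
# Result and context copied from the run above
result = {'answer': 'New-York.', 'end': 50, 'score': 0.963, 'start': 42}
context = 'Hugging Face is a French company based in New-York.'

# Slice the context with the character offsets returned by the pipeline
span = context[result['start']:result['end']]
print(span)  # New-York
```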
nlp_fill = pipeline('fill-mask')
nlp_fill('Hugging Face is a French company based in ' + nlp_fill.tokenizer.mask_token)
[{'score': 0.23106741905212402, 'sequence': '<s> Hugging Face is a French company based in Paris</s>', 'token': 2201}, {'score': 0.08198167383670807, 'sequence': '<s> Hugging Face is a French company based in Lyon</s>', 'token': 12790}, {'score': 0.04769487306475639, 'sequence': '<s> Hugging Face is a French company based in Geneva</s>', 'token': 11559}, {'score': 0.04762246832251549, 'sequence': '<s> Hugging Face is a French company based in Brussels</s>', 'token': 6497}, {'score': 0.041305847465991974, 'sequence': '<s> Hugging Face is a French company based in France</s>', 'token': 1470}]
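Each candidate comes back as a full sequence framed by RoBERTa-style special tokens (&lt;s&gt; and &lt;/s&gt;, as seen in the output above). A small helper (a sketch, assuming exactly those markers) strips them and picks the highest-scoring suggestion:

```python
def clean(sequence):
    """Strip the <s>/</s> special tokens framing a suggested sequence."""
    return sequence.replace('<s>', '').replace('</s>', '').strip()

# Trimmed version of the output shown above
candidates = [
    {'score': 0.231, 'sequence': '<s> Hugging Face is a French company based in Paris</s>', 'token': 2201},
    {'score': 0.082, 'sequence': '<s> Hugging Face is a French company based in Lyon</s>', 'token': 12790},
]

# The list is already sorted, but max() makes the intent explicit
best = max(candidates, key=lambda c: c['score'])
print(clean(best['sequence']))  # Hugging Face is a French company based in Paris
```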
Summarization is currently supported by Bart and T5.
TEXT_TO_SUMMARIZE = """
New York (CNN)When Liana Barrientos was 23 years old, she got married in Westchester County, New York.
A year later, she got married again in Westchester County, but to a different man and without divorcing her first husband.
Only 18 days after that marriage, she got hitched yet again. Then, Barrientos declared "I do" five more times, sometimes only within two weeks of each other.
In 2010, she married once more, this time in the Bronx. In an application for a marriage license, she stated it was her "first and only" marriage.
Barrientos, now 39, is facing two criminal counts of "offering a false instrument for filing in the first degree," referring to her false statements on the
2010 marriage license application, according to court documents.
Prosecutors said the marriages were part of an immigration scam.
On Friday, she pleaded not guilty at State Supreme Court in the Bronx, according to her attorney, Christopher Wright, who declined to comment further.
After leaving court, Barrientos was arrested and charged with theft of service and criminal trespass for allegedly sneaking into the New York subway through an emergency exit, said Detective
Annette Markowski, a police spokeswoman. In total, Barrientos has been married 10 times, with nine of her marriages occurring between 1999 and 2002.
All occurred either in Westchester County, Long Island, New Jersey or the Bronx. She is believed to still be married to four men, and at one time, she was married to eight men at once, prosecutors say.
Prosecutors said the immigration scam involved some of her husbands, who filed for permanent residence status shortly after the marriages.
Any divorces happened only after such filings were approved. It was unclear whether any of the men will be prosecuted.
The case was referred to the Bronx District Attorney's Office by Immigration and Customs Enforcement and the Department of Homeland Security's
Investigation Division. Seven of the men are from so-called "red-flagged" countries, including Egypt, Turkey, Georgia, Pakistan and Mali.
Her eighth husband, Rashid Rajput, was deported in 2006 to his native Pakistan after an investigation by the Joint Terrorism Task Force.
If convicted, Barrientos faces up to four years in prison. Her next court appearance is scheduled for May 18.
"""
summarizer = pipeline('summarization')
summarizer(TEXT_TO_SUMMARIZE)
[{'summary_text': 'Liana Barrientos has been married 10 times, sometimes within two weeks of each other. Prosecutors say the marriages were part of an immigration scam. She is believed to still be married to four men, and at one time, she was married to eight men at once. Her eighth husband was deported in 2006 to his native Pakistan.'}]
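Note that summarization models accept only a bounded input length (on the order of 1024 tokens for the default Bart model). For longer documents, a naive word-based splitter (a rough sketch, since word counts only approximate token counts) lets you summarize chunk by chunk:

```python
def chunk_words(text, max_words=400):
    """Split a long text into pieces of at most max_words words.
    Word count is only a rough proxy for the model's token limit."""
    words = text.split()
    return [' '.join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

# Usage sketch with the pipeline defined above:
# summaries = [summarizer(chunk)[0]['summary_text']
#              for chunk in chunk_words(very_long_text)]
```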
Translation is currently supported by T5 for the language mappings English-to-French (translation_en_to_fr), English-to-German (translation_en_to_de) and English-to-Romanian (translation_en_to_ro).
# English to French
translator = pipeline('translation_en_to_fr')
translator("HuggingFace is a French company that is based in New York City. HuggingFace's mission is to solve NLP one commit at a time")
[{'translation_text': 'HuggingFace est une entreprise française basée à New York et dont la mission est de résoudre les problèmes de NLP, un engagement à la fois.'}]
# English to German
translator = pipeline('translation_en_to_de')
translator("The history of natural language processing (NLP) generally started in the 1950s, although work can be found from earlier periods.")
[{'translation_text': 'Die Geschichte der natürlichen Sprachenverarbeitung (NLP) begann im Allgemeinen in den 1950er Jahren, obwohl die Arbeit aus früheren Zeiten zu finden ist.'}]
Text generation is currently supported by GPT-2, OpenAI GPT, Transformer-XL, XLNet, CTRL and Reformer.
text_generator = pipeline("text-generation")
text_generator("Today is a beautiful day and I will")
Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence
[{'generated_text': 'Today is a beautiful day and I will celebrate my birthday!"\n\nThe mother told CNN the two had planned their meal together. After dinner, she added that she and I walked down the street and stopped at a diner near her home. "He'}]
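The generated_text field echoes the prompt before the continuation. If you only want the newly generated part, slice the prompt off, as sketched below on a trimmed, hypothetical result (later transformers versions also expose a return_full_text argument for this):

```python
prompt = 'Today is a beautiful day and I will'

# Trimmed stand-in for the pipeline result shown above
result = [{'generated_text': 'Today is a beautiful day and I will celebrate my birthday!'}]

# Drop the echoed prompt to keep only the model's continuation
continuation = result[0]['generated_text'][len(prompt):]
print(continuation)  # ' celebrate my birthday!'
```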
import numpy as np
nlp_features = pipeline('feature-extraction')
output = nlp_features('Hugging Face is a French company based in Paris')
np.array(output).shape # (Samples, Tokens, Vector Size)
(1, 12, 768)
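Each of the 12 tokens gets its own 768-dimensional vector. A common way to reduce this to a single fixed-size sentence embedding is to mean-pool over the token axis; the sketch below uses random data of the same shape rather than re-running the pipeline:

```python
import numpy as np

# Stand-in for the pipeline output: 1 sample, 12 tokens, 768 features
output = np.random.rand(1, 12, 768)

# Average over the token axis (axis=1) to get one vector per sample
sentence_embedding = output.mean(axis=1)
print(sentence_embedding.shape)  # (1, 768)
```

Mean pooling is only one option; taking the first token's vector or max-pooling are common alternatives depending on the model.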
Alright! Now you have a nice picture of what is possible through transformers' pipelines, and there is more to come in future releases. In the meantime, you can try the different pipelines with your own inputs.
task = widgets.Dropdown(
    options=['sentiment-analysis', 'ner', 'fill_mask'],
    value='ner',
    description='Task:',
    disabled=False
)
input = widgets.Text(
    value='',
    placeholder='Enter something',
    description='Your input:',
    disabled=False
)
def forward(_):
    if len(input.value) > 0:
        if task.value == 'ner':
            output = nlp_token_class(input.value)
        elif task.value == 'sentiment-analysis':
            output = nlp_sentence_classif(input.value)
        else:
            if input.value.find('<mask>') == -1:
                output = nlp_fill(input.value + ' <mask>')
            else:
                output = nlp_fill(input.value)
        print(output)
input.on_submit(forward)
display(task, input)
Dropdown(description='Task:', index=1, options=('sentiment-analysis', 'ner', 'fill_mask'), value='ner')
Text(value='', description='Your input:', placeholder='Enter something')
[{'word': 'Peter', 'score': 0.9935821294784546, 'entity': 'I-PER'}, {'word': 'Pan', 'score': 0.9901397228240967, 'entity': 'I-PER'}, {'word': 'Marseille', 'score': 0.9984904527664185, 'entity': 'I-LOC'}, {'word': 'France', 'score': 0.9998687505722046, 'entity': 'I-LOC'}]
context = widgets.Textarea(
    value='Einstein is famous for the general theory of relativity',
    placeholder='Enter something',
    description='Context:',
    disabled=False
)
query = widgets.Text(
    value='Why is Einstein famous for ?',
    placeholder='Enter something',
    description='Question:',
    disabled=False
)
def forward(_):
    if len(context.value) > 0 and len(query.value) > 0:
        output = nlp_qa(question=query.value, context=context.value)
        print(output)
query.on_submit(forward)
display(context, query)
Textarea(value='Einstein is famous for the general theory of relativity', description='Context:', placeholder=…
Text(value='Why is Einstein famous for ?', description='Question:', placeholder='Enter something')
{'score': 0.40340594113729367, 'start': 27, 'end': 54, 'answer': 'general theory of relativity'}