#!/usr/bin/env python
# coding: utf-8

# # Text_Extensions_for_Pandas_Overview.ipynb:
#
# Overview of the basic functionality and usage of Text Extensions for Pandas.
#
# ## Text Extensions for Pandas
#
# [Text Extensions for Pandas](https://github.com/CODAIT/text-extensions-for-pandas) is a library that provides natural language processing support for Pandas DataFrames. It includes [Pandas](https://pandas.pydata.org) extension arrays that help with natural language processing, and it integrates with other popular NLP libraries to provide a workflow centered around the easy-to-use and powerful Pandas [DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html).
#
# This notebook gives an overview of the basic functionality of Text Extensions for Pandas and serves as a jumping-off point to more in-depth examples of specific functionality. See the following notebooks that use Text Extensions for Pandas for data analysis, NLP, and model training:
#
# - [Analyze_Model_Outputs](./Analyze_Model_Outputs.ipynb) - analyze the outputs of an NLP model on a target corpus
# - [Analyze_Text](./Analyze_Text.ipynb) - usage with the IBM Watson cloud API
# - [Integrate_NLP_Libraries](./Integrate_NLP_Libraries.ipynb) - integration with SpaCy and IBM Watson
# - [Model_Training_with_BERT](./Model_Training_with_BERT.ipynb) - model training for NER with BERT tokenization and embeddings
# - [Understand_Tables](./Understand_Tables.ipynb) - integration with IBM Watson Discovery for understanding tables in PDFs and documents
#
# The API reference can be found at https://text-extensions-for-pandas.readthedocs.io/en/latest/

# ## Environment Setup
#
# This notebook requires a Python 3.6 or later environment with NumPy and Pandas.
#
# The notebook also requires the `text_extensions_for_pandas` library. You can satisfy this dependency in two ways:
#
# * Run `pip install text_extensions_for_pandas` before running this notebook. This command adds the library to your Python environment.
# * Run this notebook out of your local copy of the Text Extensions for Pandas project's [source tree](https://github.com/CODAIT/text-extensions-for-pandas). In this case, the notebook will use the version of Text Extensions for Pandas in your local source tree **if the package is not installed in your Python environment**.

# In[1]:

import os
import regex
import sys

import numpy as np
import pandas as pd

# And of course we need the text_extensions_for_pandas library itself.
try:
    import text_extensions_for_pandas as tp
except ModuleNotFoundError as e:
    # If we're running from within the project source tree and the parent Python
    # environment doesn't have the text_extensions_for_pandas package, use the
    # version in the local source tree.
    if not os.getcwd().endswith("notebooks"):
        raise e
    if ".." not in sys.path:
        sys.path.insert(0, "..")
    import text_extensions_for_pandas as tp


# ## Pandas Extension Arrays
#
# Text Extensions for Pandas provides several Pandas extension arrays on which much of its functionality is built. This section introduces these extension arrays and shows their basic usage.

# ### SpanArray
#
# A `SpanArray` represents a column of character-based spans over a single target text. It is backed by two child arrays of integers that hold the begin and end offsets of each span within the target text. Spans can use any offsets within the target text and can also overlap with each other. A `SpanArray` can efficiently represent the tokenized result of a text because the tokens themselves are not copied; only offsets are stored. Equality of spans is determined by the target text and the offset values, so each token is unique within the text.
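
# As a quick, hedged illustration of the two backing offset arrays (before the full
# walk-through below), the sketch below builds a tiny `SpanArray` directly from
# begin/end character offsets. The `begin`, `end`, and `covered_text` accessors used
# here are assumptions about the library's API; check the API reference if they differ.

# In[ ]:

# Toy two-token example: the spans cover "Hello" and "world".
toy = tp.SpanArray("Hello world", [0, 6], [5, 11])
# Inspect the backing begin/end offsets and the text each span covers
# (accessor names assumed; see the note above).
toy.begin, toy.end, toy.covered_text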

# The `SpanArray` is a Pandas extension type, so it can be wrapped as a Series and included in a DataFrame to make use of standard Pandas functionality. The values of a `SpanArray` are also designed to render nicely as HTML, for easy display of the span offsets, the span text, and the highlighted target text.
#
# We will show some basic operations of the `SpanArray` by tokenizing a small example piece of text.

# In[2]:

# Sample text input.
text = """\
In AD 932, King Arthur and his squire, Patsy, travel throughout Britain \
searching for men to join the Knights of the Round Table. Along the way, \
he recruits Sir Bedevere the Wise, Sir Lancelot the Brave, Sir Galahad \
the Pure, Sir Robin the Not-Quite-So-Brave-as-Sir-Lancelot, and Sir \
Not-Appearing-in-this-Film, along with their squires and Robin's troubadours.\
"""


# In[3]:

# Define a crude tokenizer that splits on spaces, for example use only.
def tokenize_with_offsets(text):
    """Return begin and end offsets of tokens from the given `text`."""
    splits = text.split(" ")
    begins = np.cumsum([0] + [len(s) + 1 for s in splits[:-1]])
    ends = begins + [len(s.strip(",.")) for s in splits]
    return begins, ends


# In[4]:

# Tokenize the text to get begin and end offsets, then construct a `SpanArray`.
begins, ends = tokenize_with_offsets(text)
tokens = tp.SpanArray(text, begins, ends)

# The array renders nicely in HTML to show the offsets, the text of each span,
# and the highlighted target text.
tokens


# In[5]:

# Indexing the array with an integer will produce a `Span`, which is a single
# element of the array.
tok = tokens[43]
tok


# In[6]:

# It can also be indexed with a slice, producing another `SpanArray`.
toks = tokens[40:44]
toks


# In[7]:

# Iterate over the array to get each `Span`.
toks = [span for span in tokens[40:44]]
toks


# In[8]:

# Addition of `Span`s or `SpanArray`s is supported.
# The result is the minimum `Span` that covers both `Span`s.
result = toks[0] + toks[-1]
result


# In[9]:

# You can check whether one `Span` contains another.
result.contains(toks[1])


# In[10]:

# Also whether two `Span`s overlap.
a = toks[0] + toks[2]
b = toks[2] + toks[3]
a.overlaps(b)


# In[11]:

# Get two `Span`s to test equality.
sir = tokens[36]
other_sir = tokens[40]
sir, other_sir


# In[12]:

# Equality is determined by the text and the offset values, not just the text.
sir == other_sir, \
    sir.covered_text == other_sir.covered_text


# In[13]:

# Only a `Span` from the same target text with matching offsets is equal.
sir == tp.Span(text, 204, 207)


# ### TokenSpanArray
#
# A `TokenSpanArray` builds on a `SpanArray` with the ability to address spans by token indices into a `SpanArray` instead of character-based offsets. This makes it convenient when doing analysis at the token level. Similar to `SpanArray`, a single item in a `TokenSpanArray` is a `TokenSpan`. As an example, let's define a single `TokenSpan` using the target text from above.

# In[14]:

# A single `TokenSpan` to cover "King Arthur" - notice that the span begins at
# token index 3 and ends at token index 5 (the end index is exclusive).
tp.TokenSpan(tokens, 3, 5)


# In[15]:

# We can also make a `TokenSpanArray` with lists of begin and end offsets
# measured in tokens. Here we make spans of the names within the target text.
begin_tokens = [3, 8, 28, 32, 36, 40, 45, 52]
end_tokens = [5, 9, 32, 36, 40, 44, 47, 53]
token_spans = tp.TokenSpanArray(tokens, begin_tokens, end_tokens)
token_spans


# In[16]:

# When all the spans in a `TokenSpanArray` come from the same document, you can access
# the tokens of that document via the `document_tokens` property:
token_spans.document_tokens[:5]


# In[17]:

# Both SpanArrays and TokenSpanArrays can contain spans from multiple documents.
tokens_2 = tp.SpanArray("Second document", [0, 7], [6, 15])
token_spans_2 = tp.TokenSpanArray(tokens_2, [0], [2])
two_doc_series = pd.concat([pd.Series(token_spans[0:1]), pd.Series(token_spans_2)])
two_doc_series.array


# Note that the HTML representation now contains the annotated text of two documents. We can use the `tokens` property to view the two sets of tokens backing the spans in this array:

# In[18]:

two_doc_series.array.tokens


# ### Spanner
#
# The `spanner` module of Text Extensions for Pandas provides span-specific operations
# for Pandas DataFrames, based on the Document Spanners formalism, also known as
# spanner algebra.
#
# Spanner algebra is an extension of relational algebra with additional operations
# to cover NLP applications. See the paper ["Document Spanners: A Formal Approach to
# Information Extraction"](https://researcher.watson.ibm.com/researcher/files/us-fagin/jacm15.pdf)
# by Fagin et al. for more information.
#
# The available operations in `spanner` include:
#
# - `consolidate()` to eliminate overlap in a span column
# - `extract_dict()` and `extract_regex_tok()` to extract tokens that match a dictionary or a regular expression
# - `adjacent_join()`, `contain_join()`, and `overlap_join()` to join series of spans
# - `lemmatize()` for projection on spans
#
# Here we will show how to extract tokens matching regular expressions and then join the results into a DataFrame.

# In[19]:

# Extract tokens using a regular expression; here we find all the knights.
knights = tp.spanner.extract_regex_tok(tokens, regex.compile(r"Sir.\S+"), max_len=2)
knights


# In[20]:

# Try to find all of the knights' virtues. This is not as easy, and we also
# end up with some unrelated spans.
virtues = tp.spanner.extract_regex_tok(tokens, regex.compile(r"the.\S+"), max_len=2)
virtues


# In[21]:

# Calling `tp.spanner.adjacent_join()` will join two span columns, where a pair
# of spans match if they are adjacent in the text.
# Now, easily join the two results and match each knight to their virtue.
tp.spanner.adjacent_join(knights["match"], virtues["match"],
                         first_name="knight", second_name="virtue")


# ### TensorArray
#
# A `TensorArray` represents an array of [tensors](https://en.wikipedia.org/wiki/Tensor#As_multidimensional_arrays) where each element is a tensor and all elements have the same shape N. If there are M tensor elements in the array, then the entire `TensorArray` has shape M x N, where the outer dimension M is the number of elements. Backing the `TensorArray` is a [numpy.ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html) with shape M x N. Tensors (numpy.ndarrays) are often used as feature vectors for machine learning model training and as inference results. In Text Extensions for Pandas, they are used to store BERT embeddings from `io.bert.add_embeddings()` that can then be used to train an NLU model.
#
# `TensorArray`s can be constructed with zero copy from a single `numpy.ndarray` or from a sequence of elements of similar shape.
# Conversion of a `TensorArray` to a `numpy.ndarray` can be done with zero copy by calling `TensorArray.to_numpy()` or by using the provided NumPy array interface, e.g. `numpy.asarray(TensorArray(...))`. The `TensorArray` is a Pandas extension array with dtype `TensorDtype`; it can be wrapped in a `pandas.Series` or used as a column in a `pandas.DataFrame`, and it works with standard Pandas operations. A `NULL` or missing value in a `TensorArray` is represented as an N-dimensional `numpy.ndarray` where all items are `numpy.nan`. Standard arithmetic and comparison operations are supported and delegated to the backing `numpy.ndarray`. Taking a slice or selecting multiple items produces another `TensorArray`, while selecting a single element produces a `TensorElement` that wraps a view of the backing `numpy.ndarray`, with similar operator support.

# In[22]:

# Construct from a numpy.ndarray.
arr = tp.TensorArray(np.arange(10).reshape(5, 2))
arr, arr.dtype


# In[23]:

# Wrap in a Pandas Series.
s = pd.Series(arr)
s


# In[24]:

# Convert back to numpy using the provided array interface.
np_arr = np.asarray(s)
np_arr, np_arr.dtype


# In[25]:

# Apply operations on the Series; the result is another Series of type TensorDtype.
thresh = s > 4
thresh


# In[26]:

# Create a boolean selection mask. Use `.array` to get the Series as a
# `TensorArray`, which can be used directly in NumPy operations and
# returns another `TensorArray`.
mask = np.all(thresh.array, axis=1)
mask, type(mask)


# In[27]:

# Apply Pandas selection on the Series of TensorDtype by converting
# the mask to a numpy boolean array.
s[mask.to_numpy()]


# In[28]:

# A TensorArray can also be added to a Pandas DataFrame.
df = pd.DataFrame({"time": pd.date_range('2018-01-01', periods=5, freq='H'),
                   "features": arr})
df


# In[29]:

# TensorArray supports many of the standard DataFrame operations.
df.sort_values(by="time", ascending=False)


# ### Saving Pandas Extension Arrays to Disk
#
# Pandas supports several built-in I/O formats, but currently the only supported format for saving DataFrames with Text Extensions for Pandas arrays to disk is [Feather](https://arrow.apache.org/docs/python/feather.html) files. Text Extensions for Pandas arrays can also be converted to Apache Arrow format; see https://arrow.apache.org/docs/python/pandas.html#dataframes for more information.

# In[30]:

# Dummy function to create some features.
def hasher(span, num_features=4):
    arr = np.zeros(num_features, dtype="int8")
    arr[hash(span.covered_text) % num_features] = 1
    return arr


# In[31]:

# Create our feature vectors.
features = tp.TensorArray([hasher(span) for span in tokens])
features.to_numpy().shape


# In[32]:

# Add tokens and features to a DataFrame.
df = pd.DataFrame({"span": tokens, "features": features})
df.head()


# In[33]:

# Save the DataFrame to a Feather file.
# Feather is a lightweight, fast, binary columnar format, with basic
# compression and support built into Pandas.
os.makedirs("outputs", exist_ok=True)  # make sure the output directory exists
df.to_feather("outputs/tp_overview.feather")


# In[34]:

# Read the file back into a new DataFrame.
df_load = pd.read_feather("outputs/tp_overview.feather")
df_load.head()


# ## NLP Library Input/Output Integration
#
# Text Extensions for Pandas also provides integration with other NLP libraries and datasets. It takes care of processing the inputs and outputs, using Pandas DataFrames as the standard data structure and automatically producing the extension arrays described above where applicable. Below is an overview of what each module provides, along with pointers to notebooks with example usage.

# ### Watson
#
# The `io.watson` sub-package provides functions to process and help analyze responses from the IBM Watson cloud service APIs.
#
# In the module `io.watson.nlu` you can use Watson Natural Language Understanding to analyze text and then process the response into Pandas DataFrames containing `SpanArray`s for tokens, sentences, and relations. See [getting started on Watson NLU](https://cloud.ibm.com/docs/natural-language-understanding?topic=natural-language-understanding-getting-started) for setting up the Watson NLU cloud service, and the notebook [Analyze_Text](./Analyze_Text.ipynb) for in-depth examples of using the `io.watson.nlu` module.
#
# In the module `io.watson.table` you can use Watson Discovery to extract and analyze tables within documents and web pages, and then process the response into Pandas DataFrames that make it easy to reconstruct and work with the extracted tables. See [Watson Discovery Installation](https://cloud.ibm.com/docs/discovery-data?topic=discovery-data-install) and [IBM Cloud Pak for Data](https://www.ibm.com/products/cloud-pak-for-data) for getting started with Watson Discovery, and the notebook [Understand_Tables](./Understand_Tables.ipynb) for an in-depth example of using the `io.watson.table` module.
#
# ### SpaCy
#
# The `io.spacy` module contains functions to integrate with the popular NLP library [SpaCy](https://spacy.io/). This allows you to run a [SpaCy tokenizer](https://spacy.io/usage/spacy-101#annotations-token) on text and return the tokens as a `SpanArray` in a Pandas DataFrame with `io.spacy.make_tokens()`, or with additional token features with `io.spacy.make_tokens_and_features()`. See the notebook [Integrate_NLP_Libraries](./Integrate_NLP_Libraries.ipynb) for more examples with the `io.spacy` module.
#
# ### BERT
#
# The BERT model is originally from the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. The model is pre-trained with masked language modeling and next sentence prediction objectives, which make it effective for masked token prediction and NLU.
#
# Text Extensions for Pandas integrates with the [Huggingface Transformers](https://huggingface.co/transformers/index.html) library to process the result of BERT tokenization into a Pandas DataFrame with tokens as a `SpanArray` column, and to compute BERT embeddings that can also be added to a DataFrame as a `TensorArray`. The embeddings can be used for model training in your NLP application. See the notebook [Model_Training_with_BERT](./Model_Training_with_BERT.ipynb) for an example of tokenizing text with BERT and computing embeddings for model training/scoring.
#
# ### CoNLL
#
# [CoNLL](https://www.conll.org/), the SIGNLL Conference on Computational Natural Language Learning, is an annual academic conference for natural language processing researchers. Each year's conference features a competition involving a challenging NLP task. The task for the 2003 competition involved identifying mentions of [named entities](https://en.wikipedia.org/wiki/Named-entity_recognition) in English and German news articles from the late 1990s. The corpus for this 2003 competition is one of the most widely used benchmarks for the performance of named entity recognition models.
#
# Text Extensions for Pandas contains the module `io.conll`, which can help you work with and analyze the CoNLL-2003 corpus.
# The provided functions can help convert between the [IOB2 format](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) used in the corpus and a `SpanArray`-based representation with entity types, for easier analysis. See the notebooks [Analyze_Model_Outputs](./Analyze_Model_Outputs.ipynb) for an in-depth analysis of the corpus and the 2003 competition results, and [Model_Training_with_BERT](./Model_Training_with_BERT.ipynb) for using the corpus to train a named entity recognition (NER) model.
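
# To make the IOB2-to-span conversion concrete, below is a small, hedged sketch on a
# hand-built token DataFrame. It assumes a function along the lines of
# `tp.io.conll.iob_to_spans()` that reads a span column together with `ent_iob` and
# `ent_type` columns; the exact function name, column names, and signature should be
# verified against the API reference.

# In[ ]:

# Hand-built tokens and IOB2 tags for a toy document (hypothetical example data).
doc_text = "King Arthur rode to Camelot"
doc_tokens = tp.SpanArray(doc_text, [0, 5, 12, 17, 20], [4, 11, 16, 19, 27])
iob_df = pd.DataFrame({
    "span": doc_tokens,
    "ent_iob": ["B", "I", "O", "O", "B"],          # IOB2 tag for each token
    "ent_type": ["PER", "PER", None, None, "LOC"]  # entity type for B/I tokens
})
# Convert the IOB2-tagged tokens into one row per entity span
# (function and column names assumed; see the note above).
tp.io.conll.iob_to_spans(iob_df)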