#!/usr/bin/env python # coding: utf-8 # # Analyze_Text.ipynb: Analyze Text with Pandas and Watson Natural Language Understanding # # # Introduction # # This notebook shows how the open source library [Text Extensions for Pandas](https://github.com/CODAIT/text-extensions-for-pandas) lets you use [Pandas](https://pandas.pydata.org/) DataFrames and the [Watson Natural Language Understanding](https://www.ibm.com/cloud/watson-natural-language-understanding) service to analyze natural language text. # # We start out with an excerpt from the [plot synopsis from the Wikipedia page # for *Monty Python and the Holy Grail*](https://en.wikipedia.org/wiki/Monty_Python_and_the_Holy_Grail#Plot). # We pass this example document to the Watson Natural Language # Understanding (NLU) service. Then we use Text Extensions for Pandas to convert the output of the # Watson NLU service to Pandas DataFrames. Next, we perform an example analysis task both with # and without Pandas to show how Pandas makes analyzing NLP information easier. Finally, we # walk through all the different DataFrames that Text Extensions for Pandas can extract from # the output of Watson Natural Language Understanding. # # Environment Setup # # This notebook requires a Python 3.7 or later environment with the following packages: # * The dependencies listed in the ["requirements.txt" file for Text Extensions for Pandas](https://github.com/CODAIT/text-extensions-for-pandas/blob/master/requirements.txt) # * The "[ibm-watson](https://pypi.org/project/ibm-watson/)" package, available via `pip install ibm-watson` # * `text_extensions_for_pandas` # # You can satisfy the dependency on `text_extensions_for_pandas` in either of two ways: # # * Run `pip install text_extensions_for_pandas` before running this notebook. This command adds the library to your Python environment. # * Run this notebook out of your local copy of the Text Extensions for Pandas project's [source tree](https://github.com/CODAIT/text-extensions-for-pandas). In this case, the notebook will use the version of Text Extensions for Pandas in your local source tree **if the package is not installed in your Python environment**. # In[1]: # Core Python libraries import json import os import sys import pandas as pd from typing import * # IBM Watson libraries import ibm_watson import ibm_watson.natural_language_understanding_v1 as nlu import ibm_cloud_sdk_core # And of course we need the text_extensions_for_pandas library itself. try: import text_extensions_for_pandas as tp except ModuleNotFoundError as e: # If we're running from within the project source tree and the parent Python # environment doesn't have the text_extensions_for_pandas package, use the # version in the local source tree. if not os.getcwd().endswith("notebooks"): raise e if ".." not in sys.path: sys.path.insert(0, "..") import text_extensions_for_pandas as tp # # Set up the Watson Natural Language Understanding Service # # In this part of the notebook, we will use the Watson Natural Language Understanding (NLU) service to extract key features from our example document. # # You can create an instance of Watson NLU on the IBM Cloud for free by navigating to [this page](https://www.ibm.com/cloud/watson-natural-language-understanding) and clicking on the button marked "Get started free". 
You can also install your own instance of Watson NLU on [OpenShift](https://www.openshift.com/) by using [IBM Watson Natural Language Understanding for IBM Cloud Pak for Data]( # https://catalog.redhat.com/software/operators/detail/5e9873e13f398525a0ceafe5). # # You'll need two pieces of information to access your instance of Watson NLU: An **API key** and a **service URL**. If you're using Watson NLU on the IBM Cloud, you can find your API key and service URL in the IBM Cloud web UI. Navigate to the [resource list](https://cloud.ibm.com/resources) and click on your instance of Natural Language Understanding to open the management UI for your service. Then click on the "Manage" tab to show a page with your API key and service URL. # # The cell that follows assumes that you are using the environment variables `IBM_API_KEY` and `IBM_SERVICE_URL` to store your credentials. If you're running this notebook in Jupyter on your laptop, you can set these environment variables while starting up `jupyter notebook` or `jupyter lab`. For example: # ``` console # IBM_API_KEY='' \ # IBM_SERVICE_URL='' \ # jupyter lab # ``` # # Alternately, you can uncomment the first two lines of code below to set the `IBM_API_KEY` and `IBM_SERVICE_URL` environment variables directly. # **Be careful not to store your API key in any publicly-accessible location!** # In[2]: # If you need to embed your credentials inline, uncomment the following two lines and # paste your credentials in the indicated locations. # os.environ["IBM_API_KEY"] = "" # os.environ["IBM_SERVICE_URL"] = "" # Retrieve the API key for your Watson NLU service instance if "IBM_API_KEY" not in os.environ: raise ValueError("Expected Watson NLU api key in the environment variable 'IBM_API_KEY'") api_key = os.environ.get("IBM_API_KEY") # Retrieve the service URL for your Watson NLU service instance if "IBM_SERVICE_URL" not in os.environ: raise ValueError("Expected Watson NLU service URL in the environment variable 'IBM_SERVICE_URL'") service_url = os.environ.get("IBM_SERVICE_URL") # # Connect to the Watson Natural Language Understanding Python API # # This notebook uses the IBM Watson Python SDK to perform authentication on the IBM Cloud via the # `IAMAuthenticator` class. See [the IBM Watson Python SDK documentation](https://github.com/watson-developer-cloud/python-sdk#iam) for more information. # # We start by using the API key and service URL from the previous cell to create an instance of the # Python API for Watson NLU. # In[3]: natural_language_understanding = ibm_watson.NaturalLanguageUnderstandingV1( version="2019-07-12", authenticator=ibm_cloud_sdk_core.authenticators.IAMAuthenticator(api_key) ) natural_language_understanding.set_service_url(service_url) natural_language_understanding # # Pass a Document through the Watson NLU Service # # Once you've opened a connection to the Watson NLU service, you can pass documents through # the service by invoking the [`analyze()` method](https://cloud.ibm.com/apidocs/natural-language-understanding?code=python#analyze). # # The [example document](https://raw.githubusercontent.com/CODAIT/text-extensions-for-pandas/master/resources/holy_grail_short.txt) that we use here is an excerpt from # the plot summary for *Monty Python and the Holy Grail*, drawn from the [Wikipedia entry](https://en.wikipedia.org/wiki/Monty_Python_and_the_Holy_Grail) for that movie. 
#
# Let's show what the raw text looks like:

# In[4]:


from IPython.display import display, HTML

doc_file = "../resources/holy_grail_short.txt"
with open(doc_file, "r") as f:
    doc_text = f.read()

display(HTML(f"<b>Document Text:</b><blockquote>{doc_text}</blockquote>"))
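# If you're not running this notebook from inside the Text Extensions for Pandas source tree, the
# relative path in the previous cell won't exist. The optional cell below is a minimal sketch of an
# alternative: it loads the same excerpt directly from the raw-file URL in the project's GitHub
# repository (the same URL linked earlier in this notebook).

# In[ ]:


import urllib.request

# Raw-file location of the example document in the project's GitHub repository.
doc_url = ("https://raw.githubusercontent.com/CODAIT/text-extensions-for-pandas/"
           "master/resources/holy_grail_short.txt")
with urllib.request.urlopen(doc_url) as remote_file:
    doc_text = remote_file.read().decode("utf-8")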
")) # In the code below, we instruct Watson Natural Language Understanding to perform five different kinds of analysis on the example document: # * entities (with sentiment) # * keywords (with sentiment and emotion) # * relations # * semantic_roles # * syntax (with sentences, tokens, and part of speech) # # See [the Watson NLU documentation](https://cloud.ibm.com/apidocs/natural-language-understanding?code=python#text-analytics-features) for a full description of the types of analysis that NLU can perform. # In[5]: # Make the request response = natural_language_understanding.analyze( text=doc_text, # TODO: Use this URL once we've pushed the shortened document to Github #url="https://raw.githubusercontent.com/CODAIT/text-extensions-for-pandas/master/resources/holy_grail_short.txt", return_analyzed_text=True, features=nlu.Features( entities=nlu.EntitiesOptions(sentiment=True, mentions=True), keywords=nlu.KeywordsOptions(sentiment=True, emotion=True), relations=nlu.RelationsOptions(), semantic_roles=nlu.SemanticRolesOptions(), syntax=nlu.SyntaxOptions(sentences=True, tokens=nlu.SyntaxOptionsTokens(lemma=True, part_of_speech=True)) )).get_result() # The response from the `analyze()` method is a Python dictionary. The dictionary contains an entry # for each pass of analysis requested, plus some additional entries with metadata about the API request # itself. Here's a list of the keys in `response`: # In[6]: response.keys() # # Perform an Example Task # # Let's use the information that Watson Natural Language Understanding has extracted from our example document to perform an example task: *Find all the pronouns in each sentence, broken down by sentence.* # # This task could serve as first step to a number of more complex tasks, such as # resolving anaphora (for example, associating "King Arthur" with "his" in the phrase "King Arthur and his squire, Patsy") or analyzing the relationship between sentiment and the gender of pronouns. # # We'll start by doing this task using straight Python code that operates directly over the output of Watson NLU's `analyze()` method. Then we'll redo the task using Pandas DataFrames and Text Extensions for Pandas. This exercise will show how Pandas DataFrames can represent the intermediate data structures of an NLP application in a way that is both easier to understand and easier to manipulate with less code. # # Let's begin. # ## Perform the Task Without Using Pandas # # All the information that we need to perform our task is in the "syntax" section of the response # we captured above from Watson NLU's `analyze()` method. Syntax analysis captures a large amount # of information, so the "syntax" section of the response is very verbose. # # For reference, here's the text of our example document again: # # # In[7]: display(HTML(f"Document Text:
{doc_text}
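# The syntax section of the response is a plain Python dictionary. Before dumping it in full, the
# optional cell below (a small sketch added for orientation) peeks at its top-level keys and at its
# first token and sentence records, which are the pieces the code further down relies on.

# In[ ]:


# Peek at the structure of the "syntax" section before printing the whole thing.
print(list(response["syntax"].keys()))
print(response["syntax"]["tokens"][0])
print(response["syntax"]["sentences"][0])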
")) # And here's the output of Watson NLU's syntax analysis, converted to a string: # In[8]: response["syntax"] # Buried in the above data structure is all the information we need to perform our example task: # * The location of every token in the document. # * The part of speech of every token in the document. # * The location of every sentence in the document. # # The Python code in the next cell uses this information to construct a list of pronouns # in each sentence in the document. # In[9]: import collections # Create a data structure to hold a mapping from sentence identifier # to a list of pronouns. This step requires defining sentence ids. def sentence_id(sentence_record: Dict[str, Any]): return tuple(sentence_record["location"]) pronouns_by_sentence_id = collections.defaultdict(list) # Pass 1: Use nested for loops to identify pronouns and match them with # their containing sentences. # Running time: O(num_tokens * num_sentences), i.e. O(document_size^2) for t in response["syntax"]["tokens"]: pos_str = t["part_of_speech"] # Decode numeric POS enum if pos_str == "PRON": found_sentence = False for s in response["syntax"]["sentences"]: if (t["location"][0] >= s["location"][0] and t["location"][1] <= s["location"][1]): found_sentence = True pronouns_by_sentence_id[sentence_id(s)].append(t) if not found_sentence: raise ValueError(f"Token {t} is not in any sentence") pass # Make JupyterLab syntax highlighting happy # Pass 2: Translate sentence identifiers to full sentence metadata. sentence_id_to_sentence = {sentence_id(s): s for s in response["syntax"]["sentences"]} result = [ { "sentence": sentence_id_to_sentence[key], "pronouns": pronouns } for key, pronouns in pronouns_by_sentence_id.items() ] result # The code above is quite complex given the simplicity of the task. You would need to stare at the previous cell for a few minutes to convince yourself that the algorithm is correct. This implementation also has scalability issues: The worst-case running time of the nested for loops section is proportional to the square of the document length. # # We can do better. # ## Repeat the Example Task Using Pandas # # Let's revisit the example task we just performed in the previous cell. Again, the task is: *Find all the pronouns in each sentence, broken down by sentence.* This time around, let's perform this task using Pandas. # # Text Extensions for Pandas includes a function `parse_response()` that turns the output of Watson NLU's `analyze()` function into a dictionary of Pandas DataFrames. Let's run our response object through that conversion. # In[10]: dfs = tp.io.watson.nlu.parse_response(response) dfs.keys() # The output of each analysis pass that Watson NLU performed is now a DataFrame. # Let's look at the DataFrame for the "syntax" pass: # In[11]: syntax_df = dfs["syntax"] syntax_df # The DataFrame has one row for every token in the document. Each row has information on # the span of the token, its part of speech, its lemmatized form, and the span of the # containing sentence. # # Let's use this DataFrame to perform our example task a second time. # In[12]: pronouns_by_sentence = syntax_df[syntax_df["part_of_speech"] == "PRON"][["sentence", "span"]] pronouns_by_sentence # That's it. With the DataFrame version of this data, we can perform our example task with **one line of code**. # # Specifically, we use a Pandas selection condition to filter out the tokens that aren't pronouns, and then we # project down to the columns containing sentence and token spans. 
The result is another DataFrame that # we can display directly in our Jupyter notebook. # # How it Works # # # Let's take a moment to drill into the internals of the DataFrames we just used. # For reference, here are the first three rows of the syntax analysis DataFrame: # In[13]: syntax_df.head(3) # And here is that DataFrame's data type information: # In[14]: syntax_df.dtypes # Two of the columns in this DataFrame — "span" and "sentence" — contain # extension types from the Text Extensions for Pandas library. Let's look first at the "span" # column. # # The "span" column is stored internally using the class `SpanArray` from # Text Extensions for Pandas. # `SpanArray` is a subclass of # [`ExtensionArray`]( # https://pandas.pydata.org/docs/reference/api/pandas.api.extensions.ExtensionArray.html), # the base class for custom 1-D array types in Pandas. # # You can use the property [`pandas.Series.array`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.array.html) to access the `ExtensionArray` behind any Pandas extension type: # In[15]: print(syntax_df["span"].array) # Internally, a `SpanArray` is stored as Numpy arrays of begin and end offsets, plus a Python string # containing the target text. You can access this internal data as properties if your application needs that # information: # In[16]: syntax_df["span"].array.begin[:10], syntax_df["span"].array.end[:10] # You can also convert an individual element of the array into a Python object of type `Span`: # In[17]: span_obj = syntax_df["span"].array[0] print(f"\"{span_obj}\" is an object of type {type(span_obj)}") # Or you can convert the entire array (or a slice of it) into Python objects, one object per span: # In[18]: syntax_df["span"].iloc[:10].to_numpy() # A `SpanArray` can also render itself using [Jupyter Notebook callbacks](https://ipython.readthedocs.io/en/stable/config/integrating.html). To # see the HTML representation of the `SpanArray`, pass the array object # to Jupyter's [`display()`](https://ipython.readthedocs.io/en/stable/api/generated/IPython.display.html#IPython.display.display) # function; or make that object be the last line of the cell, as in the following example: # In[19]: # Show the first 10 tokens in context syntax_df["span"].iloc[:10].array # Let's take another look at our DataFrame of syntax information: # In[20]: syntax_df.head(3) # The "sentence" column is backed by an object of type `TokenSpanArray`. # `TokenSpanArray`, another extension type from Text Extensions for Pandas, # is a version of `SpanArray` for representing a set of spans that are # constrained to begin and end on token boundaries. In addition to all the # functionality of a `SpanArray`, a `TokenSpanArray` encodes additional # information about the relationships between its spans and a tokenization # of the document. # # Here are the distinct elements of the "sentence" column rendered as HTML: # In[21]: syntax_df["sentence"].unique() # As the table in the previous cell's output shows, each span in the `TokenSpanArray` has begin and end offsets in terms # of both characters and tokens. Internally, the `TokenSpanArray` is stored as follows: # * A Numpy array of begin offsets, measured in tokens # * A Numpy array of end offsets in tokens # * A reference to a `SpanArray` of spans representing the tokens # # The `TokenSpanArray` object computes the character offsets and covered text of its spans on demand. 
#
# Applications can access the internals of a `TokenSpanArray` via the properties `begin_token`, `end_token`, and `document_tokens`:

# In[22]:


token_span_array = syntax_df["sentence"].unique()
print(f"""
Offset information (stored in the TokenSpanArray):
`begin_token` property: {token_span_array.begin_token}
`end_token` property: {token_span_array.end_token}

Token information (`document_tokens` property, shared among multiple TokenSpanArrays):
{token_span_array.document_tokens}
""")


# The extension types in Text Extensions for Pandas support the full set of Pandas array operations. For example, we can build up a DataFrame of the spans of all sentences in the document by applying `pandas.DataFrame.drop_duplicates()` to the `sentence` column:

# In[23]:


syntax_df[["sentence"]].drop_duplicates()


# # A More Complex Example
#
# Now that we've had an introduction to the Text Extensions for Pandas span types, let's take another
# look at the DataFrame that our "find pronouns by sentence" code produced:

# In[24]:


pronouns_by_sentence


# This DataFrame contains two columns backed by Text Extensions for Pandas span types:

# In[25]:


pronouns_by_sentence.dtypes


# That means that we can use the full power of Pandas' high-level operations on this DataFrame.
# Let's use the output of our earlier task to build up a more complex task:
# *Highlight all pronouns in sentences containing the word "Arthur".*

# In[26]:


mask = pronouns_by_sentence["sentence"].map(lambda s: s.covered_text).str.contains("Arthur")
pronouns_by_sentence["span"][mask].values


# Here's another variation: *Pair each instance of the word "Arthur" with the pronouns that occur in the same sentence.*

# In[27]:


(
    syntax_df[syntax_df["span"].array.covered_text == "Arthur"]  # Find instances of "Arthur"
    .merge(pronouns_by_sentence, on="sentence")  # Match with pronouns in the same sentence
    .rename(columns={"span_x": "arthur_span", "span_y": "pronoun_span"})
    [["arthur_span", "pronoun_span", "sentence"]]  # Reorder columns
)


# # Other Outputs of Watson NLU as DataFrames
#
# The examples so far have used the DataFrame representation of Watson Natural Language Understanding's syntax analysis.
# In addition to syntax analysis, Watson NLU can perform several other types of analysis. Let's take a look at the
# DataFrames that Text Extensions for Pandas can produce from the output of Watson NLU.
#
# We'll start by revisiting the results of our earlier code that ran
# ```python
# dfs = tp.io.watson.nlu.parse_response(response)
# ```
# over the `response` object that Watson NLU's Python API returned. `dfs` is a dictionary of DataFrames.

# In[28]:


dfs.keys()


# The "syntax" element of `dfs` contains the syntax analysis DataFrame that we showed earlier.
# Let's take a look at the other elements.
#
# The "entities" element of `dfs` contains the named entities that Watson Natural Language
# Understanding found in the document.

# In[29]:


dfs["entities"].head()


# The "entity_mentions" element of `dfs` contains the locations of individual mentions of
# entities from the "entities" DataFrame.

# In[30]:


dfs["entity_mentions"].head()


# Note that the "entity_mentions" DataFrame may contain multiple mentions of the same
# name:

# In[31]:


arthur_mentions = dfs["entity_mentions"][dfs["entity_mentions"]["text"] == "Arthur"]
arthur_mentions


# The "type" and "text" columns of the "entity_mentions" DataFrame refer back to the
# "entities" DataFrame columns of the same names.
# You can combine the global and local information about entities into a single DataFrame # using Pandas' `DataFrame.merge()` method: # In[32]: arthur_mentions.merge(dfs["entities"], on=["type", "text"], suffixes=["_mention", "_entity"]) # Watson Natural Language Understanding has several other models besides the `entities` and `syntax` models. Text Extensions for Pandas can also convert these other outputs. Here's the output of the `keywords` model on our example document: # In[33]: dfs["keywords"].head() # Take a look at the notebook [Sentiment_Analysis.ipynb](./Sentiment_Analysis.ipynb) for more information on the `keywords` model and its sentiment-related outputs. # # Watson Natural Language Understanding also has a `relations` model that finds relationships between pairs of nouns: # In[34]: dfs["relations"].head() # The `semantic_roles` model identifies places where the document describes events and extracts a subject-verb-object triple for each such event: # In[35]: dfs["semantic_roles"].head() # Take a look at our [market intelligence tutorial](../tutorials/market/Market_Intelligence_Part1.ipynb) to learn more about the `semantic_roles` model.
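# As one last example of treating NLU output as ordinary DataFrames, the sketch below counts how many
# times each named entity is mentioned in the document, using the "type" and "text" columns of the
# "entity_mentions" DataFrame that we looked at earlier.

# In[ ]:


# Count entity mentions per (type, text) pair and show the most frequent ones.
(dfs["entity_mentions"]
   .groupby(["type", "text"])
   .size()
   .rename("num_mentions")
   .sort_values(ascending=False)
   .head(10))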