Using LX-Suite to annotate a text from the BDCamões corpus

This is an example notebook that illustrates how you can use the LX-Suite web service to annotate a sample text from the BDCamões corpus (the full corpus is available from the PORTULAN CLARIN repository).

Before you run this example, replace access_key_goes_here below with your webservice access key:

In [1]:
LXSUITE_WS_API_KEY = 'access_key_goes_here'
LXSUITE_WS_API_URL = 'https://portulanclarin.net/workbench/lx-suite/api/'

Importing required Python modules

The next cell installs the requests and matplotlib packages, if they are not already installed, and makes them available for use in this notebook.

In [2]:
try:
    import requests
except ImportError:
    !pip3 install requests
    import requests
try:
    import matplotlib.pyplot as plt
except ImportError:
    !pip3 install matplotlib
    import matplotlib.pyplot as plt
import collections

Wrapping the complexities of the JSON-RPC API in a simple, easy-to-use function

The WSException class defined below will be used later to identify errors reported by the webservice.

In [3]:
class WSException(Exception):
    'Webservice Exception'
    def __init__(self, errordata):
        "errordata is a dict returned by the webservice with details about the error"
        super().__init__()
        assert isinstance(errordata, dict)
        self.message = errordata["message"]
        # see https://json-rpc.readthedocs.io/en/latest/exceptions.html for more info
        # about JSON-RPC error codes
        if -32099 <= errordata["code"] <= -32000:  # Server Error
            if errordata["data"]["type"] == "WebServiceException":
                self.message += f": {errordata['data']['message']}"
            else:
                self.message += f": {errordata['data']!r}"
    def __str__(self):
        return self.message

The next function invokes the LX-Suite webservice through its public JSON-RPC API.

In [4]:
def annotate(text, format):
    '''
    Arguments
        text: a string (Portuguese text, at most 4000 characters) with the input
             to be processed
        format: either 'CINTIL', 'CONLL' or 'JSON'

    Returns a string with the output according to specification in
       https://portulanclarin.net/workbench/lx-suite/
    
    Raises a WSException if an error occurs.
    '''

    request_data = {
        'method': 'annotate',
        'jsonrpc': '2.0',
        'id': 0,
        'params': {
            'text': text,
            'format': format,
            'key': LXSUITE_WS_API_KEY,
        },
    }
    request = requests.post(LXSUITE_WS_API_URL, json=request_data)
    response_data = request.json()
    if "error" in response_data:
        raise WSException(response_data["error"])
    else:
        return response_data["result"]

Let us test the function we just defined:

In [5]:
text = '''Esta frase serve para testar o funcionamento da suite. Esta outra
frase faz o mesmo.'''
# CONLL is a popular tab-separated format for part-of-speech annotation
result = annotate(text, format="CONLL")
print(result)
#id	form	lemma	cpos	pos	feat	head	deprel	phead	pdeprel
1	Esta	-	DEM	DEM	fs	-	-	-	-
2	frase	FRASE	CN	CN	fs	-	-	-	-
3	serve	SERVIR	V	V	pi-3s	-	-	-	-
4	para	-	PREP	PREP	-	-	-	-	-
5	testar	TESTAR	V	V	INF-nInf	-	-	-	-
6	o	-	DA	DA	ms	-	-	-	-
7	funcionamento	FUNCIONAMENTO	CN	CN	ms	-	-	-	-
8	de_	-	PREP	PREP	-	-	-	-	-
9	a	-	DA	DA	fs	-	-	-	-
10	suite	SUITE	CN	CN	fs	-	-	-	-
11	.	-	PNT	PNT	-	-	-	-	-


#id	form	lemma	cpos	pos	feat	head	deprel	phead	pdeprel
1	Esta	-	DEM	DEM	fs	-	-	-	-
2	outra	OUTRO	ADJ	ADJ	fs	-	-	-	-
3	frase	FRASE	CN	CN	fs	-	-	-	-
4	faz	FAZER	V	V	pi-3s	-	-	-	-
5	o	-	LDEM1	LDEM1	-	-	-	-	-
6	mesmo	-	LDEM2	LDEM2	-	-	-	-	-
7	.	-	PNT	PNT	-	-	-	-	-


The JSON output format

The JSON format (which we obtain by passing format="JSON" into the annotate function) is more convenient when we need to further process the annotations, because each abstraction is mapped directly onto a native Python object (lists, dicts, strings, etc.) as follows:

  • The returned object is a list, where each element corresponds to a paragraph of the given text;
  • In turn, each paragraph is a list where each element represents a sentence;
  • Each sentence is a list where each element represents a token;
  • Each token is a dict where each key-value pair is an attribute of the token.

In [6]:
annotated_text = annotate(text, format="JSON")
for pnum, paragraph in enumerate(annotated_text, start=1): # enumerate paragraphs in text, starting at 1
    print(f"paragraph {pnum}:")
    for snum, sentence in enumerate(paragraph, start=1): # enumerate sentences in paragraph, starting at 1
        print(f"  sentence {snum}:")
        for tnum, token in enumerate(sentence, start=1): # enumerate tokens in sentence, starting at 1
            print(f"    token {tnum}: {token!r}")  # print a token representation
paragraph 1:
  sentence 1:
    token 1: {'form': 'Esta', 'space': 'LR', 'pos': 'DEM', 'infl': 'fs'}
    token 2: {'form': 'frase', 'space': 'LR', 'pos': 'CN', 'lemma': 'FRASE', 'infl': 'fs'}
    token 3: {'form': 'serve', 'space': 'LR', 'pos': 'V', 'lemma': 'SERVIR', 'infl': 'pi-3s'}
    token 4: {'form': 'para', 'space': 'LR', 'pos': 'PREP'}
    token 5: {'form': 'testar', 'space': 'LR', 'pos': 'V', 'lemma': 'TESTAR', 'infl': 'INF-nInf'}
    token 6: {'form': 'o', 'space': 'LR', 'pos': 'DA', 'infl': 'ms'}
    token 7: {'form': 'funcionamento', 'space': 'LR', 'pos': 'CN', 'lemma': 'FUNCIONAMENTO', 'infl': 'ms'}
    token 8: {'form': 'de_', 'space': 'L', 'raw': 'da', 'pos': 'PREP'}
    token 9: {'form': 'a', 'space': 'R', 'pos': 'DA', 'infl': 'fs'}
    token 10: {'form': 'suite', 'space': 'L', 'pos': 'CN', 'lemma': 'SUITE', 'infl': 'fs'}
    token 11: {'form': '.', 'space': 'R', 'pos': 'PNT'}
  sentence 2:
    token 1: {'form': 'Esta', 'space': 'LR', 'pos': 'DEM', 'infl': 'fs'}
    token 2: {'form': 'outra', 'space': 'LR', 'pos': 'ADJ', 'lemma': 'OUTRO', 'infl': 'fs'}
    token 3: {'form': 'frase', 'space': 'LR', 'pos': 'CN', 'lemma': 'FRASE', 'infl': 'fs'}
    token 4: {'form': 'faz', 'space': 'LR', 'pos': 'V', 'lemma': 'FAZER', 'infl': 'pi-3s'}
    token 5: {'form': 'o', 'space': 'LR', 'pos': 'LDEM1'}
    token 6: {'form': 'mesmo', 'space': 'L', 'pos': 'LDEM2'}
    token 7: {'form': '.', 'space': 'R', 'pos': 'PNT'}
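
With this structure, common queries reduce to simple comprehensions. As a minimal sketch, the following collects the lemmas of all verbs in a sentence (the token dicts are copied from the first sentence of the JSON output above):

```python
# a few tokens copied from the first sentence of the JSON output above
sentence = [
    {'form': 'Esta', 'space': 'LR', 'pos': 'DEM', 'infl': 'fs'},
    {'form': 'frase', 'space': 'LR', 'pos': 'CN', 'lemma': 'FRASE', 'infl': 'fs'},
    {'form': 'serve', 'space': 'LR', 'pos': 'V', 'lemma': 'SERVIR', 'infl': 'pi-3s'},
    {'form': 'testar', 'space': 'LR', 'pos': 'V', 'lemma': 'TESTAR', 'infl': 'INF-nInf'},
]
# keep the lemma of every token tagged as a verb (pos == 'V')
verb_lemmas = [token["lemma"] for token in sentence if token["pos"] == "V"]
print(verb_lemmas)  # → ['SERVIR', 'TESTAR']
```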

Downloading and preparing our working text

In the next code cell, we will download a copy of the book "Viagens na minha terra" and prepare it to be used as our working text.

In [7]:
# A plain text version of this book is available from our GitHub repository:
sample_text_url = "https://github.com/portulanclarin/jupyter-notebooks/raw/main/sample-data/viagensnaminhaterra.txt"

req = requests.get(sample_text_url)
sample_text_lines = req.text.splitlines()

num_lines = len(sample_text_lines)
print(f"The downloaded text contains {num_lines} lines")

# discard whitespace at beginning and end of each line:
sample_text_lines = [line.strip() for line in sample_text_lines]

# discard empty lines
sample_text_lines = [line for line in sample_text_lines if line]

# how many lines do we have left?
num_lines = len(sample_text_lines)
print(f"After discarding empty lines we are left with {num_lines} non-empty lines")
The downloaded text contains 2509 lines
After discarding empty lines we are left with 2205 non-empty lines

Annotating with the LX-Suite web service

There is a limit on the number of web service requests per hour that can be made with any given key. Thus, we should send as much text as possible in each request while conforming to the 4000-character limit per request.

To this end, the following function slices our text into chunks of at most 4000 characters:

In [8]:
def slice_into_chunks(lines, max_chunk_size=4000):
    chunk, chunk_size = [], 0
    for lnum, line in enumerate(lines, start=1):
        if (chunk_size + len(line)) <= max_chunk_size:
            chunk.append(line)
            chunk_size += len(line) + 1
            # the + 1 above accounts for the newline character terminating each line
        else:
            if chunk:
                yield "\n".join(chunk)
            if len(line) > max_chunk_size:
                print(f"line {lnum} is longer than {max_chunk_size} characters; truncating")
                line = line[:max_chunk_size]
            chunk, chunk_size = [line], len(line) + 1
    if chunk:
        yield "\n".join(chunk)

Next, we will apply slice_into_chunks to the sample text to get the chunks to be annotated.

In [9]:
chunks = list(slice_into_chunks(sample_text_lines))
annotated_text = [] # annotated paragraphs will be stored here
chunks_processed = 0  # this variable keeps track of how many chunks have been processed
print(f"There are {len(chunks)} chunks to be annotated")
There are 105 chunks to be annotated

Next, we will invoke annotate on each chunk. If we get an exception while annotating a chunk:

  • check the exception message to determine the cause;
  • if the maximum number of requests per hour has been exceeded, wait some time before retrying;
  • if a temporary error occurred in the webservice, try again later.

In any case, as long as the notebook is not shut down or restarted, the text annotated thus far is not lost, and re-running the following cell will resume from the point where the exception occurred.
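
The retry logic described above can also be automated. As a sketch, a generic helper along the following lines could wrap the annotate calls; with_retries is a hypothetical name introduced here for illustration, not part of the webservice API:

```python
import time

def with_retries(fn, *args, attempts=3, delay=1.0, **kwargs):
    """Call fn, retrying up to `attempts` times with a fixed delay between tries."""
    for attempt in range(1, attempts + 1):
        try:
            return fn(*args, **kwargs)
        except Exception:
            if attempt == attempts:
                raise  # give up after the last attempt
            time.sleep(delay)

# demo with a function that fails twice before succeeding
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("temporary failure")
    return "ok"

print(with_retries(flaky, attempts=3, delay=0.01))  # → ok
```

In the annotation loop above, `annotate(chunk, format="JSON")` could then be replaced by `with_retries(annotate, chunk, format="JSON")`, with a delay chosen to respect the hourly request limit.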

In [10]:
for cnum, chunk in enumerate(chunks[chunks_processed:], start=chunks_processed+1):
    try:
        annotated_text.extend(annotate(chunk, format="JSON"))
        chunks_processed = cnum
        # print one dot for each annotated chunk to get some progress feedback
        print(".", end="", flush=True)
    except Exception as exc:
        chunk_preview = (chunk[:100] + "[...]") if len(chunk) > 100 else chunk
        print(
            f"\nError: annotation of chunk {cnum} failed ({exc}); chunk contents:\n\n{chunk_preview}\n\n"
        )
        break
.........................................................................................................

Let us create a pie chart of the most common part-of-speech tags

In [11]:
%matplotlib inline

tag_frequencies = collections.Counter(
        token["pos"]
        for paragraph in annotated_text
        for sentence in paragraph
        for token in sentence
).most_common()

tags = [tag for tag, _ in tag_frequencies[:9]]
freqs = [freq for _, freq in tag_frequencies[:9]]

# group all remaining tags under "other"
tags.append("other")
freqs.append(sum(freq for _, freq in tag_frequencies[9:]))

plt.rcParams['figure.figsize'] = [10, 10]
fig1, ax1 = plt.subplots()
ax1.pie(freqs, labels=tags, autopct='%1.1f%%', startangle=90)
ax1.axis('equal')  # equal aspect ratio ensures that pie is drawn as a circle.

plt.show()
# To learn more about matplotlib visit https://matplotlib.org/

Getting the status of a webservice access key

In [12]:
def get_key_status():
    '''Returns a string with the detailed status of the webservice access key'''
    
    request_data = {
        'method': 'key_status',
        'jsonrpc': '2.0',
        'id': 0,
        'params': {
            'key': LXSUITE_WS_API_KEY,
        },
    }
    request = requests.post(LXSUITE_WS_API_URL, json=request_data)
    response_data = request.json()
    if "error" in response_data:
        raise WSException(response_data["error"])
    else:
        return response_data["result"]
In [13]:
get_key_status()
Out[13]:
{'requests_remaining': 99999140,
 'chars_remaining': 998236690,
 'expiry': '2030-01-10T00:00+00:00'}
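
These figures can be used to estimate how much annotation budget remains. For instance, using the example values above (the numbers for your key will differ):

```python
# example key-status values, copied from the output above
status = {
    'requests_remaining': 99999140,
    'chars_remaining': 998236690,
}
# number of maximally-sized (4000-character) requests the character budget still allows
max_full_requests = status['chars_remaining'] // 4000
print(max_full_requests)  # → 249559
```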