“Natural Language Processing” is a field at the intersection of computer science, linguistics and artificial intelligence which aims to make the underlying structure of language available to computer programs for analysis and manipulation. It’s a vast and vibrant field with a long history! New research and techniques are being developed constantly.
The aim of this notebook is to introduce a few simple concepts and techniques from NLP—just the stuff that’ll help you do creative things quickly, and maybe open the door for you to understand more sophisticated NLP concepts that you might encounter elsewhere. We'll start with simple extraction tasks: isolating words, sentences, and parts of speech. By the end, we'll have a few working systems for creating sophisticated text generators that function by remixing texts based on their constituent linguistic units. This tutorial is written for Python 3.6+.
There are a number of libraries for performing natural language processing tasks in Python, including:
But we'll be using a library called spaCy, which is very powerful and easy for newcomers to understand. It's been among the most important tools in my text processing toolbox for many years!
“Natural language” is a loaded phrase: what makes one stretch of language “natural” while another stretch is not? NLP techniques are opinionated about what language is and how it works; as a consequence, you’ll sometimes find yourself having to conceptualize your text with uncomfortable abstractions in order to make it work with NLP. (This is especially true of poetry, which almost by definition breaks most “conventional” definitions of how language behaves and how it’s structured.)
Of course, a computer can never really fully “understand” human language. Even when the text you’re using fits the abstractions of NLP perfectly, the results of NLP analysis are always going to be at least a little bit inaccurate. But often even inaccurate results can be “good enough”—and in any case, inaccurate output from NLP procedures can be an excellent source of the sublime and absurd juxtapositions that we (as poets) are constantly in search of.
Historically, most NLP researchers have focused their efforts on English specifically. But many natural language processing libraries, including spaCy, now support a wide range of languages. You can find the full list of languages supported by spaCy on its website, though the robustness of these models varies from one language to the next, as do the specifics of how each model works. (For example, different languages have different ideas about what a "part of speech" is.) The examples in this notebook are primarily in English. If you're having trouble applying these techniques to other languages, send me an e-mail—I'd be happy to help you figure out how to get things working for languages other than English!
The only thing I believe about English grammar is this:
"Oh yes, the sentence," Creeley once told the critic Burton Hatlen, "that's what we call it when we put someone in jail."
There is no such thing as a sentence, or a phrase, or a part of speech, or even a "word"---these are all pareidolic fantasies occasioned by glints of sunlight we see reflected on the surface of the ocean of language; fantasies that we comfort ourselves with when faced with language's infinite and unknowable variability.
Regardless, we may find it occasionally helpful to think about language using these abstractions. The following is a gross oversimplification of both how English grammar works, and how theories of English grammar work in the context of NLP. But it should be enough to get us going!
English texts can roughly be divided into "sentences." Sentences are themselves composed of individual words, each of which has a function in expressing the meaning of the sentence. The function of a word in a sentence is called its "part of speech"—i.e., a word functions as a noun, a verb, an adjective, etc. Here's a sentence, with words marked for their part of speech:
I really love entrees from the new cafeteria.

- I: pronoun
- really: adverb
- love: verb
- entrees: noun (plural)
- from: preposition
- the: determiner
- new: adjective
- cafeteria: noun
Of course, the "part of speech" of a word isn't a property of the word itself. We know this because a single "word" can function as two different parts of speech:
I love cheese.
The word "love" here is a verb. But here:
Love is a battlefield.
... it's a noun. For this reason (and others), it's difficult for computers to accurately determine the part of speech for a word in a sentence. (It's difficult sometimes even for humans to do this.) But NLP procedures do their best!
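To see why context matters, consider a toy tagger (purely illustrative, not how spaCy actually works) that always assigns each word its single most frequent part of speech from a hand-made lookup table. It inevitably mis-tags ambiguous words like "love":

```python
# A toy unigram "tagger": every word always gets its single most frequent
# part of speech from this hand-made table, ignoring context entirely.
most_frequent_tag = {
    "i": "PRON", "love": "VERB", "cheese": "NOUN",
    "is": "VERB", "a": "DET", "battlefield": "NOUN",
}

def naive_tag(sentence):
    # unknown words default to NOUN, a common fallback heuristic
    return [(w, most_frequent_tag.get(w.lower(), "NOUN")) for w in sentence.split()]

print(naive_tag("I love cheese"))         # "love" happens to be tagged correctly
print(naive_tag("Love is a battlefield")) # "Love" is mis-tagged as VERB: it's a noun here
```

Real taggers avoid (some of) these errors by looking at the surrounding words, which is exactly what spaCy's statistical models do.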
There are several different ways of talking about larger syntactic structures in sentences. The scheme used by spaCy is called a "dependency grammar." We'll talk about the details of this below.
There are instructions for installing spaCy on the spaCy web page. You can also install it by running the following cell in this notebook:
import sys
!conda install -c conda-forge -y --prefix {sys.prefix} spacy
Collecting package metadata (current_repodata.json): done
Solving environment: done

==> WARNING: A newer version of conda exists. <==
  current version: 4.10.1
  latest version: 4.10.3

Please update conda by running

    $ conda update -n base -c defaults conda

# All requested packages already installed.
You'll also need to download a language model. You can download the default language model for English by running the cell below:
import sys
!{sys.executable} -m spacy download en_core_web_md
Collecting en_core_web_md==2.3.1
Downloading https://github.com/explosion/spacy-models/releases/download/en_core_web_md-2.3.1/en_core_web_md-2.3.1.tar.gz (50.8 MB)
✔ Download and installation successful
You can now load the model via spacy.load('en_core_web_md')
Replace en_core_web_md with the name of the model you want to install. The spaCy documentation explains the difference between the various models.
The language model contains machine learning models for splitting texts into sentences and words, tagging words with their parts of speech, identifying entities, and discovering the syntactic structure of sentences.
Import spacy like any other Python module:
import spacy
Create a new spaCy object using spacy.load(...). The name in the parentheses is the same as the name of the model you downloaded above. If you downloaded a different model, you can put its name here instead.
nlp = spacy.load('en_core_web_md')
It's more fun doing natural language processing on text that you're interested in. I recommend grabbing something from Project Gutenberg. Download a plain text file and put it in the same directory as this notebook, taking care to replace the filename in the cell below with the name of the file you downloaded.
# replace "84-0.txt" with the name of your own text file
text = open("84-0.txt").read()
Now, use spaCy to parse it. (This might take a while, depending on the size of your text.)
doc = nlp(text)
Right off the bat, the spaCy library gives us access to a number of interesting units of text:
- Sentences (doc.sents)
- Words and other tokens (iterating over doc)
- Noun chunks (doc.noun_chunks)
- Named entities (doc.ents)

In the cell below, we extract these into variables so we can play around with them a little bit.
sentences = list(doc.sents)
words = [w for w in list(doc) if w.is_alpha]
noun_chunks = list(doc.noun_chunks)
entities = list(doc.ents)
With this information in hand, we can answer interesting questions like: how many sentences are in the text?
len(sentences)
3873
Using random.sample(), we can get a small, randomly-selected sample from these lists. Here are five random sentences:
import random
for item in random.sample(sentences, 5):
    print(item.text.strip().replace("\n", " "))
    print()
But even human sympathies were not sufficient to satisfy his eager mind.

The magistrate listened to me with attention and kindness.

It was, in fact, a sledge, like that we had seen before, which had drifted towards us in the night on a large fragment of ice.

The blue Mediterranean appeared, and by a strange chance, I saw the fiend enter by night and hide himself in a vessel bound for the Black Sea.

I myself was about to sink under the accumulation of distress when I saw your vessel riding at anchor and holding forth to me hopes of succour and life.
Ten random words:
for item in random.sample(words, 10):
    print(item.text)
said
it
after
day
not
for
the
did
hastened
eradicated
Ten random noun chunks:
for item in random.sample(noun_chunks, 10):
    print(item.text)
men the boat the spot intervals It that passion he the best houses my tranquillity their path
Ten random entities:
for item in random.sample(entities, 10):
    print(item.text)
United States next day M. Clerval Elizabeth England Justine M. Duvillard Shelley first the next morning
Note that the values that spaCy returns belong to specific spaCy data types. You can read more about these data types in the spaCy documentation, in particular spans and tokens. (Spans represent sequences of tokens; a sentence in spaCy is a span, and a word is a token.) If you want a list of strings instead of a list of spaCy objects, use the .text attribute, which works for spans and tokens alike. For example:
sentence_strs = [item.text for item in doc.sents]
random.sample(sentence_strs, 10)
['“I continued to wind among the paths of the wood, until I came to its\nboundary, which was skirted by a deep and rapid river, into which many\nof the trees bent their branches, now budding with the fresh spring.\n', 'Oh, that some encouraging\nvoice would answer in the affirmative!', 'Not the\nten-thousandth portion of the anguish that was mine during the\nlingering detail of its execution. ', 'You may remember that a\nhistory of all the voyages made for purposes of discovery composed the\nwhole of our good Uncle Thomas’ library. ', 'Adieu, my dear Margaret. ', 'In some degree, also, they\ndiverted my mind from the thoughts over which it had brooded for the\nlast month. ', 'He was for ever busy, and the only check to his\nenjoyments was my sorrowful and dejected mind. ', 'Ah! ', 'I wait', 'I was\nnourished with high thoughts of honour and devotion.']
The spaCy parser allows us to check what part of speech a word belongs to. In the cell below, we create four different lists (nouns, verbs, adjs and advs) that contain only words of the specified parts of speech. (There's a full list of part of speech tags here.)
nouns = [w for w in words if w.pos_ == "NOUN"]
verbs = [w for w in words if w.pos_ == "VERB"]
adjs = [w for w in words if w.pos_ == "ADJ"]
advs = [w for w in words if w.pos_ == "ADV"]
And now we can print out a random sample of any of these:
for item in random.sample(nouns, 20): # change "nouns" to "verbs" or "adjs" or "advs" to sample from those lists!
    print(item.text)
books
child
difficulty
deserts
functions
purpose
desponding
Foundation
uncle
father
father
enchantment
cause
conception
beings
errors
town
state
partiality
child
The parser in spaCy not only identifies "entities" but also assigns them to a particular type. See a full list of entity types here. Using this information, the following cell builds lists of the people, locations, and times mentioned in the text:
people = [e for e in entities if e.label_ == "PERSON"]
locations = [e for e in entities if e.label_ == "LOC"]
times = [e for e in entities if e.label_ == "TIME"]
And then you can print out a random sample:
for item in random.sample(times, 20): # change "times" to "people" or "locations" to sample those lists
    print(item.text.strip())
nearly two hours the night the morning the sixth hour the morning a few moments the night about eight o’clock eight o’clock midnight this hour a few sad hours that hour a few hours night an hour a few minutes this night before morning night
After we've parsed the text out into meaningful units, it might be interesting to see which examples of those units are the most common in a text.
One of the most common tasks in text analysis is counting how many times things occur in a text. The easiest way to do this in Python is with the Counter object, contained in the collections module. Run the following cell to create a Counter object to count your words.
from collections import Counter
word_count = Counter([w.text for w in words])
Once you've created the counter, you can check to see how many times any word occurs like so:
word_count['heaven']
15
The Counter object's .most_common() method gives you access to a list of tuples with words and their counts, sorted in reverse order by count:
word_count.most_common(10)
[('the', 4070), ('and', 3006), ('I', 2847), ('of', 2746), ('to', 2155), ('my', 1635), ('a', 1402), ('in', 1135), ('was', 1019), ('that', 1018)]
The code in the following cell prints this out nicely:
for word, count in word_count.most_common(20):
    print(word, count)
the 4070
and 3006
I 2847
of 2746
to 2155
my 1635
a 1402
in 1135
was 1019
that 1018
me 867
with 705
had 684
not 576
which 565
but 552
you 550
his 502
for 494
as 492
You'll note that the list of most frequent words here likely reflects the overall frequency of words in English. Consult my Quick and dirty keywords tutorial for some simple strategies for extracting words that are most unique to a text (rather than simply the most frequent words). You may also consider removing stop words from the list.
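Here's a quick sketch of what stop word removal looks like, using a small hand-picked stop list purely for illustration (spaCy also ships a much longer list as spacy.lang.en.stop_words.STOP_WORDS, if you'd rather not curate your own):

```python
from collections import Counter

# a tiny hand-picked stop list, just for illustration; real lists are much longer
stop_words = {"the", "and", "i", "of", "to", "my", "a", "in", "was", "that"}

sample = "the demon and the doctor crossed the ice and the demon vanished".split()
filtered_count = Counter(w for w in sample if w.lower() not in stop_words)
print(filtered_count.most_common(3))  # the content words rise to the top
```

With the stop words filtered out, the most frequent items are words that actually carry content, rather than grammatical glue.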
You might want to export lists of words or other things that you make with spaCy to a file, so that you can bring them into other Python programs (or just other programs that form a part of your workflow). One way to do this is to write each item to a single line in a text file. The code in the following cell does exactly this for the word list that we just created:
with open("words.txt", "w") as fh:
    fh.write("\n".join([w.text for w in words]))
The following cell defines a function that performs this for any list of spaCy values you pass to it:
def save_spacy_list(filename, t):
    with open(filename, "w") as fh:
        fh.write("\n".join([item.text for item in t]))
Here's how to use it:
save_spacy_list("words.txt", words)
Since we're working with Counter objects a bunch in this notebook, it makes sense to find a way to save these as files too. The following cell defines a function for writing data from a Counter object to a file. The file is in "tab-separated values" format, which you can open using most spreadsheet programs. Execute it before you continue:
def save_counter_tsv(filename, counter, limit=1000):
    with open(filename, "w") as outfile:
        outfile.write("key\tvalue\n")
        # pass limit to .most_common() so only the top `limit` items are written
        for item, count in counter.most_common(limit):
            outfile.write(item.strip() + "\t" + str(count) + "\n")
Now, run the following cell. You'll end up with a file in the same directory as this notebook called 100_common_words.tsv that has two columns, one for the words and one for their associated counts:
save_counter_tsv("100_common_words.tsv", word_count, 100)
Try opening this file in Excel or Google Docs or Numbers!
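Because the file is plain tab-separated text, you can also read it back into Python with the built-in csv module. Here's a self-contained sketch (the example_counts.tsv filename and the toy counts are made up for demonstration):

```python
import csv
from collections import Counter

# write a toy Counter to a .tsv file in the same key/value format as above
counts = Counter({"heaven": 15, "demon": 30})
with open("example_counts.tsv", "w") as outfile:
    outfile.write("key\tvalue\n")
    for item, count in counts.most_common():
        outfile.write(item + "\t" + str(count) + "\n")

# ... then read it back; each row becomes a dictionary keyed by the header
with open("example_counts.tsv") as infile:
    rows = list(csv.DictReader(infile, delimiter="\t"))
print(rows)
```

Note that csv gives you the counts back as strings; call int() on them if you want numbers again.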
If you want to write the data from another Counter object to a file:

- Make up a new filename (with a .tsv extension)
- Take the name of any of the Counter objects we've made in this notebook and use it in place of word_count
Here's another example. Using the times
entities, we can make a spreadsheet of how often particular "times" (durations, times of day, etc.) are mentioned in the text.
time_counter = Counter([e.text.lower().strip() for e in times])
save_counter_tsv("time_count.tsv", time_counter, 100)
Do the same thing, but with people:
people_counter = Counter([e.text.lower() for e in people])
save_counter_tsv("people_count.tsv", people_counter, 100)
spaCy also records each word's lemma, available through the .lemma_ attribute:

for word in random.sample(words, 12):
    print(word.text, "→", word.lemma_)
projects → project
own → own
personal → personal
in → in
attentions → attention
remotest → remote
is → be
his → -PRON-
lay → lie
than → than
full → full
a → a
A word's "lemma" is its most "basic" form, the form without any morphology applied to it. "Sing," "sang," "singing," are all different "forms" of the lemma sing. Likewise, "octopi" is the plural of "octopus"; the "lemma" of "octopi" is octopus.
"Lemmatizing" a text is the process of going through the text and replacing each word with its lemma. This is often done in an attempt to reduce a text to its most "essential" meaning, by eliminating pesky things like verb tense and noun number.
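Here's a toy illustration of the idea, using a tiny hand-made lookup table (spaCy's lemmatizer uses large dictionaries plus morphological rules, but the principle is the same):

```python
# a tiny, hand-made lemma lookup table, purely to illustrate the idea
lemmas = {"sang": "sing", "singing": "sing", "octopi": "octopus", "were": "be"}

def lemmatize(text):
    # replace each word with its lemma if we know one; otherwise leave it alone
    return " ".join(lemmas.get(w, w) for w in text.split())

print(lemmatize("the octopi were singing"))  # → "the octopus be sing"
```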
Individual sentences can also be iterated over to get a list of words in that sentence:
sentence = random.choice(sentences)
for word in sentence:
    print(word.text)
I hastened to return home , and Elizabeth eagerly demanded the result .
Token objects are tagged with their part of speech. For whatever reason, spaCy gives you this part of speech information in two different formats. The pos_ attribute gives the part of speech using the universal POS tag system, while the tag_ attribute gives a more specific designation, using the Penn Treebank system. (Models for different languages will use different schemes; consult the documentation for your model for more information.) We used this attribute earlier in the notebook to extract lists of words that had particular parts of speech, but you can access the attribute in other contexts as well:
for item in random.sample(words, 24):
    print(item.text, "/", item.pos_, "/", item.tag_)
and / CCONJ / CC
the / DET / DT
the / DET / DT
work / NOUN / NN
madman / NOUN / NN
one / NUM / CD
tears / NOUN / NNS
they / PRON / PRP
hearing / VERB / VBG
of / ADP / IN
with / ADP / IN
nature / NOUN / NN
that / DET / WDT
the / DET / DT
we / PRON / PRP
which / DET / WDT
me / PRON / PRP
the / DET / DT
creators / NOUN / NNS
near / SCONJ / IN
inanimate / ADJ / JJ
my / DET / PRP$
soon / ADV / RB
by / ADP / IN
The spacy.explain() function also gives information about what part of speech tags mean:
spacy.explain('VBP')
'verb, non-3rd person singular present'
The .pos_ attribute only gives us general information about the part of speech. The .tag_ attribute allows us to be more specific about the kinds of verbs we want. For example, this code gives us only the verbs in past participle form:
only_past = [item.text for item in doc if item.tag_ == 'VBN']
random.sample(only_past, 12)
['attached', 'spent', 'manacled', 'facilitated', 'debilitated', 'occupied', 'confessed', 'been', 'expected', 'seized', 'endowed', 'blasted']
Or only plural nouns:
only_plural = [item.text for item in doc if item.tag_ == 'NNS']
random.sample(only_plural, 12)
['plants', 'impressions', 'scents', 'muscles', 'eyes', 'purposes', 'looks', 'towns', 'causes', 'waters', 'labours', 'papers']
Okay, so we can get individual words and small phrases, like named entities and noun chunks. Great! But what if we want larger chunks, based on their syntactic role in the sentence? For this, we'll need to learn about how spaCy parses sentences into its syntactic components.
The spaCy library parses the underlying sentences using a dependency grammar. Dependency grammars look different from the kinds of sentence diagramming you may have done in high school, and even from tree-based phrase structure grammars commonly used in descriptive linguistics. The idea of a dependency grammar is that every word in a sentence is a "dependent" of some other word, which is that word's "head." Those "head" words are in turn dependents of other words. The finite verb in the sentence is the ultimate "head" of the sentence, and is not itself dependent on any other word. The dependents of a particular head are sometimes called its "children."
The question of how to know what constitutes a "head" and a "dependent" is complicated. For more details, consult Dependency Grammar and Dependency Parsing. But here are some simple guidelines:
For example, in the sentence "Large contented bears hibernate peacefully," bears is the head (a noun in this case) and large and contented are dependents (adjectives). The head of the phrase large contented bears is a noun, so the entire phrase is a noun. You could also rewrite the sentence to omit the dependents altogether, and it would still make sense: "Bears hibernate peacefully." Likewise, the adverb peacefully is a dependent of the head hibernate; the sentence could be rewritten as simply "Bears hibernate."
Dependents are related to their heads by a syntactic relation. The name of the syntactic relation describes the relationship between the head and the dependent. Every token object in a spaCy document or sentence has attributes that tell you what the word's head is, what the dependency relationship is between that word and its head, and a list of that word's children (dependents).
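To make this concrete, here's a hypothetical hand-built sketch (plain Python, no spaCy) of how a dependency parse can be represented as data, using the "Large contented bears hibernate peacefully" example:

```python
# A hand-built dependency parse for "Large contented bears hibernate peacefully",
# as (word, index of head, relation) rows. Head index -1 marks the root.
parse = [
    ("Large",      2, "amod"),    # adjective modifying "bears"
    ("contented",  2, "amod"),    # adjective modifying "bears"
    ("bears",      3, "nsubj"),   # nominal subject of "hibernate"
    ("hibernate", -1, "ROOT"),    # the root: depends on no other word
    ("peacefully", 3, "advmod"),  # adverb modifying "hibernate"
]

# the root is the one word with no head
root = next(word for word, head, rel in parse if head == -1)
# the children of "hibernate" (index 3) are its direct dependents
children = [word for word, head, rel in parse if head == 3]
print(root, children)
```

spaCy computes exactly this kind of structure for you and exposes it through the .head, .dep_, and .children attributes on each token.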
The developers of spaCy included a little tool for visualizing the dependency relations of a particular sentence. Let's look at the sentence "I have eaten the plums that were in the icebox" as an example:
spacy.displacy.render(nlp("I have eaten the plums that were in the icebox."), style='dep')
The arcs you see originate at a head and terminate at its dependents (or children). If you follow all of the arcs back from dependent to head, you'll eventually get back to eaten, which is the root of the sentence. Each arc is labelled with the dependency relation, which tells us what role the dependent fills in the syntax and meaning of the parent word. For example, I is related to eaten by the nsubj relation, which means that I is the "nominal subject" of the verb. The word icebox is related to the head in via the pobj relation, meaning that icebox is the object of the preposition in. An exhaustive list of the meanings of these relations can be found in the Stanford Dependencies Manual.
The following code prints out each word in the sentence, the tag, the word's head, the word's dependency relation with its head, and the word's children (i.e., dependent words). (This code isn't especially useful on its own; it's just here to help show you how this functionality works.)
sent = random.choice(sentences)
print("Original sentence:", sent.text.replace("\n", " "))
for word in sent:
    print()
    print("Word:", word.text)
    print("Tag:", word.tag_)
    print("Head:", word.head.text)
    print("Dependency relation:", word.dep_)
    print("Children:", list(word.children))
Original sentence: He is now much recovered from his illness and is continually on the deck, apparently watching for the sledge that preceded his own.

Word: He  Tag: PRP  Head: recovered  Dependency relation: nsubjpass  Children: []
Word: is  Tag: VBZ  Head: recovered  Dependency relation: auxpass  Children: []
Word: now  Tag: RB  Head: recovered  Dependency relation: advmod  Children: []
Word: much  Tag: RB  Head: recovered  Dependency relation: advmod  Children: []
Word: recovered  Tag: VBN  Head: recovered  Dependency relation: ROOT  Children: [He, is, now, much, from, and, is, .]
Word: from  Tag: IN  Head: recovered  Dependency relation: prep  Children: [illness]
Word: his  Tag: PRP$  Head: illness  Dependency relation: poss  Children: []
Word: illness  Tag: NN  Head: from  Dependency relation: pobj  Children: [his]
Word: and  Tag: CC  Head: recovered  Dependency relation: cc  Children: []
Word: is  Tag: VBZ  Head: recovered  Dependency relation: conj  Children: [continually, on, ,, watching]
Word: continually  Tag: RB  Head: is  Dependency relation: advmod  Children: []
Word: on  Tag: IN  Head: is  Dependency relation: prep  Children: [deck]
Word: the  Tag: DT  Head: deck  Dependency relation: det  Children: []
Word: deck  Tag: NN  Head: on  Dependency relation: pobj  Children: [the]
Word: ,  Tag: ,  Head: is  Dependency relation: punct  Children: [ ]
Word:   Tag: _SP  Head: ,  Dependency relation:   Children: []
Word: apparently  Tag: RB  Head: watching  Dependency relation: advmod  Children: []
Word: watching  Tag: VBG  Head: is  Dependency relation: advcl  Children: [apparently, for]
Word: for  Tag: IN  Head: watching  Dependency relation: prep  Children: [sledge]
Word: the  Tag: DT  Head: sledge  Dependency relation: det  Children: []
Word: sledge  Tag: NN  Head: for  Dependency relation: pobj  Children: [the, preceded]
Word: that  Tag: WDT  Head: preceded  Dependency relation: nsubj  Children: []
Word: preceded  Tag: VBD  Head: sledge  Dependency relation: relcl  Children: [that, own]
Word: his  Tag: PRP$  Head: own  Dependency relation: poss  Children: []
Word: own  Tag: JJ  Head: preceded  Dependency relation: dobj  Children: [his]
Word: .  Tag: .  Head: recovered  Dependency relation: punct  Children: []
Here's a list of a few dependency relations and what they mean, for quick reference:

- nsubj: this word's head is a verb, and this word is itself the subject of the verb
- nsubjpass: same as above, but for subjects in sentences in the passive voice
- dobj: this word's head is a verb, and this word is itself the direct object of the verb
- iobj: same as above, but indirect object
- aux: this word's head is a verb, and this word is an "auxiliary" verb (like "have", "will", "be")
- attr: this word's head is a copula (like "to be"), and this is the description attributed to the subject of the sentence (e.g., in "This product is a global brand", brand is dependent on is with the attr dependency relation)
- det: this word's head is a noun, and this word is a determiner of that noun (like "the," "this," etc.)
- amod: this word's head is a noun, and this word is an adjective describing that noun
- prep: this word is a preposition that modifies its head
- pobj: this word is a dependent (object) of a preposition

That's all pretty abstract, so let's get a bit more concrete, and write some code that will let us extract syntactic units based on their dependency relation. There are a couple of things we need in order to do this. Each token's .subtree attribute evaluates to a generator that can be flattened by passing it to list(). This is a list of the word's syntactic dependents, essentially the "clause" that the word belongs to.
This function merges a subtree and returns a string with the text of the words contained in it:
def flatten_subtree(st):
    return ''.join([w.text_with_ws for w in list(st)]).strip()
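Since flatten_subtree relies only on each token's .text_with_ws attribute (the token's text plus its trailing whitespace), we can sanity-check it with stand-in token objects, no parser required. (The definition is repeated here so the snippet stands alone.)

```python
from collections import namedtuple

# stand-in "tokens" with the one attribute flatten_subtree needs
Tok = namedtuple("Tok", ["text_with_ws"])

def flatten_subtree(st):
    return ''.join([w.text_with_ws for w in list(st)]).strip()

fake_subtree = [Tok("my "), Tok("fellow "), Tok("men")]
print(flatten_subtree(fake_subtree))  # → "my fellow men"
```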
With this function in our toolbox, we can write a loop that prints out the subtree for each word in a sentence. (Again, this code is just here to demonstrate what the process of grabbing subtrees looks like—it doesn't do anything useful yet!)
sent = random.choice(sentences)
print("Original sentence:", sent.text.replace("\n", " "))
for word in sent:
    print()
    print("Word:", word.text.replace("\n", " "))
    print("Flattened subtree: ", flatten_subtree(word.subtree).replace("\n", " "))
Original sentence: I saw an insurmountable barrier placed between me and my fellow men; this barrier was sealed with the blood of William and Justine, and to reflect on the events connected with those names filled my soul with anguish.

Word: I  Flattened subtree: I
Word: saw  Flattened subtree: I saw an insurmountable barrier placed between me and my fellow men
Word: an  Flattened subtree: an
Word: insurmountable  Flattened subtree: insurmountable
Word: barrier  Flattened subtree: an insurmountable barrier placed between me and my fellow men
Word: placed  Flattened subtree: placed between me and my fellow men
Word: between  Flattened subtree: between me and my fellow men
Word: me  Flattened subtree: me and my fellow men
Word: and  Flattened subtree: and
Word: my  Flattened subtree: my
Word:   Flattened subtree:
Word: fellow  Flattened subtree: fellow
Word: men  Flattened subtree: my fellow men
Word: ;  Flattened subtree: ;
Word: this  Flattened subtree: this
Word: barrier  Flattened subtree: this barrier
Word: was  Flattened subtree: was
Word: sealed  Flattened subtree: I saw an insurmountable barrier placed between me and my fellow men; this barrier was sealed with the blood of William and Justine, and to reflect on the events connected with those names filled my soul with anguish.
Word: with  Flattened subtree: with the blood of William and Justine
Word: the  Flattened subtree: the
Word: blood  Flattened subtree: the blood of William and Justine
Word: of  Flattened subtree: of William and Justine
Word: William  Flattened subtree: William and Justine
Word: and  Flattened subtree: and
Word:   Flattened subtree:
Word: Justine  Flattened subtree: Justine
Word: ,  Flattened subtree: ,
Word: and  Flattened subtree: and
Word: to  Flattened subtree: to
Word: reflect  Flattened subtree: to reflect on the events connected with those names filled my soul with anguish
Word: on  Flattened subtree: on the events connected with those names filled my soul with anguish
Word: the  Flattened subtree: the
Word: events  Flattened subtree: the events connected with those names filled my soul with anguish
Word: connected  Flattened subtree: connected with those names filled my soul with anguish
Word: with  Flattened subtree: with those names filled my soul with anguish
Word: those  Flattened subtree: those
Word: names  Flattened subtree: those names filled my soul with anguish
Word: filled  Flattened subtree: filled my soul with anguish
Word:   Flattened subtree:
Word: my  Flattened subtree: my
Word: soul  Flattened subtree: my soul
Word: with  Flattened subtree: with anguish
Word: anguish  Flattened subtree: anguish
Word: .  Flattened subtree: .
Word:   Flattened subtree:
Using the subtree and our knowledge of dependency relation types, we can write code that extracts larger syntactic units based on their relationship with the rest of the sentence. For example, to get all of the noun phrases that are subjects of a verb:
subjects = []
for word in doc:
    if word.dep_ in ('nsubj', 'nsubjpass'):
        subjects.append(flatten_subtree(word.subtree))
random.sample(subjects, 12)
['Justine', 'that', 'which', 'The hour of my irresolution', 'Beaufort', 'the wind that blew me from the detested\nshore of Ireland, and the sea which surrounded me', 'Immense and\nrugged mountains of ice', 'I', 'that', 'I', 'I', 'I']
Or every prepositional phrase:
prep_phrases = []
for word in doc:
    if word.dep_ == 'prep':
        prep_phrases.append(flatten_subtree(word.subtree).replace("\n", " "))
random.sample(prep_phrases, 12)
['of the copyright holder', 'of preservation', 'in torrents', 'in England', 'from my sight', 'at a short distance from the shore', 'of', 'as an occurrence which no accident could possibly prevent', 'by one', 'of the beauty', 'with the tenderest compassion', 'of revenge and hatred']
One thing I like to do is to put text back together from the parts we've disarticulated with spaCy. Let's use Tracery to do this. If you don't know how to use Tracery, feel free to consult my Tracery tutorial before continuing.
So I want to generate sentences based on things that I've extracted from my text. My first idea: get subjects of sentences, verbs of sentences, nouns and adjectives, and prepositional phrases:
subjects = [flatten_subtree(word.subtree).replace("\n", " ")
            for word in doc if word.dep_ in ('nsubj', 'nsubjpass')]
past_tense_verbs = [word.text for word in words if word.tag_ == 'VBD' and word.lemma_ != 'be']
adjectives = [word.text for word in words if word.tag_.startswith('JJ')]
nouns = [word.text for word in words if word.tag_.startswith('NN')]
prep_phrases = [flatten_subtree(word.subtree).replace("\n", " ")
                for word in doc if word.dep_ == 'prep']
Notes on the code above:

- .replace("\n", " ") is in there because spaCy treats linebreaks as normal whitespace and retains them when we ask for a span's text. For formatting reasons, we want to get rid of them.
- I use .startswith() in the part-of-speech checks in order to capture other related parts of speech (e.g., JJR is comparative adjectives, NNS is plural nouns).

Now I'll import Tracery. If you haven't already installed it, you can do so using the following cell:
import sys
!{sys.executable} -m pip install tracery
Requirement already satisfied: tracery in /Users/allison/opt/miniconda3/envs/rwet-2022/lib/python3.8/site-packages (0.1.1)
import tracery
from tracery.modifiers import base_english
... and define a grammar. The "trick" of this example is that I grab entire rule expansions from the units extracted from the text using spaCy. The grammar itself is built around producing sentences that look and feel like English.
rules = {
    "origin": [
        "#subject.capitalize# #predicate#.",
        "#subject.capitalize# #predicate#.",
        "#prepphrase.capitalize#, #subject# #predicate#."
    ],
    "predicate": [
        "#verb#",
        "#verb# #nounphrase#",
        "#verb# #prepphrase#"
    ],
    "nounphrase": [
        "the #noun#",
        "the #adj# #noun#",
        "the #noun# #prepphrase#",
        "the #noun# and the #noun#",
        "#noun.a#",
        "#adj.a# #noun#",
        "the #noun# that #predicate#"
    ],
    "subject": subjects,
    "verb": past_tense_verbs,
    "noun": nouns,
    "adj": adjectives,
    "prepphrase": prep_phrases
}
grammar = tracery.Grammar(rules)
grammar.add_modifiers(base_english)
grammar.flatten("#origin#")
'Of my creature, he knew an immutable victim.'
Let's generate a whole paragraph of this and format it nicely:
from textwrap import fill
output = " ".join([grammar.flatten("#origin#") for i in range(12)])
print(fill(output, 60))
The very accents of love seemed. Of some uncontrollable passion, his hope and his dream dreaded upon. I had. For many years, I had without further opportunities to fix the problem. The relation of my disasters assumed the vengeance that had in you. This history made the contrast and the hope. We placed. For nearly any purpose such as creation of derivative works, reports, performances and research, I said the blackest Begone. I did of this tour about the end of July. I disinclined the September into his heart. The moon gained the spring. His countenance looked in other respects.
I like this approach for a number of reasons. Because I'm using a hand-written grammar, I have a great deal of control over the shape and rhythm of the sentences that are generated. But spaCy lets me pre-populate my grammar's vocabulary without having to write each item by hand.
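As an aside, the core mechanism Tracery provides, recursively replacing #rulename# references with randomly chosen expansions, can be sketched in a few lines of plain Python. This is a toy, not a Tracery replacement: it ignores Tracery's modifiers (like .capitalize and .a) and handles only bare #symbol# references, and the toy_rules vocabulary below is made up for illustration rather than extracted from a text.

```python
import random
import re

def expand(grammar, symbol="origin"):
    # Pick a random expansion for this symbol, then recursively
    # expand any #symbol# references inside it.
    rule = random.choice(grammar[symbol])
    return re.sub(r"#(\w+)#", lambda m: expand(grammar, m.group(1)), rule)

# A made-up vocabulary, standing in for the lists extracted with spaCy.
toy_rules = {
    "origin": ["the #noun# #verb# #prepphrase#"],
    "noun": ["barrier", "moon"],
    "verb": ["rose", "vanished"],
    "prepphrase": ["with anguish", "in torrents"],
}
print(expand(toy_rules))  # prints something like: the moon vanished in torrents
```

In a real grammar you'd plug the spaCy-extracted lists straight in as rule expansions, exactly as in the rules dictionary above.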
We've barely scratched the surface of what it's possible to do with spaCy. The official site has a good list of guides to various natural language processing tasks that you should check out, and there are also a handful of books that dig deeper into using spaCy for natural language processing tasks.