All the IPython Notebooks in the Python Natural Language Processing lecture series by Dr. Milaan Parmar are available @ GitHub
# Split by Whitespace
import re
text = "I\'ll always be there with you forever in your heart.!"
words = re.split(r'\W+', text)
print(words[:100])
['I', 'll', 'always', 'be', 'there', 'with', 'you', 'forever', 'in', 'your', 'heart', '']
Notice the empty string at the end: `\W+` treats the trailing punctuation as a delimiter rather than a token, leaving an empty token after 'heart'. Also, the apostrophe in "I'll" is a non-word character, so the word is split into 'I' and 'll'.
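A simple fix, sketched here as a follow-up (not part of the original lecture), is to drop the empty strings that re.split leaves behind:
# Keep only non-empty tokens after splitting on non-word characters
words = [w for w in re.split(r'\W+', text) if w]
print(words)
# ['I', 'll', 'always', 'be', 'there', 'with', 'you', 'forever', 'in', 'your', 'heart']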
Remove punctuation and split into words
import string
import re
# split into words by white space
words = text.split()
# prepare regex for char filtering
re_punc = re.compile('[%s]' % re.escape(string.punctuation))
# remove punctuation from each word
stripped = [re_punc.sub('', w) for w in words]
print(stripped[:100])
['Ill', 'always', 'be', 'there', 'with', 'you', 'forever', 'in', 'your', 'heart']
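An alternative sketch, assuming the same text variable as above: str.translate can delete punctuation without compiling a regex:
import string
# Build a translation table that deletes every punctuation character
table = str.maketrans('', '', string.punctuation)
stripped = [w.translate(table) for w in text.split()]
print(stripped)
# ['Ill', 'always', 'be', 'there', 'with', 'you', 'forever', 'in', 'your', 'heart']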
# keep only characters found in string.printable (drops non-printable characters;
# punctuation is printable, so it stays)
re_print = re.compile('[^%s]' % re.escape(string.printable))
result = [re_print.sub('', w) for w in words]
print(result)
["I'll", 'always', 'be', 'there', 'with', 'you', 'forever', 'in', 'your', 'heart!']
# Normalizing Case
# split into words by white space
words = text.split()
# convert to lower case
words = [word.lower() for word in words]
print(words[:100])
["i'll", 'always', 'be', 'there', 'with', 'you', 'forever', 'in', 'your', 'heart!']
spaCy is an open-source Python library for advanced natural language processing and machine learning. It is used to build information extraction and natural language understanding systems, and to pre-process text for deep learning.
Install:
!pip install -U spacy
!python -m spacy download en_core_web_sm
import spacy
nlp = spacy.load('en_core_web_sm')
# 'en_core_web_sm' is spaCy's small English pipeline:
# 'en' = English, 'core' = general-purpose pipeline, 'web' = trained on web text, 'sm' = small
Successfully installed catalogue-2.0.6 langcodes-3.2.1 pathy-0.6.1 pydantic-1.8.2 spacy-3.2.0 spacy-legacy-3.0.8 spacy-loggers-1.0.1 srsly-2.4.2 thinc-8.0.13 typer-0.4.0
Successfully installed en-core-web-sm-3.2.0
✔ Download and installation successful
You can now load the package via spacy.load('en_core_web_sm')
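With the pipeline loaded, we can peek at which components en_core_web_sm bundles; a quick check (not in the original notebook; the output shown is typical for spaCy 3.x):
# List the processing components in the loaded pipeline
print(nlp.pipe_names)
# ['tok2vec', 'tagger', 'parser', 'attribute_ruler', 'lemmatizer', 'ner']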
string = '"I\'ll always be there with you forever in your mind!"'
print(string)
"I'll always be there with you forever in your mind!"
# Break the string into tokens and print each token's text
doc = nlp(s)
for token in doc:
    print(token.text, end=' | ')
" | I | 'll | always | be | there | with | you | forever | in | your | mind | ! | " |
doc
"I'll always be there with you forever in your mind!"
# The tokenizer does not split emails and URLs into pieces
doc2 = nlp(u"I'm always here to help you all! Email:milaanparmar9@gmail.com or visit more at https://github.com/milaan9!")
for t in doc2:
    print(t, end=' ')
I 'm always here to help you all ! Email:milaanparmar9@gmail.com or visit more at https://github.com/milaan9 !
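As a quick follow-up sketch, spaCy tokens carry like_email and like_url flags (whether the combined Email:... token is flagged depends on the tokenizer rules):
# Print any token spaCy flags as an email address or a URL
for t in doc2:
    if t.like_email or t.like_url:
        print(t.text, '->', 'email' if t.like_email else 'url')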
doc3 = nlp(u'A 5km NYC cab ride costs $10.30')
for t in doc3:
    print(t, end=' ')
A 5 km NYC cab ride costs $ 10.30
doc4 = nlp(u"Let's visit St. Louis in the U.S. next year.")
for t in doc4:
    print(t, end=' ')
Let 's visit St. Louis in the U.S. next year .
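Those exceptions come from the tokenizer's rules; since spaCy 2.2.3, nlp.tokenizer.explain shows which rule produced each token. A minimal sketch:
# Each tuple pairs the rule that fired (SPECIAL, PREFIX, SUFFIX, TOKEN, ...) with the token text
for rule, substring in nlp.tokenizer.explain(u"Let's visit St. Louis in the U.S. next year."):
    print(substring, '\t', rule)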
See the number of words (tokens) in the string
len(doc4)
11
See the number of lexemes in the pipeline's shared vocabulary
# doc.vocab is the shared Vocab of lexemes seen so far, not the characters of this string
len(doc.vocab)
802
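The vocab is shared across all Docs and keeps growing as new lexemes are encountered, so 802 is not a property of this one string. A hedged sketch (exact counts depend on what has already been processed):
# Processing text with unseen words adds new lexemes to the shared vocab
before = len(nlp.vocab)
nlp(u'An unseen word: floofiness!')   # 'floofiness' is just made-up example data
print(len(nlp.vocab) - before)        # > 0 if any lexeme was new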
Print the third word of the string
doc5 = nlp(u'It is better to give than to receive.')
# Retrieve the third token:
doc5[2]
better
Print the third through fifth words of the string
# Retrieve three tokens from the middle:
doc5[2:5]
better to give
# Retrieve the last four tokens:
doc5[-4:]
than to receive.
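Slicing a Doc returns a Span object rather than a plain list; a quick check:
# A Span records its start and end token offsets into the original Doc
span = doc5[2:5]
print(type(span))            # <class 'spacy.tokens.span.Span'>
print(span.start, span.end)  # 2 5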
doc6 = nlp(u'My dinner was horrible.')
doc7 = nlp(u'Your dinner was delicious.')
Doc objects are immutable: a word in one line/sentence can't be replaced with a word from another
# Try to change "My dinner was horrible" to "My dinner was delicious"
doc6[3] = doc7[3]
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-18-d4fb8c39c40b> in <module>()
      1 # Try to change "My dinner was horrible" to "My dinner was delicious"
----> 2 doc6[3] = doc7[3]

TypeError: 'spacy.tokens.doc.Doc' object does not support item assignment
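To get the edited sentence anyway, one workaround (a sketch, not a spaCy API for in-place editing) is to rebuild the text from the token strings and re-parse it:
# Build a new Doc from the modified token texts; the original Docs stay unchanged
tokens = [t.text for t in doc6]
tokens[3] = doc7[3].text
doc_fixed = nlp(' '.join(tokens))
print(doc_fixed)  # My dinner was delicious .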
# spaCy can also explain what type of entity a span of tokens represents
doc8 = nlp(u'Apple to build a Hong Kong factory for $6 million')
for token in doc8:
print(token.text, end=' | ')
print('\n----')
# doc.ents holds all the named entities recognized in the Doc
for ent in doc8.ents:
print(ent.text+' - '+ent.label_+' - '+str(spacy.explain(ent.label_)))
Apple | to | build | a | Hong | Kong | factory | for | $ | 6 | million |
----
Apple - ORG - Companies, agencies, institutions, etc.
Hong Kong - GPE - Countries, cities, states
$6 million - MONEY - Monetary values, including unit
Here "Apple" is recognized as an organization (a company, agency, or institution), "Hong Kong" as a geopolitical entity (a country, city, or state), and "$6 million" as a monetary value.
len(doc8.ents)
3
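Each entity is itself a Span with character offsets back into the raw text; for example:
# start_char / end_char give the entity's position in the original string
for ent in doc8.ents:
    print(ent.text, ent.start_char, ent.end_char, ent.label_)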
doc9 = nlp(u"Autonomous cars shift insurance liability toward manufacturers.")
# noun_chunks yields the base noun phrases in the Doc
for chunk in doc9.noun_chunks:
    print(chunk.text)
Autonomous cars
insurance liability
manufacturers
doc10 = nlp(u"Red cars do not carry higher insurance rates.")
for chunk in doc10.noun_chunks:
    print(chunk.text)
Red cars
higher insurance rates
doc11 = nlp(u"He was a one-eyed, one-horned, flying, purple people-eater.")
for chunk in doc11.noun_chunks:
    print(chunk.text)
He
a one-eyed, one-horned, flying, purple people-eater
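Each noun chunk is also a Span whose root token ties it into the dependency parse; a short sketch:
# chunk.root is the head noun; root.dep_ shows its syntactic role in the sentence
for chunk in doc9.noun_chunks:
    print(chunk.text, '|', chunk.root.text, '|', chunk.root.dep_)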
# displacy.render draws the dependency parse, showing each token's
# part of speech and the syntactic relationships between tokens
from spacy import displacy
doc = nlp(u'Apple is going to build a U.K. factory for $6 million.')
displacy.render(doc, style='dep', jupyter=True, options={'distance': 110})
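Outside Jupyter, displacy.render returns the markup instead of displaying it, so the parse can be saved to disk; a minimal sketch (parse.svg is an arbitrary filename):
# With jupyter=False, render returns the SVG markup as a string
svg = displacy.render(doc, style='dep', jupyter=False, options={'distance': 110})
with open('parse.svg', 'w', encoding='utf-8') as f:
    f.write(svg)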
# style='ent' highlights the entity types inline instead of drawing a parse tree
doc = nlp(u'Over the last quarter Apple sold nearly 20 thousand iPods for a profit of $6 million.')
displacy.render(doc, style='ent', jupyter=True)
doc = nlp(u'This is a sentence.')
displacy.serve(doc, style='dep')
# displacy.serve blocks the notebook while the server runs; if ▶ keeps running too long, press ■ Stop
Using the 'dep' visualizer
Serving on http://0.0.0.0:5000 ...
Shutting down server on port 5000.
!pip install nltk
Requirement already satisfied: nltk in /usr/local/lib/python3.7/dist-packages (3.2.5)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from nltk) (1.15.0)
# Tokenization of paragraphs/sentences
import nltk
# nltk.download("popular") # use this to download only the most commonly used nltk resources
nltk.download('all')
[nltk_data] Downloading collection 'all'
[nltk_data] Done downloading collection all
True
paragraph = """I have three visions for India. In 3000 years of our history, people from all over
the world have come and invaded us, captured our lands, conquered our minds.
From Alexander onwards, the Greeks, the Turks, the Moguls, the Portuguese, the British,
the French, the Dutch, all of them came and looted us, took over what was ours.
Yet we have not done this to any other nation. We have not conquered anyone.
We have not grabbed their land, their culture,
their history and tried to enforce our way of life on them.
Why? Because we respect the freedom of others.That is why my
first vision is that of freedom. I believe that India got its first vision of
this in 1857, when we started the War of Independence. It is this freedom that
we must protect and nurture and build on. If we are not free, no one will respect us.
My second vision for India’s development. For fifty years we have been a developing nation.
It is time we see ourselves as a developed nation. We are among the top 5 nations of the world
in terms of GDP. We have a 10 percent growth rate in most areas. Our poverty levels are falling.
Our achievements are being globally recognised today. Yet we lack the self-confidence to
see ourselves as a developed nation, self-reliant and self-assured. Isn’t this incorrect?
I have a third vision. India must stand up to the world. Because I believe that unless India
stands up to the world, no one will respect us. Only strength respects strength. We must be
strong not only as a military power but also as an economic power. Both must go hand-in-hand.
My good fortune was to have worked with three great minds. Dr. Vikram Sarabhai of the Dept. of
space, Professor Satish Dhawan, who succeeded him and Dr. Brahm Prakash, father of nuclear material.
I was lucky to have worked with all three of them closely and consider this the great opportunity of my life.
I see four milestones in my career"""
# Tokenizing sentences
sentences = nltk.sent_tokenize(paragraph)
# Tokenizing words
words = nltk.word_tokenize(paragraph)
sentences
['I have three visions for India.', 'In 3000 years of our history, people from all over \n the world have come and invaded us, captured our lands, conquered our minds.', 'From Alexander onwards, the Greeks, the Turks, the Moguls, the Portuguese, the British,\n the French, the Dutch, all of them came and looted us, took over what was ours.', 'Yet we have not done this to any other nation.', 'We have not conquered anyone.', 'We have not grabbed their land, their culture, \n their history and tried to enforce our way of life on them.', 'Why?', 'Because we respect the freedom of others.That is why my \n first vision is that of freedom.', 'I believe that India got its first vision of \n this in 1857, when we started the War of Independence.', 'It is this freedom that\n we must protect and nurture and build on.', 'If we are not free, no one will respect us.', 'My second vision for India’s development.', 'For fifty years we have been a developing nation.', 'It is time we see ourselves as a developed nation.', 'We are among the top 5 nations of the world\n in terms of GDP.', 'We have a 10 percent growth rate in most areas.', 'Our poverty levels are falling.', 'Our achievements are being globally recognised today.', 'Yet we lack the self-confidence to\n see ourselves as a developed nation, self-reliant and self-assured.', 'Isn’t this incorrect?', 'I have a third vision.', 'India must stand up to the world.', 'Because I believe that unless India \n stands up to the world, no one will respect us.', 'Only strength respects strength.', 'We must be \n strong not only as a military power but also as an economic power.', 'Both must go hand-in-hand.', 'My good fortune was to have worked with three great minds.', 'Dr. Vikram Sarabhai of the Dept.', 'of \n space, Professor Satish Dhawan, who succeeded him and Dr. Brahm Prakash, father of nuclear material.', 'I was lucky to have worked with all three of them closely and consider this the great opportunity of my life.', 'I see four milestones in my career']
words
['I', 'have', 'three', 'visions', 'for', 'India', '.', 'In', '3000', 'years', 'of', 'our', 'history', ',', 'people', 'from', 'all', 'over', 'the', 'world', 'have', 'come', 'and', 'invaded', 'us', ',', 'captured', 'our', 'lands', ',', 'conquered', 'our', 'minds', '.', 'From', 'Alexander', 'onwards', ',', 'the', 'Greeks', ',', 'the', 'Turks', ',', 'the', 'Moguls', ',', 'the', 'Portuguese', ',', 'the', 'British', ',', 'the', 'French', ',', 'the', 'Dutch', ',', 'all', 'of', 'them', 'came', 'and', 'looted', 'us', ',', 'took', 'over', 'what', 'was', 'ours', '.', 'Yet', 'we', 'have', 'not', 'done', 'this', 'to', 'any', 'other', 'nation', '.', 'We', 'have', 'not', 'conquered', 'anyone', '.', 'We', 'have', 'not', 'grabbed', 'their', 'land', ',', 'their', 'culture', ',', 'their', 'history', 'and', 'tried', 'to', 'enforce', 'our', 'way', 'of', 'life', 'on', 'them', '.', 'Why', '?', 'Because', 'we', 'respect', 'the', 'freedom', 'of', 'others.That', 'is', 'why', 'my', 'first', 'vision', 'is', 'that', 'of', 'freedom', '.', 'I', 'believe', 'that', 'India', 'got', 'its', 'first', 'vision', 'of', 'this', 'in', '1857', ',', 'when', 'we', 'started', 'the', 'War', 'of', 'Independence', '.', 'It', 'is', 'this', 'freedom', 'that', 'we', 'must', 'protect', 'and', 'nurture', 'and', 'build', 'on', '.', 'If', 'we', 'are', 'not', 'free', ',', 'no', 'one', 'will', 'respect', 'us', '.', 'My', 'second', 'vision', 'for', 'India', '’', 's', 'development', '.', 'For', 'fifty', 'years', 'we', 'have', 'been', 'a', 'developing', 'nation', '.', 'It', 'is', 'time', 'we', 'see', 'ourselves', 'as', 'a', 'developed', 'nation', '.', 'We', 'are', 'among', 'the', 'top', '5', 'nations', 'of', 'the', 'world', 'in', 'terms', 'of', 'GDP', '.', 'We', 'have', 'a', '10', 'percent', 'growth', 'rate', 'in', 'most', 'areas', '.', 'Our', 'poverty', 'levels', 'are', 'falling', '.', 'Our', 'achievements', 'are', 'being', 'globally', 'recognised', 'today', '.', 'Yet', 'we', 'lack', 'the', 'self-confidence', 'to', 'see', 'ourselves', 'as', 'a', 'developed', 'nation', ',', 'self-reliant', 'and', 'self-assured', '.', 'Isn', '’', 't', 'this', 'incorrect', '?', 'I', 'have', 'a', 'third', 'vision', '.', 'India', 'must', 'stand', 'up', 'to', 'the', 'world', '.', 'Because', 'I', 'believe', 'that', 'unless', 'India', 'stands', 'up', 'to', 'the', 'world', ',', 'no', 'one', 'will', 'respect', 'us', '.', 'Only', 'strength', 'respects', 'strength', '.', 'We', 'must', 'be', 'strong', 'not', 'only', 'as', 'a', 'military', 'power', 'but', 'also', 'as', 'an', 'economic', 'power', '.', 'Both', 'must', 'go', 'hand-in-hand', '.', 'My', 'good', 'fortune', 'was', 'to', 'have', 'worked', 'with', 'three', 'great', 'minds', '.', 'Dr.', 'Vikram', 'Sarabhai', 'of', 'the', 'Dept', '.', 'of', 'space', ',', 'Professor', 'Satish', 'Dhawan', ',', 'who', 'succeeded', 'him', 'and', 'Dr.', 'Brahm', 'Prakash', ',', 'father', 'of', 'nuclear', 'material', '.', 'I', 'was', 'lucky', 'to', 'have', 'worked', 'with', 'all', 'three', 'of', 'them', 'closely', 'and', 'consider', 'this', 'the', 'great', 'opportunity', 'of', 'my', 'life', '.', 'I', 'see', 'four', 'milestones', 'in', 'my', 'career']
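The two tokenizers can also be combined, word-tokenizing each sentence separately; a short follow-up sketch:
# Word-tokenize the first three sentences individually
for sent in sentences[:3]:
    print(nltk.word_tokenize(sent))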