Today we're going to play with word embeddings: train our own small embedding, load a pre-trained one from the gensim model zoo, and use it to visualize text corpora.
All of this will happen on top of a dataset of Quora questions.
Requirements: pip install --upgrade nltk gensim bokeh (only needed if you're running locally).
# download the data:
!wget https://www.dropbox.com/s/obaitrix9jyu84r/quora.txt?dl=1 -O ./quora.txt
# alternative download link: https://yadi.sk/i/BPQrUu1NaTduEw
import numpy as np
data = list(open("./quora.txt", encoding="utf-8"))
data[50]
Tokenization: a typical first step for an NLP task is to split raw text into words. The text we're working with is raw: punctuation and smileys are still attached to some words, so a simple str.split won't do.
Let's use nltk, a library that handles many NLP tasks like tokenization, stemming, and part-of-speech tagging.
from nltk.tokenize import WordPunctTokenizer
tokenizer = WordPunctTokenizer()
print(tokenizer.tokenize(data[50]))
# TASK: lowercase everything and extract tokens with tokenizer.
# data_tok should be a list of lists of tokens for each line in data.
data_tok = # YOUR CODE
assert all(isinstance(row, (list, tuple)) for row in data_tok), "please convert each line into a list of tokens (strings)"
assert all(all(isinstance(tok, str) for tok in row) for row in data_tok), "please convert each line into a list of tokens (strings)"
is_latin = lambda tok: all('a' <= x.lower() <= 'z' for x in tok)
assert all(map(lambda l: not is_latin(l) or l.islower(), map(' '.join, data_tok))), "please make sure to lowercase the data"
print([' '.join(row) for row in data_tok[:2]])
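In case you get stuck, here is one possible way to fill in the tokenization cell above (a sketch, not the only valid solution; it reuses the WordPunctTokenizer instance defined earlier):
# one possible solution: lowercase each line, then split it into tokens
data_tok = [tokenizer.tokenize(line.lower()) for line in data]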
Word vectors: as the saying goes, there's more than one way to train word embeddings. There's Word2Vec and GloVe with different objective functions. Then there's FastText, which uses character-level (subword) information to build word embeddings.
The choice is huge, so let's start someplace small: gensim is another NLP library that features many vector-based models including word2vec.
from gensim.models import Word2Vec
model = Word2Vec(data_tok,
                 size=32,     # embedding vector size
                 min_count=5, # consider words that occurred at least 5 times
                 window=5).wv # define context as a 5-word window around the target word
# now you can get word vectors!
model.get_vector('anything')
# or query similar words directly. Go play with it!
model.most_similar('bread')
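A note on versions: gensim 4.0 renamed some constructor arguments, so if the training cell above fails with an unexpected keyword argument 'size', the 4.x-style call looks like this (same hyperparameters, different names; adjust to whichever gensim version you actually have installed):
from gensim.models import Word2Vec

# gensim >= 4.0: `size` became `vector_size`; min_count and window are unchanged
model = Word2Vec(data_tok,
                 vector_size=32,  # embedding vector size
                 min_count=5,     # consider words that occurred at least 5 times
                 window=5).wv     # define context as a 5-word window around the target word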
That took a while, huh? Now imagine training full-size (100-300D) word embeddings on gigabytes of text: Wikipedia articles or Twitter posts.
Thankfully, nowadays you can get a pre-trained word embedding model in 2 lines of code (no SMS required, promise).
import gensim.downloader as api
model = api.load('glove-twitter-100')
model.most_similar(positive=["coder", "money"], negative=["brain"])
One way to see if our vectors are any good is to plot them. The thing is, those vectors live in a 100-dimensional space, and we humans are more used to 2-3D.
Luckily, we machine learners know about dimensionality reduction methods.
Let's use one of them to plot the 1000 most frequent words.
words = sorted(model.vocab.keys(),
               key=lambda word: model.vocab[word].count,
               reverse=True)[:1000]
print(words[::100])
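The cell above uses model.vocab, which only exists in gensim 3.x. On gensim 4.x a rough equivalent is the snippet below; it assumes the downloaded model stores its keys in descending frequency order, which is the usual convention for pre-trained gensim-data models:
# gensim >= 4.0: KeyedVectors.vocab is gone; the keys live in index_to_key,
# ordered (by convention) from most to least frequent
words = list(model.index_to_key[:1000])
print(words[::100])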
# for each word, compute its vector with the model
word_vectors = # YOUR CODE
assert isinstance(word_vectors, np.ndarray)
assert word_vectors.shape == (len(words), 100)
assert np.isfinite(word_vectors).all()
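One way to fill in the cell above (a sketch; any approach that produces a [len(words), 100] float array will do):
# stack one 100d vector per word into a single matrix
word_vectors = np.stack([model.get_vector(word) for word in words])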
The simplest linear dimensionality reduction method is __P__rincipal __C__omponent __A__nalysis.
In geometric terms, PCA tries to find axes along which most of the variance occurs. The "natural" axes, if you wish.
Under the hood, it attempts to decompose the object-feature matrix $X$ into two smaller matrices, $W$ and $\hat W$, minimizing the mean squared error:

$$\|(X W) \hat{W} - X\|^2_2 \to \min_{W, \hat{W}}$$

from sklearn.decomposition import PCA
# map word vectors onto a 2d plane with PCA. Use the good old sklearn API (fit, transform)
# after that, normalize the vectors to make sure they have zero mean and unit variance
word_vectors_pca = # YOUR CODE
# and maybe MORE OF YOUR CODE here :)
assert word_vectors_pca.shape == (len(word_vectors), 2), "there must be a 2d vector for each word"
assert max(abs(word_vectors_pca.mean(0))) < 1e-5, "points must be zero-centered"
assert max(abs(1.0 - word_vectors_pca.std(0))) < 1e-2, "points must have unit variance"
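A possible solution for the PCA cell above (a sketch; it pulls in sklearn's StandardScaler for the normalization step, which is an extra import not used elsewhere in this notebook):
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# project the 100d word vectors down to 2d, then standardize to zero mean and unit variance
word_vectors_pca = PCA(n_components=2).fit_transform(word_vectors)
word_vectors_pca = StandardScaler().fit_transform(word_vectors_pca)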
import bokeh.models as bm, bokeh.plotting as pl
from bokeh.io import output_notebook
output_notebook()
def draw_vectors(x, y, radius=10, alpha=0.25, color='blue',
                 width=600, height=400, show=True, **kwargs):
    """ draws an interactive plot for data points with auxiliary info on hover """
    if isinstance(color, str):
        color = [color] * len(x)
    data_source = bm.ColumnDataSource({'x': x, 'y': y, 'color': color, **kwargs})

    fig = pl.figure(active_scroll='wheel_zoom', width=width, height=height)
    fig.scatter('x', 'y', size=radius, color='color', alpha=alpha, source=data_source)

    fig.add_tools(bm.HoverTool(tooltips=[(key, "@" + key) for key in kwargs.keys()]))
    if show:
        pl.show(fig)
    return fig
draw_vectors(word_vectors_pca[:, 0], word_vectors_pca[:, 1], token=words)
# hover a mouse over there and see if you can identify the clusters
PCA is nice, but it's strictly linear and thus only able to capture the coarse, high-level structure of the data.
If we instead want to focus on keeping neighboring points close together, we could use t-SNE, which is itself an embedding method. Here you can read more on t-SNE.
from sklearn.manifold import TSNE
# map word vectors onto a 2d plane with TSNE. hint: use verbose=100 to see what it's doing.
# normalize them just like with PCA
word_tsne = #YOUR CODE
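One way to fill in the TSNE cell above (a sketch, mirroring the normalization we did for PCA; t-SNE on 1000 points may take a little while):
# embed the word vectors into 2d with t-SNE, then normalize the result
word_tsne = TSNE(n_components=2, verbose=100).fit_transform(word_vectors)
word_tsne = (word_tsne - word_tsne.mean(axis=0)) / word_tsne.std(axis=0)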
draw_vectors(word_tsne[:, 0], word_tsne[:, 1], color='green', token=words)
Word embeddings can also be used to represent short phrases. The simplest way is to take a (possibly weighted) average of the vectors of all tokens in the phrase.
This trick is useful for getting to know the data you're working with: finding out whether there are any outliers, clusters or other artifacts.
Let's try this new hammer on our data!
def get_phrase_embedding(phrase):
    """
    Convert a phrase to a vector by aggregating its word embeddings. See description above.
    """
    # 1. lowercase phrase
    # 2. tokenize phrase
    # 3. average word vectors for all words in the tokenized phrase
    #    * skip words that are not in the model's vocabulary
    #    * if all words are missing from the vocabulary, return zeros
    vector = np.zeros([model.vector_size], dtype='float32')

    # YOUR CODE

    return vector
vector = get_phrase_embedding("I'm very sure. This never happened to me before...")
assert np.allclose(vector[::10],
                   np.array([ 0.31807372, -0.02558171,  0.0933293 , -0.1002182 , -1.0278689 ,
                              -0.16621883,  0.05083408,  0.17989802,  1.3701859 ,  0.08655966],
                            dtype=np.float32))
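For reference, here is one possible implementation of get_phrase_embedding (a sketch under a different name so it doesn't clobber your own version; whether it reproduces the exact numbers in the assert above also depends on the loaded model and on how you treat out-of-vocabulary tokens):
def get_phrase_embedding_reference(phrase):
    """ average the embeddings of all in-vocabulary tokens; return zeros if none are known """
    tokens = tokenizer.tokenize(phrase.lower())
    known_vectors = [model.get_vector(tok) for tok in tokens if tok in model]
    if not known_vectors:
        return np.zeros([model.vector_size], dtype='float32')
    return np.mean(known_vectors, axis=0).astype('float32')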
# let's only consider ~1k phrases for a first run.
chosen_phrases = data[::len(data) // 1000]
# compute vectors for chosen phrases
phrase_vectors = # YOUR CODE
assert isinstance(phrase_vectors, np.ndarray) and np.isfinite(phrase_vectors).all()
assert phrase_vectors.shape == (len(chosen_phrases), model.vector_size)
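One way to fill in the cell above (a sketch): apply get_phrase_embedding to every chosen phrase and stack the results into a single matrix.
phrase_vectors = np.stack([get_phrase_embedding(phrase) for phrase in chosen_phrases])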
# map vectors into 2d space with PCA, t-SNE, or another method of your choice
# don't forget to normalize
phrase_vectors_2d = TSNE(verbose=1000).fit_transform(phrase_vectors)
phrase_vectors_2d = (phrase_vectors_2d - phrase_vectors_2d.mean(axis=0)) / phrase_vectors_2d.std(axis=0)
draw_vectors(phrase_vectors_2d[:, 0], phrase_vectors_2d[:, 1],
             phrase=[phrase[:50] for phrase in chosen_phrases],
             radius=20)
Finally, let's build a simple "similar question" engine with the phrase embeddings we've built.
# compute vector embedding for all lines in data
data_vectors = np.array([get_phrase_embedding(l) for l in data])
def find_nearest(query, k=10):
    """
    given a text line (query), return the k most similar lines from data, sorted from most to least similar
    similarity should be measured as the cosine between the query and line embedding vectors
    hint: it's okay to use global variables: data and data_vectors. see also: np.argpartition, np.argsort
    """
    # YOUR CODE

    return <YOUR CODE: top-k lines starting from most similar>
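For reference, a possible implementation of the cosine-similarity search (a sketch under a hypothetical name, find_nearest_reference, so it doesn't overwrite your own function):
def find_nearest_reference(query, k=10):
    """ return the k lines from `data` whose embeddings are closest to the query by cosine similarity """
    query_vector = get_phrase_embedding(query)
    # cosine similarity = dot product divided by the norms (small epsilon guards against zero vectors)
    norms = np.linalg.norm(data_vectors, axis=1) * np.linalg.norm(query_vector) + 1e-9
    similarities = data_vectors @ query_vector / norms
    # np.argpartition finds an unordered top-k; np.argsort then orders it from most to least similar
    top_k = np.argpartition(-similarities, k)[:k]
    top_k = top_k[np.argsort(-similarities[top_k])]
    return [data[i] for i in top_k]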
results = find_nearest(query="How do i enter the matrix?", k=10)
print(''.join(results))
assert len(results) == 10 and isinstance(results[0], str)
assert results[0] == 'How do I get to the dark web?\n'
assert results[3] == 'What can I do to save the world?\n'
find_nearest(query="How does Trump?", k=10)
find_nearest(query="Why don't i ask a question myself?", k=10)