To illustrate how to use pyLDAvis's gensim helper functions, we will create a model from the 20 Newsgroups corpus. Only minimal preprocessing is done, so the model is far from the best possible; the goal of this notebook is simply to demonstrate the helper functions.
%%bash
mkdir -p data
pushd data
if [ -d "20news-bydate-train" ]
then
echo "The data has already been downloaded..."
else
wget http://qwone.com/%7Ejason/20Newsgroups/20news-bydate.tar.gz
tar xfv 20news-bydate.tar.gz
rm 20news-bydate.tar.gz
fi
echo "Let's take a look at the groups..."
ls 20news-bydate-train/
popd
The data has already been downloaded...
Let's take a look at the groups...
alt.atheism               comp.graphics            comp.os.ms-windows.misc   comp.sys.ibm.pc.hardware
comp.sys.mac.hardware     comp.windows.x           misc.forsale              rec.autos
rec.motorcycles           rec.sport.baseball       rec.sport.hockey          sci.crypt
sci.electronics           sci.med                  sci.space                 soc.religion.christian
talk.politics.guns        talk.politics.mideast    talk.politics.misc        talk.religion.misc
Each group directory contains one file per message:
!ls -lah data/20news-bydate-train/sci.space | tail -n 5
-rw-r--r--  1 marksusol  staff  1.4K Mar 18  2003 61250
-rw-r--r--  1 marksusol  staff  889B Mar 18  2003 61252
-rw-r--r--  1 marksusol  staff  1.2K Mar 18  2003 61264
-rw-r--r--  1 marksusol  staff  1.6K Mar 18  2003 61308
-rw-r--r--  1 marksusol  staff  1.3K Mar 18  2003 61422
Let's take a peek at one email:
!head data/20news-bydate-train/sci.space/61422
From: ralph.buttigieg@f635.n713.z3.fido.zeta.org.au (Ralph Buttigieg)
Subject: Why not give $1 billion to first year-lo
Organization: Fidonet. Gate admin is fido@socs.uts.edu.au
Lines: 34
Original to: keithley@apple.com

G'day keithley@apple.com

21 Apr 93 22:25, keithley@apple.com wrote to All:
from glob import glob
import re
import string
import funcy as fp
from gensim import models
from gensim.corpora import Dictionary, MmCorpus
import nltk
import pandas as pd
nltk.download('stopwords')
[nltk_data] Downloading package stopwords to
[nltk_data]     /Users/marksusol/nltk_data...
[nltk_data]   Package stopwords is already up-to-date!
True
# quick and dirty....
EMAIL_REGEX = re.compile(r"[a-z0-9\.\+_-]+@[a-z0-9\._-]+\.[a-z]*")
FILTER_REGEX = re.compile(r"[^a-z '#]")
TOKEN_MAPPINGS = [(EMAIL_REGEX, "#email"), (FILTER_REGEX, ' ')]
def tokenize_line(line):
res = line.lower()
for regexp, replacement in TOKEN_MAPPINGS:
res = regexp.sub(replacement, res)
return res.split()
def tokenize(lines, token_size_filter=2):
tokens = fp.mapcat(tokenize_line, lines)
return [t for t in tokens if len(t) > token_size_filter]
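To see what the tokenizer actually produces, here is a small self-contained check. It reuses the same regexes and `tokenize_line` defined above; the sample email line is made up for illustration:

```python
import re

# Same quick-and-dirty patterns as above.
EMAIL_REGEX = re.compile(r"[a-z0-9\.\+_-]+@[a-z0-9\._-]+\.[a-z]*")
FILTER_REGEX = re.compile(r"[^a-z '#]")
TOKEN_MAPPINGS = [(EMAIL_REGEX, "#email"), (FILTER_REGEX, ' ')]

def tokenize_line(line):
    res = line.lower()
    for regexp, replacement in TOKEN_MAPPINGS:
        res = regexp.sub(replacement, res)
    return res.split()

print(tokenize_line("From: jdoe@example.com (John Doe), Subject: Re: Moon base?"))
# → ['from', '#email', 'john', 'doe', 'subject', 're', 'moon', 'base']
```

Note how the email address collapses to the `#email` placeholder token, and all punctuation is stripped before splitting.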
def load_doc(filename):
group, doc_id = filename.split('/')[-2:]
with open(filename, errors='ignore') as f:
doc = f.readlines()
return {'group': group,
'doc': doc,
'tokens': tokenize(doc),
'id': doc_id}
docs = pd.DataFrame(list(map(load_doc, glob('data/20news-bydate-train/*/*')))).set_index(['group','id'])
docs.head()
| group | id | doc | tokens |
|---|---|---|---|
| talk.politics.mideast | 75895 | [From: hm@cs.brown.edu (Harry Mamaysky)\n, Sub... | [from, #email, harry, mamaysky, subject, heil,... |
| | 76248 | [From: waldo@cybernet.cse.fau.edu (Todd J. Dic... | [from, #email, todd, dicker, subject, israel's... |
| | 76277 | [From: C.L.Gannon@newcastle.ac.uk (Space Cadet... | [from, #email, space, cadet, subject, exact, m... |
| | 76045 | [From: shaig@Think.COM (Shai Guday)\n, Subject... | [from, #email, shai, guday, subject, basil, op... |
| | 76283 | [From: koc@rize.ECE.ORST.EDU (Cetin Kaya Koc)\... | [from, #email, cetin, kaya, koc, subject, seve... |
def nltk_stopwords():
return set(nltk.corpus.stopwords.words('english'))
def prep_corpus(docs, additional_stopwords=set(), no_below=5, no_above=0.5):
print('Building dictionary...')
dictionary = Dictionary(docs)
stopwords = nltk_stopwords().union(additional_stopwords)
stopword_ids = map(dictionary.token2id.get, stopwords)
dictionary.filter_tokens(stopword_ids)
dictionary.compactify()
dictionary.filter_extremes(no_below=no_below, no_above=no_above, keep_n=None)
dictionary.compactify()
print('Building corpus...')
corpus = [dictionary.doc2bow(doc) for doc in docs]
return dictionary, corpus
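`prep_corpus` ends with `Dictionary.doc2bow`, which turns each token list into a sparse bag-of-words: a sorted list of `(token_id, count)` pairs, silently dropping tokens that are not in the dictionary. A minimal stdlib-only sketch of the same idea (the toy vocabulary here is invented for illustration, standing in for gensim's `Dictionary.token2id`):

```python
from collections import Counter

# A toy id mapping standing in for gensim's Dictionary.token2id.
token2id = {'space': 0, 'shuttle': 1, 'launch': 2}

def doc2bow(tokens, token2id):
    # Count tokens, keeping only those in the vocabulary,
    # and return sorted (id, count) pairs as gensim does.
    counts = Counter(t for t in tokens if t in token2id)
    return sorted((token2id[t], n) for t, n in counts.items())

print(doc2bow(['space', 'shuttle', 'space', 'orbit'], token2id))
# → [(0, 2), (1, 1)]
```

Notice that `'orbit'` vanishes because it is out-of-vocabulary, just as stopwords and extreme-frequency tokens vanish after `filter_tokens` and `filter_extremes` above.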
dictionary, corpus = prep_corpus(docs['tokens'])
Building dictionary... Building corpus...
MmCorpus.serialize('newsgroups.mm', corpus)
dictionary.save('newsgroups.dict')
%%time
lda = models.ldamodel.LdaModel(corpus=corpus, id2word=dictionary, num_topics=50, passes=10)
lda.save('newsgroups_50_lda.model')
CPU times: user 2min 21s, sys: 5min 36s, total: 7min 58s Wall time: 41.1 s
Okay, the moment we have all been waiting for is finally here! You'll notice in the visualization that we have a few junk topics that would probably disappear with better preprocessing of the corpus. This is left as an exercise for the reader. :)
import pyLDAvis.gensim_models as gensimvis
import pyLDAvis
vis_data = gensimvis.prepare(lda, corpus, dictionary)
pyLDAvis.display(vis_data)
pyLDAvis can visualize gensim HDP models as well as LDA models.
The difference between HDP and LDA is that HDP is a non-parametric method: we don't need to specify the number of topics in advance. HDP fits as many topics as the data supports and finds the number of topics by itself.
%%time
# The optional parameter T caps the number of topics HDP may find at 50.
hdp = models.hdpmodel.HdpModel(corpus, dictionary, T=50)
hdp.save('newsgroups_hdp.model')
CPU times: user 30.2 s, sys: 1min 40s, total: 2min 10s Wall time: 12.3 s
As with the LDA model, to prepare the visualization you only need to pass in your model, the corpus, and the associated dictionary.
vis_data = gensimvis.prepare(hdp, corpus, dictionary)
pyLDAvis.display(vis_data)