#!/usr/bin/env python
# coding: utf-8

# # Evaluating Topic Models
#
# ## PyData Berlin 2017 Talk
#
# #### This notebook is a companion to the talk I gave at the PyData Berlin 2017 conference on evaluating topic models
#
# Unsupervised models in natural language processing (NLP) have a long history but have recently become very popular. Word2vec, GloVe, LSI and LDA provide powerful computational tools for dealing with natural language and make exploring and modelling large document collections feasible.
#
# Often, evaluating the model output requires an existing understanding of what should come out. For topic models the output should reflect our understanding of the relatedness of topical categories, for instance **sports**, **travel** or **machine learning**. Distributional models of language such as `word2vec` and `GloVe` should capture some, or ideally all, of the semantics of how language is used.
#
# This is a lot to ask! Not necessarily because it isn't learnable (after all, we've learned it), but because we are not necessarily able to represent the desired output as an evaluation function and data set that can be optimised. As an example, topic models are often evaluated with respect to the semantic coherence of the topics, based on a set of top words from each topic. It is not clear whether a set of words such as `{cat, dog, horse, pet}` fully captures the semantics of animalness or petsiness. Nevertheless, these methods are useful in determining whether distributed word representations capture some of the information conveyed by words, and whether a topic model is understandable to a human.
#
# This notebook explores a number of these issues in context and aims to provide an overview of the research that has been done over the past 10 or so years, mostly focusing on topic models.
#
# The notebook is split into three parts
#
# 1. Eye Balling models
#    - ways of making visual, manual inspection of models easier
# 2. Intrinsic Evaluation Methods
#    - how to measure the internal coherence of topic models
# 3. Putting a Number on Human Judgements
#    - quantitative methods for evaluating human judgement
#
# ---
#
# **Random collection of other stuff**
#
# While preparing the talk and the notebook I experimented with a lot of different software packages and corpora. These are dumped as a somewhat unorganised collection of "other things" at the end of the notebook.

# # Why Evaluate Models
#
# We would like to be able to say whether a model is objectively good or bad, and to compare different models to each other. This requires an objective measure for the quality of the model, but many of the tasks mentioned above require subjective evaluation.
#
# In practical applications one needs to evaluate whether "the correct thing" has been learned; often this means applying implicit knowledge and "eye-balling": documents that talk about *football* should be in the same category, and *cat* is more similar to *dog* than to *pen*. Ideally this information should be captured in a single metric that can be maximised. It is not clear how to formulate such a metric, however. Over the years there have been numerous attempts, from various angles, at formulating semantic coherence; none captures the desired outcome fully, and there are numerous pitfalls one should be aware of when applying them.
#
# Some of the issues are inherent to the metrics themselves, others relate to external factors, such as which kind of held-out data to use.
# Natural language is messy, ambiguous and full of interpretation; that's where a lot of its expressive richness comes from. Sometimes trying to cleanse the ambiguity also reduces language to an unnatural form.
#
# ----
#
# # Topic Models
#
# Topic models aim to capture the way in which words co-occur in the context of a document and divide the source corpus into some number of (soft) clusters. There are a large number of variations on the topic model: initial work was done by Deerwester et al. in developing Latent Semantic Analysis (LSA/LSI), and the canonical example nowadays is Latent Dirichlet Allocation (LDA). The unit upon which topic models work is a sparse document-term matrix, depicted below.
#
# Each row is a document, each column is a term, and the entries in each cell usually represent the frequency of each term in each document.

# In[127]:

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

documents = ['The cat sat on the mat',
             'A red cat sat on the mat',
             'No cat sat on the mat']

vectoriser = CountVectorizer().fit(documents)
X = vectoriser.transform(documents)

pd.DataFrame(X.A, columns=sorted(vectoriser.vocabulary_.keys(),
                                 key=lambda k: vectoriser.vocabulary_[k]))


# In the case of LDA a Bayesian model is fitted using the data from this matrix. Each topic in the model becomes a probability distribution over terms (the columns). Conceptually this is saying that semantic concepts can be represented as probabilities over a set of words. This makes sense, as the topic of discussion acts as a limiting factor on the vocabulary one is likely to use, hear or read in the context of that discussion.
#
# Words relating to political campaigning are much less likely to be observed in documents that discuss ice hockey. Notice however that they are *unlikely*, not impossible; it is not the case that they can never occur, it is simply statistically less likely that *caucus* or *polling* will appear in a document that otherwise discusses Teemu Selänne retiring. A topic therefore is a probability distribution over the entire vocabulary, indicating how likely each word is to occur within that topic.
#
# The documents the model is built over can be as short as a single sentence (a tweet) or as long as a chapter in a book. Typically, very short documents tend to be more difficult to build coherent models over than slightly longer documents.
#
# Open source implementations of the models are readily available
#
# ----
#
# - Latent Semantic Indexing / Latent Semantic Analysis
#   - http://radimrehurek.com/gensim/models/lsimodel.html
# - Latent Dirichlet Allocation (LDA)
#   - and its many many many variants
#   - http://radimrehurek.com/gensim/models/ldamodel.html
#   - http://mallet.cs.umass.edu/topics.php
# - Hierarchical Dirichlet Process (HDP)
#   - http://radimrehurek.com/gensim/models/hdpmodel.html
# - Spherical Hierarchical Dirichlet Process (sHDP)
#   - http://arxiv.org/pdf/1604.00126v1.pdf
#   - https://github.com/Ardavans/sHDP

# # A Model
#
# In order to evaluate a model, we must of course have one. I'll use the same model(s), built from the Fake News data set on Kaggle, throughout this notebook.

# In[4]:

import pandas as pd

df_fake = pd.read_csv('/usr/local/scratch/data/kaggle/fake.csv')
df_fake[['title', 'text', 'language']].head()


# In[5]:

import numpy as np

df_fake = df_fake.loc[(pd.notnull(df_fake.text)) & (df_fake.language == 'english')]
df_fake.shape


# There is a total of 12357 non-empty English language documents, which should be enough to build a model.
# Let's parse the documents using `spacy`, getting rid of some non-content words, and feed the result into `gensim`. I'll use `gensim.corpora.MmCorpus` to serialise the text onto disk; this both saves memory and allows random access to the corpus, which will become useful later for creating different splits of the data.

# In[4]:

import spacy
import gensim
from gensim.models import LdaModel
from gensim.corpora import Dictionary, MmCorpus

spc = spacy.load('en')

KEEP_POS = set([90, 98, 82, 84, 94])  # NOUN, VERB, ADJ, ADV, PROPN
pipe = spc.pipe(df_fake.text, parse=False, entity=False, n_threads=8)
processed = [[token.lemma_ for token in document if token.pos in KEEP_POS]
             for document in pipe]

vocabulary = Dictionary(processed)
vocabulary.filter_extremes(no_below=3, no_above=0.5)


# In[133]:

MmCorpus.serialize('./fake_news.mm', (vocabulary.doc2bow(doc) for doc in processed))


# In[8]:

vocabulary.save('./fake_news.vocab')


# In[ ]:

del processed


# In[5]:

corpus_fake = MmCorpus('./fake_news.mm')


# In[137]:

lda_fake = LdaModel(corpus=corpus_fake, id2word=vocabulary, num_topics=35,
                    chunksize=1500, iterations=200, alpha='auto')


# Inspecting the top six words from each topic in the model, we can certainly identify some structure. There are topics about the Flint, Michigan water crisis (Topic 11), the Dakota Access Pipeline protests (Topic 9) and the US elections.

# In[7]:

pd.DataFrame([[word for rank, (word, prob) in enumerate(words)]
              for topic_id, words in lda_fake.show_topics(formatted=False, num_words=6, num_topics=35)])


# In[9]:

lda_fake.save('./fake_news_35.lda')


# # Eye Ballin'
#
# - usually ML models are evaluated and improved based on a scoring function whose gradient can be followed to a *hopefully* global minimum
# - unsupervised models are tricky to evaluate because there usually isn't a suitable error function to optimise
# - the unsupervised models still come with hyperparameters, so how do you know when you've set them *correctly*?
# - furthermore, how do you know if model A is better than model B?

# ### Termite
#
# The visualisations in the `Termite` [paper](http://vis.stanford.edu/papers/termite) look very promising, but I've been unable to run the code. The original project has been split into two separate projects: a *data server* and a *visualisation client*. Unfortunately the data server uses an unknown data format in SQLite databases, the host server where the data sets ought to be is not operational anymore, and the project hasn't been maintained since 2014.
#
# The project also relies on `web2py`, which at the moment only supports Python 2, and there doesn't seem to be any interest in porting it to Python 3.
#
# Anyhow, it would seem to be possible to run the project under a Python 2 environment.
# - [Original project](https://github.com/StanfordHCI/termite)
# - [Termite data server](https://github.com/uwdata/termite-data-server)
# - [Termite visualisations](https://github.com/uwdata/termite-visualizations)
#
# ---
#
# - modify `read_gensim.py` to add a `--sentence-splitter` command line argument
# - modify `bin/apps/SplitSentences.py` to take an extra parameter for the sentence splitter jar location
# - update the code for `gensim` API breaking changes
#   - `bin/readers/GensimReader.py` line 47: `ldamodel.show_topics`
#   - `bin/readers/GensimReader.py` line 51: the topic/term distribution does not need `enumerate` anymore
#   - `bin/readers/GensimReader.py` line 52: swap `term` and `value` around - they are the wrong way around
#
# `termite` makes a lot of assumptions about paths, so one needs to be quite careful about what the root directory is when running the commands.
#
# ---

# In[22]:

import sys
sys.path.append('/home/matti/termite-data-server/bin/')

from modellers import GensimLDA


# In[29]:

import re
df_fake['text_oneline'] = df_fake.text.apply(lambda s: re.sub(r'\s+', ' ', str(s)))


# In[30]:

df_fake[['uuid', 'text_oneline']].to_csv('./fakenews.termite.tsv', sep='\t', header=False, index=False)


# In[82]:

py27 = '/home/matti/miniconda3/envs/py27/bin/python'
termite_server_root = '/home/matti/termite-data-server/'


# First we need to import the corpus into `termite`'s own special SQLite format.

# In[76]:

get_ipython().system('mkdir termite; cp ./fakenews.termite.tsv ./termite;')
get_ipython().system('cd termite; $py27 /home/matti/termite-data-server/bin/import_corpus.py ./db ./fakenews.termite.tsv')


# Then we need to export that SQLite DB back into a text corpus. There are some magic file names and path structures at play here, so you can't just use the original file.

# In[79]:

get_ipython().system('cd termite; mkdir corpus; $py27 /home/matti/termite-data-server/bin/export_corpus.py ./db ./corpus/corpus.txt')


# Then train the LDA model; it should be possible to skip this and just use any model trained with `gensim`.

# In[81]:

get_ipython().run_line_magic('capture', '')
get_ipython().system('cd termite; $py27 /home/matti/termite-data-server/bin/train_gensim.py --overwrite ./corpus ./models/')


# Finally, read the trained `gensim` LDA model into `termite`, creating all the necessary data structures for the visualisations to work. This computes, among other things, term collocations ($N^2$), so it's going to take a while to run, especially for large vocabularies.
#
# If you set all the paths consistently during the previous steps, this should just work. If not, it's likely there will be some `FileNotFound` errors.

# In[88]:

get_ipython().run_line_magic('capture', '')
get_ipython().system('cd termite; cp -r $termite_server_root/tools ./; $py27 /home/matti/termite-data-server/bin/read_gensim.py --overwrite --sentence-split /home/matti/termite-data-server/utils/corenlp/SentenceSplitter.jar gensim_termite ./models/ ./corpus ./db')


# To start the server and see the visualisations:

# In[91]:

get_ipython().system('$py27 $termite_server_root/web2py/web2py.py')


# ---
#
# ### pyLDAvis
#
# Some of the work from `Termite` has been integrated into `pyLDAvis`, which is being maintained and has good interoperability with `gensim`. Below is an interactive visualisation of the fake news model trained earlier. Just to see how informative the visualisation is overall, I'll also train another model on the same dataset but increase the number of topics quite a lot.
# For a good description of what you see in the visualisation you can look at the presentation from the creator himself:
#
# - https://www.youtube.com/watch?v=tGxW2BzC_DU&index=4&list=PLykRMO7ZuHwP5cWnbEmP_mUIVgzd5DZgH

# In[6]:

lda_fake = LdaModel.load('./fake_news_35.lda')


# In[15]:

from gensim.models import LdaModel

import pyLDAvis as ldavis
import pyLDAvis.gensim

ldavis.enable_notebook()

prepared_data = ldavis.gensim.prepare(lda_fake, corpus_fake, vocabulary)
with open('./fake_news_35.lda-LDAVIS.json', 'w') as fh:
    fh.write(prepared_data.to_json())

prepared_data


# In[6]:

lda_fake_100 = LdaModel(corpus=corpus_fake, id2word=vocabulary, num_topics=100, alpha='auto')


# In[8]:

lda_fake_100.save('./fake_news_100.lda')


# In[10]:

prepared_data = ldavis.gensim.prepare(lda_fake_100, corpus_fake, vocabulary)
with open('./fake_news_100.lda-LDAVIS.json', 'w') as fh:
    fh.write(prepared_data.to_json())


# In[14]:

prepared_data


# Comparing the two visualisations one can make some comforting observations. In the bottom right corner of both visualisations there is a cluster of topics relating to the 2016 U.S. presidential election. The 100 topic model has split the documents up into slightly more specific topics, but otherwise both models have captured those semantics and, more importantly, both visualisations display those topics consistently as a cluster.
#
# Similarly, the cluster in the top right hand corner of the 100 topic model's visualisation is semantically coherent and similar to the cluster in the bottom left hand corner of the 35 topic model's visualisation. Again, both models have captured the Syrian civil war and related issues and consistently placed those topics close together in the topic panel.

# The main problem I find with LDAvis is that the spatial dimensions of the left hand side panel are somewhat meaningless.
#
# The area of each circle shows the prevalence of a topic, but visually determining the relative sizes of circles is difficult, so while you do get an understanding of which topics are the most important, you can't really determine how much more important those topics are compared to the others.
#
# The second problem is the distance between the topics. While the positioning of the topics to some extent preserves semantic similarity, allowing some related topics to form clusters, it is a little difficult to determine exactly how similar the topics are. To be fair, this is not something that can be blamed on LDAvis, as measuring the semantic similarity of topics and then collapsing the multidimensional similarity vectors into 2 dimensions is not an easy task. Nevertheless, one shouldn't read too much into the topic distances. Different algorithms are available for computing the topic locations, essentially variants of multidimensional scaling.

# # Intrinsic Evaluation
#
# Perplexity is often used as an example of an intrinsic evaluation measure. It comes from the language modelling community and aims to capture how surprised a model is by new data it has not seen before. This is commonly measured as the normalised log-likelihood of a held-out test set:
#
# $$
# \begin{align}
# \mathcal{L}(D') &= \frac{\sum_{d \in D'} \log_2 p(w_d;\Theta)}{\mbox{count of tokens}}\\
# \mathit{perplexity}(D') &= 2^{-\mathcal{L}(D')}
# \end{align}
# $$
#
# Focussing on the log-likelihood part, this metric is measuring how probable some new unseen data is given the model that was learned earlier. That is to say, how well does the model represent or reproduce the statistics of the held-out data?
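# For reference, `gensim`'s `LdaModel.log_perplexity` computes a per-word likelihood bound on a chunk of bag-of-words documents, which gensim itself reports as a perplexity of `2**(-bound)` in its log output. Below is a minimal sketch of using it with the `lda_fake` model and `corpus_fake` from earlier; the "held-out" chunk here is just the tail of the training corpus for illustration, whereas a proper evaluation would use documents the model was not trained on.

# In[ ]:

import numpy as np

# Illustrative evaluation chunk: the last 1000 bag-of-words documents.
# (These were part of the training data, so this is not a true held-out set.)
held_out = list(corpus_fake)[-1000:]

# Per-word likelihood bound for the chunk.
bound = lda_fake.log_perplexity(held_out)

# Perplexity, following gensim's own 2**(-bound) convention.
np.exp2(-bound)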
# Thinking back to what we would like the topic model to do, perplexity makes no sense at all as an evaluation measure. Let's put aside any specific algorithm for inferring a topic model and focus on what it is that we'd like the model to capture. More often than not, the desire is for the model to capture *concepts* that exist in a particular dataset. Well, what is a concept, and how can it be represented given the pieces we have?
#
# Let me offer a way of thinking about this that would not pass muster in a bachelor's class in philosophy. Luckily we're not in philosophy class at the moment.
#
# Take the following two documents that talk about ice hockey. I've highlighted terms that **I** think are related to the subject matter; you may disagree with my judgement. Notice that among the terms that I've highlighted as being part of the *topic* of Ice Hockey are words such as `Penguin`, `opposing` and `shots`. None of these, on the face of it, would appear to "belong" to Ice Hockey, but seeing them in context makes it clear that `Penguin` refers to the ice hockey team, `shots` refers to disk-shaped pieces of vulcanised rubber being launched at the goal at various speeds, and `opposing` refers to the opposing team, although it might more commonly be thought to belong to politics or the debate club.
#
# > ... began his professional **career** in 1989–90 with **Jokerit** of the **SM-liiga** and **played** 21 **seasons** in the **National Hockey League** (**NHL**) for the **Winnipeg Jets** ...
#
# > **Rinne** **stopped** 27 of 28 **shots** from the **Penguins** in **Game** 6 at home Sunday, but that lone **goal** allowed was enough for the **opposition** to break out the **Stanley Cup** **trophy** for the second straight **season**.
#
# Given the terms that I've determined to be a partial description of Ice Hockey (the concept), one could conceivably measure the coherence of that concept by counting how many times those terms occur with each other - co-occur, that is - in some sufficiently large reference corpus.
#
# One of course encounters a problem should the reference corpus never refer to ice hockey. A poorly selected reference corpus could for instance be patent applications from the 1800s; it would be unlikely to find those word pairs in that text.
#
# This is precisely what several research papers have aimed to do: take the top words from the topics in a topic model and measure the *support* for those words forming a coherent concept / topic by looking at the co-occurrences of those terms in a reference corpus. The research up to now was finally wrapped up into a single paper in which the authors develop a *coherence pipeline*, which allows plugging all the different methods into a single framework. This *coherence pipeline* is partially implemented in `gensim`; below are a few examples of how to use it.
# In[6]:

import spacy
import gensim
from gensim.models import LdaModel
from gensim.corpora import Dictionary, MmCorpus

spc = spacy.load('en')

KEEP_POS = set([90, 98, 82, 84, 94])  # NOUN, VERB, ADJ, ADV, PROPN
pipe = spc.pipe(df_fake.text, parse=False, entity=False, n_threads=8)
processed = [[token.lemma_ for token in document if token.pos in KEEP_POS]
             for document in pipe]

vocabulary = Dictionary(processed)
vocabulary.filter_extremes(no_below=3, no_above=0.5)


# In[10]:

corpus = MmCorpus('./models/fake_news.mm')


# In[14]:

lda_fake_35 = LdaModel.load('./models/fake_news_35.lda')
lda_fake_100 = LdaModel.load('./models/fake_news_100.lda')


# In[11]:

from gensim.models import CoherenceModel

cm = CoherenceModel(model=lda_fake_35, corpus=corpus, dictionary=vocabulary, coherence='c_v',
                    texts=[[w for w in d if w in vocabulary.token2id] for d in processed])
cm.get_coherence()


# In[19]:

cm = CoherenceModel(model=lda_fake_100, corpus=corpus, dictionary=vocabulary, coherence='c_v',
                    texts=[[w for w in d if w in vocabulary.token2id] for d in processed])
cm.get_coherence()


# In[12]:

cm = CoherenceModel(model=lda_fake_35, corpus=corpus, dictionary=vocabulary, coherence='c_uci',
                    texts=[[w for w in d if w in vocabulary.token2id] for d in processed])
cm.get_coherence()


# In[15]:

cm = CoherenceModel(model=lda_fake_100, corpus=corpus, dictionary=vocabulary, coherence='c_uci',
                    texts=[[w for w in d if w in vocabulary.token2id] for d in processed])
cm.get_coherence()


# In[20]:

cm = CoherenceModel(model=lda_fake_35, corpus=corpus, dictionary=vocabulary, coherence='u_mass',
                    texts=[[w for w in d if w in vocabulary.token2id] for d in processed])
cm.get_coherence()


# In[21]:

cm = CoherenceModel(model=lda_fake_100, corpus=corpus, dictionary=vocabulary, coherence='u_mass',
                    texts=[[w for w in d if w in vocabulary.token2id] for d in processed])
cm.get_coherence()


# ---
#
# # References
#
# ## Papers
#
# - Chang et al. *Reading Tea Leaves: How Humans Interpret Topic Models*, NIPS 2009
# - Wallach et al. *Evaluation Methods for Topic Models*, ICML 2009
# - Lau et al. *Machine Reading Tea Leaves: Automatically Evaluating Topic Coherence and Topic Model Quality*, EACL 2014
# - Röder et al. *Exploring the Space of Topic Coherence Measures*, WSDM 2015
# - Sievert et al. *LDAvis: A method for visualizing and interpreting topics*, ACL 2014 Workshop on Interactive Language Learning, Visualization, and Interfaces
# - Chuang et al. *Termite: Visualization Techniques for Assessing Textual Topic Models*, AVI 2012 [link](http://vis.stanford.edu/papers/termite)
# - Chuang et al. *Topic Model Diagnostics: Assessing Domain Relevance via Topical Alignment*, ICML 2013 [link](http://vis.stanford.edu/papers/topic-model-diagnostics)
#
# ## Software
#
# - [gensim Topic Modelling for Humans](http://radimrehurek.com/gensim) (Python)
# - [UMass Machine Learning for Language - Mallet](http://mallet.cs.umass.edu/) (Java)
# - [Stanford Topic Modelling Toolbox](https://nlp.stanford.edu/software/tmt/tmt-0.3/) (Java)
# - [Spherical Hierarchical Dirichlet Processes](https://github.com/Ardavans/sHDP)
# - Termite
#   - [Original project](https://github.com/StanfordHCI/termite)
#   - [Data server](https://github.com/uwdata/termite-data-server)
#   - [Visualisation](https://github.com/uwdata/termite-visualizations)
# - scattertext
#   - scattertext allows you to plot differential word usage patterns from two corpora in an interactive display.
#     It's not exactly an evaluation method for topic models, but it can be quite useful for analysing corpora.
#   - there's a talk by the creator at PyData Seattle 2017 [link](https://pydata.org/seattle2017/schedule/presentation/69/)
#
# ## Datasets
#
# The model used in this notebook is built on the Kaggle Fake News dataset available [here](https://www.kaggle.com/mrisdal/fake-news).
#
# ## Interwebs
#
# - http://qpleple.com/perplexity-to-evaluate-topic-models/
# - http://qpleple.com/topic-coherence-to-evaluate-topic-models/
#
# ## General stuff about NLP you might be interested in
#
# - Yoav Goldberg on evaluating NNLMs
#   - [Original post](https://medium.com/@yoav.goldberg/an-adversarial-review-of-adversarial-generation-of-natural-language-409ac3378bd7)
#   - [Addendum](https://medium.com/@yoav.goldberg/clarifications-re-adversarial-review-of-adversarial-learning-of-nat-lang-post-62acd39ebe0d)
#   - [Yann LeCun on the matter](https://www.facebook.com/yann.lecun/posts/10154498539442143)
#   - [A response to LeCun](https://medium.com/@yoav.goldberg/a-response-to-yann-lecuns-response-245125295c02)

# ----
# ----
#
# # Even Remotely Intelligent Stuff Ends Here
#
# ----
# ----

# I am going to start with a slightly silly example that nonetheless nicely illustrates a few important points about evaluating unsupervised models:
#
# - define what you want out of the model
#   - running an algorithm won't solve anything unless you have an expectation of what the output should / could look like
#   - if you don't explicitly know what the output should look like, you probably know it implicitly
# - applying subjective judgement
#
# I did some analysis of the accepted talks at PyData Berlin 2017 to find out what kind of talks were accepted this year. I plotted the results in a wordcloud (github.com/amueller/word_cloud), but was disappointed that the first approach didn't really reveal _the thing I was hoping to analyse_. The plot just showed general patterns of English language use.
#
# ![Raw Frequency of Words](../assets/unsupervised-models/wordcloud.1.png)
#
# Filtering out high frequency words helped a little, but the wordcloud still wasn't that informative; it is hardly a surprise that _data_ is a central theme at a PyData conference.
#
# ![Raw Frequency of Words, 0.5 lt doc_freq removed](../assets/unsupervised-models/wordcloud.2.png)
#
# So I made some more adjustments to the model and got something that looks more reasonable.
#
# ![TFIDF filtered scores](../assets/unsupervised-models/wordcloud.4.png)
#
# I am not trying to claim that the last model is a good one, or even a valid one, but it does correspond to my previously held beliefs about the contents of the conference. That is not surprising, since I arrived at the model by iterating through a number of models that I found to be unsatisfactory. The problem is that I never actually defined what satisfactory means; there was never an explicitly stated goal towards which I was driving.
#
# This is extremely important to keep in mind, as evaluation metrics for unsupervised models often have built-in assumptions about what a _good model_ looks like. Those assumptions may or may not be true for your use case.
# Some metrics aim to satisfy internal constraints.

# So let's start with what we would like to model about text in an unsupervised manner:
#
# - the distribution of terms
# - the co-occurrence of terms
#   - within documents (topic modelling)
#   - within "sentences" (distributional semantics, word2vec, GloVe)
# - sequences of terms (language models)
#
# I will focus on evaluating topic models and models of distributional semantics.
#
# - open source tools
# - open access research papers
#
# - data visualisation is not my core research area
# - I am not a political analyst or social scientist, my background is in computer science
#
# ---

# In[104]:

import numpy as np
import scipy
from matplotlib import pyplot as plt

fig, ax = plt.subplots(figsize=(1000/72, 750/72), dpi=72)

topics = ['Sports', 'Machine Learning', 'Celebrity', 'Fashion', 'Current Affairs',
          'Tennis', 'Medicine', 'Technology', 'Security']
centers = np.random.randint(low=0, high=20, size=(len(topics), 2))

for topic_name, center in zip(topics, centers):
    topic = np.random.normal(loc=center, scale=1.0, size=(10, 2))
    dots = ax.scatter(topic[:, 0], topic[:, 1], alpha=0.4)
    bbox_props = dict(boxstyle="circle, pad=0.3", fc=dots.get_facecolor().ravel(),
                      ec="none", alpha=0.1, lw=1)
    t = ax.text(*topic.mean(axis=0), topic_name, ha="center", va="center",
                rotation=0, size=16, bbox=bbox_props)

plt.axis("off");
plt.show()
# plt.savefig('../assets/unsupervised-models/ideal-topics.png')


# This is what could be called a coherent, interpretable model:
#
# - all clusters are more or less self contained
# - related clusters _seem_ to be close together
#
# The problem here is that the "model" above is entirely made up, and the division is somewhat nonsensical.
#
# *Topic Models* have a number of ways of being evaluated, including
#
# - perplexity (might not be such a great measure)
#   - Chang et al. *Reading Tea Leaves: How Humans Interpret Topic Models*, NIPS 2009
#   - Wallach et al. *Evaluation Methods for Topic Models*, ICML 2009
#   - Lau et al. *Machine Reading Tea Leaves: Automatically Evaluating Topic Coherence and Topic Model Quality*, EACL 2014
# - topic coherence
#   - Röder et al. *Exploring the Space of Topic Coherence Measures*, WSDM 2015
# - human interpretability (word or topic intrusion)
#   - *Machine Reading Tea Leaves: Automatically Evaluating Topic Coherence and Topic Model Quality*
# - ontological similarity to link overlap and term co-occurrence (WordNet)
# - inter-annotator agreement on labels for topics
# - an external task
#   - information retrieval (Wei et al. *LDA-Based Document Models for Ad-hoc Retrieval*, SIGIR 2006)
#   - sentiment analysis (Titov et al. *A Joint Model of Text and Aspect Ratings for Sentiment Summarization*, ACL 2008)

# # Perplexity and Other Internal Evaluation Metrics
#
# Perplexity is a metric for goodness of fit; it measures the log likelihood of held-out data.
#
# $$ 2^{-\sum_x \tilde{p}(x)\,\log_2 p(x)} $$
#
# The aim is to capture how well the currently estimated probability of words predicts the probability of words in a held-out dataset. This measure is used internally by topic models to monitor the progress of learning. It is not suitable for human evaluation, as a model with low perplexity does not necessarily correspond to a model that is interpretable or informative (*Reading Tea Leaves: How Humans Interpret Topic Models*).
#
# There is a review of internal evaluation measures in Wallach et al., _Evaluation Methods for Topic Models_, ICML
# 2009; these measures borrow from language modelling research.
#
# Top words (five per topic, topic IDs 0-14) for two models:
#
# | model | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 |
# |-------|---|---|---|---|---|---|---|---|---|---|----|----|----|----|----|
# | 1 | get | i'll | light | at | he | come | got | go | was | blues | will | oh | dance | over | she |
# | 1 | gonna | never | night | down | his | me, | they | we're | now | are | let | do | (i | been | her |
# | 1 | yeah, | see | shine | old | him | better | up | hey | never | as | heart | want | back | oh, | she's |
# | 1 | yeah | you're | sun | out | man | if | ain't | - | have | day | our | baby | ah | long | girl |
# | 1 | wanna | way | tonight | run | he's | do | out | let's | were | good | are | can't | bring | gone | woman |
# | 2 | oh | want | get | out | she | down | light | am | wanna | blues | if | are | hey, | he | was |
# | 2 | baby | do | ya | gloom | her | look | our | thro' | up | old | you, | they | ah | his | now |
# | 2 | gonna | if | ain't | off | she's | down, | will | lord | get | new | would | where | ha | him | out |
# | 2 | oh, | can't | got | black | girl | at | as | run | la | - | could | home | ah, | man | one |
# | 2 | yeah | i'll | na | them | got | stop | rain | jesus | let's | hey | me, | people | my, | he's | at |
# # Topic Coherence Model
#
# Röder et al. *Exploring the Space of Topic Coherence Measures*, WSDM 2015
#
# The topic coherence model combines a number of papers into one framework that allows evaluating the *coherence* of topics inferred by a topic model. In the context of this work, coherence is defined as the mutual support of sets of facts, where facts are represented by the top N *words* from a topic.
#
# 1. create tuples from the top N words in a topic
#    - pairs of single words `{(game), (ball)}, {(team), (ball)}`
#    - pairs of pairs of words `{(game, ball)}, {(team, ball)}`
#    - ...
# 2. measure the probability of those tuples in a reference corpus
#    - document probability
#    - word probability
#    - ...
# 3. calculate a *confirmation measure* per tuple
#    - UCI: normalised sum over PMI values
#    - UMass
#    - NPMI
#    - ...
# 4. aggregate over all the tuples (e.g. the *mean*)
#
# $$ C_{UCI} = \frac{2}{N (N-1)} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \mathrm{PMI}(w_i, w_j) $$
#
# where PMI is
#
# $$ \mathrm{PMI}(w_i, w_j) = \log \frac{P(w_i, w_j) + \epsilon}{P(w_i)P(w_j)} $$
#
# A toy sketch of this computation is given at the end of this section.

# As pointed out in *Reading Tea Leaves: How Humans Interpret Topic Models* [emphasis mine]:
#
# > We emphasize that not measuring the **internal representation** of topic models is at odds with their presentation and development. Most topic modeling papers display qualitative assessments of the inferred topics or simply **assert that topics are semantically meaningful** ...
#
# As we can see above, it is not immediately clear how the topics are semantically meaningful, even though the fit to the training data is good.
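# To make the pipeline above concrete, the cell below is a minimal, self-contained sketch of the $C_{UCI}$ computation for a single list of top words, with probabilities estimated as document frequencies over a tiny made-up reference corpus. The word list and corpus are purely illustrative; the full pipeline (sliding windows, NPMI and so on) is what `gensim`'s `CoherenceModel`, used earlier, implements.

# In[ ]:

from itertools import combinations

import numpy as np


def c_uci(top_words, reference_docs, epsilon=1e-12):
    """Mean pairwise PMI of `top_words`, which equals the C_UCI formula above
    because the number of pairs is N * (N - 1) / 2.

    Probabilities are estimated as document frequencies over `reference_docs`
    (each a list of tokens). A small epsilon is also added to the denominator
    to avoid division by zero for words missing from the reference corpus."""
    docs = [set(doc) for doc in reference_docs]

    def p(*words):
        # Fraction of reference documents that contain all the given words.
        return sum(all(w in doc for w in words) for doc in docs) / len(docs)

    pmis = [np.log((p(w_i, w_j) + epsilon) / (p(w_i) * p(w_j) + epsilon))
            for w_i, w_j in combinations(top_words, 2)]
    return np.mean(pmis)


# Made-up reference corpus and top words for a single (hypothetical) topic.
reference = [
    'the cat sat on the mat with the dog'.split(),
    'the dog chased the cat around the house'.split(),
    'stocks fell sharply as the market reacted'.split(),
    'the market rallied after the earnings report'.split(),
]

c_uci(['cat', 'dog', 'mat'], reference)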
# # Eye Balling the Model
#
# - topic-word distributions
# - document-topic distributions (spiky, not spiky)
# - LDAvis / pyLDAvis
# - Termite (http://vis.stanford.edu)
#
# ### demo of pyLDAvis
#
# ## pyLDAvis problems
#
# - visual / eye-balling
# - pyLDAvis (PCoA / MMDS: are topics 15 and 10 close to each other?) http://nbviewer.jupyter.org/github/bmabey/pyLDAvis/blob/master/notebooks/pyLDAvis_overview.ipynb
# - is this a good model? (20 topics, 2000 documents)
# - how much does topic 2 cover? what about topic 1?
# - left = t-SNE, right = MMDS

# # Stanford VIS group
#
# http://vis.stanford.edu/papers/topic-model-diagnostics

# # Word Intrusion and Topic Intrusion
#
# Chang et al. *Reading Tea Leaves: How Humans Interpret Topic Models*
#
# ### word intrusion
#
# Find the intruding word in a set of top words picked from a topic in a topic model, plus an intruder that has low probability for the current topic but high probability for some other topic. The more the choice of intruder word varies between human judges, the less coherent the model is.
#
# `{dog, cat, horse, apple, pig, cow}`
#
# ### topic intrusion

# ---
#
# # WARNING - WE ARE VEERING INTO PHILOSOPHY
#
# ## What is The Meaning of Meaning??
#
# Douven et al. *Measuring Coherence*
# https://www.researchgate.net/publication/220607660_Measuring_coherence

# Chang et al. *Reading Tea Leaves: How Humans Interpret Topic Models*
#
# *The more the choice of intruder word varies between human judges, the less coherent the model is.*

# I need word sets that have several equally plausible interpretations
#
# `{dog, cat, horse, apple, pig, cow}`
#
# `{dog, carrot, horse, apple, pig, corn}`
#
# `{cat, tuna, yarn, horse, stable, hay}`
#
# `{cat, airport, yarn, horse, security, hay}`

# ----

# Supervised models are trained on labelled data and optimised to maximise an external metric such as `log loss` or `accuracy`. Unsupervised models, on the other hand, at their simplest count the frequencies of terms in **context**, possibly aiming to fit a predefined parameterised distribution so that it is consistent with the statistics of some unlabelled data set.
#
# More recently, maximising the similarity of words that appear in similar contexts has been recast as a neural network objective. Evaluating the trained model often starts by "eye-balling" the results, i.e. checking that your own expectations of similarity are fulfilled by the model.
#
# Documents that talk about football should be in the same category, and "cat" is more similar to "dog" than to "pen". Tools such as `pyLDAvis` and `gensim` provide many different ways to get an overview of the learned model or a single metric that can be maximised: `topic coherence`, `perplexity`, `ontological similarity`, `term co-occurrence`, `word analogy`. Using these methods without a good understanding of what the metric represents can give misleading results. The unsupervised models are also often used as part of larger processing pipelines; it is not clear whether these intrinsic evaluation measures are appropriate in such cases, and perhaps the models should instead be evaluated against an external metric like `accuracy` for the entire pipeline.
#
# In this talk I will give an intuition of what the evaluation metrics are trying to achieve, give some recommendations for when to use them, discuss the kinds of pitfalls one should be aware of when using LDA or word embeddings, and the inherent difficulty in measuring, or even defining, semantic similarity concisely.

# ---

# Is "cat" more similar to "tiger" than to "dog"?
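# As a quick illustration of how fuzzy such a judgement is, here is a minimal sketch comparing the spaCy word vectors (the same `en` model loaded earlier in this notebook) for *cat*, *tiger* and *dog* using cosine similarity. Which pair comes out closer depends entirely on the vectors used, and says nothing definitive about "meaning".

# In[ ]:

import numpy as np
import spacy

spc = spacy.load('en')


def cosine(u, v):
    # Cosine similarity between two word vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))


cat, tiger, dog = (spc.vocab[w].vector for w in ('cat', 'tiger', 'dog'))
print('cat ~ tiger:', cosine(cat, tiger))
print('cat ~ dog:  ', cosine(cat, dog))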
# Ideally this information should be captured in a single metric that can be maximised.
#
# Models like word2vec and GloVe are common ways of creating dense vector representations of word meaning. These allow
#
# You will learn:
#
# Questions and Comments:
#
# - what will I learn by attending?
# - the single metric is interesting! can you incorporate that in the shorter abstract as well?
# - I'm not sure you need to tell people unsupervised learning is popular, at least not in the shorter abstract imo
# - I am maybe not the target audience, but I only half-understand the bullet points. Are there less academic or scientific words that can describe the same thing? Or perhaps a paraphrase or question next to them, like - perplexity: how we do X with Y?

# ----
#
# # It all depends on what *correct* and *better* means
#
# Ideally we would be able to say whether a model is intrinsically -- or objectively -- good or bad. Measuring the quality of a topic model, or some other distributional/distributed model, is difficult to do intrinsically, mainly because an objective view of the goodness of the model is elusive. The similarity of pairs of words, or the assignment of documents to topics, is contextual; *cat* is close to *dog* if the context is *bicycle*, but what if the context is *kitten*, *mouse* or *ball*?
#
# This may seem like a silly example, but this is how distributional composition was evaluated not that long ago. Evaluating topic models is easier than evaluating the similarity of certain word pairs, as the topic model itself provides some context. Typically the evaluation of a model is done using a list of top words from its topics.
#
# - topic models
#   - topic coherence
#   - human interpretability
#     - what do topics mean for humans
#     - a document that talks about regulation being considered for vaping equipment: does it belong in the lifestyle topic or politics?
#     - a document that talks about the negotiation between Lufthansa pilots and the company: is the document about travel or politics?
# - distributional semantics and what it means for something to mean something
#   - how do individual words get their meaning?
#   - what about sentences?
#   - what about documents?

# In[6]:

from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np


def randrange(n, vmin, vmax):
    '''
    Helper function to make an array of random numbers having shape (n, )
    with each number distributed Uniform(vmin, vmax).
    '''
    return (vmax - vmin) * np.random.rand(n) + vmin


fig = plt.figure(figsize=(25, 8))
ax = fig.add_subplot(121, projection='3d')
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)

n, b = 2, 8

# For each set of style and range settings, plot n random points in the box
# defined by x in [23, 32], y in [0, 100], z in [zlow, zhigh].
for c, m, zlow, zhigh in [('r', 'o', -50, -25), ('b', '^', -b, -5)]:
    xs = randrange(n, 23, 32)
    ys = randrange(n, 0, 100)
    zs = randrange(n, zlow, zhigh)
    ax.scatter(xs, ys, zs, c=c, marker=m)
    ax2.scatter(xs, ys, c=c, marker=m)
    ax3.scatter(xs, np.zeros(ys.shape), c=c, marker=m)


# In[7]:

plt.show()


# http://distill.pub/2016/misread-tsne/

# # other measures
#
# - topic coherence (*Reading Tea Leaves: How Humans Interpret Topic Models*)
# - word / topic intrusion
# - perplexity
# - *Automatic Word Sense Discrimination*, Schütze 1998
# - *Automatic Evaluation of Topic Coherence*, Newman et al. 2010
# - ontological similarity to link overlap and term co-occurrence
#   - WordNet (path distance, Leacock-Chodorow, Wu-Palmer, Hirst-St Onge, Resnik Information Content, Lin's measure, Jiang-Conrath, LESK)
# - inter-annotator agreement, but of what exactly (useful == coherent, unuseful == incoherent)
# - inter-annotator agreement on labels for topics
# - the measures take into account only information that is present, not information that isn't present
# - do you want topics that are highly separated or largely overlapping?

# In[67]:

from scipy.stats import binom
import numpy as np

fig, ax = plt.subplots(1, 1)

for n, p in [(20, 0.5), (30, 0.5), (40, 0.5)]:
    ax.plot(np.arange(0, 30), binom.pmf(np.arange(0, 30), n, p), label=f'(n={n}, p={p})')

plt.legend()
plt.show()


# ---
#
# ## distributional semantics
#
# - word2vec, GloVe, APTs and distributional composition
# - the meaning of a word is *"the company it keeps"* - what's the meaning of two or more words put together?
# - intrinsic evaluation is nearly impossible
#
# - river delta, river estuary (suisto, estuaari?) - why doesn't Finnish have an equivalent for estuary
# - good, bad, pear, apple
# - blue, red, green?
#
# ## analogy task
#
# `king - man + woman == queen`
#
# `cider - alcohol == apple juice`
#
# `apple + drink = cider`
#
# `cider - apple + (hops + barley) == beer`
#
# - is `good` closer to `apple` than it is to `bad`?
# - is `good` closer to `maybe` than it is to `and`?
#
# ## forget trying to define what the meaning of meaning is and use the damn thing
#
# - none of the deep learning models are attempting to understand *language*; they are all trying to solve a task by possibly understanding language
# - evaluating on a task is also not always easy, because humans tend to be messy creatures (multi-label classification: while *sports* is clear, *celebrity gossip* is less clear)
#
# ## what does this mean for NLP?

# In[1]:

from spacy import en


# In[3]:

spc = en.English()


# In[11]:

len(spc.vocab)


# In[10]:

lex_good = spc.vocab['good']
lex_bad = spc.vocab['bad']
lex_good.vector - lex_bad.vector


# - https://www.youtube.com/watch?v=uLgn3geod9Q (How a dictionary writer defines English)
# - https://youtu.be/uLgn3geod9Q?t=2m3s
#
# "*When we revise a dictionary, you go through it A-Z and you take all of the instances for the word that you're looking at. You're matching up the word and its **contextual use** ...*"
#
# *antidisestablishmentarianism*

# Demonstration of the Topic Coherence model in `gensim`:
# - https://nbviewer.jupyter.org/github/dsquareindia/gensim/blob/280375fe14adea67ce6384ba7eabf362b05e6029/docs/notebooks/topic_coherence_tutorial.ipynb
#
# Topic Coherence
# - http://qpleple.com/topic-coherence-to-evaluate-topic-models/