Another popular text analysis technique is topic modeling. The goal of topic modeling is to discover the topics present in your corpus. Each document in the corpus is modeled as a mixture of one or more topics.
In this notebook, we will walk through Latent Dirichlet Allocation (LDA), one of many topic modeling techniques. It was specifically designed for text data.
To use a topic modeling technique, you need to provide (1) a document-term matrix and (2) the number of topics you would like the algorithm to find.
Once the topic modeling technique is applied, your job as a human is to interpret the results and see if the mix of words in each topic makes sense. If they don't make sense, you can try changing the number of topics, the terms in the document-term matrix, or the model parameters, or even try a different model.
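To make the "mixture of topics" idea concrete, here is a toy sketch (the vocabulary, topic-word probabilities, and mixture weights are all made up for illustration) of how a document's word distribution arises from a blend of topic-word distributions:

```python
import numpy as np

# Hypothetical vocabulary and two topic-word distributions (each row sums to 1)
vocab = ["dad", "mom", "school", "fuck", "shit", "man"]
topics = np.array([
    [0.4, 0.3, 0.3, 0.0, 0.0, 0.0],   # a "family" topic
    [0.0, 0.0, 0.0, 0.4, 0.3, 0.3],   # a "profanity" topic
])

# A document that is 70% topic 0 and 30% topic 1
doc_mix = np.array([0.7, 0.3])

# The document's expected word distribution is the weighted mix of the topics
doc_word_dist = doc_mix @ topics
print(dict(zip(vocab, doc_word_dist.round(2))))
```

LDA works in the opposite direction: given only the observed word counts, it infers both the topic-word distributions and each document's topic mixture.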
# Let's read in our document-term matrix
import pandas as pd
import pickle
data = pd.read_pickle('dtm_stop.pkl')
data
aaaaah | aaaaahhhhhhh | aaaaauuugghhhhhh | aaaahhhhh | aaah | aah | abc | abcs | ability | abject | ... | zee | zen | zeppelin | zero | zillion | zombie | zombies | zoning | zoo | éclair | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ali | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
anthony | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
bill | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | ... | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 |
bo | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | ... | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
dave | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
hasan | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 2 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
jim | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
joe | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
john | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
louis | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 |
mike | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
ricky | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
12 rows × 7468 columns
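As a refresher, a document-term matrix just stores word counts per document, one row per document and one column per term. A minimal hand-rolled construction (with made-up mini-transcripts) looks like this:

```python
from collections import Counter

# Hypothetical mini-corpus: one short "transcript" per comedian
docs = {
    "ali":  "thank you thank you hello",
    "bill": "hello zero zombie zombie",
}

# Count terms per document, then line the counts up against a shared vocabulary
counts = {name: Counter(text.split()) for name, text in docs.items()}
vocab = sorted(set(w for c in counts.values() for w in c))

dtm = {name: [c[w] for w in vocab] for name, c in counts.items()}
print(vocab)        # shared columns
print(dtm["bill"])  # bill's row of counts
```

This is exactly what CountVectorizer produced for us earlier, just at a much larger scale (12 rows × 7,468 columns).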
# # Uncomment to set up LDA logging to a file
# import logging
# logging.basicConfig(filename='lda_model.log', format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
# Import the necessary modules for LDA with gensim
# Terminal / Anaconda Navigator: conda install -c conda-forge gensim
from gensim import matutils, models
import scipy.sparse # sparse matrix format is required for gensim
# One of the required inputs is a term-document matrix (transpose of document-term)
tdm = data.transpose()
tdm.head()
ali | anthony | bill | bo | dave | hasan | jim | joe | john | louis | mike | ricky | |
---|---|---|---|---|---|---|---|---|---|---|---|---|
aaaaah | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
aaaaahhhhhhh | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
aaaaauuugghhhhhh | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
aaaahhhhh | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
aaah | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
# We're going to put the term-document matrix into a new gensim format, from df --> sparse matrix --> gensim corpus
sparse_counts = scipy.sparse.csr_matrix(tdm)
corpus = matutils.Sparse2Corpus(sparse_counts)
# Gensim also requires a dictionary of all the terms and their respective location in the term-document matrix
cv = pickle.load(open("cv_stop.pkl", "rb")) # the CountVectorizer that created the dtm
id2word = dict((v, k) for k, v in cv.vocabulary_.items())
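The dictionary comprehension above simply flips CountVectorizer's term-to-column mapping into the column-to-term mapping gensim expects. A quick illustration with a made-up `vocabulary_`:

```python
# CountVectorizer stores vocabulary_ as {term: column index}
vocabulary_ = {"ability": 8, "abc": 6, "zombie": 7465}

# gensim's id2word needs the inverse: {column index: term}
id2word = dict((v, k) for k, v in vocabulary_.items())
print(id2word[8])  # 'ability'
```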
Now that we have the corpus (term-document matrix) and id2word (dictionary of location: term), we're ready to train the LDA model. We need to specify two other parameters - the number of topics and the number of training passes. Let's start the number of topics at 2, see if the results make sense, and increase the number from there.
# Note: gensim calls this input "corpus"; it's our term-document matrix
# passes is the number of times the algorithm passes over the whole corpus during training
import numpy as np
lda = models.LdaModel(corpus=corpus,
id2word=id2word,
num_topics=2,
passes=10,
random_state=np.random.RandomState(seed=10))
for topic, topwords in lda.show_topics():
print("Topic", topic, "\n", topwords, "\n")
Topic 0 
 0.009*"shit" + 0.008*"fucking" + 0.007*"fuck" + 0.005*"theyre" + 0.005*"didnt" + 0.005*"man" + 0.004*"cause" + 0.004*"hes" + 0.004*"say" + 0.004*"did" 

Topic 1 
 0.006*"fucking" + 0.006*"say" + 0.005*"going" + 0.005*"went" + 0.005*"want" + 0.005*"thing" + 0.005*"good" + 0.005*"day" + 0.005*"love" + 0.004*"hes" 
Increase the number of topics to see if the results improve
# LDA for num_topics = 3
lda = models.LdaModel(corpus=corpus,
id2word=id2word,
num_topics=3,
passes=10,
random_state=np.random.RandomState(seed=10))
for topic, topwords in lda.show_topics():
print("Topic", topic, "\n", topwords, "\n")
Topic 0 
 0.008*"shit" + 0.006*"fucking" + 0.005*"didnt" + 0.005*"fuck" + 0.005*"did" + 0.005*"say" + 0.005*"day" + 0.004*"hes" + 0.004*"little" + 0.004*"guys" 

Topic 1 
 0.008*"love" + 0.007*"want" + 0.007*"dad" + 0.005*"going" + 0.005*"say" + 0.004*"stuff" + 0.004*"good" + 0.004*"shes" + 0.004*"bo" + 0.004*"did" 

Topic 2 
 0.010*"fucking" + 0.007*"theyre" + 0.006*"fuck" + 0.006*"went" + 0.006*"theres" + 0.006*"cause" + 0.006*"say" + 0.006*"thing" + 0.005*"going" + 0.005*"hes" 
Increment the number of topics again
# LDA for num_topics = 4
lda = models.LdaModel(corpus=corpus,
id2word=id2word,
num_topics=4,
passes=10)
for topic, topwords in lda.show_topics():
print("Topic", topic, "\n", topwords, "\n")
Topic 0 
 0.010*"fucking" + 0.006*"fuck" + 0.006*"shit" + 0.006*"going" + 0.006*"theyre" + 0.006*"say" + 0.005*"went" + 0.005*"day" + 0.005*"hes" + 0.005*"want" 

Topic 1 
 0.006*"didnt" + 0.005*"want" + 0.005*"fucking" + 0.005*"shit" + 0.005*"good" + 0.005*"love" + 0.004*"really" + 0.004*"fuck" + 0.004*"man" + 0.004*"says" 

Topic 2 
 0.009*"life" + 0.007*"thing" + 0.006*"hes" + 0.006*"theres" + 0.006*"cause" + 0.005*"shit" + 0.005*"good" + 0.005*"theyre" + 0.005*"tit" + 0.004*"really" 

Topic 3 
 0.008*"joke" + 0.006*"anthony" + 0.006*"day" + 0.006*"say" + 0.005*"guys" + 0.004*"tell" + 0.004*"grandma" + 0.004*"thing" + 0.004*"good" + 0.004*"did" 
These topics aren't looking too meaningful, and there's a lot of overlap between the topics. We've tried modifying our parameters. Let's try modifying our terms list as well.
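The overlap complaint can be made quantitative. A rough sketch using Jaccard similarity on the top-word sets (words copied from the 2-topic output above): values near 1.0 mean nearly identical top words, values near 0.0 mean well-separated topics.

```python
# Top-10 words from the 2-topic run above
topic0 = {"shit", "fucking", "fuck", "theyre", "didnt", "man", "cause", "hes", "say", "did"}
topic1 = {"fucking", "say", "going", "went", "want", "thing", "good", "day", "love", "hes"}

# Jaccard similarity: |intersection| / |union|
jaccard = len(topic0 & topic1) / len(topic0 | topic1)
print(round(jaccard, 2))
```

Three of the ten top words ("fucking", "say", "hes") are shared, which is one symptom of topics that are not well separated.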
One popular trick is to look only at terms from a single part of speech (only nouns, only adjectives, etc.). Check out the UPenn tag set: https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html.
For the second attempt, let's look at nouns only. The Penn Treebank tag for nouns is NN.
# Let's create a function to pull out nouns from a string of text
from nltk import word_tokenize, pos_tag
def nouns(text):
'''Given a string of text, tokenize the text and pull out only the nouns.'''
is_noun = lambda pos: pos[:2] == 'NN' # pos = part-of-speech
tokenized = word_tokenize(text)
all_nouns = [word for (word, pos) in pos_tag(tokenized) if is_noun(pos)]
return ' '.join(all_nouns)
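The `pos[:2] == 'NN'` test works because all Penn Treebank noun tags share the NN prefix (NN, NNS, NNP, NNPS), while non-noun tags don't. A quick check:

```python
# Same filter as in the nouns() function above
is_noun = lambda pos: pos[:2] == 'NN'

# All four Penn Treebank noun tags match the prefix
assert all(is_noun(tag) for tag in ['NN', 'NNS', 'NNP', 'NNPS'])

# Adjectives, verbs, and adverbs do not
assert not any(is_noun(tag) for tag in ['JJ', 'VB', 'RB'])
```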
# Read in the cleaned data, before the CountVectorizer step
data_clean = pd.read_pickle('data_clean.pkl')
# Apply the nouns function to the transcripts to filter only on nouns
data_nouns = pd.DataFrame(data_clean.transcript.apply(nouns))
data_nouns
transcript | |
---|---|
ali | ladies gentlemen stage ali hi thank hello na s... |
anthony | thank thank people i em i francisco city world... |
bill | thank thank pleasure georgia area oasis i june... |
bo | macdonald farm e i o farm pig e i i snort macd... |
dave | jokes living stare work profound train thought... |
hasan | whats davis whats home i netflix la york i son... |
jim | ladies gentlemen stage mr jim jefferies thank ... |
joe | ladies gentlemen joe fuck thanks phone fuckfac... |
john | petunia thats hello hello chicago thank crowd ... |
louis | music lets lights lights thank i i place place... |
mike | wow hey thanks look insane years everyone i id... |
ricky | hello thank fuck thank im gon youre weve money... |
# Create a new document-term matrix using only nouns
from sklearn.feature_extraction import text
from sklearn.feature_extraction.text import CountVectorizer
# Re-add the additional stop words since we are recreating the document-term matrix
add_stop_words = ['like', 'im', 'know', 'just', 'dont', 'thats', 'right', 'people',
'youre', 'got', 'gonna', 'time', 'think', 'yeah', 'said']
stop_words = text.ENGLISH_STOP_WORDS.union(add_stop_words)
# Recreate a document-term matrix with only nouns
cv_nouns = CountVectorizer(stop_words=stop_words)
data_cv_nouns = cv_nouns.fit_transform(data_nouns.transcript)
data_dtm_nouns = pd.DataFrame(data_cv_nouns.toarray(), columns=cv_nouns.get_feature_names())  # use get_feature_names_out() in scikit-learn >= 1.0
data_dtm_nouns.index = data_nouns.index
data_dtm_nouns
aaaaahhhhhhh | aaaaauuugghhhhhh | aaaahhhhh | aah | abc | abcs | ability | abortion | abortions | abuse | ... | yummy | ze | zealand | zee | zeppelin | zillion | zombie | zombies | zoo | éclair | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ali | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
anthony | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | ... | 0 | 0 | 10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
bill | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | ... | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 |
bo | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
dave | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
hasan | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
jim | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
joe | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
john | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
louis | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
mike | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 |
ricky | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | ... | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
12 rows × 4635 columns
# Create the gensim corpus - this time with nouns only
corpus_nouns = matutils.Sparse2Corpus(scipy.sparse.csr_matrix(data_dtm_nouns.transpose()))
# Create the vocabulary dictionary of all terms and their respective locations
id2word_nouns = dict((v, k) for k, v in cv_nouns.vocabulary_.items())
# Let's start with 2 topics
lda_nouns = models.LdaModel(corpus=corpus_nouns, num_topics=2, id2word=id2word_nouns, passes=10)
lda_nouns.print_topics()
[(0, '0.011*"dad" + 0.006*"life" + 0.006*"shes" + 0.005*"mom" + 0.005*"parents" + 0.005*"school" + 0.004*"girl" + 0.004*"home" + 0.004*"hes" + 0.003*"hey"'),
 (1, '0.010*"thing" + 0.009*"day" + 0.008*"shit" + 0.008*"man" + 0.007*"cause" + 0.007*"life" + 0.007*"hes" + 0.007*"way" + 0.007*"fuck" + 0.007*"guy"')]
# Let's try topics = 3
lda_nouns = models.LdaModel(corpus=corpus_nouns, num_topics=3, id2word=id2word_nouns, passes=10)
lda_nouns.print_topics()
[(0, '0.010*"thing" + 0.009*"cause" + 0.009*"day" + 0.009*"life" + 0.008*"man" + 0.008*"guy" + 0.008*"way" + 0.008*"hes" + 0.007*"shit" + 0.007*"fuck"'),
 (1, '0.012*"shit" + 0.009*"man" + 0.008*"fuck" + 0.006*"lot" + 0.006*"didnt" + 0.005*"ahah" + 0.005*"money" + 0.005*"room" + 0.005*"hes" + 0.004*"guy"'),
 (2, '0.010*"day" + 0.008*"dad" + 0.008*"joke" + 0.007*"thing" + 0.006*"life" + 0.006*"hes" + 0.006*"shit" + 0.006*"lot" + 0.006*"years" + 0.006*"shes"')]
# Let's try 4 topics
lda_nouns = models.LdaModel(corpus=corpus_nouns, num_topics=4, id2word=id2word_nouns, passes=10)
lda_nouns.print_topics()
[(0, '0.013*"day" + 0.009*"thing" + 0.009*"cause" + 0.007*"women" + 0.007*"lot" + 0.006*"man" + 0.006*"shit" + 0.006*"way" + 0.006*"guy" + 0.005*"baby"'),
 (1, '0.008*"joke" + 0.008*"hes" + 0.008*"stuff" + 0.007*"thing" + 0.007*"day" + 0.007*"bo" + 0.006*"man" + 0.006*"years" + 0.006*"id" + 0.006*"repeat"'),
 (2, '0.012*"thing" + 0.010*"life" + 0.009*"cause" + 0.009*"day" + 0.009*"guy" + 0.009*"shit" + 0.008*"gon" + 0.008*"hes" + 0.007*"way" + 0.006*"kind"'),
 (3, '0.012*"shit" + 0.011*"fuck" + 0.011*"man" + 0.009*"dad" + 0.008*"life" + 0.006*"house" + 0.006*"hes" + 0.006*"way" + 0.006*"lot" + 0.006*"shes"')]
The topics still aren't becoming clear, so for the third attempt let's try both nouns and adjectives.
# Create a function to pull out nouns and adjectives from a string of text
def nouns_adj(text):
'''Given a string of text, tokenize the text and pull out only the nouns and adjectives.'''
is_noun_adj = lambda pos: pos[:2] == 'NN' or pos[:2] == 'JJ'
tokenized = word_tokenize(text)
nouns_adj = [word for (word, pos) in pos_tag(tokenized) if is_noun_adj(pos)]
return ' '.join(nouns_adj)
# Apply the nouns_adj function to the transcripts to filter on nouns and adjectives
data_nouns_adj = pd.DataFrame(data_clean.transcript.apply(nouns_adj))
data_nouns_adj
transcript | |
---|---|
ali | ladies gentlemen welcome stage ali wong hi wel... |
anthony | thank san francisco thank good people surprise... |
bill | right thank thank pleasure greater atlanta geo... |
bo | old macdonald farm e i i o farm pig e i i snor... |
dave | dirty jokes living stare most hard work profou... |
hasan | whats davis whats im home i netflix special la... |
jim | ladies gentlemen welcome stage mr jim jefferie... |
joe | ladies gentlemen joe fuck san francisco thanks... |
john | right petunia august thats good right hello he... |
louis | music lets lights lights thank much i i i nice... |
mike | wow hey thanks hey seattle nice look crazy ins... |
ricky | hello great thank fuck thank lovely welcome im... |
# Create a new document-term matrix using only nouns and adjectives, also remove common words with max_df
cv_nouns_adj = CountVectorizer(stop_words=stop_words, max_df=.8) # max_df=.8 removes corpus-specific stop words: terms appearing in more than 80% of documents
data_cv_nouns_adj = cv_nouns_adj.fit_transform(data_nouns_adj.transcript)
data_dtm_nouns_adj = pd.DataFrame(data_cv_nouns_adj.toarray(), columns=cv_nouns_adj.get_feature_names())  # use get_feature_names_out() in scikit-learn >= 1.0
data_dtm_nouns_adj.index = data_nouns_adj.index
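Conceptually, `max_df=.8` drops any term whose document frequency exceeds 80% of documents. A hand-rolled sketch of that filter on a made-up toy corpus:

```python
# Hypothetical toy corpus: 5 documents, each represented as a set of terms
docs = [
    {"joke", "mom"},
    {"joke", "gun"},
    {"joke", "mom"},
    {"joke", "dad"},
    {"joke", "gun"},
]

max_df = 0.8
vocab = set().union(*docs)

# Document frequency: fraction of documents each term appears in
df = {term: sum(term in d for d in docs) / len(docs) for term in vocab}

# 'joke' appears in 5/5 documents (df = 1.0 > 0.8), so it gets filtered out
kept = sorted(t for t in vocab if df[t] <= max_df)
print(kept)  # ['dad', 'gun', 'mom']
```

This matches scikit-learn's behavior: terms with document frequency strictly above the threshold are ignored when building the vocabulary.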
# Create the gensim corpus
corpus_nouns_adj = matutils.Sparse2Corpus(scipy.sparse.csr_matrix(data_dtm_nouns_adj.transpose()))
# Create the vocabulary dictionary
id2word_nouns_adj = dict((v, k) for k, v in cv_nouns_adj.vocabulary_.items())
# Let's start with 2 topics
lda_nouns_adj = models.LdaModel(corpus=corpus_nouns_adj, num_topics=2, id2word=id2word_nouns_adj, passes=10)
lda_nouns_adj.print_topics()
[(0, '0.004*"mom" + 0.004*"ass" + 0.003*"joke" + 0.003*"friend" + 0.003*"parents" + 0.003*"clinton" + 0.003*"jenny" + 0.003*"guns" + 0.002*"dick" + 0.002*"anthony"'),
 (1, '0.003*"joke" + 0.003*"bo" + 0.003*"comedy" + 0.003*"parents" + 0.003*"love" + 0.003*"gay" + 0.003*"hasan" + 0.002*"repeat" + 0.002*"nuts" + 0.002*"ahah"')]
# Let's try 3 topics
lda_nouns_adj = models.LdaModel(corpus=corpus_nouns_adj, num_topics=3, id2word=id2word_nouns_adj, passes=10)
lda_nouns_adj.print_topics()
[(0, '0.004*"hasan" + 0.004*"parents" + 0.004*"jenny" + 0.004*"class" + 0.004*"guns" + 0.003*"mom" + 0.003*"door" + 0.003*"ass" + 0.003*"girls" + 0.003*"girlfriend"'),
 (1, '0.004*"joke" + 0.004*"wife" + 0.003*"mom" + 0.003*"clinton" + 0.003*"ahah" + 0.003*"gay" + 0.003*"hell" + 0.002*"son" + 0.002*"nuts" + 0.002*"husband"'),
 (2, '0.006*"joke" + 0.005*"bo" + 0.004*"repeat" + 0.004*"jokes" + 0.004*"eye" + 0.004*"anthony" + 0.003*"contact" + 0.003*"tit" + 0.003*"mom" + 0.003*"ok"')]
# Let's try 4 topics
lda_nouns_adj = models.LdaModel(corpus=corpus_nouns_adj, num_topics=4, id2word=id2word_nouns_adj, passes=10)
lda_nouns_adj.print_topics()
[(0, '0.004*"ok" + 0.004*"ass" + 0.003*"mom" + 0.003*"dog" + 0.003*"bo" + 0.003*"parents" + 0.003*"um" + 0.003*"friend" + 0.003*"clinton" + 0.003*"jenny"'),
 (1, '0.006*"joke" + 0.004*"jenner" + 0.004*"nuts" + 0.003*"jokes" + 0.003*"bruce" + 0.003*"stupid" + 0.003*"hampstead" + 0.003*"chimp" + 0.003*"rape" + 0.003*"dead"'),
 (2, '0.007*"joke" + 0.005*"ahah" + 0.005*"mad" + 0.005*"anthony" + 0.004*"gun" + 0.004*"gay" + 0.004*"son" + 0.003*"nigga" + 0.003*"wife" + 0.003*"grandma"'),
 (3, '0.009*"hasan" + 0.007*"mom" + 0.006*"parents" + 0.006*"brown" + 0.004*"bike" + 0.004*"birthday" + 0.004*"york" + 0.003*"door" + 0.003*"bethany" + 0.003*"pizza"')]
# Keep it at 4 topics, but experiment with other hyperparameters:
# - Increase the number of passes
# - Change alpha to a very small value, 'symmetric', or 'auto'
# - Change eta to a very small value
# - Set random_state to make results reproducible; by default LDA output varies on each run
lda_nouns_adj = models.LdaModel(corpus=corpus_nouns_adj,
num_topics=4,
id2word=id2word_nouns_adj,
passes=100,
alpha='symmetric',
eta=0.00001,
random_state=np.random.RandomState(seed=10))
for topic, topwords in lda_nouns_adj.show_topics():
print("Topic", topic, "\n", topwords, "\n")
/Users/nwams/anaconda3/lib/python3.7/site-packages/gensim/models/ldamodel.py:775: RuntimeWarning: divide by zero encountered in log
  diff = np.log(self.expElogbeta)
Topic 0 
 0.008*"joke" + 0.007*"gun" + 0.006*"bo" + 0.005*"guns" + 0.005*"repeat" + 0.004*"um" + 0.004*"anthony" + 0.004*"party" + 0.004*"comedy" + 0.004*"jokes" 

Topic 1 
 0.011*"mom" + 0.010*"clinton" + 0.007*"husband" + 0.007*"cow" + 0.007*"wife" + 0.006*"ok" + 0.006*"office" + 0.006*"wan" + 0.005*"ass" + 0.005*"pregnant" 

Topic 2 
 0.007*"parents" + 0.006*"hasan" + 0.006*"jenny" + 0.006*"mom" + 0.005*"door" + 0.004*"brown" + 0.004*"texas" + 0.004*"york" + 0.003*"high" + 0.003*"friend" 

Topic 3 
 0.007*"joke" + 0.006*"ahah" + 0.005*"nuts" + 0.005*"gay" + 0.005*"tit" + 0.005*"young" + 0.004*"nigga" + 0.004*"dead" + 0.004*"jenner" + 0.004*"rape" 
Unfortunately, even tuning the hyperparameters did not yield meaningful topics.