#!/usr/bin/env python
# coding: utf-8

# # Training Doc2Vec on Wikipedia articles
#
# This notebook replicates the **Document Embedding with Paragraph Vectors** paper, http://arxiv.org/abs/1507.07998.
#
# In that paper, the authors only showed results from the DBOW ("distributed bag of words") mode, trained on the English Wikipedia. Here we replicate this experiment using not only DBOW, but also the DM ("distributed memory") mode of the Paragraph Vector algorithm aka Doc2Vec.

# ## Basic setup
#
# Let's import the necessary modules and set up logging. The code below assumes Python 3.7+ and Gensim 4.0+.

# In[1]:

import logging
import multiprocessing
from pprint import pprint

import smart_open
from gensim.corpora.wikicorpus import WikiCorpus, tokenize
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)


# ## Preparing the corpus
#
# First, download the dump of all Wikipedia articles from [here](http://download.wikimedia.org/enwiki/latest). You want the file named `enwiki-latest-pages-articles.xml.bz2`.
#
# Second, convert that Wikipedia article dump from the arcane Wikimedia XML format into a plain text file. This will make the subsequent training faster and also allow easy inspection of the data = "input eyeballing".
#
# We'll preprocess each article at the same time, normalizing its text to lowercase, splitting into tokens, etc. Below I use a regexp tokenizer that simply looks for alphabetic sequences as tokens. But feel free to adapt the text preprocessing to your own domain. High quality preprocessing is often critical for the final pipeline accuracy – garbage in, garbage out!

# In[2]:

wiki = WikiCorpus(
    "enwiki-latest-pages-articles.xml.bz2",  # path to the file you downloaded above
    tokenizer_func=tokenize,  # simple regexp; plug in your own tokenizer here
    metadata=True,  # also return the article titles and ids when parsing
    dictionary={},  # don't start processing the data yet
)

with smart_open.open("wiki.txt.gz", "w", encoding='utf8') as fout:
    for article_no, (content, (page_id, title)) in enumerate(wiki.get_texts()):
        title = ' '.join(title.split())
        if article_no % 500000 == 0:
            logging.info("processing article #%i: %r (%i tokens)", article_no, title, len(content))
        fout.write(f"{title}\t{' '.join(content)}\n")  # title_of_article [TAB] words of the article


# The above took about 1 hour and created a new ~5.8 GB file named `wiki.txt.gz`. Note the output text was transparently compressed into `.gz` (GZIP) right away, using the [smart_open](https://github.com/RaRe-Technologies/smart_open) library, to save on disk space.
#
# Next we'll set up a document stream to load the preprocessed articles from `wiki.txt.gz` one by one, in the format expected by Doc2Vec, ready for training. We don't want to load everything into RAM at once, because that would blow up the memory. And it is not necessary – Gensim can handle streamed input training data:

# In[3]:

class TaggedWikiCorpus:
    def __init__(self, wiki_text_path):
        self.wiki_text_path = wiki_text_path

    def __iter__(self):
        for line in smart_open.open(self.wiki_text_path, encoding='utf8'):
            title, words = line.split('\t')
            yield TaggedDocument(words=words.split(), tags=[title])

documents = TaggedWikiCorpus('wiki.txt.gz')  # A streamed iterable; nothing in RAM yet.
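# One detail worth spelling out: Doc2Vec will iterate over this corpus several times (once while building the vocabulary, then once per training epoch), so the stream has to be restartable. That's why `TaggedWikiCorpus` re-opens the file inside `__iter__`, instead of being a one-shot generator. As a quick optional check, the sketch below peeks at the first few titles twice; both passes should print the same titles.

# In[ ]:

import itertools

# Each pass calls documents.__iter__(), which re-opens wiki.txt.gz from the start,
# so repeated passes see the same documents in the same order.
for pass_no in range(2):
    first_titles = [doc.tags[0] for doc in itertools.islice(documents, 3)]
    print(f"pass #{pass_no}: first titles = {first_titles}")  # identical on both passes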
# In[4]:

# Load and print the first preprocessed Wikipedia document, as a sanity check = "input eyeballing".
first_doc = next(iter(documents))
print(first_doc.tags, ': ', ' '.join(first_doc.words[:50] + ['………'] + first_doc.words[-50:]))


# The document seems legit, so let's move on to finally training some Doc2Vec models.

# ## Training Doc2Vec
#
# The original paper had a vocabulary size of 915,715 word types, so we'll try to match it by setting `max_final_vocab` to 1,000,000 in the Doc2Vec constructor.
#
# Other critical parameters were left unspecified in the paper, so we'll go with a window size of eight (a prediction window of 8 tokens to either side). It looks like the authors tried vector dimensionalities of 100, 300, 1,000 & 10,000 in the paper (with 10k dims performing the best), but I'll only train with 200 dimensions here, to keep the RAM in check on my laptop.
#
# Feel free to tinker with these values yourself if you like:

# In[5]:

workers = 20  # or multiprocessing.cpu_count() - 1, to leave one core for the OS & other stuff

# PV-DBOW: paragraph vector in distributed bag of words mode
model_dbow = Doc2Vec(
    dm=0, dbow_words=1,  # dbow_words=1 trains word vectors at the same time too, not only DBOW
    vector_size=200, window=8, epochs=10, workers=workers, max_final_vocab=1000000,
)

# PV-DM: paragraph vector in distributed memory mode
model_dm = Doc2Vec(
    dm=1, dm_mean=1,  # use the average of context word vectors to train DM
    vector_size=200, window=8, epochs=10, workers=workers, max_final_vocab=1000000,
)


# Run one pass through the Wikipedia corpus, to collect the 1M vocabulary and initialize the Doc2Vec models:

# In[6]:

model_dbow.build_vocab(documents, progress_per=500000)
print(model_dbow)

# Save some time by copying the vocabulary structures from the DBOW model to the DM model.
# Both models are built on top of exactly the same data, so there's no need to repeat the vocab-building step.
model_dm.reset_from(model_dbow)
print(model_dm)


# Now we're ready to train Doc2Vec on the entirety of the English Wikipedia. **Warning!** Training the DBOW model takes ~14 hours, and the DM model ~6 hours, on my 2020 Linux machine.

# In[7]:

# Train DBOW doc2vec, incl. word vectors.
# Report progress every ½ hour.
model_dbow.train(documents, total_examples=model_dbow.corpus_count, epochs=model_dbow.epochs, report_delay=30*60)


# In[8]:

# Train DM doc2vec.
model_dm.train(documents, total_examples=model_dm.corpus_count, epochs=model_dm.epochs, report_delay=30*60)


# ## Finding similar documents
#
# After all that training, let's test both models! The DBOW model shows results similar to those in the original paper.
#
# First, calculate the Wikipedia articles most similar to the "Machine learning" article. The trained word vectors and document vectors are stored separately, in `model.wv` and `model.dv` respectively:

# In[9]:

for model in [model_dbow, model_dm]:
    print(model)
    pprint(model.dv.most_similar(positive=["Machine learning"], topn=20))


# Both models' results are similar and match the paper's Table 1, although not exactly. This is because we don't know the exact parameters of the original implementation (see above), and also because we're training the model 7 years later and the Wikipedia content has changed in the meantime.
#
# Now, following the paper's Table 2a), let's calculate the Wikipedia entries most similar to "Lady Gaga", using Paragraph Vector:

# In[10]:

for model in [model_dbow, model_dm]:
    print(model)
    pprint(model.dv.most_similar(positive=["Lady Gaga"], topn=10))


# The DBOW results are in line with what the paper shows in Table 2a), revealing similar singers in the U.S.
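# If you want to quantify how much the two modes agree, rather than just eyeballing the two lists, the optional sketch below computes the overlap of their top-10 neighbours for the same query. It is purely illustrative – the exact overlap will vary between training runs:

# In[ ]:

# Compare the two models' top-10 neighbour lists for the same article.
top_dbow = {title for title, _ in model_dbow.dv.most_similar(positive=["Lady Gaga"], topn=10)}
top_dm = {title for title, _ in model_dm.dv.most_similar(positive=["Lady Gaga"], topn=10)}
print(f"{len(top_dbow & top_dm)} articles appear in both top-10 lists:", sorted(top_dbow & top_dm))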
# Interestingly, the DM results seem to capture more "facts about Lady Gaga" (her albums, trivia), whereas DBOW recovered "similar artists".
#
# **Finally, let's try some of the wilder vector arithmetic that embeddings are famous for.** What are the entries most similar to "Lady Gaga" - "American" + "Japanese"? This corresponds to Table 2b) in the paper.
#
# Note that "American" and "Japanese" are word vectors, but they live in the same space as the document vectors, so we can add / subtract them at will, for some interesting results. All word vectors were already lowercased by our tokenizer above, so we look for the lowercased versions here:

# In[11]:

for model in [model_dbow, model_dm]:
    print(model)
    vec = [model.dv["Lady Gaga"] - model.wv["american"] + model.wv["japanese"]]
    pprint([m for m in model.dv.most_similar(vec, topn=11) if m[0] != "Lady Gaga"])


# As a result, the DBOW model surfaced artists similar to Lady Gaga in Japan, such as **Ayumi Hamasaki**, whose Wiki bio says:
#
# > Ayumi Hamasaki is a Japanese singer, songwriter, record producer, actress, model, spokesperson, and entrepreneur.
#
# So that sounds like a success – it's also the nr. 1 hit in the paper we're replicating!
#
# The DM model results are opaque to me, but seem art & Japan related as well. The score deltas between these DM results are marginal, so it's likely they would change if retrained on a different version of Wikipedia. Or even when simply re-run on the same version – the doc2vec training algorithm is stochastic.
#
# These results demonstrate that both training modes employed in the original paper are outstanding for calculating similarity between document vectors, word vectors, or a combination of both. The DM mode has the added advantage of being considerably faster to train (roughly 6 hours vs 14 hours above).

# If you wanted to continue working with these trained models, you could save them to disk, to avoid having to re-train them from scratch every time:

# In[12]:

model_dbow.save('doc2vec_dbow.model')
model_dm.save('doc2vec_dm.model')


# To continue your doc2vec explorations, refer to the official API documentation in Gensim: https://radimrehurek.com/gensim/models/doc2vec.html
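# As a final optional sketch, here's how a later session might reload the saved DBOW model and infer a vector for a brand-new piece of text. The sample sentence below is made up purely for illustration; in practice, preprocess new text the same way as the training data (lowercased alphabetic tokens).

# In[ ]:

# Reload the trained DBOW model from disk (no re-training needed).
model = Doc2Vec.load('doc2vec_dbow.model')

# Infer a document vector for unseen text; more epochs give a more stable estimate.
new_doc = "machine learning algorithms build a model from sample data".split()
inferred = model.infer_vector(new_doc, epochs=50)

# The closest Wikipedia articles to the new text, according to the trained model.
pprint(model.dv.most_similar([inferred], topn=5))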