# Information Retrieval Lab: Basic Retrieval

In this exercise, we will implement a basic retrieval model. While we already answered some questions using term statistics in the indexing exercise, this lab introduces the concept of a relevance function: a way of quantifying the relevance between a query (expressing a user's information need) and a document (potentially fulfilling that information need).

## Lecture Recall

First, recall some important concepts from the lecture: indexing, term weighting, document representations, queries, and query representations.

###### Indexing

Gather and store auxiliary information about documents, required for fast online query processing (i.e., to score and rank documents when a query is submitted). Such information may include term frequencies, document frequencies, document lengths, etc. The data structure used is a key-value store (e.g., a hash map).

###### Term Weighting

For each index term of a document, calculate a weight indicating its importance with respect to the document, allowing for document scoring with respect to a query. Common term weighting schemes are term frequency (tf), term frequency-inverse document frequency (tfidf), and BM25, among others.

###### Document Representations

The representation of a document $d$ is a $|T|$-dimensional vector, where the $i$-th vector component of $d$ corresponds to a term weight $w_i$ of term $t_i \in T$, indicating its importance for $d$.

###### Query

A query string expressing a user's information need, either in natural language or in a specialized query language.

###### Query Representations

A query representation $q$ is constructed like a document representation, but for a user-supplied query instead of a document.

## Setup

To implement a scoring model, we first import existing code from the previous exercises. We will make use of the Shakespeare document collection, the preprocessing component and the index. We supply a reference solution you can import, but feel free to continue using your own code!

Exercise: using the code from previous exercises, construct an index on top of the Shakespeare document collection.

In [ ]:
from modules.shakespeare import corpus
from modules.preprocessing import PreprocessorSpacy as Preprocessor
from modules.indexing import Index

# Your code here


## Term Weighting

To compute document representations, we first need to implement term weighting. For the purposes of this exercise, we choose the term-frequency-inverse-document-frequency (TFIDF) weighting scheme, given by the following equation: $tfidf = tf \cdot idf$, where $tf(t,d)$ is the normalized term frequency, denoting the importance of a term $t$ in document $d$, and $idf(t, D) = \log\frac{|D|}{df(t,D)}$ is the inverse document frequency, denoting the importance of term $t$ in the collection $D$ in general. Here, $df(t,D)$ is the document frequency, i.e., the number of documents in $D$ that contain $t$.

Note that different approaches to normalizing the term frequency exist. For this exercise, we use the following normalization:

$$1 + \log(\text{tf}(t,d))\text{ for tf}(t,d) > 0,\quad\text{and } 0 \text{ otherwise}$$

Exercise: implement the tfidf term weighting function.

In [ ]:
from math import log

def tfidf(term_frequency: int, document_frequency: int, document_count: int) -> float:
    """
    Term-frequency-inverse-document-frequency: calculates the tfidf score for a given term frequency, document frequency, and document count.
    :param term_frequency: The frequency of the term in the document.
    :param document_frequency: The number of documents in the collection that contain the term.
    :param document_count: The total number of documents in the collection.
    :returns: tfidf score
    """
    # Your code here
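
One possible sketch of the weighting function, using the normalization above and returning zero for terms that do not occur (the choice of returning `0.0` for empty inputs is an assumption, not part of the exercise statement):

```python
from math import log

def tfidf(term_frequency: int, document_frequency: int, document_count: int) -> float:
    # A term absent from the document (or from every document) carries no weight.
    if term_frequency <= 0 or document_frequency <= 0:
        return 0.0
    tf = 1 + log(term_frequency)                    # normalized term frequency
    idf = log(document_count / document_frequency)  # inverse document frequency
    return tf * idf
```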


## Document Representation

The representation $d$ of a document $D$ is a $|T|$-dimensional vector, where the $i$-th vector component of $d$ corresponds to a term weight $w_i$ of term $t_i \in T$, indicating its importance for $D$.

We will use the previously defined TFIDF score as term weight: $w_i = tfidf(t_i, D)$.

Exercise: implement a function to generate a TFIDF-vector representation for a given document id, based on the information available in the index. For the purposes of this exercise, implement vectors as lists. Make sure to keep dimensionality consistent between documents, i.e., the same vector dimension should always correspond to the same index term!

In [ ]:
def get_document_representation(doc_id, index):
    """
    Returns the tfidf vector for a given doc_id and index.
    :param doc_id: doc_id to construct the vector for
    :param index: index to construct the vector from
    :return: the tfidf vector for this document
    """
    # Your code here
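
One possible sketch. It assumes, purely for illustration, that the index is a plain dictionary with a postings map under `"postings"` (`{term: {doc_id: term_frequency}}`) and the collection size under `"doc_count"`; the actual `Index` class from the previous exercise will expose this information differently:

```python
from math import log

def tfidf(tf, df, n):
    # The weighting function from the previous exercise, repeated here
    # so this sketch is self-contained.
    return (1 + log(tf)) * log(n / df) if tf > 0 and df > 0 else 0.0

def get_document_representation(doc_id, index):
    # Iterating the vocabulary in a fixed (sorted) order ties each term
    # to the same vector dimension for every document.
    doc_count = index["doc_count"]
    vector = []
    for term in sorted(index["postings"]):
        postings = index["postings"][term]   # {doc_id: term_frequency}
        tf = postings.get(doc_id, 0)
        df = len(postings)                   # number of documents containing the term
        vector.append(tfidf(tf, df, doc_count))
    return vector
```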


## Query Representation

To retrieve documents similar to a given query, we need to project the query into the same vector space as our document representations.

The query representation is constructed in a similar way to document representations, using the term data stored in the index. However, we must calculate the term frequencies manually, since the query is a raw text string rather than an indexed document. Keep in mind that the query needs to be processed in exactly the same way as the documents, i.e., with the same preprocessing applied.

Exercise: implement a function to generate a TFIDF-vector representation for a text string, based on the information available in the index. For the purposes of this exercise, implement vectors as lists. Make sure to keep dimensionality consistent with the document vectors, i.e., the same vector dimension should always correspond to the same index term!

In [ ]:
def get_query_representation(text, index):
    """
    Returns the tfidf vector for an arbitrary string.
    :param text: string to construct the vector for
    :param index: index to construct the vector from
    :return: the tfidf vector for the string
    """
    # Your code here
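
One possible sketch, under the same assumed dictionary index layout as above (`"postings"` and `"doc_count"` keys). Note that lowercasing plus whitespace splitting is only a stand-in here; in the lab, run the exact same `Preprocessor` that was applied to the documents:

```python
from collections import Counter
from math import log

def tfidf(tf, df, n):
    # The weighting function from the previous exercise, repeated here
    # so this sketch is self-contained.
    return (1 + log(tf)) * log(n / df) if tf > 0 and df > 0 else 0.0

def get_query_representation(text, index):
    # Stand-in tokenization; replace with the documents' preprocessing pipeline.
    query_terms = Counter(text.lower().split())
    doc_count = index["doc_count"]
    vector = []
    for term in sorted(index["postings"]):   # same dimension order as documents
        tf = query_terms.get(term, 0)
        df = len(index["postings"][term])
        vector.append(tfidf(tf, df, doc_count))
    return vector
```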


## Relevance Function

Now that we have a way to embed both documents and the query into a common vector space, we can rank documents according to a similarity function with respect to the query.

#### Building the Vector Space

To retrieve documents, we need to build a vector space, i.e. construct the representation vector for every document in our collection.

Exercise: construct a vector space for all documents in the index using their document representations. Use a {doc_id: vector, ...} dictionary as the data structure.

In [ ]:
# Your code here
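
The pattern itself is a single dict comprehension over all document ids; it is sketched here with toy stand-ins, since in the lab the ids come from your index and the vectors from `get_document_representation`:

```python
# Toy stand-ins, purely to illustrate the pattern.
toy_vectors = {"d1": [1.0, 0.0], "d2": [0.0, 2.0]}

def get_document_representation(doc_id, index):
    return toy_vectors[doc_id]   # stand-in for the real function

index = None                     # stand-in for the real index

# One representation vector per document, keyed by doc_id.
vectors = {doc_id: get_document_representation(doc_id, index)
           for doc_id in toy_vectors}
```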


#### Defining similarity

To calculate the similarity score between the query and each document, we need to define a similarity function. We will use cosine similarity.

Exercise: implement cosine similarity. (Be mindful of the difference between distance and similarity!)

In [ ]:
from math import sqrt, pow

def score(vec_a, vec_b):
    # Your code here
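
A self-contained sketch of cosine similarity. Note that cosine similarity is already a similarity (1 for identical directions, 0 for orthogonal vectors), so no distance-to-similarity conversion is needed; returning `0.0` for zero vectors is an assumption of this sketch:

```python
from math import sqrt

def score(vec_a, vec_b):
    # Cosine similarity: dot product divided by the product of the norms.
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm_a = sqrt(sum(a * a for a in vec_a))
    norm_b = sqrt(sum(b * b for b in vec_b))
    if norm_a == 0 or norm_b == 0:
        return 0.0   # a zero vector is similar to nothing
    return dot / (norm_a * norm_b)
```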


#### Querying

We now have all the components needed to answer queries. Write a wrapper function that takes a text and returns the (top $k$) results for the query. It should construct a query representation, calculate the pairwise similarity between the query and all documents, and return an ordered list of (doc_id, score) tuples, descending by score.

Exercise: write a function to query the vector space.

In [ ]:
def query(text, vectors, index, topK=-1):
    """
    Queries a given text against a given vector space. Embeds the query text and calculates the cosine similarity to every document in the vector space.
    :param text: query text
    :param vectors: dictionary of document vectors ({doc_id: vector, ...}) to query against
    :param index: the index data to query against
    :param topK: number of top results to return (-1 returns all)
    :return: list of (doc_id, score) tuples, descending by score, for the documents in the vector space
    """
    # Your code here
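
One possible sketch, with a compact copy of the cosine score and a hypothetical stand-in for `get_query_representation` so the cell runs on its own; in the lab, use your implementations from the previous exercises instead:

```python
from math import sqrt

def score(vec_a, vec_b):
    # Cosine similarity, repeated compactly so this sketch is self-contained.
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm_a = sqrt(sum(a * a for a in vec_a))
    norm_b = sqrt(sum(b * b for b in vec_b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def get_query_representation(text, index):
    # Hypothetical stand-in that embeds one known query; replace with
    # the real function from the previous exercise.
    return [1.0, 0.0] if text == "love" else [0.0, 0.0]

def query(text, vectors, index, topK=-1):
    # Embed the query, score it against every document, sort descending.
    q = get_query_representation(text, index)
    ranked = sorted(
        ((doc_id, score(q, vec)) for doc_id, vec in vectors.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return ranked if topK < 0 else ranked[:topK]
```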


# Simple Retrieval

You can now try out different queries. There are two example searches below, but feel free to search for anything you like. Take a look at both the ranking results and the original documents; try to trace why documents appear in the order they do, given their TFIDF similarity.

In [ ]:
query("two households verona", vectors, index, topK=5)

In [ ]:
query("love", vectors, index, topK=5)

In [ ]:
# Your code here?