Now that we have completed our Python primer, let's tackle a real task, while at the same time practicing loops, iteration, and other Python functionality that we studied.
You are given a list of names. You know that the same entity in the list has different representations. You want to find duplicate companies in the data.
As a concrete example, open the file under data/restaurant-names.txt. This contains a list of restaurant names, extracted from the NYC Restaurant Inspection dataset (available online). The Department of Health has been doing a decent, but not perfect, job of recording the company names. Therefore, the same restaurant appears under different names. Your task is to find "almost duplicate" entries in order to decide whether they correspond to the same business.
!head -10 ../data/restaurant-names.txt
!tail -5 ../data/restaurant-names.txt
Quite often, in our data, we have entries represented as strings that refer to the same entity but have different string representations (e.g., McDonald's vs McDonalds vs McDonald). We want to write code that will help in the task of finding such similar entries in our data.
Our task can be broken into the following steps:

* Step 1: Read the list of unique restaurant names from the NYC Restaurant Inspection dataset in the data folder (data/restaurant-names.txt). We need to write Python code that will read the file and return a list of strings that are the company names.
* Step 2: Write a function that computes the similarity between two strings.
* Step 3: Write a function that takes a company name and the list of companies, and returns the most similar names together with their similarity scores, e.g., [("McDonalds", 0.88), ("McDonald's", 0.81), ...].
* Step 4: Run this function for all the companies in the list.
# STEP 1: Read the list of names from a file and create a list of names
filename = "../data/restaurant-names.txt"
# We open the filename for reading
f = ...
# Read the file into memory
content = ...
# Content is a big string, with one restaurant per line
# so we split it into lines and create a list with the restaurant names
restaurants = ...
# STEP 1: Read the list of names from a file and create a list of names
filename = "../data/restaurant-names.txt"
# We open the filename for reading
f = open(filename, "r")
# Read the file into memory
content = f.read()
# Content is a big string, with one restaurant per line
# so we split it into lines and create a list with the restaurant names
restaurants = content.split("\n")
len(restaurants)
restaurants
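One caveat with content.split("\n"): if the file ends with a newline, the resulting list has an empty string as its last element. A small defensive sketch, using a toy string in place of the actual file contents:

```python
# Toy stand-in for the contents of restaurant-names.txt
content = "STARBUCKS\nWENDYS\n"

# splitlines() drops the trailing newline, and the filter
# removes any blank or whitespace-only lines
restaurants = [line.strip() for line in content.splitlines() if line.strip()]

print(restaurants)  # ['STARBUCKS', 'WENDYS']
```

With split("\n"), the same string would have produced a third, empty entry at the end of the list.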
There are many ways that we can calculate the similarity between two strings. For our case, we will focus on a few similarity metrics that already have implementations in Python.
# Edit distance
!sudo -H pip3 install -U jellyfish
# Ngram
!sudo -H pip3 install -U ngram
Once we have installed the necessary libraries for our project, we proceed to import them and test the functions.
import jellyfish
From Wikipedia:
_The edit distance is a way of quantifying how dissimilar two strings (e.g., words) are to one another by counting the minimum number of operations required to transform one string into the other._
The Levenshtein distance between "kitten" and "sitting" is 3. A minimal edit script that transforms the former into the latter is: substitute "s" for "k" (kitten → sitten), substitute "i" for "e" (sitten → sittin), and insert "g" at the end (sittin → sitting).
jellyfish.levenshtein_distance("kitten", "sitting")
Let's try a few more examples
jellyfish.levenshtein_distance("Ipeirotis", "Iperiotos")
jellyfish.levenshtein_distance("Starbucks", "Starbacks")
jellyfish.levenshtein_distance("Starbucks", "Starbuck")
jellyfish.levenshtein_distance("Starbucks", "Wendys")
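For intuition about what jellyfish computes for us, here is a minimal sketch of the classic dynamic-programming recurrence behind the Levenshtein distance. This is purely illustrative; in practice we just call the library.

```python
def edit_distance(s, t):
    # dp[i][j] holds the distance between s[:i] and t[:j]
    dp = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i in range(len(s) + 1):
        dp[i][0] = i  # delete i characters
    for j in range(len(t) + 1):
        dp[0][j] = j  # insert j characters
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution (free on a match)
            )
    return dp[len(s)][len(t)]

edit_distance("kitten", "sitting")  # 3, matching jellyfish
```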
The Damerau–Levenshtein distance also allows for the transposition of two adjacent characters.
jellyfish.levenshtein_distance("Ipeirotis", "Iperiotis")
jellyfish.damerau_levenshtein_distance("Ipeirotis", "Iperiotis")
jellyfish.damerau_levenshtein_distance("Starbucks", "Starbucsk")
jellyfish.levenshtein_distance("Starbucks", "Starbucsk")
Jaro–Winkler distance is a string metric for measuring the edit distance between two sequences. Informally, the Jaro distance between two words is the minimum number of single-character transpositions required to change one word into the other; the Jaro–Winkler distance gives more favourable ratings to strings that match from the beginning.
jellyfish.jaro_winkler("Starbucks", "Starbarbr")
jellyfish.jaro_winkler("Starbucks", "Milwbucks")
Soundex is a phonetic algorithm for indexing names by sound, as pronounced in English. The goal is for homophones to be encoded to the same representation so that they can be matched despite minor differences in spelling.
Using this algorithm, both "Robert" and "Rupert" return the same string "R163" while "Rubin" yields "R150". "Ashcraft" and "Ashcroft" both yield "A261". "Tymczak" yields "T522" not "T520" (the chars 'z' and 'k' in the name are coded as 2 twice since a vowel lies in between them). "Pfister" yields "P236" not "P123" (the first two letters have the same number and are coded once as 'P').
jellyfish.soundex("Robert")
jellyfish.soundex("Rupert")
jellyfish.soundex("Ashcroft")
jellyfish.soundex("Ashcraft")
jellyfish.soundex("Papadopoylis")
jellyfish.soundex("Papadopoulos")
With the n-gram similarity score, we split the word into sequences of n consecutive characters (n-grams) and then compare the sets of n-grams between the two words. For example, the name "Panos" has the following 2-grams: "Pa", "an", "no", "os". (We can also add "#P" and "s#" if we want to capture the prefix and the suffix.) Strings that share a large number of n-grams are often similar.
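To see where such scores come from, here is a quick sketch of plain character n-grams and a Jaccard-style set overlap. Note that the ngram library pads strings and computes its score slightly differently, so the numbers will not match NGram.compare exactly; this is just for intuition.

```python
def char_ngrams(s, n=2):
    # All substrings of length n, as a set
    return {s[i:i + n] for i in range(len(s) - n + 1)}

grams1 = char_ngrams("Panos")   # {'Pa', 'an', 'no', 'os'}
grams2 = char_ngrams("Panoss")  # adds 'ss'

# Jaccard-style overlap: shared n-grams over all distinct n-grams
similarity = len(grams1 & grams2) / len(grams1 | grams2)
print(similarity)  # 0.8
```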
import ngram
ngram.NGram.compare("Ipeirotis", "Iperotis", N=2)
ngram.NGram.compare("Ipeirotis", "Iperotis", N=1)
ngram.NGram.compare("New York University", "New York Universty", N=2)
ngram.NGram.compare("New York University", "University of New York", N=1)
ngram.NGram.compare("New York University", "Columbia Universty", N=2)
Given the experience with the metrics above, we now want to create a function that takes as input two strings and returns their similarity. Our key requirement is for the similarity metric to be between 0 and 1, with 0 meaning no similarity and 1 corresponding to identical strings. Some of the similarity functions above fit right in; others will need some work.
import ngram
import jellyfish
def normalized_similarity_edit_distance(str1, str2):
    # Compute the edit distance between str1 and str2
    distance = ...
    # Normalize the distance into a similarity between 0 and 1
    normalized = ...
    # Return the result
    return ...
# For n-gram similarity it is very simple: we just return the result
import ngram


def computeSimilarity_ngram(str1, str2, n=3):
    similarity = ngram.NGram.compare(str1, str2, N=n)
    return similarity
computeSimilarity_ngram("New York University", "New York Univ")
computeSimilarity_ngram("New York University", "New York Univ", n=2)
computeSimilarity_ngram("New York University", "New York Univ", n=4)
computeSimilarity_ngram("New York University", "Columbia Univ", n=2)
# For edit distance
import jellyfish
def computeSimilarity_editdistance(str1, str2):
    # Compute the maximum length of the two strings, to normalize our distance
    maxlength = max(len(str1), len(str2))
    # Compute the edit distance between the two strings
    distance = jellyfish.levenshtein_distance(str1, str2)
    similarity = 1 - distance / maxlength
    return similarity
computeSimilarity_editdistance("New York University", "New York Univ")
computeSimilarity_editdistance("New York University", "Columbia Univ")
# For soundex
import jellyfish
def computeSimilarity_soundex(str1, str2):
    soundex1 = jellyfish.soundex(str1)
    soundex2 = jellyfish.soundex(str2)
    if soundex1 == soundex2:
        return 1.0
    else:
        return 0.0
computeSimilarity_soundex("New York University", "New York Univ")
We will now up our game and return the results of the comparison using various methods. The method parameter defines the metric that we will use.
def computeSimilarity(str1, str2, method):
    # The function should check the method parameter
    # and then call the appropriate similarity function
    # that we implemented above, returning the
    # corresponding similarity value
    return ...
# Getting closer to a real setting, we can now
# compute all the similarity metrics and return them
# all. Perhaps even compute an average value
def computeSimilarity(str1, str2):
    # We return a dictionary with all the metrics
    return {"ngram2": ..., "soundex": ...}
def computeSimilarity(str1, str2, method):
    if method == "ngram2":
        return computeSimilarity_ngram(str1, str2, n=2)
    elif method == "ngram3":
        return computeSimilarity_ngram(str1, str2, n=3)
    elif method == "ngram4":
        return computeSimilarity_ngram(str1, str2, n=4)
    elif method == "edit_distance":
        return computeSimilarity_editdistance(str1, str2)
    elif method == "soundex":
        return computeSimilarity_soundex(str1, str2)
    else:
        return None
computeSimilarity("New York University", "New York Univ", "ngram3")
computeSimilarity("New York University", "New York Univ", "edit_distance")
# Most of the time we are going to compute all similarity metrics
# and return back a dictionary with all the metrics
def computeSimilarity(str1, str2):
    results = {
        "ngram2": computeSimilarity_ngram(str1, str2, n=2),
        "ngram3": computeSimilarity_ngram(str1, str2, n=3),
        "ngram4": computeSimilarity_ngram(str1, str2, n=4),
        "edit_distance": computeSimilarity_editdistance(str1, str2),
        "soundex": computeSimilarity_soundex(str1, str2),
    }
    # Arbitrarily, compute an overall similarity as the average of all metrics
    results["average"] = sum(results.values()) / len(results)
    # Equally arbitrarily, we can compute our own metric, by mixing
    # the various metrics in our own way (the weights sum to 4).
    results["custom"] = (
        results["ngram3"] + 2.5 * results["edit_distance"] + 0.5 * results["soundex"]
    ) / 4
    return results
computeSimilarity("New York University", "New York Univ")
We now create a function that accepts a company name and a list of companies, and computes their similarity. This part will get us to exercise our for-loops, and also illustrate how we can use lists and tuples.
Sorting a list of tuples: This part is a little bit advanced for now, so I will just give the code below. (Solution taken from http://stackoverflow.com/questions/3121979/how-to-sort-list-tuple-of-lists-tuples.) Here is a small example, which we will reuse in our function:
data = [("Panos", 0.5), ("Peter", 0.8), ("Pan", 0.8)]
# This code sorts the list "data", by using the second element
# of each tuple as the sorting key, and sorts things in reverse order
data.sort(
key=lambda tupl: tupl[1], reverse=True
) # sorts in place, in descending order, based on the second element of the tuple
print(data)
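As an aside, the same ordering can be produced without modifying the list in place, using sorted() together with operator.itemgetter:

```python
from operator import itemgetter

data = [("Panos", 0.5), ("Peter", 0.8), ("Pan", 0.8)]

# sorted() returns a new list and leaves "data" untouched;
# itemgetter(1) selects the second element of each tuple as the key
ranked = sorted(data, key=itemgetter(1), reverse=True)

print(ranked)  # [('Peter', 0.8), ('Pan', 0.8), ('Panos', 0.5)]
```

Note that Python's sort is stable, so ("Peter", 0.8) stays ahead of ("Pan", 0.8), preserving their original relative order.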
# STEP 3: We now create a function that accepts a company name
# and a list of companies, and computes their similarity
# We have a 'top' parameter (by default set to be 5)
# that restricts the results to only the "top" most similar
# string pairs. We also define a parameter "method" that defines
# what is the similarity method that we want to use. We also define a
# similarity threshold for keeping only results with sufficient similarity
def companySimilarity(query, companyList, top=5, method="average", sim_threshold=0.25):
    ...
# STEP 3: We now create a function that accepts a company name
# and a list of companies, and computes their similarity
# We have a 'top' parameter (by default set to be 5)
# that restricts the results to only the most similar
# string pairs. We also define a parameter "method" that defines
# what is the similarity method that we want to use. We also define a
# similarity threshold for keeping only results with sufficient similarity
def companySimilarity(query, companyList, top=5, method="average", sim_threshold=0.25):
    # We will use a list to store the similar matches
    results = []
    # Go through all the restaurants
    for c in companyList:
        # We compute the similarities (all metrics)
        # between the string "query" and the string "c",
        # which is the variable that iterates over the list "companyList"
        similarities = computeSimilarity(query, c)
        # If the similarity for the chosen method is above the threshold
        if similarities[method] > sim_threshold:
            # Add to results the matching restaurant name c
            # and the similarity
            results.append((c, similarities[method]))
    # The results list contains tuples of (company name, similarity).
    # We sort it in decreasing order of similarity
    results.sort(key=lambda tupl: tupl[1], reverse=True)
    # We return only the top results
    return results[:top]
query = "MACDONALDS"
companySimilarity(query, restaurants, top=5, method="ngram3", sim_threshold=0.25)
query = "MACDONALDS"
companySimilarity(query, restaurants, top=5, method="average", sim_threshold=0.25)
query = "STARBUCKS"
companySimilarity(query, restaurants, top=5, method="ngram3", sim_threshold=0.25)
query = "STARBUCKS"
companySimilarity(query, restaurants, top=20, method="average", sim_threshold=0.25)
We are almost done. We now just go through all the companies in the list and call the companySimilarity function that computes the similar company names for each of them.
# STEP 4: We are almost done. We now just go through all the companies in the list
# and we call the companySimilarity function that computes the similar company names
# for all the companies in the list. We store the results in a dictionary, with the
# key being the company name, and the value being a "list of tuples" with the
# similar company names and the corresponding similarity value.
# Your code here
# STEP 4: We are almost done. We now just go through all the companies in the list
# and we call the companySimilarity function that computes the similar company names
# for all the companies in the list. We store the results in a dictionary, with the
# key being the company name, and the value being a "list of tuples" with the
# similar company names and the corresponding similarity value.
# The matches counter is just to stop the computation quickly
# after we have showed enough matches
matches = 0
stop_after = 20
for q in restaurants:
    results = companySimilarity(
        query=q,  # the name of the restaurant that we use as query
        companyList=restaurants,  # the list of restaurants
        top=6,  # the number of matches that we want to get
        method="average",  # which similarity method to use
        sim_threshold=0.4,  # the similarity threshold
    )
    # We will print only non-identical matches (remember, the top
    # match is the restaurant matching with itself)
    if len(results) > 1:
        for r in results[1:]:
            print(f"{q}\t===>\t{r[0]}\t{r[1]:2.3%}")
            matches = matches + 1
    # Break out of the outer loop once we have shown enough matches
    if matches > stop_after:
        break  # We stop after a few illustrative matches