#!/usr/bin/env python
# coding: utf-8

# Contents
# ========
# - [Introduction](#Introduction)
# - [Block Using the Sorted Neighborhood Blocker](#Block-Using-the-Sorted-Neighborhood-Blocker)
# - [Block Tables to Produce a Candidate Set of Tuple Pairs](#Block-Tables-to-Produce-a-Candidate-Set-of-Tuple-Pairs)
# - [Handling Missing Values](#Handling-Missing-Values)
# - [Window Size](#Window-Size)
# - [Stable Sort Order](#Stable-Sort-Order)
# - [Sorted Neighborhood Blocker Limitations](#Sorted-Neighborhood-Blocker-limitations)

# # Introduction

# WARNING: The sorted neighborhood blocker is still experimental and has not been fully tested yet. Use this blocker at your own risk.

# Blocking is typically done to reduce the number of tuple pairs considered for matching. Several blocking methods have been proposed, and the *py_entitymatching* package supports a subset of them (#ref to what is supported). One such supported blocker is the sorted neighborhood blocker. This IPython notebook illustrates how to perform blocking using the sorted neighborhood blocker.

# Note that the sorted neighborhood blocking technique is often applied to a single table. Here, we have implemented sorted neighborhood blocking between two tables. We first tag each tuple with whether it comes from the left table or the right table, then merge the two tables. We then sort the merged table on the blocking attribute and pass a sliding window of size `window_size` (default 2) across it. Within each window, all tuple pairs that have one tuple from the left table and one tuple from the right table are returned.
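# The tag-merge-sort-and-slide procedure described above can be sketched in plain pandas. This is only an illustration of the idea, not the py_entitymatching implementation; the helper `sn_pairs_sketch` and the mini tables are hypothetical, and we assume both tables use an `ID` key column as in this notebook.

# In[ ]:

# ```python
import pandas as pd

def sn_pairs_sketch(A, B, key, window_size=2):
    # Tag each tuple with its table of origin, then merge the tables.
    merged = pd.concat([A.assign(side='l'), B.assign(side='r')],
                       ignore_index=True)
    # Sort the merged table on the blocking attribute (stable sort).
    merged = merged.sort_values(key, kind='mergesort').reset_index(drop=True)
    pairs = set()
    # Slide a window of `window_size` rows over the sorted table; within
    # each window, keep every (left tuple, right tuple) combination.
    for start in range(len(merged) - window_size + 1):
        window = merged.iloc[start:start + window_size]
        for l_id in window.loc[window['side'] == 'l', 'ID']:
            for r_id in window.loc[window['side'] == 'r', 'ID']:
                pairs.add((l_id, r_id))
    return pairs

# Example with the notebook's schema in miniature (hypothetical data):
A = pd.DataFrame({'ID': ['a1', 'a2'], 'birth_year': [1980, 1990]})
B = pd.DataFrame({'ID': ['b1', 'b2'], 'birth_year': [1985, 1995]})
print(sorted(sn_pairs_sketch(A, B, 'birth_year')))
# [('a1', 'b1'), ('a2', 'b1'), ('a2', 'b2')]
# ```

# Note that a pair such as (a1, b2) is pruned: the two tuples are more than `window_size` positions apart in the sorted order.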
# First, we need to import the *py_entitymatching* package and other libraries as follows:

# In[2]:

# Import py_entitymatching package
import py_entitymatching as em
import os
import pandas as pd

# Then, read the input tables from the datasets directory

# In[3]:

# Get the datasets directory
datasets_dir = em.get_install_path() + os.sep + 'datasets'

# Get the paths of the input tables
path_A = datasets_dir + os.sep + 'person_table_A.csv'
path_B = datasets_dir + os.sep + 'person_table_B.csv'

# In[4]:

# Read the CSV files and set 'ID' as the key attribute
A = em.read_csv_metadata(path_A, key='ID')
B = em.read_csv_metadata(path_B, key='ID')

# In[5]:

A.head()

# In[6]:

B.head()

# # Block Using the Sorted Neighborhood Blocker

# Once the tables are read, we can do blocking using the sorted neighborhood blocker. With the sorted neighborhood blocker, you can only block between two tables to produce a candidate set of tuple pairs.

# ## Block Tables to Produce a Candidate Set of Tuple Pairs

# In[7]:

# Instantiate the sorted neighborhood blocker object
sn = em.SortedNeighborhoodBlocker()

# For the given two tables, we will assume that two persons with very different `birth_year` values are unlikely to refer to the same real-world person. So, we sort the merged tuples on `birth_year` and keep only the tuple pairs that fall within the sliding window.

# In[8]:

# Use block_tables to apply blocking over two input tables.
C1 = sn.block_tables(A, B,
                     l_block_attr='birth_year', r_block_attr='birth_year',
                     l_output_attrs=['name', 'birth_year', 'zipcode'],
                     r_output_attrs=['name', 'birth_year', 'zipcode'],
                     l_output_prefix='l_', r_output_prefix='r_',
                     window_size=3)

# In[9]:

# Display the candidate set of tuple pairs
C1.head()

# Note that the tuple pairs in the candidate set have nearby birth years, since they fell within the same window in sorted order.

# The attributes included in the candidate set are based on the l_output_attrs and r_output_attrs parameters of the block_tables command (the key columns are included by default).
# Specifically, the attributes listed in l_output_attrs are picked from table A, and the attributes listed in r_output_attrs are picked from table B. The attributes in the candidate set are prefixed based on the l_output_prefix and r_output_prefix parameter values of the block_tables command.

# In[10]:

# Show the metadata of C1
em.show_properties(C1)

# In[11]:

id(A), id(B)

# Note that the metadata of C1 includes the key, foreign keys to the left and right tables (i.e., A and B), and pointers to the left and right tables.

# ### Handling Missing Values

# If the input tuples have missing values in the blocking attribute, they are ignored by default. This is because including all possible tuple pairs with missing values can significantly increase the size of the candidate set. If you want to include them, set the `allow_missing` parameter to True.

# In[12]:

# Introduce some missing values
import numpy as np

A1 = em.read_csv_metadata(path_A, key='ID')
A1.loc[0, 'zipcode'] = np.nan
A1.loc[0, 'birth_year'] = np.nan

# In[13]:

A1

# In[14]:

# Use block_tables to apply blocking over two input tables,
# setting the allow_missing parameter to True
C2 = sn.block_tables(A1, B,
                     l_block_attr='zipcode', r_block_attr='zipcode',
                     l_output_attrs=['name', 'birth_year', 'zipcode'],
                     r_output_attrs=['name', 'birth_year', 'zipcode'],
                     l_output_prefix='l_', r_output_prefix='r_',
                     allow_missing=True)

# In[15]:

len(C1), len(C2)

# In[16]:

C2

# The candidate set C2 includes all possible tuple pairs with missing values.

# ## Window Size

# A tunable parameter of the sorted neighborhood blocker is the window size, set via the `window_size` argument. Repeating the blocking above with a larger window produces a larger candidate set; note below that C3 contains more pairs than C1.
# In[17]:

C3 = sn.block_tables(A, B,
                     l_block_attr='birth_year', r_block_attr='birth_year',
                     l_output_attrs=['name', 'birth_year', 'zipcode'],
                     r_output_attrs=['name', 'birth_year', 'zipcode'],
                     l_output_prefix='l_', r_output_prefix='r_',
                     window_size=5)

# In[18]:

len(C1)

# In[19]:

len(C3)

# ## Stable Sort Order

# One final challenge for the sorted neighborhood blocker is making the sort order stable. If the column being sorted on contains runs of identical keys longer than the window size, different runs may produce different results. To guarantee the same results on every run, make the sorting column unique. One way to do so is to append the ID of the tuple to the end of the sorting column. Here is an example.

# In[20]:

A["birth_year_plus_id"] = A["birth_year"].map(str) + '-' + A["ID"].map(str)
B["birth_year_plus_id"] = B["birth_year"].map(str) + '-' + B["ID"].map(str)

C3 = sn.block_tables(A, B,
                     l_block_attr='birth_year_plus_id', r_block_attr='birth_year_plus_id',
                     l_output_attrs=['name', 'birth_year_plus_id', 'birth_year', 'zipcode'],
                     r_output_attrs=['name', 'birth_year_plus_id', 'birth_year', 'zipcode'],
                     l_output_prefix='l_', r_output_prefix='r_',
                     window_size=5)

# In[21]:

C3.head()

# # Sorted Neighborhood Blocker limitations

# Since the sorted neighborhood blocker depends on each tuple's position in the sorted order, blocking on a candidate set or checking a single pair of tuples is not applicable, unlike with other blockers. Attempts to call `block_candset` or `block_tuples` will raise an AssertionError.
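# To see why a standalone pairwise check is ill-defined here, consider this minimal, self-contained illustration (the helper `in_same_window` and the keys are hypothetical): whether two tuples land in the same window depends on which *other* tuples are present in the tables, not just on the pair itself.

# In[ ]:

# ```python
def in_same_window(keys, k1, k2, window_size=2):
    # Two tuples can only pair up if their positions in the sorted
    # order of *all* keys are fewer than window_size apart.
    s = sorted(keys)
    return abs(s.index(k1) - s.index(k2)) < window_size

# With only two tuples, keys 1980 and 1995 are adjacent after sorting...
print(in_same_window([1980, 1995], 1980, 1995))        # True
# ...but an unrelated third tuple pushes them out of the shared window.
print(in_same_window([1980, 1990, 1995], 1980, 1995))  # False
# ```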