Nessie Iceberg/Flink SQL Demo with NBA Dataset

This demo showcases how to use the Nessie Python API along with Flink, via Iceberg's Flink runtime

To get started, we will first run a few setup steps that give us everything we need to work with Nessie. If you're interested in the detailed setup steps for Flink, you can check out the docs

The Binder server has downloaded Flink and some data for us, and has started a Nessie server in the background. All we have to do is start Flink

The cell below starts a local Flink session with the parameters needed to configure Nessie. Each config option is followed by a comment explaining its purpose.

In [ ]:
import os
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import StreamTableEnvironment
from pyflink.table.expressions import lit
from pynessie import init

# where we will store our data
warehouse = os.path.join(os.getcwd(), "flink-warehouse")
# this was downloaded when Binder started; it's available on Maven Central
iceberg_flink_runtime_jar = os.path.join(os.getcwd(), "../iceberg-flink-runtime-0.12.0.jar")

env = StreamExecutionEnvironment.get_execution_environment()
env.add_jars("file://{}".format(iceberg_flink_runtime_jar))
table_env = StreamTableEnvironment.create(env)

nessie_client = init()

def create_ref_catalog(ref):
    """
    Create a flink catalog that is tied to a specific ref.

    In order to create the catalog, we first have to create the branch.
    """
    hash_ = nessie_client.get_reference(nessie_client.get_default_branch()).hash_
    try:
        nessie_client.create_branch(ref, hash_)
    except Exception:
        pass  # the branch already exists
    # The important args below are:
    # type: tell Flink to use Iceberg as the catalog
    # catalog-impl: which Iceberg catalog to use, in this case we want Nessie
    # uri: the location of the Nessie server
    # ref: the Nessie ref/branch we want to use (defaults to main)
    # warehouse: the location where this catalog should store its data
    table_env.execute_sql(
            f"""CREATE CATALOG {ref}_catalog WITH (
            'type'='iceberg',
            'catalog-impl'='org.apache.iceberg.nessie.NessieCatalog',
            'uri'='http://localhost:19120/api/v1',
            'ref'='{ref}',
            'warehouse' = '{warehouse}')"""
        )
create_ref_catalog(nessie_client.get_default_branch())
print("\n\n\nFlink running\n\n\n")

Solving Data Engineering problems with Nessie

In this demo we are a data engineer working at a fictional sports analytics blog. For the authors to write articles, they need access to the relevant data: they must be able to retrieve it quickly and create charts with it.

We have been asked to collect and expose some information about basketball players. We have located some data sources and are now ready to start ingesting data into our data lakehouse. We will perform the ingestion steps on a Nessie branch to test and validate the data before exposing it to the analysts.

Set up Nessie branches (via Nessie CLI)

Once all dependencies are configured, we can get started with ingesting our basketball data into Nessie with the following steps:

  • Create a new branch named dev
  • List all branches

It is worth mentioning that we don't have to explicitly create a main branch, since it's the default branch.

In [ ]:
create_ref_catalog("dev")

We have created the branch dev, and we can see the branch along with the Nessie hash it's currently pointing to.
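
The same check is possible from Python; below is a minimal sketch reusing the get_reference call from the setup cell (the name attribute on the returned reference is assumed from the pynessie data model):

In [ ]:
# Fetch the `dev` branch via the pynessie client and print the hash it points to
dev_ref = nessie_client.get_reference("dev")
print(dev_ref.name, dev_ref.hash_)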

Below we list all branches. Note that the auto-created main branch already exists, and both branches point at the same hash.

In [ ]:
!nessie --verbose branch

Create tables under dev branch

Once we have created the dev branch and verified that it exists, we can create some tables and add some data.

We create two tables under the dev branch:

  • salaries
  • totals_stats

These tables list the salaries per player per year and their stats per year.

To create the data we:

  1. switch our branch context to dev
  2. create the table
  3. insert the data from an existing CSV file. This CSV file is already stored locally on the demo machine. A production use case would likely take feeds from official data sources
In [ ]:
# Load the dataset
from pyflink.table import DataTypes
from pyflink.table.descriptors import Schema, OldCsv, FileSystem

# Creating `salaries` table
(table_env.connect(FileSystem().path('../datasets/nba/salaries.csv'))
  .with_format(OldCsv()
               .field('Season', DataTypes.STRING()).field("Team", DataTypes.STRING())
               .field("Salary", DataTypes.STRING()).field("Player", DataTypes.STRING()))
  .with_schema(Schema()
               .field('Season', DataTypes.STRING()).field("Team", DataTypes.STRING())
               .field("Salary", DataTypes.STRING()).field("Player", DataTypes.STRING()))
  .create_temporary_table('dev_catalog.nba.salaries_temp'))

table_env.execute_sql("""CREATE TABLE IF NOT EXISTS dev_catalog.nba.salaries
            (Season STRING, Team STRING, Salary STRING, Player STRING)""").wait()

tab = table_env.from_path('dev_catalog.nba.salaries_temp')
tab.execute_insert('dev_catalog.nba.salaries').wait()

# Creating `totals_stats` table
(table_env.connect(FileSystem().path('../datasets/nba/totals_stats.csv'))
  .with_format(OldCsv()
               .field('Season', DataTypes.STRING()).field("Age", DataTypes.STRING()).field("Team", DataTypes.STRING())
               .field("ORB", DataTypes.STRING()).field("DRB", DataTypes.STRING()).field("TRB", DataTypes.STRING())
               .field("AST", DataTypes.STRING()).field("STL", DataTypes.STRING()).field("BLK", DataTypes.STRING())
               .field("TOV", DataTypes.STRING()).field("PTS", DataTypes.STRING()).field("Player", DataTypes.STRING())
               .field("RSorPO", DataTypes.STRING()))
  .with_schema(Schema()
               .field('Season', DataTypes.STRING()).field("Age", DataTypes.STRING()).field("Team", DataTypes.STRING())
               .field("ORB", DataTypes.STRING()).field("DRB", DataTypes.STRING()).field("TRB", DataTypes.STRING())
               .field("AST", DataTypes.STRING()).field("STL", DataTypes.STRING()).field("BLK", DataTypes.STRING())
               .field("TOV", DataTypes.STRING()).field("PTS", DataTypes.STRING()).field("Player", DataTypes.STRING())
               .field("RSorPO", DataTypes.STRING()))
  .create_temporary_table('dev_catalog.nba.totals_stats_temp'))

table_env.execute_sql(
        """CREATE TABLE IF NOT EXISTS dev_catalog.nba.totals_stats (Season STRING, Age STRING, Team STRING,
        ORB STRING, DRB STRING, TRB STRING, AST STRING, STL STRING, BLK STRING, TOV STRING, PTS STRING,
        Player STRING, RSorPO STRING)""").wait()

tab = table_env.from_path('dev_catalog.nba.totals_stats_temp')
tab.execute_insert('dev_catalog.nba.totals_stats').wait()

salaries = table_env.from_path('main_catalog.nba.`[email protected]`').select(lit(1).count).to_pandas().values[0][0]
totals_stats = table_env.from_path('main_catalog.nba.`[email protected]`').select(lit(1).count).to_pandas().values[0][0]
print(f"\n\n\nAdded {salaries} rows to the salaries table and {totals_stats} rows to the total_stats table.\n\n\n")

Now we count the rows in our tables to ensure they match the row counts of the CSV files. Note that we use the [email protected] notation, which overrides the context set by the catalog.

In [ ]:
table_count = table_env.from_path('dev_catalog.nba.`[email protected]`').select('Season.count').to_pandas().values[0][0]
csv_count = table_env.from_path('dev_catalog.nba.salaries_temp').select('Season.count').to_pandas().values[0][0]
assert table_count == csv_count
print(table_count)

table_count = table_env.from_path('dev_catalog.nba.`[email protected]`').select('Season.count').to_pandas().values[0][0]
csv_count = table_env.from_path('dev_catalog.nba.totals_stats_temp').select('Season.count').to_pandas().values[0][0]
assert table_count == csv_count
print(table_count)

Check generated tables

Since we have been working solely on the dev branch, where we created two tables and added some data, let's verify that the main branch was not altered by our changes.

In [ ]:
!nessie contents --list

And on the dev branch we expect to see two tables.

In [ ]:
!nessie contents --list --ref dev

We can also verify that the dev and main branches point to different commits.

In [ ]:
!nessie --verbose branch

Dev promotion into main

Once we are done with our changes on the dev branch, we would like to merge those changes into main. We merge dev into main via the CLI merge command. Both branches should be at the same revision after the merge/promotion.

In [ ]:
!nessie merge dev -b main --force

We can verify the branches are at the same hash and that the main branch now contains the expected tables and row counts.

The tables are now on main and ready for consumption by our blog authors and analysts!
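
We can also cross-check the hashes from Python; here is a small sketch that compares the two references directly, reusing get_reference and get_default_branch from the setup cell:

In [ ]:
# After the merge, both branches should point at the same commit
main_ref = nessie_client.get_reference(nessie_client.get_default_branch())
dev_ref = nessie_client.get_reference("dev")
assert main_ref.hash_ == dev_ref.hash_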

In [ ]:
!nessie --verbose branch
In [ ]:
!nessie contents --list
In [ ]:
table_count = table_env.from_path('main_catalog.nba.salaries').select('Season.count').to_pandas().values[0][0]
csv_count = table_env.from_path('dev_catalog.nba.salaries_temp').select('Season.count').to_pandas().values[0][0]
assert table_count == csv_count

table_count = table_env.from_path('main_catalog.nba.totals_stats').select('Season.count').to_pandas().values[0][0]
csv_count = table_env.from_path('dev_catalog.nba.totals_stats_temp').select('Season.count').to_pandas().values[0][0]
assert table_count == csv_count

Perform regular ETL on the new tables

Our analysts are happy with the data, and we now want to ingest data regularly to keep things up to date. Our first ETL job consists of the following:

  1. Update the salaries table to add new data
  2. Rename the totals_stats table to new_total_stats
  3. Create a new table to hold information about the players' appearances in All-Star games

As always, we will do this work on a branch and verify the results. This ETL job can then be set up to run nightly with new stats and salary information.

In [ ]:
create_ref_catalog("etl")
In [ ]:
# add some salaries for Kevin Durant
table_env.execute_sql("""INSERT INTO etl_catalog.nba.salaries
                        VALUES ('2017-18', 'Golden State Warriors', '$25000000', 'Kevin Durant'),
                        ('2018-19', 'Golden State Warriors', '$30000000', 'Kevin Durant'),
                        ('2019-20', 'Brooklyn Nets', '$37199000', 'Kevin Durant'),
                        ('2020-21', 'Brooklyn Nets', '$39058950', 'Kevin Durant')""").wait()
In [ ]:
# Rename the table `totals_stats` to `new_total_stats`
table_env.execute_sql("ALTER TABLE etl_catalog.nba.totals_stats RENAME TO etl_catalog.nba.new_total_stats").wait()
In [ ]:
# Creating `allstar_games_stats` table
(table_env.connect(FileSystem().path('../datasets/nba/allstar_games_stats.csv'))
    .with_format(OldCsv()
                 .field('Season', DataTypes.STRING()).field("Age", DataTypes.STRING()).field("Team", DataTypes.STRING())
                 .field("ORB", DataTypes.STRING()).field("TRB", DataTypes.STRING()).field("AST", DataTypes.STRING())
                 .field("STL", DataTypes.STRING()).field("BLK", DataTypes.STRING()).field("TOV", DataTypes.STRING())
                 .field("PF", DataTypes.STRING()).field("PTS", DataTypes.STRING()).field("Player", DataTypes.STRING()))
    .with_schema(Schema()
                 .field('Season', DataTypes.STRING()).field("Age", DataTypes.STRING()).field("Team", DataTypes.STRING())
                 .field("ORB", DataTypes.STRING()).field("TRB", DataTypes.STRING()).field("AST", DataTypes.STRING())
                 .field("STL", DataTypes.STRING()).field("BLK", DataTypes.STRING()).field("TOV", DataTypes.STRING())
                 .field("PF", DataTypes.STRING()).field("PTS", DataTypes.STRING()).field("Player", DataTypes.STRING()))
    .create_temporary_table('etl_catalog.nba.allstar_games_stats_temp'))

table_env.execute_sql(
        """CREATE TABLE IF NOT EXISTS etl_catalog.nba.allstar_games_stats (Season STRING, Age STRING,
        Team STRING, ORB STRING, TRB STRING, AST STRING, STL STRING, BLK STRING, TOV STRING,
        PF STRING, PTS STRING, Player STRING)""").wait()

tab = table_env.from_path('etl_catalog.nba.allstar_games_stats_temp')
tab.execute_insert('etl_catalog.nba.allstar_games_stats').wait()

# Notice how we view the data on the etl branch via @etl
table_env.from_path('etl_catalog.nba.`[email protected]`').to_pandas()

We can verify that the new table isn't on the main branch but is present on the etl branch.

In [ ]:
# Since we have been working on the `etl` branch, the `allstar_games_stats` table is not on the `main` branch
!nessie contents --list
In [ ]:
# We should see `allstar_games_stats` and the `new_total_stats` on the `etl` branch
!nessie contents --list --ref etl

Now that we are happy with the data, we can again merge it into main.

In [ ]:
!nessie merge etl -b main --force

Now let's verify that the changes exist on the main branch and that the main and etl branches have the same hash.

In [ ]:
!nessie contents --list
In [ ]:
!nessie --verbose branch
In [ ]:
table_count = table_env.from_path('main_catalog.nba.allstar_games_stats').select('Season.count').to_pandas().values[0][0]
csv_count = table_env.from_path('etl_catalog.nba.allstar_games_stats_temp').select('Season.count').to_pandas().values[0][0]
assert table_count == csv_count

Create experiment branch

As a data analyst, we might want to carry out some experiments with the data without affecting main in any way. As in the previous examples, we can get started by creating an experiment branch off of main and carrying out our experiment, which could consist of the following steps:

  • drop the new_total_stats table
  • add data to salaries table
  • compare experiment and main tables
In [ ]:
create_ref_catalog("experiment")
In [ ]:
# Drop the `new_total_stats` table on the `experiment` branch
table_env.execute_sql("DROP TABLE IF EXISTS experiment_catalog.nba.new_total_stats")
In [ ]:
# add some salaries for Dirk Nowitzki
table_env.execute_sql("""INSERT INTO experiment_catalog.nba.salaries VALUES
    ('2015-16', 'Dallas Mavericks', '$8333333', 'Dirk Nowitzki'),
    ('2016-17', 'Dallas Mavericks', '$25000000', 'Dirk Nowitzki'),
    ('2017-18', 'Dallas Mavericks', '$5000000', 'Dirk Nowitzki'),
    ('2018-19', 'Dallas Mavericks', '$5000000', 'Dirk Nowitzki')""").wait()
In [ ]:
# We should see the `salaries` and `allstar_games_stats` tables only (since we just dropped `new_total_stats`)
!nessie contents --list --ref experiment
In [ ]:
# `main` hasn't been changed and still has the `new_total_stats` table
!nessie contents --list

Let's count the rows of the salaries table on the experiment branch. Notice the use of the Nessie catalog and the @experiment suffix to view data on the experiment branch

In [ ]:
table_env.from_path('main_catalog.nba.`[email protected]`').select(lit(1).count).to_pandas()

and compare it to the salaries table on the main branch. Notice that we don't have to specify @branchName, as it defaults to the main branch

In [ ]:
table_env.from_path('main_catalog.nba.salaries').select(lit(1).count).to_pandas()