This demo showcases how to use the Nessie Python API along with Spark 3 and Iceberg.
To get started, we will first do a few setup steps that give us everything we need to work with Nessie. If you're interested in the detailed Spark setup steps, check out the docs.
The Binder server has downloaded Spark and some data for us, and has started a Nessie server in the background. All we have to do is start Spark.
The cell below starts a local Spark session with the parameters needed to configure Nessie. Each config option is preceded by a comment explaining its purpose.
import os

import findspark

# findspark.init() must run before pyspark is imported so that the local
# Spark installation is on sys.path
findspark.init()

from pyspark import SparkConf
from pyspark.sql import SparkSession

import pynessie

# the pynessie client version in use
pynessie_version = pynessie.__version__
conf = SparkConf()
# we need the Iceberg runtime and the Nessie SQL extensions
conf.set(
    "spark.jars.packages",
    "org.apache.iceberg:iceberg-spark-runtime-3.2_2.12:1.4.2,org.projectnessie.nessie-integrations:nessie-spark-extensions-3.2_2.12:0.74.0",
)
# ensure python <-> java data transfers use Apache Arrow
conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")
# create catalog dev_catalog as an iceberg catalog
conf.set("spark.sql.catalog.dev_catalog", "org.apache.iceberg.spark.SparkCatalog")
# tell the dev_catalog that it's a Nessie catalog
conf.set("spark.sql.catalog.dev_catalog.catalog-impl", "org.apache.iceberg.nessie.NessieCatalog")
# set the location for Nessie catalog to store data. Spark writes to this directory
conf.set("spark.sql.catalog.dev_catalog.warehouse", "file://" + os.getcwd() + "/spark_warehouse/iceberg")
# set the location of the Nessie server. In this demo it's running locally. There are many ways to run it (see https://projectnessie.org/try/)
conf.set("spark.sql.catalog.dev_catalog.uri", "http://localhost:19120/api/v1")
# default branch for Nessie catalog to work on
conf.set("spark.sql.catalog.dev_catalog.ref", "main")
# use no authentication; options are NONE, BASIC and AWS (AWS implies running Nessie on AWS Lambda)
conf.set("spark.sql.catalog.dev_catalog.authentication.type", "NONE")
# enable the extensions for both Nessie and Iceberg
conf.set(
"spark.sql.extensions",
"org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions,org.projectnessie.spark.extensions.NessieSparkSessionExtensions",
)
# finally, start the Spark session
spark = SparkSession.builder.config(conf=conf).getOrCreate()
print("Spark Running")
In this demo we are a data engineer working at a fictional sports analytics blog. For the authors to write articles, they need access to the relevant data: they must be able to retrieve it quickly and build charts with it.
We have been asked to collect and expose some information about basketball players. We have located some data sources and are now ready to start ingesting data into our data lakehouse. We will perform the ingestion steps on a Nessie branch to test and validate the data before exposing it to the analysts.
Once all dependencies are configured, we can get started with ingesting our basketball data into Nessie with the following steps:

- Create the nba namespace
- Create a new branch named dev
- List all branches

It is worth mentioning that we don't have to explicitly create a main branch, since it's the default branch.
# Create the 'nba' namespace in Nessie
spark.sql("CREATE NAMESPACE dev_catalog.nba")
# Create the 'dev' branch from 'main' branch
spark.sql("CREATE BRANCH dev IN dev_catalog FROM main").toPandas()
We have created the branch dev and we can see the branch with the Nessie hash it's currently pointing to. Below we list all branches. Note that the auto-created main branch already exists, and both branches initially point at the same empty hash.
spark.sql("LIST REFERENCES IN dev_catalog").toPandas()
Once we have created the dev branch and verified that it exists, we can create some tables and add some data.

We create two tables under the dev branch:

- salaries
- totals_stats

These tables list the players' salaries per season and their total statistics per season, respectively. To create the data we:

1. switch our branch context to dev with USE REFERENCE
2. create each table with CREATE TABLE ... USING iceberg
3. load each csv file into a temporary view and INSERT it into the table
spark.sql("USE REFERENCE dev IN dev_catalog")
# Creating `salaries` table
spark.sql(
"""CREATE TABLE IF NOT EXISTS dev_catalog.nba.salaries
(Season STRING, Team STRING, Salary STRING, Player STRING) USING iceberg"""
)
spark.sql(
"""CREATE OR REPLACE TEMPORARY VIEW salaries_table USING csv
OPTIONS (path "../datasets/nba/salaries.csv", header true)"""
)
spark.sql("INSERT INTO dev_catalog.nba.salaries SELECT * FROM salaries_table")
# Creating `totals_stats` table
spark.sql(
"""CREATE TABLE IF NOT EXISTS dev_catalog.nba.totals_stats (
Season STRING, Age STRING, Team STRING, ORB STRING, DRB STRING, TRB STRING, AST STRING, STL STRING,
BLK STRING, TOV STRING, PTS STRING, Player STRING, RSorPO STRING)
USING iceberg"""
)
spark.sql(
"""CREATE OR REPLACE TEMPORARY VIEW stats_table USING csv
OPTIONS (path "../datasets/nba/totals_stats.csv", header true)"""
)
spark.sql("INSERT INTO dev_catalog.nba.totals_stats SELECT * FROM stats_table").toPandas()
Now we count the rows in our tables to ensure they match the row counts of the csv files. Note that we use the table@branch notation, which overrides the context set by the USE REFERENCE command.
table_count = spark.sql("select count(*) from dev_catalog.nba.`salaries@dev`").toPandas().values[0][0]
csv_count = spark.sql("select count(*) from salaries_table").toPandas().values[0][0]
assert table_count == csv_count
print(table_count)
table_count = spark.sql("select count(*) from dev_catalog.nba.`totals_stats@dev`").toPandas().values[0][0]
csv_count = spark.sql("select count(*) from stats_table").toPandas().values[0][0]
assert table_count == csv_count
print(table_count)
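Since we repeat this count check for every table, we could factor it into a small helper. The function below is a hypothetical convenience, not part of the original demo:
def assert_counts_match(table_ref, csv_view):
    """Assert an Iceberg table (optionally `table@branch`) and a csv-backed view have the same row count."""
    table_count = spark.sql(f"SELECT COUNT(*) FROM {table_ref}").collect()[0][0]
    csv_count = spark.sql(f"SELECT COUNT(*) FROM {csv_view}").collect()[0][0]
    assert table_count == csv_count, f"{table_ref}: {table_count} != {csv_count}"
    return table_count

print(assert_counts_match("dev_catalog.nba.`salaries@dev`", "salaries_table"))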
Since we have been working solely on the dev branch, where we created 2 tables and added some data, let's verify that the main branch was not altered by our changes.
spark.sql("USE REFERENCE main IN dev_catalog").toPandas()
spark.sql("SHOW TABLES IN dev_catalog").toPandas()
And on the dev branch we expect to see two tables:
spark.sql("USE REFERENCE dev IN dev_catalog").toPandas()
spark.sql("SHOW TABLES IN dev_catalog").toPandas()
We can also verify that the dev and main branches point to different commits:
spark.sql("LIST REFERENCES IN dev_catalog").toPandas()
Once we are done with our changes on the dev branch, we would like to merge those changes into main. We merge dev into main via the MERGE BRANCH Spark SQL command. Both branches should be at the same revision after the merge/promotion.
spark.sql("MERGE BRANCH dev INTO main IN dev_catalog").toPandas()
We can verify that the main branch now contains the expected tables and row counts. The tables are now on main and ready for consumption by our blog authors and analysts!
spark.sql("LIST REFERENCES IN dev_catalog").toPandas()
spark.sql("USE REFERENCE main IN dev_catalog").toPandas()
spark.sql("SHOW TABLES IN dev_catalog").toPandas()
table_count = spark.sql("select count(*) from dev_catalog.nba.salaries").toPandas().values[0][0]
csv_count = spark.sql("select count(*) from salaries_table").toPandas().values[0][0]
assert table_count == csv_count
print(table_count)
table_count = spark.sql("select count(*) from dev_catalog.nba.totals_stats").toPandas().values[0][0]
csv_count = spark.sql("select count(*) from stats_table").toPandas().values[0][0]
assert table_count == csv_count
print(table_count)
Our analysts are happy with the data, and we now want to ingest new data regularly to keep things up to date. Our first ETL job consists of the following:

1. Update the salaries table with new data
2. Drop the Age column, which we have decided isn't required in the totals_stats table
3. Create a new allstar_games_stats table for the players' All-Star game statistics

As always, we will do this work on a branch and verify the results. This ETL job can then be set up to run nightly with new stats and salary information; a sketch of that pattern follows.
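As a sketch of what a nightly run could look like (hypothetical scheduling glue, not part of the demo), the branch-per-run pattern wraps the steps below in a uniquely named branch:
from datetime import date

def run_nightly_etl(spark, etl_steps):
    """Run `etl_steps(spark)` on a fresh branch, then merge it into main.

    `etl_steps` is a hypothetical callable containing the INSERT/ALTER/CREATE
    statements shown in the cells below.
    """
    branch = f"etl_{date.today():%Y_%m_%d}"
    spark.sql(f"CREATE BRANCH {branch} IN dev_catalog FROM main")
    spark.sql(f"USE REFERENCE {branch} IN dev_catalog")
    etl_steps(spark)  # run the job's SQL on the isolated branch
    spark.sql(f"MERGE BRANCH {branch} INTO main IN dev_catalog")
    spark.sql(f"DROP BRANCH {branch} IN dev_catalog")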
spark.sql("CREATE BRANCH etl IN dev_catalog FROM main").toPandas()
# add some salaries for Kevin Durant
spark.sql("USE REFERENCE etl IN dev_catalog")
spark.sql(
"""INSERT INTO dev_catalog.nba.salaries VALUES
("2017-18", "Golden State Warriors", "$25000000", "Kevin Durant"),
("2018-19", "Golden State Warriors", "$30000000", "Kevin Durant"),
("2019-20", "Brooklyn Nets", "$37199000", "Kevin Durant"),
("2020-21", "Brooklyn Nets", "$39058950", "Kevin Durant")
"""
).toPandas()
# Dropping a column in the `totals_stats` table
spark.sql("ALTER TABLE dev_catalog.nba.totals_stats DROP COLUMN Age").toPandas()
# Creating `allstar_games_stats` table and viewing the contents
spark.sql(
"""CREATE TABLE IF NOT EXISTS dev_catalog.nba.allstar_games_stats (
Season STRING, Age STRING, Team STRING, ORB STRING, TRB STRING, AST STRING, STL STRING, BLK STRING,
TOV STRING, PF STRING, PTS STRING, Player STRING)
USING iceberg"""
)
spark.sql(
"""CREATE OR REPLACE TEMPORARY VIEW allstar_table USING csv
OPTIONS (path "../datasets/nba/allstar_games_stats.csv", header true)"""
)
spark.sql("INSERT INTO dev_catalog.nba.allstar_games_stats SELECT * FROM allstar_table").toPandas()
# notice how we view the data on the etl branch via @etl
spark.sql("select count(*) from dev_catalog.nba.`allstar_games_stats@etl`").toPandas()
We can verify that the new table isn't on the main branch but is present on the etl branch:
spark.sql("USE REFERENCE main IN dev_catalog").toPandas()
spark.sql("SHOW TABLES IN dev_catalog").toPandas()
spark.sql("USE REFERENCE etl IN dev_catalog").toPandas()
spark.sql("SHOW TABLES IN dev_catalog").toPandas()
Now that we are happy with the data, we can again merge it into main:
spark.sql("MERGE BRANCH etl INTO main IN dev_catalog").toPandas()
Now let's verify that the changes exist on the main branch:
spark.sql("USE REFERENCE main IN dev_catalog").toPandas()
spark.sql("SHOW TABLES IN dev_catalog").toPandas()
spark.sql("LIST REFERENCES IN dev_catalog").toPandas()
table_count = spark.sql("select count(*) from dev_catalog.nba.allstar_games_stats").toPandas().values[0][0]
csv_count = spark.sql("select count(*) from allstar_table").toPandas().values[0][0]
assert table_count == csv_count
print(table_count)
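With the hypothetical helper from earlier, this check shrinks to a one-liner:
print(assert_counts_match("dev_catalog.nba.allstar_games_stats", "allstar_table"))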
Create experiment branch

As a data analyst, we might want to carry out some experiments with some data, without affecting main in any way. As in the previous examples, we can just get started by creating an experiment branch off of main and carry out our experiment, which could consist of the following steps:

- Drop the totals_stats table
- Add some new data to the salaries table
- Compare the contents of the salaries table on the experiment and main branches

spark.sql("CREATE BRANCH experiment IN dev_catalog FROM main").toPandas()
spark.sql("USE REFERENCE experiment IN dev_catalog").toPandas()
# Drop the `totals_stats` table on the `experiment` branch
spark.sql("DROP TABLE dev_catalog.nba.totals_stats").toPandas()
# add some salaries for Dirk Nowitzki
spark.sql(
"""INSERT INTO dev_catalog.nba.salaries VALUES
("2015-16", "Dallas Mavericks", "$8333333", "Dirk Nowitzki"),
("2016-17", "Dallas Mavericks", "$25000000", "Dirk Nowitzki"),
("2017-28", "Dallas Mavericks", "$5000000", "Dirk Nowitzki"),
("2018-19", "Dallas Mavericks", "$5000000", "Dirk Nowitzki")
"""
).toPandas()
spark.sql("SHOW TABLES IN dev_catalog").toPandas()
spark.sql("USE REFERENCE main IN dev_catalog").toPandas()
spark.sql("SHOW TABLES IN dev_catalog").toPandas()
Let's take a look at the contents of the salaries table on the experiment branch. Notice the use of the dev_catalog catalog and the @experiment suffix to view data on the experiment branch:
spark.sql("select count(*) from dev_catalog.nba.`salaries@experiment`").toPandas()
Now compare that to the contents of the salaries table on the main branch. Notice that we didn't have to specify @branchName, since the context defaults to main (set by the previous USE REFERENCE command):
spark.sql("select count(*) from dev_catalog.nba.salaries").toPandas()
And finally, let's clean up after ourselves:
spark.sql("DROP BRANCH dev IN dev_catalog")
spark.sql("DROP BRANCH etl IN dev_catalog")
spark.sql("DROP BRANCH experiment IN dev_catalog")