🚩 Create a free WhyLabs account to get more value out of whylogs!
Did you know you can store, visualize, and monitor whylogs profiles with the WhyLabs Observability Platform? Sign up for a free WhyLabs account to leverage the power of whylogs and WhyLabs together!
Hi there! 😀
In this example notebook, we will show you how to use whylogs with an existing BigQuery table. We will query the table with pandas_gbq
and then profile the resulting pandas.DataFrame
. Along the way, we will walk through a few investigation scenarios and show how to store the profile snapshot for later analysis and merging, so you can keep track of your data and make your ML and data pipelines more responsible.
To start off with this example notebook, you will need to open up a BigQuery sandbox environment on this page. The sandbox lets us query example datasets and also upload our own data, within its environment limits. If you want to follow along, simply replace the project_id
variable with your own project and you should be good to go.
Let's start by installing the libraries we will use for this example and making the needed imports.
# Note: you may need to restart the kernel to use updated packages.
%pip install 'whylogs[viz]'
%pip install pandas-gbq
%pip install tqdm
import pandas_gbq
import whylogs as why
Set your GCP project id below. The first query will prompt a one-time login in the UI and create an environment config file for you.
project_id = "my-project-id" # Update here with your project
Let's query a public dataset and keep the data small for demo purposes. With pandas_gbq
we will end up with a pandas.DataFrame object.
sql = """
SELECT pack, bottle_volume_ml, state_bottle_cost, sale_dollars, city
FROM `bigquery-public-data.iowa_liquor_sales.sales`
LIMIT 1000
"""
df = pandas_gbq.read_gbq(sql, project_id=project_id)
Now let's profile this dataset with whylogs and see what we can do with profiles. The first thing will be to turn the profile into a pandas DataFrame and inspect what metrics we get out of the box.
results = why.log(df)
profile_view = results.view()
profile_view.to_pandas()
| column | cardinality/est | cardinality/lower_1 | cardinality/upper_1 | counts/n | counts/null | distribution/max | distribution/mean | distribution/median | distribution/min | distribution/n | ... | distribution/stddev | frequent_items/frequent_strings | ints/max | ints/min | type | types/boolean | types/fractional | types/integral | types/object | types/string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| bottle_volume_ml | 21.000001 | 21.000000 | 21.001050 | 1000 | 0 | 4800.00 | 893.20700 | 1000.00 | 20.0 | 1000 | ... | 449.242697 | [FrequentItem(value='1000.000000', est=461, up... | 4800.0 | 20.0 | SummaryType.COLUMN | 0 | 0 | 1000 | 0 | 0 |
| city | 238.000140 | 238.000000 | 238.012023 | 1000 | 5 | NaN | 0.00000 | NaN | NaN | 0 | ... | 0.000000 | [FrequentItem(value='Des Moines', est=67, uppe... | NaN | NaN | SummaryType.COLUMN | 0 | 0 | 0 | 0 | 995 |
| pack | 11.000000 | 11.000000 | 11.000549 | 1000 | 0 | 48.00 | 11.02300 | 12.00 | 1.0 | 1000 | ... | 4.664786 | [FrequentItem(value='12.000000', est=587, uppe... | 48.0 | 1.0 | SummaryType.COLUMN | 0 | 0 | 1000 | 0 | 0 |
| sale_dollars | 531.304754 | 524.519438 | 538.259242 | 1000 | 0 | 37514.88 | 599.49446 | 119.96 | 0.0 | 1000 | ... | 1677.743947 | NaN | NaN | NaN | SummaryType.COLUMN | 0 | 1000 | 0 | 0 | 0 |
| state_bottle_cost | 259.000166 | 259.000000 | 259.013098 | 1000 | 0 | 137.13 | 10.52482 | 7.62 | 0.0 | 1000 | ... | 8.750237 | NaN | NaN | NaN | SummaryType.COLUMN | 0 | 1000 | 0 | 0 | 0 |

5 rows × 28 columns
from whylogs.viz.extensions.reports.profile_summary import ProfileSummaryReport
ProfileSummaryReport(target_view=profile_view).report()
This report can be very useful, since it shows the characteristics of the distributions in a bit more detail and also lists some of the core metric names for quick reference.
You can also create a constraints suite and generate a report of passed and failed constraints, using the same profile_view
object.
from whylogs.core.constraints import ConstraintsBuilder
from whylogs.core.constraints.factories import greater_than_number, mean_between_range
builder = ConstraintsBuilder(dataset_profile_view=profile_view)
builder.add_constraint(
greater_than_number(column_name="bottle_volume_ml", number=100)
)
builder.add_constraint(
mean_between_range(column_name="sale_dollars", lower=400.0, upper=700.0)
)
<whylogs.core.constraints.metric_constraints.ConstraintsBuilder at 0x157f4f220>
constraints = builder.build()
constraints.validate()
False
constraints.generate_constraints_report()
[('bottle_volume_ml greater than number 100', 0, 1), ('sale_dollars mean between 400.0 and 700.0 (inclusive)', 1, 0)]
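Each entry in the report is a tuple of (constraint name, passed count, failed count), so a quick pass over the list is enough to pick out the failing constraints. A small sketch using the report values above:

```python
# Each report entry is (constraint_name, passed, failed); filter the failures.
report = [
    ("bottle_volume_ml greater than number 100", 0, 1),
    ("sale_dollars mean between 400.0 and 700.0 (inclusive)", 1, 0),
]
failures = [name for name, passed, failed in report if failed > 0]
print(failures)  # → ['bottle_volume_ml greater than number 100']
```

This mirrors what `constraints.validate()` returned: the suite fails as a whole because the bottle_volume_ml constraint has at least one failure.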
You can also pass the constraints to the NotebookProfileVisualizer
and generate a visualization of the report.
from whylogs.viz import NotebookProfileVisualizer
visualization = NotebookProfileVisualizer()
visualization.constraints_report(constraints=constraints)
To write the results locally, all you have to do is execute the following cell. You will notice that it creates a lightweight binary file that can later be read and iterated on with whylogs
.
results.writer("local").write()
The same way you can write locally, you are also able to write to other locations. Here we will demonstrate how easy it is to write to WhyLabs. To use this writer without passing your secrets explicitly, make sure you have the following environment variables defined:
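The exact variable names below are an assumption based on recent whylogs v1 releases; check the WhyLabs documentation for your version:

```shell
# Assumed names for the WhyLabs writer (whylogs v1) — verify against the docs
export WHYLABS_DEFAULT_ORG_ID="org-..."        # your WhyLabs organization id
export WHYLABS_DEFAULT_DATASET_ID="model-..."  # the model/dataset to write to
export WHYLABS_API_KEY="..."                   # an API key created in WhyLabs
```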
To learn more about writing to WhyLabs, refer to this example
results.writer("whylabs").write()
Users can also benefit from our API to write created profiles to different integration locations, such as mlflow
, s3
, and more. If you want to learn more about whylogs and find out other cool features and integrations, check out our examples page.
Happy coding! 🚀 😄