#!/usr/bin/env python
# coding: utf-8

# >### 🚩 *Create a free WhyLabs account to get more value out of whylogs!*
# >*Did you know you can store, visualize, and monitor whylogs profiles with the [WhyLabs Observability Platform](https://whylabs.ai/whylogs-free-signup?utm_source=whylogs-Github&utm_medium=whylogs-example&utm_campaign=BigQuery_Example)? Sign up for a [free WhyLabs account](https://whylabs.ai/whylogs-free-signup?utm_source=whylogs-Github&utm_medium=whylogs-example&utm_campaign=BigQuery_Example) to leverage the power of whylogs and WhyLabs together!*

# # whylogs with BigQuery

# [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/whylabs/whylogs/blob/mainline/python/examples/integrations/BigQuery_Example.ipynb)

# Hi there! 😀
#
# In this example notebook, we will show you how to use `whylogs` with an existing BigQuery table. We will query the table with `pandas_gbq` and then profile the resulting `pandas.DataFrame`. We will demonstrate some investigation scenarios and show how to store the profiled snapshot for further analysis and merging, helping you keep track of your data and make your ML and data pipelines more responsible.

# ## Querying the data

# To start off, you will need to open up a BigQuery sandbox environment on [this page](https://console.cloud.google.com/bigquery). The sandbox lets us query example datasets and also upload our own data, within its environment limitations. If you want to follow along, simply set the `project_id` variable to your own project and you should be good to go.
#
# Now let's install the libraries we will use for this example and make the needed imports.

# In[ ]:


# Note: you may need to restart the kernel to use updated packages.
get_ipython().run_line_magic('pip', "install 'whylogs[viz]'")
get_ipython().run_line_magic('pip', 'install pandas-gbq')
get_ipython().run_line_magic('pip', 'install tqdm')


# In[2]:


import pandas_gbq

import whylogs as why


# Provide your GCP project id below; the first query will prompt a one-time login in the UI and create an environment config file for you.

# In[3]:


project_id = "my-project-id"  # Update here with your project


# Let's query a public dataset and keep the data small for this demo. With `pandas_gbq` we will end up with a `pandas.DataFrame` object.

# In[ ]:


sql = """
SELECT
    pack,
    bottle_volume_ml,
    state_bottle_cost,
    sale_dollars,
    city
FROM `bigquery-public-data.iowa_liquor_sales.sales`
LIMIT 1000
"""

df = pandas_gbq.read_gbq(sql, project_id=project_id)


# ## Profiling with whylogs
#
# Now let's profile this dataset with whylogs and see what we can do with profiles. The first thing will be to log the DataFrame and turn the resulting profile view into a pandas DataFrame, so we can inspect what metrics we get out of the box.

# In[5]:


results = why.log(df)
profile_view = results.view()
profile_view.to_pandas()


# In[6]:


from whylogs.viz.extensions.reports.profile_summary import ProfileSummaryReport


# In[7]:


ProfileSummaryReport(target_view=profile_view).report()


# This report can be very useful, since it shows the characteristics of the distributions in more detail and also lists some of the core metric names that we can refer to quickly.

# ## Constraints check
#
# You can also create a constraints suite and generate a report with passed and failed constraints, using the same `profile_view` object.
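# Conceptually, a constraint is just a named condition checked against a metric in the profile, and the report pairs each condition with a pass/fail result. The cell below is a rough, plain-Python illustration of that idea only; it is not the whylogs implementation, and the metric values in it are made up.

```python
# Illustrative only: a stand-in "summary" with hypothetical metric values,
# and a list of (name, condition) pairs evaluated into a pass/fail report.
summary = {"bottle_volume_ml": {"min": 50.0}, "sale_dollars": {"mean": 550.0}}

checks = [
    ("bottle_volume_ml greater than 100", summary["bottle_volume_ml"]["min"] > 100),
    ("sale_dollars mean between 400 and 700", 400.0 <= summary["sale_dollars"]["mean"] <= 700.0),
]

report = [(name, "PASS" if ok else "FAIL") for name, ok in checks]
print(report)
```

# The real whylogs constraints below follow the same pattern, but evaluate against the actual metrics stored in `profile_view`.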
# In[8]:


from whylogs.core.constraints import ConstraintsBuilder
from whylogs.core.constraints.factories import greater_than_number, mean_between_range


# In[9]:


builder = ConstraintsBuilder(dataset_profile_view=profile_view)

builder.add_constraint(greater_than_number(column_name="bottle_volume_ml", number=100))
builder.add_constraint(mean_between_range(column_name="sale_dollars", lower=400.0, upper=700.0))


# In[10]:


constraints = builder.build()


# In[11]:


constraints.validate()


# In[12]:


constraints.generate_constraints_report()


# You can also pass the constraints to the `NotebookProfileVisualizer` to generate a visualization of the report.

# In[13]:


from whylogs.viz import NotebookProfileVisualizer

visualization = NotebookProfileVisualizer()
visualization.constraints_report(constraints=constraints)


# ## Writing the profile results
#
# To write the results, all you have to do is execute the following cell. You will notice that it creates a lightweight binary file, which can later be read back and iterated on with `whylogs`.

# In[14]:


results.writer("local").write()


# ### Writing to WhyLabs
#
# Just as you can write locally, you can also write to other locations. Here we will demonstrate how easy it is to write to WhyLabs. To use this writer without explicitly passing your secrets, make sure you have the following environment variables defined:
#
# - WHYLABS_API_KEY
# - WHYLABS_DEFAULT_DATASET_ID
# - WHYLABS_DEFAULT_ORG_ID
#
# To learn more about writing to WhyLabs, refer to [this example](https://github.com/whylabs/whylogs/blob/mainline/python/examples/integrations/writers/Writing_to_WhyLabs.ipynb)

# In[ ]:


results.writer("whylabs").write()


# You can also use our API to write profiles to other integration locations, such as `mlflow`, `s3`, and more.
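# Profiles written at different times or places can be combined later because the statistics they store are mergeable. The cell below is a minimal, plain-Python sketch of that idea with a few made-up sample values; it is not the whylogs API, which exposes merging directly on profile views.

```python
# Illustrative only: count, min and max can be merged pairwise from two
# batch summaries without revisiting the raw rows.
def summarize(values):
    return {"count": len(values), "min": min(values), "max": max(values)}

def merge(a, b):
    return {
        "count": a["count"] + b["count"],
        "min": min(a["min"], b["min"]),
        "max": max(a["max"], b["max"]),
    }

batch1 = [500, 750, 1000]
batch2 = [250, 1750]

# Merging the two summaries gives the same result as summarizing
# batch1 + batch2 in a single pass.
merged = merge(summarize(batch1), summarize(batch2))
print(merged)
```

# This is what makes it safe to profile data in separate batches or pipelines and still reason about the combined dataset afterwards.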
# If you want to learn more about whylogs and find out about other cool features and integrations, check out our [examples page](https://github.com/whylabs/whylogs/tree/mainline/python/examples).
#
# Happy coding! 🚀 😄