🚩 Create a free WhyLabs account to get more value out of whylogs!

Did you know you can store, visualize, and monitor whylogs profiles with the WhyLabs Observability Platform? Sign up for a free WhyLabs account to leverage the power of whylogs and WhyLabs together!

Writing profiles - Local/S3

Hello there! If you've come to this tutorial, perhaps you are wondering what you can do after generating your first (or maybe not your first) profile. Well, a good practice is to store these profiles as lightweight files, which is one of the cool features whylogs brings to the table.

Here we will walk through different flavors of writing, so you can decide which one meets your current needs. Shall we?

Installing whylogs

Let's first install whylogs, if you don't have it installed already:

In [1]:
import shutil
%pip install whylogs
Requirement already satisfied: whylogs in /Users/murilomendonca/Documents/repos/whylogs/python/.venv/lib/python3.9/site-packages (1.1.7)
Requirement already satisfied: protobuf>=3.19.4 in /Users/murilomendonca/Documents/repos/whylogs/python/.venv/lib/python3.9/site-packages (from whylogs) (4.21.6)
Requirement already satisfied: typing-extensions>=3.10 in /Users/murilomendonca/Documents/repos/whylogs/python/.venv/lib/python3.9/site-packages (from whylogs) (4.3.0)
Requirement already satisfied: whylogs-sketching>=3.4.1.dev3 in /Users/murilomendonca/Documents/repos/whylogs/python/.venv/lib/python3.9/site-packages (from whylogs) (3.4.1.dev3)
Note: you may need to restart the kernel to use updated packages.

Creating simple profiles

In order for us to get started, let's take a very simple example dataset and profile it.

In [2]:
import pandas as pd

data = {
    "col_1": [1.0, 2.2, 0.1, 1.2],
    "col_2": ["some", "text", "column", "example"],
    "col_3": [4, 2, 3, 5]
}

df = pd.DataFrame(data)
In [3]:
df.head()
Out[3]:
col_1 col_2 col_3
0 1.0 some 4
1 2.2 text 2
2 0.1 column 3
3 1.2 example 5
In [4]:
import whylogs as why

profile_results = why.log(df)
In [5]:
type(profile_results)
Out[5]:
whylogs.api.logger.result_set.ProfileResultSet

And now we can check the collected metrics by transforming it into a DatasetProfileView:

In [6]:
profile_view = profile_results.view()
profile_view.to_pandas()
Out[6]:
cardinality/est cardinality/lower_1 cardinality/upper_1 counts/n counts/null distribution/max distribution/mean distribution/median distribution/min distribution/n ... distribution/stddev type types/boolean types/fractional types/integral types/object types/string frequent_items/frequent_strings ints/max ints/min
column
col_1 4.0 4.0 4.0002 4 0 2.2 1.125 1.2 0.1 4 ... 0.861684 SummaryType.COLUMN 0 4 0 0 0 NaN NaN NaN
col_2 4.0 4.0 4.0002 4 0 NaN 0.000 NaN NaN 0 ... 0.000000 SummaryType.COLUMN 0 0 0 0 4 [FrequentItem(value='some', est=1, upper=1, lo... NaN NaN
col_3 4.0 4.0 4.0002 4 0 5.0 3.500 4.0 2.0 4 ... 1.290994 SummaryType.COLUMN 0 0 4 0 0 [FrequentItem(value='2', est=1, upper=1, lower... 5.0 2.0

3 rows × 28 columns
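Notice that the metric column names above are namespaced as `<metric>/<submetric>` (e.g. `distribution/mean`). A minimal sketch of grouping such namespaced columns back together (our own helper for illustration, not part of whylogs):

```python
from collections import defaultdict

def group_metrics(columns):
    """Map each metric namespace to its submetric names for '/'-separated columns."""
    grouped = defaultdict(list)
    for col in columns:
        namespace, _, metric = col.partition("/")
        if metric:  # skip columns without a namespace, e.g. "type"
            grouped[namespace].append(metric)
    return dict(grouped)

# Sample column names mirroring the to_pandas() output above
cols = ["cardinality/est", "counts/n", "counts/null", "distribution/mean", "type"]
print(group_metrics(cols))
# {'cardinality': ['est'], 'counts': ['n', 'null'], 'distribution': ['mean']}
```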

Cool! So now that we have a proper profile created, let's see how we can persist it as a file.

Local Writer

The first and most straightforward way of persisting a whylogs profile as a file is to write it directly to your disk. Our API makes that possible with the following commands. You can either write it from the ProfileResultSet:

In [7]:
profile_results.writer("local").write(dest="my_profile.bin")

If you want, you can skip dest and instead pass an optional base_dir, which will write your profile, named with its timestamp, into the base directory you choose. Let's see how:

In [8]:
import os
os.makedirs("my_directory", exist_ok=True)
In [9]:
profile_results.writer("local").option(base_dir="my_directory").write()

Or you can write from the DatasetProfileView directly, with a path:

In [10]:
profile_view.write(path="my_profile.bin")
Out[10]:
(True, 'my_profile.bin')

And, to make it even more convenient, you can use the same entry point used for logging to write your profile as well:

In [11]:
why.write(profile=profile_view, base_dir="my_directory/my_profile.bin")
Out[11]:
(True, 'my_directory/my_profile.bin')
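As the outputs show, these write calls return a tuple whose first element flags success. A minimal sketch of checking it (sample values taken from the output above):

```python
# Sample return value mirroring the why.write output above
result = (True, "my_directory/my_profile.bin")

ok, path = result
if ok:
    print(f"profile written to {path}")
else:
    print(f"write failed: {path}")
# profile written to my_directory/my_profile.bin
```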
In [12]:
import os 
os.listdir("./my_directory")
Out[12]:
['profile_2022-10-17 17:52:00.282271.bin',
 'profile_2022-10-17 17:52:49.522728.bin',
 'my_profile.bin']
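As the listing shows, profiles written without an explicit destination get a name based on their timestamp. A rough sketch of that naming pattern (our own illustration of the filenames seen above, not whylogs internals):

```python
from datetime import datetime

def timestamped_profile_name(base_dir, now):
    """Build a 'profile_<timestamp>.bin' path like the files listed above."""
    return f"{base_dir}/profile_{now.strftime('%Y-%m-%d %H:%M:%S.%f')}.bin"

name = timestamped_profile_name("my_directory", datetime(2022, 10, 17, 17, 52, 0, 282271))
print(name)
# my_directory/profile_2022-10-17 17:52:00.282271.bin
```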

And that's it! Now you can decide where and how to store these profiles for further inspection, making sure your data and ML pipelines keep generating useful, high-quality data for your end users. Let's delete those files so we don't clutter our environment :)

In [ ]:
os.remove("./my_profile.bin")
shutil.rmtree("./my_directory")

S3 Writer

From an enterprise perspective, it can be useful to store your profiles in s3 buckets instead of manually deciding what to do with them on your local machine. And that is why we have created an integration to do just that!

To keep this example simple, we won't use actual cloud-based storage; instead, we will mock one with the moto library. This way, you can test this anywhere without worrying too much about credentials :) In order to keep whylogs as light as possible, and to allow users to extend it as they need, we have made s3 an extra dependency.

So let's get started by creating this mocked s3 bucket with the moto package.

P.S.: if you haven't installed the whylogs[s3] extra dependency already, run the cell below.

In [14]:
%pip install -q 'whylogs[s3]' moto
Note: you may need to restart the kernel to use updated packages.
In [15]:
import boto3
from moto import mock_s3
from moto.s3.responses import DEFAULT_REGION_NAME

BUCKET_NAME = "my_great_bucket"


mocks3 = mock_s3()
mocks3.start()
resource = boto3.resource("s3", region_name=DEFAULT_REGION_NAME)
resource.create_bucket(Bucket=BUCKET_NAME)
Out[15]:
s3.Bucket(name='my_great_bucket')

Now that we have created our s3 bucket, we can communicate with the mocked storage object. A good practice here is to declare your access credentials as environment variables. In a production setting, these should not be hard-coded, but this will give you a sense of how to safely use our s3 writer.

In [16]:
import os 

os.environ["AWS_ACCESS_KEY_ID"] = "my_key_id"
os.environ["AWS_SECRET_ACCESS_KEY"] = "my_access_key"
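Since the writer picks these credentials up from the environment, a small guard can fail fast when they are missing. This is our own sketch, not part of whylogs:

```python
import os

# The two variables the s3 writer relies on (set in the cell above)
REQUIRED_VARS = ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY")

def missing_aws_vars(environ=None):
    """Return the names of required AWS variables that are not set."""
    environ = os.environ if environ is None else environ
    return [name for name in REQUIRED_VARS if not environ.get(name)]

print(missing_aws_vars({"AWS_ACCESS_KEY_ID": "my_key_id"}))
# ['AWS_SECRET_ACCESS_KEY']
```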
In [17]:
profile_results.writer("s3").option(bucket_name=BUCKET_NAME).write()

And you've done it! Seems too good to be true. How can we know the profiles are there? 🤔 Well, let's investigate.

In [18]:
s3_client = boto3.client("s3")
objects = s3_client.list_objects(Bucket=BUCKET_NAME)
In [19]:
objects.get("Name", [])
Out[19]:
'my_great_bucket'
In [20]:
objects.get("Contents", [])
Out[20]:
[{'Key': 'profile_2022-10-17 17:52:49.522728.bin',
  'LastModified': datetime.datetime(2022, 10, 17, 17, 52, 50, tzinfo=tzutc()),
  'ETag': '"2e463c85796b25f0f27b56965fa4211d"',
  'Size': 1143,
  'StorageClass': 'STANDARD',
  'Owner': {'DisplayName': 'webfile',
   'ID': '75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a'}}]
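The list_objects response is a plain dict, so pulling out what you need is ordinary Python. A quick sketch over sample data shaped like the output above:

```python
# Sample response shaped like the boto3 list_objects output above
objects = {
    "Name": "my_great_bucket",
    "Contents": [
        {"Key": "profile_2022-10-17 17:52:49.522728.bin", "Size": 1143},
    ],
}

# Collect each object's key and size for a quick inventory
inventory = [(obj["Key"], obj["Size"]) for obj in objects.get("Contents", [])]
print(inventory)
# [('profile_2022-10-17 17:52:49.522728.bin', 1143)]
```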

And there we have it, our mocked s3 bucket has our profile written to it! If we want to put our profile into a special "directory" - often referred to as a prefix - we can do the following instead:

In [21]:
profile_results.writer("s3").option(
    bucket_name=BUCKET_NAME, 
    object_name=f"my_prefix/somewhere/profile_{profile_view.creation_timestamp}.bin"
    ).write()
In [22]:
objects = s3_client.list_objects(Bucket=BUCKET_NAME)
objects.get("Contents", [])
Out[22]:
[{'Key': 'my_prefix/somewhere/profile_2022-10-17 17:52:49.522728.bin',
  'LastModified': datetime.datetime(2022, 10, 17, 17, 52, 51, tzinfo=tzutc()),
  'ETag': '"2e463c85796b25f0f27b56965fa4211d"',
  'Size': 1143,
  'StorageClass': 'STANDARD',
  'Owner': {'DisplayName': 'webfile',
   'ID': '75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a'}},
 {'Key': 'profile_2022-10-17 17:52:49.522728.bin',
  'LastModified': datetime.datetime(2022, 10, 17, 17, 52, 50, tzinfo=tzutc()),
  'ETag': '"2e463c85796b25f0f27b56965fa4211d"',
  'Size': 1143,
  'StorageClass': 'STANDARD',
  'Owner': {'DisplayName': 'webfile',
   'ID': '75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a'}}]
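As the listing shows, s3 prefixes are just slash-separated parts of the key string. A tiny helper for building prefixed object names like the one above (our own hypothetical helper, not a whylogs API):

```python
import posixpath

def prefixed_key(prefix, filename):
    """Join an s3 prefix and a file name with forward slashes."""
    return posixpath.join(prefix, filename)

key = prefixed_key("my_prefix/somewhere", "profile_2022-10-17 17:52:49.522728.bin")
print(key)
# my_prefix/somewhere/profile_2022-10-17 17:52:49.522728.bin
```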

Wrapping up the s3 objects

Now let's close our connection to our mocked s3 object.

In [23]:
mocks3.stop()

And that's it, you have just written a profile to an s3 bucket!

GCS Writer

We will repeat the same exercise as before to demonstrate how to upload profiles to a GCS bucket and then verify that they landed there. For that, we will use the gcp-storage-emulator library to create a local endpoint.

In [24]:
%pip install -q gcp-storage-emulator 'whylogs[gcs]'
Note: you may need to restart the kernel to use updated packages.
In [25]:
import os

from google.cloud import storage  # type: ignore
from gcp_storage_emulator.server import create_server
In [26]:
HOST = "localhost"
PORT = 9023
GCS_BUCKET = "test-bucket"

server = create_server(HOST, PORT, in_memory=True, default_bucket=GCS_BUCKET)
server.start()

os.environ["STORAGE_EMULATOR_HOST"] = f"http://{HOST}:{PORT}"
client = storage.Client()
bucket = client.bucket(GCS_BUCKET)

for blob in bucket.list_blobs():
    content = blob.download_as_bytes()
    print(f"Blob [{blob.name}]: {content}")

And the listing is empty, because we have just created our test bucket :)

In [27]:
from whylogs.api.writer.gcs import GCSWriter

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path/to/credentials.json"

writer = GCSWriter()
writer.option(bucket_name=GCS_BUCKET, object_name="my_object.bin").write(file=profile_view)
Out[27]:
(True,
 'Uploaded /var/folders/6x/tttdyqq91mx_wk9xrykz7x940000gn/T/tmp5clwcehb to test-bucket/my_object.bin')
In [28]:
for blob in bucket.list_blobs():
    content = blob.download_as_bytes()
    print(f"Blob [{blob.name}]")
Blob [my_object.bin]
In [29]:
server.stop()

If you want to explore other integrations that we've made, make sure to check out our other examples page.

Hopefully this tutorial helps you get started saving your profiles, keeping your data and ML pipelines robust and responsible :)