🚩 Create a free WhyLabs account to get more value out of whylogs!
Did you know you can store, visualize, and monitor whylogs profiles with the WhyLabs Observability Platform? Sign up for a free WhyLabs account to leverage the power of whylogs and WhyLabs together!
Hello there! If you've come to this tutorial, perhaps you are wondering what you can do after you've generated your first (or maybe not the first) profile. Well, a good practice is to store these profiles as lightweight files, which is one of the cool features `whylogs` brings to the table.
Here we will go through the different flavors of writing, so you can check which one meets your current needs. Shall we?
Let's first install whylogs, if you don't have it installed already:
# Note: you may need to restart the kernel to use updated packages.
import shutil
%pip install whylogs
Requirement already satisfied: whylogs in /Users/murilomendonca/Documents/repos/whylogs/python/.venv/lib/python3.9/site-packages (1.1.7)
Requirement already satisfied: protobuf>=3.19.4 in /Users/murilomendonca/Documents/repos/whylogs/python/.venv/lib/python3.9/site-packages (from whylogs) (4.21.6)
Requirement already satisfied: typing-extensions>=3.10 in /Users/murilomendonca/Documents/repos/whylogs/python/.venv/lib/python3.9/site-packages (from whylogs) (4.3.0)
Requirement already satisfied: whylogs-sketching>=3.4.1.dev3 in /Users/murilomendonca/Documents/repos/whylogs/python/.venv/lib/python3.9/site-packages (from whylogs) (3.4.1.dev3)
Note: you may need to restart the kernel to use updated packages.
To get started, let's take a very simple example dataset and profile it.
import pandas as pd
data = {
"col_1": [1.0, 2.2, 0.1, 1.2],
"col_2": ["some", "text", "column", "example"],
"col_3": [4, 2, 3, 5]
}
df = pd.DataFrame(data)
df.head()
|   | col_1 | col_2 | col_3 |
|---|---|---|---|
| 0 | 1.0 | some | 4 |
| 1 | 2.2 | text | 2 |
| 2 | 0.1 | column | 3 |
| 3 | 1.2 | example | 5 |
import whylogs as why
profile_results = why.log(df)
type(profile_results)
whylogs.api.logger.result_set.ProfileResultSet
And now we can check its collected metrics by transforming it into a `DatasetProfileView`:
profile_view = profile_results.view()
profile_view.to_pandas()
| column | cardinality/est | cardinality/lower_1 | cardinality/upper_1 | counts/n | counts/null | distribution/max | distribution/mean | distribution/median | distribution/min | distribution/n | ... | distribution/stddev | type | types/boolean | types/fractional | types/integral | types/object | types/string | frequent_items/frequent_strings | ints/max | ints/min |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| col_1 | 4.0 | 4.0 | 4.0002 | 4 | 0 | 2.2 | 1.125 | 1.2 | 0.1 | 4 | ... | 0.861684 | SummaryType.COLUMN | 0 | 4 | 0 | 0 | 0 | NaN | NaN | NaN |
| col_2 | 4.0 | 4.0 | 4.0002 | 4 | 0 | NaN | 0.000 | NaN | NaN | 0 | ... | 0.000000 | SummaryType.COLUMN | 0 | 0 | 0 | 0 | 4 | [FrequentItem(value='some', est=1, upper=1, lo... | NaN | NaN |
| col_3 | 4.0 | 4.0 | 4.0002 | 4 | 0 | 5.0 | 3.500 | 4.0 | 2.0 | 4 | ... | 1.290994 | SummaryType.COLUMN | 0 | 0 | 4 | 0 | 0 | [FrequentItem(value='2', est=1, upper=1, lower... | 5.0 | 2.0 |

3 rows × 28 columns
Cool! So now that we have a proper profile created, let's see how we can persist it as a file.
The first and most straightforward way of persisting a `whylogs` profile as a file is to write it directly to your disk. Our API makes this possible with the following commands. You can either write it from the `ProfileResultSet`:
profile_results.writer("local").write(dest="my_profile.bin")
If you want, you can skip `dest` and instead pass an optional `base_dir`, which will write your profile, named with its timestamp, to the base directory you choose. Let's see how:
import os
os.makedirs("my_directory",exist_ok=True)
profile_results.writer("local").option(base_dir="my_directory").write()
Or from the `DatasetProfileView` directly, with a path:
profile_view.write("my_profile.bin")
(True, 'my_profile.bin')
And, if that weren't convenient enough, you can also use the same top-level logging API to write your profile, like:
why.write(profile=profile_view, base_dir="my_directory", filename="my_profile.bin")
(True, 'my_directory/my_profile.bin')
import os
os.listdir("./my_directory")
['profile_2022-10-17 17:52:00.282271.bin', 'profile_2022-10-17 17:52:49.522728.bin', 'my_profile.bin']
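Before we clean those files up, it's worth noting that any of them can be loaded straight back into a profile view for inspection. Here is a minimal sketch, assuming the `my_profile.bin` file written above and that `DatasetProfileView.read` (the read counterpart of the `write` call we used) is available in your whylogs version:

from whylogs.core import DatasetProfileView

# Load the serialized profile back from disk
restored_view = DatasetProfileView.read("my_profile.bin")

# The restored view exposes the same metrics we inspected earlier
restored_view.to_pandas().head()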
And that's it! Now you can go ahead and decide where and how to store these profiles, both for further inspection and to guarantee your data and ML model pipelines keep generating useful, quality data for your end users. Let's delete those files now so they don't clutter our environment :)
os.remove("./my_profile.bin")
shutil.rmtree("./my_directory")
From an enterprise perspective, it can be interesting to use `s3` buckets to store your profiles instead of manually deciding what to do with them on your local machine. And that is why we have created an integration to do just that!

To keep this example simple, we won't use actual cloud-based storage; we will mock one with the `moto` library instead. This way, you can test this anywhere without worrying too much about credentials :) In order to keep `whylogs` as light as possible, and to allow users to extend it as they need, we have made `s3` an extra dependency.

So let's get started by creating this mocked `s3` bucket with the `moto` package.

P.S.: if you haven't installed the `whylogs[s3]` extra dependency already, uncomment and run the cell below.
%pip install -q 'whylogs[s3]' moto
Note: you may need to restart the kernel to use updated packages.
import boto3
from moto import mock_s3
from moto.s3.responses import DEFAULT_REGION_NAME
BUCKET_NAME = "my_great_bucket"
mocks3 = mock_s3()
mocks3.start()
resource = boto3.resource("s3", region_name=DEFAULT_REGION_NAME)
resource.create_bucket(Bucket=BUCKET_NAME)
s3.Bucket(name='my_great_bucket')
Now that we have created our `s3` bucket, we are already able to communicate with the mocked storage object. A good practice here is to declare your access credentials as environment variables. In a production setting these won't be persisted in code, but this will give you a sense of how to safely use our `s3` writer.
import os
os.environ["AWS_ACCESS_KEY_ID"] = "my_key_id"
os.environ["AWS_SECRET_ACCESS_KEY"] = "my_access_key"
profile_results.writer("s3").option(bucket_name=BUCKET_NAME).write()
And you've done it! Seems too good to be true. How would I know if the profiles are there? 🤔 Well, let's investigate them.
s3_client = boto3.client("s3")
objects = s3_client.list_objects(Bucket=BUCKET_NAME)
objects.get("Name", [])
'my_great_bucket'
objects.get("Contents", [])
[{'Key': 'profile_2022-10-17 17:52:49.522728.bin', 'LastModified': datetime.datetime(2022, 10, 17, 17, 52, 50, tzinfo=tzutc()), 'ETag': '"2e463c85796b25f0f27b56965fa4211d"', 'Size': 1143, 'StorageClass': 'STANDARD', 'Owner': {'DisplayName': 'webfile', 'ID': '75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a'}}]
And there we have it: our local `s3` bucket has our profile written! If we want to put our profile into a special "directory" (often referred to as a prefix), we can do the following instead:
profile_results.writer("s3").option(
bucket_name=BUCKET_NAME,
object_name=f"my_prefix/somewhere/profile_{profile_view.creation_timestamp}.bin"
).write()
objects = s3_client.list_objects(Bucket=BUCKET_NAME)
objects.get("Contents", [])
[{'Key': 'my_prefix/somewhere/profile_2022-10-17 17:52:49.522728.bin', 'LastModified': datetime.datetime(2022, 10, 17, 17, 52, 51, tzinfo=tzutc()), 'ETag': '"2e463c85796b25f0f27b56965fa4211d"', 'Size': 1143, 'StorageClass': 'STANDARD', 'Owner': {'DisplayName': 'webfile', 'ID': '75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a'}}, {'Key': 'profile_2022-10-17 17:52:49.522728.bin', 'LastModified': datetime.datetime(2022, 10, 17, 17, 52, 50, tzinfo=tzutc()), 'ETag': '"2e463c85796b25f0f27b56965fa4211d"', 'Size': 1143, 'StorageClass': 'STANDARD', 'Owner': {'DisplayName': 'webfile', 'ID': '75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a'}}]
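As a final sanity check, we can pull one of those objects back down and load it as a profile view. This is just a sketch under the same mocked setup: it downloads the first listed key to a temporary file with boto3's `download_file` and deserializes it with `DatasetProfileView.read` (assumed to be the read counterpart of the writes above).

import tempfile

from whylogs.core import DatasetProfileView

# Grab the key of the first profile listed in the bucket
first_key = objects["Contents"][0]["Key"]

# Download it to a temporary file and load it back as a profile view
with tempfile.NamedTemporaryFile(suffix=".bin") as tmp:
    s3_client.download_file(Bucket=BUCKET_NAME, Key=first_key, Filename=tmp.name)
    downloaded_view = DatasetProfileView.read(tmp.name)

downloaded_view.to_pandas().shape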
Now let's close our connection to our mocked `s3` object.
mocks3.stop()
And that's it, you have just written a profile to an s3 bucket!
We will do the same exercise as before to demonstrate how we can upload profiles to a GCS bucket and then verify that they landed there. For that, we will use the GCP storage emulator library to create a local endpoint.
%pip install -q gcp-storage-emulator 'whylogs[gcs]'
Note: you may need to restart the kernel to use updated packages.
import os
from google.cloud import storage # type: ignore
from gcp_storage_emulator.server import create_server
import random
import socket
def find_free_port(preferred_port, min_port, max_port):
    # Find a free localhost port for the storage emulator, preferring `preferred_port`
    def is_port_free(port):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(('localhost', port))
                return True
            except OSError:
                return False

    if is_port_free(preferred_port):
        return preferred_port
    else:
        while True:
            port = random.randint(min_port, max_port)
            if is_port_free(port):
                return port
HOST = "localhost"
PORT = find_free_port(9023, 9000, 9100)
GCS_BUCKET = "test-bucket"
server = create_server(HOST, PORT, in_memory=True, default_bucket=GCS_BUCKET)
server.start()
os.environ["STORAGE_EMULATOR_HOST"] = f"http://{HOST}:{PORT}"
client = storage.Client()
bucket = client.bucket(GCS_BUCKET)
for blob in bucket.list_blobs():
content = blob.download_as_bytes()
print(f"Blob [{blob.name}]: {content}")
And this is empty, because we have just created our test bucket :)
from whylogs.api.writer.gcs import GCSWriter
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path/to/credentials.json"
writer = GCSWriter()
writer.option(bucket_name=GCS_BUCKET, object_name="my_object.bin").write(file=profile_view)
(True, 'Uploaded /var/folders/6x/tttdyqq91mx_wk9xrykz7x940000gn/T/tmp5clwcehb to test-bucket/my_object.bin')
for blob in bucket.list_blobs():
content = blob.download_as_bytes()
print(f"Blob [{blob.name}]")
Blob [my_object.bin]
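And, just like with the local and `s3` examples, we can pull that blob back and deserialize it. Here is a minimal sketch, assuming the `my_object.bin` blob uploaded above: it downloads the blob to a temporary file with `download_to_filename` and reads it back with `DatasetProfileView.read`.

import tempfile

from whylogs.core import DatasetProfileView

# Download the uploaded blob to a temporary file and load it as a profile view
blob = bucket.blob("my_object.bin")
with tempfile.NamedTemporaryFile(suffix=".bin") as tmp:
    blob.download_to_filename(tmp.name)
    gcs_view = DatasetProfileView.read(tmp.name)

gcs_view.to_pandas().shape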
server.stop()
If you want to check other integrations that we've made, please make sure to check out our other examples page.
Hopefully this tutorial helps you get started saving your profiles, making sure your Data and ML Pipelines stay Robust and Responsible :)