#!/usr/bin/env python
# coding: utf-8

# >### 🚩 *Create a free WhyLabs account to get more value out of whylogs!*
# >*Did you know you can store, visualize, and monitor whylogs profiles with the [WhyLabs Observability Platform](https://whylabs.ai/whylogs-free-signup?utm_source=whylogs-Github&utm_medium=whylogs-example&utm_campaign=Mlflow_Logging)? Sign up for a [free WhyLabs account](https://whylabs.ai/whylogs-free-signup?utm_source=whylogs-Github&utm_medium=whylogs-example&utm_campaign=Mlflow_Logging) to leverage the power of whylogs and WhyLabs together!*

# # MLflow Logging
#
# [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/whylabs/whylogs/blob/mainline/python/examples/integrations/Mlflow_Logging.ipynb)
#
# [MLflow](https://www.mlflow.org/) is an open-source platform that can track, manage, and help users deploy their models to production with a consistent API and good software engineering practices. whylogs users can benefit from our API to seamlessly log profiles to their MLflow environment. Let's see how.

# ## Setup
#
# For this tutorial we will keep things simple by using MLflow's local client. One of MLflow's advantages is that it uses the exact same API to work both locally and in the cloud, so with minor setup changes the code shown here can be extended if you're working with MLflow on Kubernetes or Databricks, for example. To get started, make sure you have both `mlflow` and `whylogs` installed in your environment by running the following cells:

# In[1]:

# Note: you may need to restart the kernel to use updated packages.
get_ipython().run_line_magic('pip', "install 'whylogs[mlflow]'")

# We are also installing `pandas`, `scikit-learn` and `matplotlib` in order to have a very simple training example and show you how you can start profiling your training data with `whylogs`.
# So, if you still haven't, also run the following cell:

# In[2]:

get_ipython().run_line_magic('pip', 'install -q scikit-learn matplotlib pandas mlflow-skinny')

# ## Get the data
#
# Now let's get an example dataset from the `scikit-learn` library and create a function that returns an aggregated dataframe from it. We will use this same function later on!

# In[1]:

import pandas as pd
from sklearn.datasets import load_iris


def get_data() -> pd.DataFrame:
    iris_data = load_iris()
    dataframe = pd.DataFrame(iris_data.data, columns=iris_data.feature_names)
    dataframe["target"] = pd.Series(iris_data.target)
    return dataframe

# In[2]:

df = get_data()

# In[3]:

df.head()

# ## Train a model
#
# Let's define the simplest model to be trained with `scikit-learn`. We aren't interested in model performance nor deep ML concepts, but only in training a baseline model and showing the overall idea of how to use `whylogs` with your existing training pipeline.

# In[4]:

from sklearn.tree import DecisionTreeClassifier


def train(dataframe: pd.DataFrame) -> None:
    model = DecisionTreeClassifier(max_depth=2)
    model.fit(dataframe.drop("target", axis=1), y=dataframe["target"])

# We could serialize the model ourselves, but we will take a shortcut here by taking advantage of `mlflow`'s awesome `autolog` method.

# In[ ]:

import mlflow

with mlflow.start_run() as run:
    mlflow.sklearn.autolog()
    df = get_data()
    train(dataframe=df)
    run_id = run.info.run_id

# And now we should see that an `mlruns/` directory was created and that we already have our trained model in there!
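# As a quick sanity check (not part of the original notebook), we can peek at how well the shallow depth-2 tree above actually fits the iris data. This sketch rebuilds the same dataframe and model; the `random_state` seed is an added assumption for reproducibility:

```python
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

# Rebuild the same iris dataframe that get_data() returns
iris_data = load_iris()
df = pd.DataFrame(iris_data.data, columns=iris_data.feature_names)
df["target"] = pd.Series(iris_data.target)

# Fit the same shallow tree as train(), with a fixed seed added here
X = df.drop("target", axis=1)
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, df["target"])

# Even at depth 2, the three iris classes separate fairly well
accuracy = model.score(X, df["target"])
print(round(accuracy, 2))
```

Training accuracy is not a meaningful evaluation metric, of course; the point is only that the baseline pipeline runs end to end.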
# In[6]:

import os

os.listdir(f"mlruns/0/{run_id}/artifacts/model")

# ## Profile the training data with `whylogs`
#
# Now, in order to profile your training data with `whylogs`, you basically need our `logger` API, which is as simple as:

# In[7]:

import whylogs as why

profile_result = why.log(df)
profile_view = profile_result.view()

# In[8]:

profile_view.to_pandas()

# ## Writing your profile to `mlflow`
#
# Even more interesting than writing this profile locally is the ability to use `mlflow`'s API **together** with `whylogs`' to store the training data profile and analyze the results of your experiments over time. For that, we need to define a function that will:
#
# 1. Profile our training data
# 2. Log the profile as an `mlflow` artifact
#
# Let's see how this function can be written:

# In[10]:

def log_profile(dataframe: pd.DataFrame) -> None:
    profile_result = why.log(dataframe)
    profile_result.writer("mlflow").write()

# And we can call the function we defined inside our `mlflow` run, like this:

# In[11]:

with mlflow.start_run() as run:
    mlflow.sklearn.autolog()
    df = get_data()
    train(dataframe=df)
    log_profile(dataframe=df)
    run_id = run.info.run_id

# If we inspect the recently created experiment folder, we will see that a `whylogs` directory was created there with our profile.
# In[12]:

os.listdir(f"mlruns/0/{run_id}/artifacts/whylogs")

# And we can even use `mlflow`'s API to fetch and read back our profile, like so:

# In[13]:

from mlflow.tracking import MlflowClient

client = MlflowClient()
local_dir = "/tmp/artifact_downloads"
if not os.path.exists(local_dir):
    os.mkdir(local_dir)
local_path = client.download_artifacts(run_id, "whylogs", local_dir)

# In[14]:

os.listdir(local_path)

# In[ ]:

profile_name = os.listdir(local_path)[0]
result = why.read(path=f"{local_path}/{profile_name}")

# In[16]:

result.view().to_pandas()

# And with those few lines we have successfully fetched the profile artifact from our experiment. Over time, this lets us track highly relevant information about how our data behaves and **why** our model generates the results it does, moving us towards more robust and responsible AI.
#
# We hope this tutorial helps you get started with `whylogs`. Stay tuned to our [Github repo](https://github.com/whylabs/whylogs) and join our [community Slack](https://github.com/whylabs/whylogs#:~:text=us%2C%20please%20join-,our%20Slack%20Community,-.%20In%20addition%20to) to get the latest from `whylogs`.
#
# See you soon!