
ML Monitoring in Under 5 Minutes

It only takes a few minutes and a few lines of code to monitor your ML models and data pipelines.  

Data validation and ML model monitoring are foundational steps to building reliable pipelines and responsible machine learning applications.

In this short post, I will show you how to use an open source data logging library and an AI observatory platform to monitor common issues with your ML models, such as data drift, concept drift, data quality, and performance.

Data logging and ML monitoring setup

First, we’ll install whylogs, an open-source data logging library that captures key statistical properties of data. We’ll also include dependencies for writing to the WhyLabs AI observatory for ML monitoring.

pip install "whylogs[whylabs]"

Next, we’ll import the `whylogs`, `pandas`, and `os` libraries into our Python project. We’ll also create a dataframe of our dataset to profile.

import whylogs as why
import pandas as pd
import os
# create dataframe with dataset
dataset = pd.read_csv("https://whylabs-public.s3.us-west-2.amazonaws.com/datasets/tour/current.csv")

The data profiles created with whylogs can be used on their own for data validation and data drift visualization, but in this example, we’re going to write profiles to the WhyLabs Observatory to perform ML monitoring.
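For instance, if you only want to compare two profiles locally, whylogs ships a notebook visualizer (installed via the `whylogs[viz]` extra). Here’s a minimal sketch, assuming the `dataset` dataframe from above and using a random sample as a stand-in for a newer batch:

from whylogs.viz import NotebookProfileVisualizer
# reference profile (e.g. training data) and target profile (e.g. a newer batch)
reference_view = why.log(dataset).view()
target_view = why.log(dataset.sample(frac=0.5)).view()
viz = NotebookProfileVisualizer()
viz.set_profiles(target_profile_view=target_view, reference_profile_view=reference_view)
viz.summary_drift_report()  # renders an interactive drift report in a notebook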

In order to write profiles to WhyLabs, we’ll create an account and grab our `Org-ID`, `Access token`, and `Project-ID` to set them as environment variables in our project.

# Set WhyLabs access keys
os.environ["WHYLABS_DEFAULT_ORG_ID"] = 'YOURORGID'
os.environ["WHYLABS_API_KEY"] = 'YOURACCESSTOKEN'
os.environ["WHYLABS_DEFAULT_DATASET_ID"] = 'PROJECTID'

Create a free WhyLabs account here, no credit card required.

Create a new project and get the ID:

Create Project > Set up model > Create Project

Create Project in WhyLabs

Get organization ID and access token:

Menu > Settings > Access Tokens > Create Access Token

Create an Access Token in WhyLabs

That’s it for setting up. We can now write data profiles to WhyLabs.

Write profiles to WhyLabs for ML monitoring

Once the access keys are set up, we can easily create a profile of the dataset and write it to WhyLabs. This allows us to monitor input data and model predictions with just a few lines of code!

# initialize the WhyLabs writer, create a whylogs profile, and write it to WhyLabs
from whylogs.api.writer.whylabs import WhyLabsWriter

writer = WhyLabsWriter()
profile = why.log(dataset)
writer.write(file=profile.view())

Profiles can be created at any stage of a pipeline, allowing you to monitor data at every step.

By default, the timestamp will be the time of the profile upload, but it can be overwritten to log data from different collection times and backfill profiles.
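For illustration, here’s a minimal sketch of backdating a profile to yesterday before writing it (it assumes the `dataset` and `WhyLabsWriter` from above):

from datetime import datetime, timedelta, timezone
# create a profile and override its dataset timestamp before uploading
profile = why.log(dataset).profile()
profile.set_dataset_timestamp(datetime.now(timezone.utc) - timedelta(days=1))
WhyLabsWriter().write(file=profile.view())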

You can see an example of writing and backfilling data in this notebook.

Once profiles are written to WhyLabs, they can be inspected, compared, and monitored for data quality and data drift.

Comparing and Inspecting Profiles in WhyLabs

Now we can enable a pre-configured monitor with just one click (or create a custom one) to detect anomalies in our data profiles. This makes it easy to set up common monitoring tasks, such as detecting data drift, data quality issues, and changes in model performance.

Enabling a Pre-configured Monitor

Once a monitor is configured, it can be previewed while inspecting an input feature.

Data Drift Detected With ML Monitoring in WhyLabs

When anomalies are detected, notifications can be sent via email, Slack, or PagerDuty. Set notification preferences in Settings > Notifications & Digest Settings.

Setting Up Notifications in WhyLabs

That’s it! We have gone through all the steps needed to ingest data from anywhere in ML pipelines and get notified if anomalies occur.

Separating model inputs and outputs

It can be useful to separate model inputs and outputs, especially if you have a lot of features in your input data. Any features with names that contain the word “output” will appear in the outputs tab.
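For example, a hypothetical prediction column can be routed to the outputs tab simply by renaming it before logging:

# hypothetical example: "prediction" is renamed so it appears under the Outputs tab
dataset = dataset.rename(columns={"prediction": "prediction_output"})
writer.write(file=why.log(dataset).view())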

Monitoring model performance metrics

So far we’ve seen how to monitor model input and output data, but we can also monitor performance metrics such as accuracy, precision, etc. by logging ground truth with our prediction results.

To log performance metrics for monitoring, use `why.log_classification_metrics` or `why.log_regression_metrics` and pass in a dataframe containing the ground truth and the model’s outputs.

results = why.log_classification_metrics(
    df,
    target_column="ground_truth",
    prediction_column="cls_output",
    score_column="prob_output",
)

profile = results.profile()
results.writer("whylabs").write()

Note: Make sure your project is configured as a classification or regression model in the settings.
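For a regression model, the call is analogous; here’s a minimal sketch with hypothetical column names:

# minimal sketch for a regression model (hypothetical column names)
results = why.log_regression_metrics(
    df,
    target_column="ground_truth",
    prediction_column="reg_output",
)
results.writer("whylabs").write()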

Just like the input data, performance metrics get uploaded with the current timestamp unless overwritten. See an example of backfilling data for performance monitoring in the example notebooks below.

Backfilling Data for Performance Monitoring in WhyLabs

Again we can select a pre-configured monitor to detect any change in performance.

See example notebooks for classification and regression monitoring on our GitHub.

Recap on ML monitoring

We covered how to quickly set up data and ML monitoring solutions that can be used at any point in your pipeline! With the right tools, ML monitoring takes only a few minutes and a few lines of code.

We barely scratched the surface of whylogs and WhyLabs features. If you’d like to learn more, request a demo or sign up for free and explore the features yourself!

Example notebooks mentioned in this post:

Ready to implement data & ML monitoring in your own applications?
