
Deploy your ML model with UbiOps and monitor it with WhyLabs

Introduction

Machine learning models can only provide value for a business when they are brought out of the sandbox and into the real world. However, this is easier said than done. Many enterprises underestimate the difficulty of productionizing their models and struggle to get them into production, resulting in wasted resources and broken promises about the value of machine learning. Fortunately, UbiOps and WhyLabs have partnered to make deploying and monitoring machine learning models easy.

UbiOps is the easy-to-use serving and hosting layer for data science code. UbiOps makes it easier than ever to use a top-notch deployment, serving, and management layer on top of your preferred infrastructure. Accessible via the UI, client library, or CLI, it’s suitable for every type of data scientist, without the need for in-depth engineering knowledge. You only have to bring your Python- or R-based algorithms and UbiOps takes care of the rest.

WhyLabs provides the missing piece of the puzzle for monitoring and observing ML deployments. With the WhyLabs AI Observability Platform, it’s never been easier to ensure model and data health. Data science teams use the platform to monitor data pipelines and AI applications, surfacing data quality issues, data bias, and concept drift. Out-of-the-box anomaly detection and purpose-built visualizations let WhyLabs’ users prevent costly model failures and eliminate the need for manual troubleshooting. It works on any data, structured or unstructured, at any scale, on any platform.

To showcase the integration, we will train a model to predict the price of a used car based on a number of factors (including horsepower, year, and mileage), deploy it with UbiOps, and then monitor it “in production” with WhyLabs. We use a simplified version of this Kaggle dataset, from which we have removed less-relevant features and cut down the number of rows. We split our dataset into “training”, “testing”, and “production” dataframes, and add some perturbations to the production dataset to highlight the impact of differences between sandbox data and real-world data; a rough sketch of this split is shown below. You can run all of this code yourself in this Jupyter notebook, or simply follow along in this post.
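
As a rough sketch, the split might look something like the following (the file name, column names, and perturbation are illustrative assumptions, not taken from the notebook):

import pandas as pd
from sklearn.model_selection import train_test_split

# Load the simplified used-cars dataset (file name is an assumption)
data = pd.read_csv("used_cars_data.csv")

# Hold out a "production" slice, then split the remainder into train and test
train_val, production = train_test_split(data, test_size=0.2, random_state=42)
train, test = train_test_split(train_val, test_size=0.25, random_state=42)

# Perturb the production slice to mimic real-world drift (illustrative)
production = production.copy()
production["mileage"] = production["mileage"] * 1.5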

Preparing our environment

We advise you to go through the UbiOps quickstart and WhyLabs quickstart before continuing.

Before we get into the fun of training, deploying, and monitoring our model, we need to set up an environment with the right dependencies. We can do so by installing pandas, scikit-learn, ubiops, and whylogs using pip.

import sys
!{sys.executable} -m pip install -U pip
!{sys.executable} -m pip install pandas --user
!{sys.executable} -m pip install scikit-learn --user
!{sys.executable} -m pip install ubiops --user
!{sys.executable} -m pip install whylogs --user

We also need to define some configuration values for the UbiOps and WhyLabs platforms: an API token and project name for UbiOps, and an API key, org ID, and dataset ID for WhyLabs. The WhyLabs values are exposed as environment variables so that whylogs can pick them up automatically.

import os

# Set WhyLabs config variables
WHYLABS_API_KEY = "whylabs.apikey"
WHYLABS_DEFAULT_ORG_ID = "org-1"
WHYLABS_DEFAULT_DATASET_ID = "model-1"


# Set UbiOps config variables
API_TOKEN = "Token ubiopsapitoken" # Make sure this is in the format "Token token-code"
PROJECT_NAME = "blog-post"

# Set environment variables
os.environ["WHYLABS_API_KEY"] = WHYLABS_API_KEY
os.environ["WHYLABS_DEFAULT_ORG_ID"] = WHYLABS_DEFAULT_ORG_ID
os.environ["WHYLABS_DEFAULT_DATASET_ID"] = WHYLABS_DEFAULT_DATASET_ID

Once we’ve done all of this, we are ready to train our model.

Training the model

We start by training a simple linear regression from the scikit-learn library on our dataset. Alongside the model training, we also generate data logs in the form of whylogs profiles and send those profiles to the WhyLabs platform. Doing so allows us to create a “baseline” of the data used to train the model, which we can compare against the data in production to ensure that our model’s performance doesn’t degrade.

We do so by introducing the following code snippet alongside our model training code:

from whylogs.app import Session
from whylogs.app.writers import WhyLabsWriter

writer = WhyLabsWriter("", formats=[])
session = Session(project="demo-project", pipeline="pipeline-id", writers=[writer])
# 'yesterday' is a datetime marking the profile's dataset date; 'data' is the training dataframe
with session.logger(dataset_timestamp=yesterday) as ylog:
    ylog.log_dataframe(data)
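
For context, the model-training code that this snippet sits alongside could look roughly like this (a minimal sketch; the feature handling in the notebook is more involved):

from sklearn.linear_model import LinearRegression

# Split the training dataframe into features and the 'price' target
X_train = data.drop(columns=["price"])
y_train = data["price"]

model = LinearRegression()
model.fit(X_train, y_train)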

Deploying with UbiOps

Now that we’ve got a trained model, we can deploy it with UbiOps. The key difference between this deployment and most of the deployments documented elsewhere is the WhyLabs integration. As explained earlier, WhyLabs configures itself using environment variables. In UbiOps we can also easily create environment variables and make them available in our deployments.
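
To call the UbiOps API from the notebook, we first need an initialized client. A minimal sketch of that setup, reusing the API_TOKEN defined earlier (the host below is the standard public API endpoint):

import ubiops

configuration = ubiops.Configuration()
configuration.api_key["Authorization"] = API_TOKEN
configuration.host = "https://api.ubiops.com/v2.1"

client = ubiops.ApiClient(configuration)
api = ubiops.CoreApi(client)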

The cool thing is that we can also create them using our client library! So by reusing the WhyLabs variables we created earlier, we can do something like this:

# Create environment variables for WhyLabs
# ('api' is the UbiOps CoreApi client and DEPLOYMENT_NAME was defined earlier)
api.deployment_environment_variables_create(
    project_name=PROJECT_NAME,
    deployment_name=DEPLOYMENT_NAME,
    data=ubiops.EnvironmentVariableCreate(
        name="WHYLABS_API_KEY",
        value=WHYLABS_API_KEY,
        secret=True
    )
)

api.deployment_environment_variables_create(
    project_name=PROJECT_NAME,
    deployment_name=DEPLOYMENT_NAME,
    data=ubiops.EnvironmentVariableCreate(
        name="WHYLABS_DEFAULT_ORG_ID",
        value=WHYLABS_DEFAULT_ORG_ID,
        secret=True
    )
)

api.deployment_environment_variables_create(
    project_name=PROJECT_NAME,
    deployment_name=DEPLOYMENT_NAME,
    data=ubiops.EnvironmentVariableCreate(
        name="WHYLABS_DEFAULT_DATASET_ID",
        value=WHYLABS_DEFAULT_DATASET_ID,
        secret=True
    )
)

This way, the whylogs session can be initialized inside the deployment by putting something like this in, for example, your deployment’s __init__():

# 'get_or_create_session' is imported from whylogs at the top of deployment.py
self.wl_session = get_or_create_session()
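
Putting it together, a minimal sketch of what the full deployment file could look like (the model file name and the input and output field names are assumptions, not the actual definitions from the example repo):

import pandas as pd
from joblib import load
from whylogs import get_or_create_session


class Deployment:
    def __init__(self, base_directory, context):
        # Load the trained model shipped in the deployment package (file name is an assumption)
        self.model = load("model.joblib")
        # Picks up the WHYLABS_* environment variables we set on the deployment
        self.wl_session = get_or_create_session()

    def request(self, data):
        # 'data' holds the deployment's input fields as defined in UbiOps;
        # here we assume a single file field containing a batch of rows to score
        features = pd.read_csv(data["input_data"])
        predictions = self.model.predict(features)

        # Profile the inputs together with the predictions and send them to WhyLabs
        logged = features.copy()
        logged["price"] = predictions
        with self.wl_session.logger() as ylog:
            ylog.log_dataframe(logged)

        return {"prediction": predictions.tolist()}

Once the deployment is live, sending data to it through the client library (for example via api.deployment_requests_create) will both return predictions and stream the corresponding profiles to WhyLabs.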

Monitoring with WhyLabs

Now that we’ve got our model deployed and are sending data to it, we can also log that same data as well as the predictions being made, and send those logs over to WhyLabs. As above, creating whylogs profiles and sending them to WhyLabs requires a short code snippet:

import pandas as pd

# Load the production inputs and the model's predictions, then combine them for profiling
X = pd.read_csv('production_used_cars_data.csv')
Y = pd.read_csv('prediction.csv')
combined = X
combined['price'] = Y['target']

# 'session' is the whylogs session created earlier (e.g. via get_or_create_session())
with session.logger() as ylog:
    ylog.log_dataframe(combined)

Once we’ve sent these profiles over to WhyLabs, we can compare them to the profile generated for the training data and see whether there’s training-serving skew that might be impacting our model performance.

WhyLabs allows you to track changes in your data over time, as well as set up alerts that notify you as soon as the data coming into your model or the model’s predictions change. These features allow you to monitor your ML model in production, all with a collaborative, user-friendly UI.

Conclusion

With UbiOps and WhyLabs, deploying and monitoring your model has never been easier. In this example, we’ve shown how you can log your training data with whylogs while the model is being trained, and then compare those data logs to data logs generated when the model is deployed in UbiOps.

For a complete technical rundown of the integration example, take a look at the GitHub repo.

If you are interested in deploying your data science code, check out UbiOps. You can try it today for free! For more technical information, you can also visit the UbiOps docs page: www.ubiops.com/docs/.

If you are interested in trying out the WhyLabs platform, check out our totally free Starter edition.
