
Deploy your ML model with UbiOps and monitor it with WhyLabs

Introduction

Machine learning models can only provide value for a business when they are brought out of the sandbox and into the real world. However, this is easier said than done. Many enterprises fail to consider the difficulty of productionizing their models and struggle to deploy them, resulting in wasted resources and broken promises about the value of machine learning. Fortunately, UbiOps and WhyLabs have partnered to make deploying and monitoring machine learning models easy.

UbiOps is the easy-to-use serving and hosting layer for data science code. UbiOps makes it easier than ever to use a top-notch deployment, serving, and management layer on top of your preferred infrastructure. Accessible via the UI, client library, or CLI, it’s suitable for every type of data scientist, without the need for in-depth engineering knowledge. You only have to bring your Python- or R-based algorithms and UbiOps takes care of the rest.

WhyLabs provides the missing piece of the puzzle for monitoring and observing ML deployments. With the WhyLabs AI Observability Platform, it’s never been easier to ensure model and data health. Data science teams use the platform to monitor data pipelines and AI applications, surfacing data quality issues, data bias, and concept drift. Out-of-the-box anomaly detection and purpose-built visualizations let WhyLabs’ users prevent costly model failures and eliminate the need for manual troubleshooting. It works on any data, structured or unstructured, at any scale, on any platform.

To showcase the integration, we will train a model to predict the price of a used car based on a number of factors (including horsepower, year, and mileage), deploy it with UbiOps, and then monitor it “in production” with WhyLabs. We use a simplified version of this Kaggle dataset, from which we have removed less-relevant features and cut down the number of rows. We split our dataset into “training”, “testing”, and “production” dataframes, and add some perturbations to the production dataset to highlight the impact of differences between sandbox data and real-world data. You can run all of this code yourself in this Jupyter notebook, or simply follow along in this post.
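
As a rough sketch, that preparation might look something like the following (the file name, split ratios, and perturbation here are illustrative rather than the exact preprocessing from the notebook):

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Split the data into training, testing, and "production" sets
df = pd.read_csv('used_cars_data.csv')
train_df, rest_df = train_test_split(df, test_size=0.4, random_state=42)
test_df, prod_df = train_test_split(rest_df, test_size=0.5, random_state=42)

# Perturb the production data to mimic real-world drift
prod_df = prod_df.copy()
prod_df['mileage'] *= np.random.uniform(1.1, 1.5, size=len(prod_df))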

Preparing our environment

We advise you to go through the UbiOps quickstart and WhyLabs quickstart before continuing.

Before we get into the fun of training, deploying, and monitoring our model, we need to set up an environment with the necessary dependencies (pandas, scikit-learn, ubiops, and whylogs), which we can install using pip.

import sys
!{sys.executable} -m pip install -U pip
!{sys.executable} -m pip install pandas --user
!{sys.executable} -m pip install scikit-learn --user
!{sys.executable} -m pip install ubiops --user
!{sys.executable} -m pip install whylogs --user

We also need to set certain environment variables to interact with the UbiOps and WhyLabs platforms, including API tokens or keys for both, a project name for UbiOps, and an org and dataset ID for WhyLabs.

import os

# Set WhyLabs config variables (replace these placeholder values with your own)
WHYLABS_API_KEY = "whylabs.apikey"
WHYLABS_DEFAULT_ORG_ID = "org-1"
WHYLABS_DEFAULT_DATASET_ID = "model-1"


# Set UbiOps config variables
API_TOKEN = "Token ubiopsapitoken" # Make sure this is in the format "Token token-code"
PROJECT_NAME = "blog-post"

# Set environment variables
os.environ["WHYLABS_API_KEY"] = WHYLABS_API_KEY
os.environ["WHYLABS_DEFAULT_ORG_ID"] = WHYLABS_DEFAULT_ORG_ID
os.environ["WHYLABS_DEFAULT_DATASET_ID"] = WHYLABS_DEFAULT_DATASET_ID

Once we’ve done all of this, we are ready to train our model.

Training the model

We start by training a simple linear regression from the scikit-learn library on our dataset. Alongside the model training, we also generate data logs in the form of whylogs profiles and send those profiles to the WhyLabs platform. Doing so allows us to create a “baseline” of the data used to train the model, which we can compare against the data in production to ensure that our model’s performance doesn’t degrade.
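
For context, a minimal version of that training step might look like this (the file name and feature columns are illustrative):

import pandas as pd
from sklearn.linear_model import LinearRegression

# Fit a simple linear regression to predict price from a few numeric features
data = pd.read_csv('training_used_cars_data.csv')
features = ['horsepower', 'year', 'mileage']
model = LinearRegression().fit(data[features], data['price'])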

We do so by introducing the following code snippet alongside our model training code:

import datetime
from whylogs.app import Session
from whylogs.app.writers import WhyLabsWriter

writer = WhyLabsWriter("", formats=[])  # writes profiles to the WhyLabs platform
session = Session(project="demo-project", pipeline="pipeline-id", writers=[writer])
yesterday = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=1)
with session.logger(dataset_timestamp=yesterday) as ylog:
    ylog.log_dataframe(data)  # `data` is the training dataframe

Deploying with UbiOps

Now that we’ve got a trained model, we can deploy it with UbiOps. The key difference between this deployment and most of the deployments documented elsewhere is the WhyLabs integration. As explained earlier, WhyLabs configures itself using environment variables. In UbiOps we can also easily create environment variables and make them available in our deployments.
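
The snippets below assume an authenticated UbiOps API client. Setting one up with the token from earlier looks roughly like this (DEPLOYMENT_NAME is a placeholder for the name of your own deployment):

import ubiops

DEPLOYMENT_NAME = "used-car-price-predictor"  # placeholder deployment name

# Authenticate against the UbiOps API with the token we set earlier
client = ubiops.ApiClient(ubiops.Configuration(
    api_key={'Authorization': API_TOKEN},
    host='https://api.ubiops.com/v2.1'
))
api = ubiops.CoreApi(client)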

The cool thing is that we can also create them using our client library! So by reusing the WhyLabs variables we created earlier, we can do something like this:

# Create environment variables for whylabs
api.deployment_environment_variables_create(
    project_name=PROJECT_NAME,
    deployment_name=DEPLOYMENT_NAME,
    data=ubiops.EnvironmentVariableCreate(
        name="WHYLABS_API_KEY",
        value=WHYLABS_API_KEY,
        secret=True
    )
)

api.deployment_environment_variables_create(
    project_name=PROJECT_NAME,
    deployment_name=DEPLOYMENT_NAME,
    data=ubiops.EnvironmentVariableCreate(
        name="WHYLABS_DEFAULT_ORG_ID",
        value=WHYLABS_DEFAULT_ORG_ID,
        secret=True
    )
)

api.deployment_environment_variables_create(
    project_name=PROJECT_NAME,
    deployment_name=DEPLOYMENT_NAME,
    data=ubiops.EnvironmentVariableCreate(
        name="WHYLABS_DEFAULT_DATASET_ID",
        value=WHYLABS_DEFAULT_DATASET_ID,
        secret=True
    )
)

This way, the WhyLabs client can be initialized by putting something like this in, for example, your deployment’s __init__():

from whylogs import get_or_create_session
self.wl_session = get_or_create_session()
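
Putting it all together, a stripped-down deployment.py could look like the sketch below. The model loading and request logic are illustrative (the input field name and model file are hypothetical), but the whylogs session setup is the part that matters:

import os

import pandas as pd
from joblib import load
from whylogs import get_or_create_session


class Deployment:

    def __init__(self, base_directory, context):
        # whylogs configures itself from the WHYLABS_* environment
        # variables we created for this deployment
        self.wl_session = get_or_create_session()
        self.model = load(os.path.join(base_directory, 'model.joblib'))

    def request(self, data):
        # Run predictions on the incoming batch of production data
        input_df = pd.read_csv(data['data'])
        predictions = self.model.predict(input_df)

        # Profile the features and predictions so they can be compared
        # against the training baseline in WhyLabs
        combined = input_df.copy()
        combined['price'] = predictions
        with self.wl_session.logger() as ylog:
            ylog.log_dataframe(combined)

        return {'prediction': predictions.tolist()}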

Monitoring with WhyLabs

Now that we’ve got our model deployed and are sending data to it, we can also log that same data as well as the predictions being made, and send those logs over to WhyLabs. As above, creating whylogs profiles and sending them to WhyLabs requires a short code snippet:

import pandas as pd

# Combine the production features with the model's predictions
X = pd.read_csv('production_used_cars_data.csv')
Y = pd.read_csv('prediction.csv')
combined = X.copy()
combined['price'] = Y['target']

# Reuse the whylogs session created earlier to profile the production data
with session.logger() as ylog:
    ylog.log_dataframe(combined)

Once we’ve sent these profiles over to WhyLabs, we can compare them to the profile generated for the training data and see whether there’s training-serving skew that might be impacting our model performance.

WhyLabs allows you to track changes in your data over time, as well as set up alerts that notify you as soon as the data coming into your model or the model’s predictions change. These features allow you to monitor your ML model in production, all with a collaborative, user-friendly UI.

Conclusion

With UbiOps and WhyLabs, deploying and monitoring your model has never been easier. In this example, we’ve shown how you can log your training data with whylogs while the model is being trained, and then compare those logs to the ones generated once the model is deployed in UbiOps.

For a complete technical rundown of an integration example, take a look at the GitHub repo.

If you are interested in deploying your data science code, check out UbiOps. You can try it today for free! For more technical information, you can also visit the UbiOps docs page: www.ubiops.com/docs/.

If you are interested in trying out the WhyLabs platform, check out our totally free Starter edition.
