
Observability in Production: Monitoring Data Drift with WhyLabs and Valohai

Imagine that magical day when your machine learning model is in production. It may be integrated into end-user applications, serving predictions and providing real-world value. As a Data Scientist, you may think that your job is done and that you can move on to the next problem to be solved. Unfortunately, the work is just getting started.

What works today might not work tomorrow. And when a model is in real-world use, serving faulty predictions can lead to catastrophic consequences, like what happened with Zillow's iBuying algorithm, which caused the company to overpay for real estate and, ultimately, lay off 25% of its workforce.

...

We will dig into how you can easily get started with observability and detect data drift using whylogs while executing your pipeline on Valohai.
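For a sense of what that looks like, here is a minimal sketch of profiling two batches of data with whylogs and comparing them for drift. It assumes the whylogs v1 API and pandas DataFrames; the file names are hypothetical placeholders and the exact setup in the full walkthrough may differ.

```python
# Minimal sketch, assuming the whylogs v1 API and two pandas DataFrames of
# features; the CSV paths below are hypothetical placeholders.
import pandas as pd
import whylogs as why
from whylogs.viz import NotebookProfileVisualizer

# Reference batch (e.g. training data) and a current production batch
reference_df = pd.read_csv("train_features.csv")   # hypothetical path
current_df = pd.read_csv("todays_features.csv")    # hypothetical path

# Profile each batch; a profile is a compact statistical summary of the data
reference_view = why.log(reference_df).view()
current_view = why.log(current_df).view()

# Compare the two profiles to surface distribution shifts between batches
viz = NotebookProfileVisualizer()
viz.set_profiles(target_profile_view=current_view,
                 reference_profile_view=reference_view)
viz.summary_drift_report()  # renders a drift report in a notebook
```

The linked post covers how to run this kind of profiling step as part of a Valohai pipeline.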

Continue reading on the Valohai Blog

Other posts

AI Observability for All

We’re excited to announce our new Starter edition: a free tier of our model monitoring solution that allows users to access all of the features of the WhyLabs AI observability platform. It is entirely self-service, meaning that users can sign up for an account and get started right away.

Deploy your ML model with UbiOps and monitor it with WhyLabs

Machine learning models can only provide value for a business when they are brought out of the sandbox and into the real world... Fortunately, UbiOps and WhyLabs have partnered to make deploying and monitoring machine learning models easy.

Why You Need ML Monitoring

Machine learning models are increasingly becoming key to businesses of all shapes and sizes, performing myriad functions... If a machine learning model is providing value to a business, it’s essential that the model remains performant.

Data Labeling Meets Data Monitoring with Superb AI and WhyLabs

Data quality is the key to a performant machine learning model. That’s why WhyLabs and Superb AI are on a mission to ensure that data scientists and machine learning engineers have access to tools designed specifically for their needs and workflows.

Running and Monitoring Distributed ML with Ray and whylogs

Running and monitoring distributed ML systems can be challenging. Fortunately, Ray makes parallelizing Python processes easy, and the open source whylogs enables users to monitor ML models in production, even if those models are running in a distributed environment.

Monitor your SageMaker model with WhyLabs

In this blog post, we will dive into the WhyLabs AI Observatory, a data and ML monitoring and observability platform, and show how it complements Amazon SageMaker.
