The WhyLabs Blog

Our ideas and thoughts on how to run AI with certainty



Data quality is one of the most important considerations for machine learning applications—and it's one of the most frequently overlooked. We explore why it’s an essential step in the MLOps process and how to check your data quality with whylogs.


A Solution for Monitoring Image Data

A breakdown of how to monitor unstructured data such as images, the types of problems that threaten computer vision systems, and a solution for these challenges.

Small Changes for Big SQLite Performance Increases

A behind-the-scenes look at how the WhyLabs engineering team improved SQLite performance to make monitoring data and machine learning models faster and easier for whylogs users.

5 Ways to Inspect Data & Models with whylogs Profile Visualizer

Understand what’s happening in your data, identify and correct issues quickly, and maintain the quality and relevance of high-performing data and ML models with whylogs profile visualizer.

Visually Inspecting Data Profiles for Data Distribution Shifts

This short tutorial shows how to inspect data for distribution shift issues by comparing distribution metrics and applying statistical tests to calculate drift values.
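The kind of comparison this tutorial describes can be illustrated with a generic two-sample Kolmogorov-Smirnov test, a common drift statistic. This sketch uses scipy directly rather than whylogs' own API, and the data is synthetic:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Reference (training-time) feature values and a later production batch:
reference = rng.normal(loc=0.0, scale=1.0, size=500)
drifted = rng.normal(loc=3.0, scale=1.0, size=500)  # mean shifted by 3 sigma

# The KS statistic is the maximum distance between the two empirical CDFs;
# a small p-value suggests the batches come from different distributions.
stat, p_value = ks_2samp(reference, drifted)
print(f"KS statistic: {stat:.3f}, p-value: {p_value:.2e}")
```

With a shift this large, the test flags drift decisively; for subtler shifts, the p-value threshold becomes a tuning decision.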

Data Logging With whylogs

Users can detect data drift, prevent ML model performance degradation, validate the quality of their data, and more in a single, lightning-fast, easy-to-use package. The v1 release brings a simpler API, new data constraints, new profile visualizations, faster performance, and a usability refresh.
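Conceptually, a data-logging profile condenses each column into lightweight summary statistics instead of storing raw rows. A minimal, stdlib-only sketch of that idea (not whylogs' actual internals or API):

```python
from dataclasses import dataclass


@dataclass
class ColumnProfile:
    """Tracks summary statistics for one column, one value at a time."""
    count: int = 0
    total: float = 0.0
    minimum: float = float("inf")
    maximum: float = float("-inf")

    def track(self, value: float) -> None:
        self.count += 1
        self.total += value
        self.minimum = min(self.minimum, value)
        self.maximum = max(self.maximum, value)

    @property
    def mean(self) -> float:
        return self.total / self.count if self.count else 0.0


profile = ColumnProfile()
for v in [1.0, 2.0, 3.0, 4.0]:
    profile.track(v)
print(profile.count, profile.mean, profile.minimum, profile.maximum)
# → 4 2.5 1.0 4.0
```

Because only aggregates are kept, the profile's size stays constant no matter how many rows are logged, which is what makes this approach lightning-fast in practice.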

Choosing the Right Data Quality Monitoring Solution

In the second article in this series, we break down what to look for in a data quality monitoring solution, the open source and SaaS tools available, and how to decide on the best one for your organization.

A Comprehensive Overview Of Data Quality Monitoring

In the first article in this series, we provide a detailed overview of why data quality monitoring is crucial for building successful data and machine learning systems and how to approach it.

WhyLabs Now Available in AWS Marketplace

AWS customers worldwide can now quickly deploy the WhyLabs AI Observatory to monitor, understand, and debug their machine learning models deployed in AWS.

Deploying and Monitoring Made Easy with TeachableHub and WhyLabs

Deploying a model into production and maintaining its performance can be harrowing for many data scientists, especially without specialized expertise and tooling. Fortunately, TeachableHub and WhyLabs make it easy to get models out of the sandbox and into a production-ready environment.

How Observability Uncovers the Effects of ML Technical Debt

Many teams test their machine learning models offline but conduct little to no online evaluation after initial deployment. These teams are flying blind—running production systems with no insight into their ongoing performance.

Deploy your ML model with UbiOps and monitor it with WhyLabs

Machine learning models can only provide value for a business when they are brought out of the sandbox and into the real world... Fortunately, UbiOps and WhyLabs have partnered together to make deploying and monitoring machine learning models easy.

AI Observability for All

We’re excited to announce our new Starter edition: a free tier of our model monitoring solution that allows users to access all of the features of the WhyLabs AI observability platform. It is entirely self-service, meaning that users can sign up for an account and get started right away.

WhyLabs Achieves SOC 2 Type 2 Certification!

We are very happy to announce that we successfully completed our SOC 2 Type 2 examination with zero exceptions. WhyLabs is committed to ensuring our current and future customers are well informed about the robust capabilities and security of the WhyLabs AI Observatory platform.

Observability in Production: Monitoring Data Drift with WhyLabs and Valohai

What works today might not work tomorrow. And when a model is in real-world use, serving faulty predictions can lead to catastrophic consequences...

Why You Need ML Monitoring

Machine learning models are increasingly becoming key to businesses of all shapes and sizes, performing myriad functions... If a machine learning model is providing value to a business, it’s essential that the model remains performant.

Data Labeling Meets Data Monitoring with Superb AI and WhyLabs

Data quality is the key to a performant machine learning model. That’s why WhyLabs and Superb AI are on a mission to ensure that data scientists and machine learning engineers have access to tools designed specifically for their needs and workflows.

Running and Monitoring Distributed ML with Ray and whylogs

Running and monitoring distributed ML systems can be challenging. Fortunately, Ray makes parallelizing Python processes easy, and the open source whylogs enables users to monitor ML models in production, even if those models are running in a distributed environment.

Monitor your SageMaker model with WhyLabs

In this blog post, we will dive into the WhyLabs AI Observatory, a data and ML monitoring and observability platform, and show how it complements Amazon SageMaker.

Deploy and Monitor your ML Application with Flask and WhyLabs

In this article, we deploy a Flask application for pattern recognition based on the well-known Iris Dataset. For the application monitoring, we’ll explore the free, starter edition of the WhyLabs Observability Platform in order to set up our own model monitoring dashboard.

WhyLabs Raises $10M from Andrew Ng, Defy Partners to bring AI observability to every AI practitioner

SEATTLE, November 4, 2021 — WhyLabs, the leading provider of observability for AI and data applications announced today the close of a $10 million Series A co-led by Defy Partners and Andrew Ng’s AI Fund.

WhyLabs, AI Observability as a Service

The AI community is seeing growing concern about the robustness and reliability of AI systems. Observability is the mechanism for creating a feedback loop between the ML pipeline and human operators, building trust and transparency.

Detecting Semantic Drift within Image Data: Monitoring Context-Full Data with whylogs

Concept drifts can originate in different stages of your data pipeline, even before the data collection itself. In this article, we’ll show how whylogs can help you monitor your machine learning system’s data ingestion pipeline by enabling concept drift detection, specifically for image data.

Don’t Let Your Data Fail You; Continuous Data Validation with whylogs and Github Actions

Ensuring data quality should be among your top priorities when developing an ML pipeline. In this article we’ll show how whylogs constraints with Github Actions can help with data validation, as a key component in ensuring data quality.
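The validation step described here can be approximated with simple constraint checks run in CI. This is a stdlib-only sketch; the actual whylogs constraints API differs, and the column name and thresholds below are hypothetical:

```python
def check_constraints(rows: list[dict]) -> list[str]:
    """Return a list of human-readable constraint failures (empty = pass)."""
    failures = []
    ages = [r["age"] for r in rows if r.get("age") is not None]
    # Constraint 1: no more than 10% missing values in the 'age' column.
    missing_frac = 1 - len(ages) / len(rows)
    if missing_frac > 0.10:
        failures.append(f"age: {missing_frac:.0%} missing (limit 10%)")
    # Constraint 2: all ages within a plausible range.
    if any(a < 0 or a > 120 for a in ages):
        failures.append("age: value outside [0, 120]")
    return failures


data = [{"age": 34}, {"age": 29}, {"age": None}, {"age": 130}]
failures = check_constraints(data)
for f in failures:
    print("FAILED:", f)
# A GitHub Actions step would exit non-zero to fail the pipeline:
# sys.exit(1 if failures else 0)
```

Wiring a script like this into a workflow's test job means bad data blocks a merge the same way a failing unit test does.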

WhyLabs' Data Geeks Unleashed

This month three members of the WhyLabs team are speaking at the Data and AI Summit. In this post you'll find descriptions of and links to the talks by Alessya Visnjic, Leandro Almeida, and Andy Dang.

Integrating whylogs into your Kafka ML Pipeline

Evaluating the quality of data in the Kafka stream is a non-trivial task due to large volumes of data and latency requirements. This is an ideal job for whylogs, an open-source package for Python or Java that uses Apache DataSketches to monitor and detect statistical anomalies in streaming data.

Monitoring High-Performance Machine Learning Models with RAPIDS and whylogs

Machine learning (ML) data is big and messy. Organizations have increasingly adopted RAPIDS and cuML to help their teams run experiments faster and achieve better model performance on larger datasets.

Streamlining data monitoring with whylogs and MLflow

It's hard to overstate the importance of monitoring data quality in ML pipelines. In this post we will explore an elegant solution with whylogs and MLflow, which allows for a more informed analysis of model performance.

Data Logging: Sampling versus Profiling

In traditional software, logging and instrumentation have been adopted as standard practice to create transparency and to make sense of the health of a complex system. When it comes to AI applications, the lack of tools and standardized approaches means that logging is often spotty and incomplete.
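A key property that separates profiling from sampling: summaries built from aggregates can be merged across batches exactly, whereas statistics computed from two random samples generally cannot. A stdlib sketch of the merge property (illustrative only, not whylogs code):

```python
# Each batch is summarized as (count, total, min, max); merging two
# summaries is exact, unlike combining two random samples of raw rows.
def summarize(values):
    return (len(values), sum(values), min(values), max(values))


def merge(a, b):
    return (a[0] + b[0], a[1] + b[1], min(a[2], b[2]), max(a[3], b[3]))


batch1 = [1.0, 5.0, 2.0]
batch2 = [7.0, 3.0]

merged = merge(summarize(batch1), summarize(batch2))
whole = summarize(batch1 + batch2)
print(merged == whole)  # the merged profiles equal profiling all data at once
```

This mergeability is what lets profiles be collected independently on distributed workers and combined later without any loss of accuracy.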

WhyLabs: The AI Observability Platform

Although companies across industries are adopting AI applications to improve products and stay competitive, very few have seen a return on their investments. That's because AI operations are expensive...

Introducing WhyLabs, a Leap Forward in AI Reliability

Today, we are excited to announce WhyLabs, a company that empowers AI practitioners to reap the benefits of AI without the spectacular failures that so often make the news.

whylogs: Embrace Data Logging Across Your ML Systems

Fire up your MLOps with a scalable, lightweight, open source data logging library. Co-author: Bernease Herman. We are thrilled to announce the open-source package whylogs. It enables data logging for any ML/AI pipeline in a few lines of code. Data logging is a critic...

Run AI With Certainty

Book a demo