
AI Observability for All

Everyone should have access to the tools and technologies that enable MLOps best practices. Machine learning has become a key resource for companies of all shapes and sizes, from small startups to large enterprises. At WhyLabs, we’re committed to ensuring that every business can run its ML models with the certainty that those models will continue to perform.

That’s why we’re excited to announce our new Starter edition: a free tier of our model monitoring solution that allows users to access all of the features of the WhyLabs AI observability platform. It is entirely self-service, meaning that users can sign up for an account and get started right away, without having to talk to a salesperson or enter credit card information.

Monitoring and observability are key to ensuring that a model stays healthy and performant in production. Without the ability to inspect the performance of a model, data scientists and machine learning engineers are flying blind. Monitoring eliminates costly model failures, reduces time to resolution for data quality bugs, and allows data scientists and machine learning engineers to be proactive about responding to model performance degradation. By using a trustworthy monitoring tool, ML practitioners can spend their time building, deploying, and improving models.

With the release of AI Observatory, our fully automated SaaS platform, AI practitioners are able to monitor all of their ML models, regardless of scale, with zero configuration necessary. What’s more, they can do it in a privacy-preserving fashion thanks to the open source whylogs library. whylogs is the only library that enables logging, testing, and monitoring of an ML application without raw data ever leaving the user’s environment.
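To make the idea concrete, here is a minimal sketch in plain Python (not the whylogs API) of what profiling looks like: a column of raw values is reduced to a handful of summary statistics, and only that summary would ever leave the environment.

```python
from collections import Counter

def profile_column(values):
    """Summarize a column into lightweight statistics -- the only
    artifact that would be uploaded; the raw values stay local."""
    finite = [v for v in values if v is not None]
    counts = Counter(finite)
    return {
        "count": len(values),
        "missing": len(values) - len(finite),
        "min": min(finite) if finite else None,
        "max": max(finite) if finite else None,
        "mean": sum(finite) / len(finite) if finite else None,
        "cardinality": len(counts),  # distinct non-missing values
    }

# The profile dict, not the raw list, is what gets sent to the platform.
profile = profile_column([3.2, 4.1, None, 3.9, 4.1])
```

In whylogs itself the statistics are approximate sketches rather than exact dictionaries, which is what keeps profiles small and mergeable at scale.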

Zero configuration

ML models often have dozens, and occasionally thousands, of features, and nobody wants to configure monitoring for each feature by hand. WhyLabs gives users a zero-configuration monitoring capability that reduces onboarding time and minimizes maintenance. To enable monitoring across all key statistics for all model input features, you simply choose between two smart baselines and let WhyLabs do the rest. Smart baselines let you monitor either against a trailing window (i.e., the past X days) or against constraints generated from training data. For a model with 100 features, WhyLabs automatically configures 400 monitors: drift detection, missing values, schema tracking, and cardinality for each feature.
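As an illustration of the trailing-window baseline, the check below flags a daily statistic (say, a feature’s mean) that moves too far from the average of the preceding days. This is a sketch of the concept only; the window size, tolerance, and function name are illustrative, not WhyLabs’ actual detector.

```python
def trailing_window_alert(history, today, window=7, tolerance=0.25):
    """Return True when `today` deviates from the trailing-window
    baseline by more than `tolerance` (relative). `history` holds the
    daily values of one monitored statistic, oldest first."""
    recent = history[-window:]
    baseline = sum(recent) / len(recent)
    return abs(today - baseline) > tolerance * abs(baseline)

daily_means = [10.0, 10.5, 9.8, 10.2, 10.1, 9.9, 10.3]
trailing_window_alert(daily_means, today=10.0)   # stable day: no alert
trailing_window_alert(daily_means, today=14.0)   # sudden jump: alert
```

The training-data baseline works the same way, except the reference statistics come from a fixed training profile instead of a rolling window.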

Highly scalable

One of the key advantages of the WhyLabs AI Observability Platform is its scalability. With WhyLabs’ unique approach to ML monitoring, users can monitor hundreds of models with thousands of features each, even if the models are making millions of predictions per hour. This is possible because feature data and model performance data get profiled on the spot. There is no need to centralize the data in storage for post-processing, and no need for sampling. The WhyLabs approach captures 100% of the data and results in 2-10x more accurate data drift monitors. This approach allows WhyLabs to serve customers with real-time ML systems, streaming pipelines, and edge deployments. Scalability doesn’t end there: because the WhyLabs platform is a SaaS solution, users don’t need to worry about managing or scaling infrastructure.
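The reason profiling on the spot scales is that the resulting statistics are mergeable: profiles computed on separate workers, streaming shards, or edge devices can be combined afterwards without ever revisiting the raw rows. A toy sketch of the merge step (the field names are illustrative, not the whylogs profile format):

```python
def merge_profiles(a, b):
    """Merge two column profiles produced on separate workers.
    Each statistic here is mergeable, so distributed or streaming
    pipelines never need to centralize raw rows."""
    return {
        "count": a["count"] + b["count"],
        "min": min(a["min"], b["min"]),
        "max": max(a["max"], b["max"]),
        "sum": a["sum"] + b["sum"],
    }

p1 = {"count": 3, "min": 1.0, "max": 5.0, "sum": 9.0}   # worker 1
p2 = {"count": 2, "min": 0.5, "max": 4.0, "sum": 4.5}   # worker 2
merged = merge_profiles(p1, p2)
mean = merged["sum"] / merged["count"]  # global mean from partial sums
```

Exact statistics like count, min, max, and sum merge trivially; whylogs extends the same property to approximate sketches for quantiles and cardinality.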

Privacy preserving

Raw customer data never has to be transferred outside the customer’s perimeter; it doesn’t even need to leave the existing model training or inference environment. WhyLabs is unique in its ability to make this guarantee because of the architecture of the AI Observatory. Unlike other solutions that require passing raw data to closed-source proprietary software, our solution relies on the open source whylogs library to generate statistical profiles of the data and communicates only those profiles to the WhyLabs platform. The raw data is never sent to the platform. This approach allows WhyLabs to serve fintech and healthcare organizations with the strictest data governance requirements.

But don’t take our word for it! Try out the always-free, fully self-serve Starter edition for yourself.

