
AI Observability for All

Everyone should have access to the tools and technologies that enable MLOps best practices. Machine learning has become a key resource for companies of all shapes and sizes, from small startups to large enterprises. At WhyLabs, we're committed to ensuring that every business can run its ML models with confidence that those models will continue to perform.

That’s why we’re excited to announce our new Starter edition: a free tier of our model monitoring solution that allows users to access all of the features of the WhyLabs AI observability platform. It is entirely self-service, meaning that users can sign up for an account and get started right away, without having to talk to a salesperson or enter credit card information.

Monitoring and observability are key to keeping a model healthy and performant in production. Without the ability to inspect a model's performance, data scientists and machine learning engineers are flying blind. Monitoring helps prevent costly model failures, reduces time to resolution for data quality bugs, and lets data scientists and machine learning engineers respond proactively to model performance degradation. With a trustworthy monitoring tool in place, ML practitioners can spend their time building, deploying, and improving models.

With the release of AI Observatory, our fully automated SaaS platform, AI practitioners can monitor all of their ML models, regardless of scale, with zero configuration necessary. What's more, they can do it in a privacy-preserving fashion thanks to the open source whylogs library. whylogs is the only library that enables logging, testing, and monitoring of an ML application without raw data ever leaving the user's environment.
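The core idea can be sketched in plain Python. This is a conceptual illustration only, not the whylogs API: the point is that only summary statistics, never raw rows, ever leave the environment.

```python
from math import inf

def profile_feature(values):
    """Summarize one feature into statistics that are safe to share.

    Only aggregates (counts, min, max, mean) are produced; the raw
    values themselves are discarded after profiling.
    """
    count, missing = 0, 0
    lo, hi, total = inf, -inf, 0.0
    for v in values:
        if v is None:
            missing += 1
            continue
        count += 1
        lo, hi = min(lo, v), max(hi, v)
        total += v
    return {
        "count": count,
        "missing": missing,
        "min": lo,
        "max": hi,
        "mean": total / count if count else None,
    }

# Profile locally; only this small dict of statistics would be
# transmitted to a monitoring platform, never the data itself.
stats = profile_feature([3.0, None, 5.0, 4.0])
print(stats)  # {'count': 3, 'missing': 1, 'min': 3.0, 'max': 5.0, 'mean': 4.0}
```

The real whylogs library captures far richer statistics (distributions, frequent items, cardinality sketches), but the privacy property is the same: the profile is an aggregate, so raw data stays put.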

Zero configuration

ML models often have dozens, sometimes thousands, of features, and nobody wants to configure monitoring for each feature by hand. WhyLabs provides zero-configuration monitoring that reduces onboarding time and minimizes maintenance. To enable monitoring across all key statistics for every model input feature, you simply choose between two smart baselines and let WhyLabs do the rest: monitor against a trailing window (i.e., the past X days) or against constraints generated from training data. For a model with 100 features, WhyLabs automatically configures 400 monitors, covering drift detection, missing values, schema tracking, and cardinality for each feature.
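The arithmetic behind that number is simple: four monitor types per feature. A hypothetical sketch (the names and structure here are illustrative; WhyLabs configures these monitors server-side):

```python
# The four default monitor types applied to every feature.
MONITOR_TYPES = ["drift", "missing_values", "schema", "cardinality"]

def default_monitors(features, baseline="trailing_7_days"):
    """Generate one monitor per (feature, monitor type) pair.

    `baseline` stands in for the smart-baseline choice: a trailing
    window or constraints learned from training data.
    """
    return [
        {"feature": f, "type": t, "baseline": baseline}
        for f in features
        for t in MONITOR_TYPES
    ]

features = [f"feature_{i}" for i in range(100)]
monitors = default_monitors(features)
print(len(monitors))  # 400: four monitors for each of the 100 features
```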

Highly scalable

One of the key advantages of the WhyLabs AI Observability Platform is its scalability. With WhyLabs' unique approach to ML monitoring, users can monitor hundreds of models with thousands of features each, even when those models make millions of predictions per hour. This is possible because feature data and model performance data are profiled on the spot: there is no need to centralize the data in storage for post-processing and no need for sampling. Because it captures 100% of the data, this approach yields 2-10x more accurate data drift monitors, and it allows WhyLabs to serve customers with real-time ML systems, streaming pipelines, and edge deployments. Scalability doesn't end there: since the WhyLabs platform is a SaaS solution, users don't need to worry about managing or scaling infrastructure.
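Profiling "on the spot" works because profiles are mergeable: each batch or stream partition can be summarized independently, and the summaries combine into exactly the statistics of the full dataset. A stdlib-only sketch of the concept (not the whylogs implementation):

```python
def profile_batch(values):
    """Profile one batch: keep only aggregates, never the rows."""
    return {"count": len(values), "min": min(values),
            "max": max(values), "sum": sum(values)}

def merge(p1, p2):
    """Merge two profiles into the profile of the combined data.

    Because these statistics compose, no batch ever needs to be
    centralized, re-read, or sampled.
    """
    return {"count": p1["count"] + p2["count"],
            "min": min(p1["min"], p2["min"]),
            "max": max(p1["max"], p2["max"]),
            "sum": p1["sum"] + p2["sum"]}

# Two batches profiled independently (e.g., on different workers)...
a = profile_batch([1.0, 2.0, 3.0])
b = profile_batch([4.0, 5.0])
# ...merge to exactly the stats of the full dataset.
combined = merge(a, b)
print(combined)  # {'count': 5, 'min': 1.0, 'max': 5.0, 'sum': 15.0}
```

This mergeability is what lets the same approach cover batch pipelines, streaming systems, and edge deployments with a single workflow.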

Privacy preserving

Raw customer data never has to be transferred outside the customer's perimeter; it doesn't even need to leave the existing model training or inference environment. WhyLabs is unique in its ability to make this guarantee because of the architecture of the AI Observatory. Unlike other solutions that require passing raw data to closed-source proprietary software, our solution relies on the open source whylogs library to generate statistical profiles of the data and communicates only those profiles to the WhyLabs platform; the raw data itself is never sent. This approach allows WhyLabs to serve fintech and healthcare customers with the strictest data governance requirements.

But don’t take our word for it! Try out the always-free, fully self-serve Starter edition for yourself.

