AI Observability for All
- AI Observability
- News
- WhyLabs
Jan 4, 2022
Everyone should have access to the tools and technologies that enable MLOps best practices. Machine learning has become a key resource for companies of all shapes and sizes, from small startups to large enterprises. At WhyLabs, we’re committed to ensuring that every business can run its ML models with the certainty that those models will continue to perform.
That’s why we’re excited to announce our new Starter edition: a free tier of our model monitoring solution that allows users to access all of the features of the WhyLabs AI observability platform. It is entirely self-service, meaning that users can sign up for an account and get started right away, without having to talk to a salesperson or enter credit card information.
Monitoring and observability are key to ensuring that a model stays healthy and performant in production. Without the ability to inspect the performance of a model, data scientists and machine learning engineers are flying blind. Monitoring eliminates costly model failures, reduces time to resolution for data quality bugs, and allows data scientists and machine learning engineers to be proactive about responding to model performance degradation. By using a trustworthy monitoring tool, ML practitioners can spend their time building, deploying, and improving models.
With the release of the AI Observatory, our fully automated SaaS platform, AI practitioners can monitor all of their ML models, regardless of scale, with zero configuration necessary. What’s more, they can do it in a privacy-preserving fashion thanks to the open source whylogs library. whylogs is the only library that enables logging, testing, and monitoring of an ML application without the need for raw data to leave the user’s environment.
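To make that concrete, here is a minimal sketch of local profiling with whylogs. It assumes a recent whylogs (v1) Python API and a small, made-up pandas DataFrame; the exact interface available when this post was written may have differed.

```python
import pandas as pd
import whylogs as why  # pip install whylogs

# A small, made-up batch of model inputs; in practice this would be a
# training set or a window of production inference data.
df = pd.DataFrame({
    "age": [34, 52, 29, 41],
    "income": [48_000, 81_500, 39_200, 63_000],
    "segment": ["A", "B", "A", "C"],
})

# whylogs builds a compact statistical profile (counts, distributions,
# cardinality, missing values) without retaining the raw rows.
results = why.log(df)
profile_view = results.view()

# Inspect the per-column summary statistics locally.
print(profile_view.to_pandas())
```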
Zero configuration
ML models often have dozens, sometimes thousands, of features, and nobody wants to configure monitoring for each feature by hand. WhyLabs gives users a zero-configuration monitoring capability that reduces onboarding time and minimizes maintenance. To enable monitoring across all key statistics for all model input features, you simply choose between two smart baselines and let WhyLabs do the rest. Smart baselines allow you to either monitor against a trailing window (i.e. the past X days) or against constraints generated from training data. For a model with 100 features, WhyLabs would automatically configure 400 monitors: drift detection, missing values, schema tracking, and cardinality for each feature.
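As a rough illustration of the two baseline choices (this is conceptual, not the WhyLabs configuration API), the sketch below checks one hypothetical statistic, a feature’s missing-value ratio, against both a trailing-window baseline and a baseline derived from training data. All names, values, and the tolerance threshold are illustrative assumptions.

```python
from statistics import mean

# Hypothetical per-day missing-value ratios for one feature, as could be read
# off daily whylogs profiles. The values are illustrative only.
daily_missing_ratio = {
    "2021-12-30": 0.01,
    "2021-12-31": 0.02,
    "2022-01-01": 0.01,
    "2022-01-02": 0.02,
    "2022-01-03": 0.15,  # today's batch
}
training_missing_ratio = 0.015  # constraint generated from training data

def drifted(today: float, baseline: float, tolerance: float = 0.05) -> bool:
    """Flag the batch if today's statistic deviates from the baseline by more than `tolerance`."""
    return abs(today - baseline) > tolerance

days = sorted(daily_missing_ratio)
today = daily_missing_ratio[days[-1]]

# Baseline option 1: trailing window (the past X days, here the 4 prior days).
trailing_baseline = mean(daily_missing_ratio[d] for d in days[:-1])

# Baseline option 2: static reference generated from training data.
static_baseline = training_missing_ratio

print("trailing-window alert:", drifted(today, trailing_baseline))
print("training-reference alert:", drifted(today, static_baseline))
```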
Highly scalable
One of the key advantages of the WhyLabs AI Observability Platform is its scalability. With WhyLabs’ unique approach to ML monitoring, users can monitor hundreds of models with thousands of features each, even if the models are making millions of predictions per hour. This is possible because feature data and model performance data are profiled on the spot. There is no need to centralize the data in storage for post-processing and no need for sampling. The WhyLabs approach captures 100% of the data and results in 2-10x more accurate data drift monitors. It also allows WhyLabs to serve customers with real-time ML systems, streaming pipelines, and edge deployments. And scalability doesn’t end there: because the WhyLabs platform is a SaaS solution, users don’t need to worry about managing or scaling infrastructure.
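The scalability claim rests on profiles being small, mergeable summaries. Below is a minimal sketch, again assuming a recent whylogs (v1) API and made-up data: two batches are profiled independently (for example, on separate workers or for separate hours of a stream) and their profiles are merged without ever co-locating the raw rows.

```python
import pandas as pd
import whylogs as why

# Two made-up batches of inference data, profiled independently, e.g. on two
# workers of a streaming pipeline.
batch_a = pd.DataFrame({"amount": [12.5, 99.0, 7.2], "country": ["US", "DE", "US"]})
batch_b = pd.DataFrame({"amount": [55.1, 3.3], "country": ["FR", "US"]})

view_a = why.log(batch_a).view()
view_b = why.log(batch_b).view()

# Profiles are fixed-size statistical sketches, so merging them is cheap and
# never requires access to the underlying rows.
merged = view_a.merge(view_b)
print(merged.to_pandas())
```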
Privacy-preserving
Raw customer data never has to be transferred outside of the customer’s perimeter; it doesn’t even need to leave the existing model training or inference environment. WhyLabs is unique in its ability to make this guarantee because of the architecture of the AI Observatory. Unlike other solutions that require passing raw data to closed-source proprietary software, our solution relies on the open source whylogs library to generate statistical profiles of the data and only communicates those statistical profiles to the WhyLabs platform. The raw data is never sent to the platform. This approach allows WhyLabs to serve fintech and healthcare organizations with the strictest data governance requirements.
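For completeness, here is a hedged sketch of how a profile could be shipped to the WhyLabs platform, based on a recent version of the whylogs WhyLabs writer. The org and model identifiers are placeholders, and the environment-variable names may differ between whylogs versions; the point is that only the statistical profile is uploaded.

```python
import os
import pandas as pd
import whylogs as why

# Placeholder credentials and identifiers (assumptions, not real values).
os.environ["WHYLABS_DEFAULT_ORG_ID"] = "org-0"
os.environ["WHYLABS_DEFAULT_DATASET_ID"] = "model-1"
os.environ["WHYLABS_API_KEY"] = "replace-with-an-api-key"

df = pd.DataFrame({"age": [34, 52, 29], "approved": [1, 0, 1]})

# Only the statistical profile is uploaded; the raw rows in `df` stay in this
# environment.
results = why.log(df)
results.writer("whylabs").write()
```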
But don’t take our word for it! Try out the always-free, fully self-serve Starter edition for yourself.