The WhyLabs Blog

Our ideas and thoughts on how to run AI with certainty

FEATURED POST

The Glassdoor team describes their integration latency challenges and how they decreased latency overhead and improved data monitoring with WhyLabs.

OTHER POSTS

Understanding and Monitoring Embeddings in Amazon SageMaker with WhyLabs

WhyLabs and Amazon Web Services (AWS) explore the various ways embeddings are used, the issues that can impact your ML models, and how to identify those issues and set up monitors to prevent them in the future!

Data Drift Monitoring and Its Importance in MLOps

It's important to continuously monitor and manage ML models to ensure they keep performing well in production. We explore the role of data drift management and why it's crucial in your MLOps pipeline.

Ensuring AI Success in Healthcare: The Vital Role of ML Monitoring

Discover how ML monitoring plays a crucial role in the Healthcare industry to ensure the reliability, compliance, and overall safety of AI-driven systems.

WhyLabs Recognized by CB Insights GenAI 50 among the Most Innovative Generative AI Startups

WhyLabs has been named to CB Insights’ first annual GenAI 50 list as one of the world’s top 50 most innovative companies developing generative AI applications and infrastructure across industries.

Hugging Face and LangKit: Your Solution for LLM Observability

See how easy it is to generate out-of-the-box text metrics for Hugging Face LLMs and monitor them in WhyLabs to identify how model performance and user interaction are changing over time.
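
For a taste of the workflow the post covers, here is a minimal sketch of generating LangKit's out-of-the-box text metrics and profiling them with whylogs (the prompt/response strings are illustrative, and the exact API may vary by LangKit version):

```python
import whylogs as why
from langkit import llm_metrics  # text quality, sentiment, and other LLM metrics

# Build a whylogs schema that computes LangKit's default text metrics.
schema = llm_metrics.init()

# Profile a single prompt/response pair; in production you would log each interaction.
profile = why.log(
    {"prompt": "What is data drift?", "response": "Data drift is a change in ..."},
    schema=schema,
)
```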

7 Ways to Monitor Large Language Model Behavior

Discover seven ways to track and monitor Large Language Model behavior using metrics for ChatGPT’s responses for a fixed set of 200 prompts across 35 days.

Safeguarding and Monitoring Large Language Model (LLM) Applications

We explore the concept of observability and validation in the context of language models, and demonstrate how to effectively safeguard them using guardrails.

Robust & Responsible AI Newsletter - Issue #6

A quarterly roundup of the hottest LLM, ML and Data-Centric AI news, including industry highlights, what’s brewing at WhyLabs, and more.

Monitoring LLM Performance with LangChain and LangKit

In this blog post, we dive into the significance of monitoring Large Language Models (LLMs) and show how to gain insights and effectively monitor a LangChain application with LangKit and WhyLabs.

BYOF: Bring Your Own Functions - Announcing UDFs in whylogs

With the release of whylogs 1.2.0, UDFs are available out-of-the-box. UDFs are the foundation for monitoring complex data, allowing you to craft custom metrics that fit your unique business or research objectives.
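
As a rough sketch of what a UDF looks like in whylogs 1.2+ (the column and metric names here are hypothetical, and the UDF module is experimental, so import paths may shift between versions):

```python
import pandas as pd
import whylogs as why
from whylogs.experimental.core.udf_schema import register_dataset_udf, udf_schema

# Hypothetical custom metric: character length of a free-text column.
@register_dataset_udf(["review"], "review.char_length")
def char_length(df):
    return df["review"].str.len()

df = pd.DataFrame({"review": ["great product", "arrived damaged"]})

# udf_schema() picks up every registered UDF; the resulting profile covers
# the raw columns plus the UDF outputs.
results = why.log(df, schema=udf_schema())
```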

Best Practices for Monitoring Large Language Models

With the growing adoption of LLMs to automate and streamline NLP operations, it's crucial to establish effective monitoring practices that can detect and prevent issues.

Production-Ready Models with Databricks and WhyLabs

Databricks and WhyLabs partner to enable a unique integration that makes it possible to compute all key telemetry data necessary for AI monitoring directly in Apache Spark.

WhyLabs Recognized as a Leading AI Company on CB Insights' 2023 AI100 List

WhyLabs was included on the highly regarded CB Insights AI100 List of Most Innovative AI Companies Worldwide for 2023.

LangKit: Making Large Language Models Safe and Responsible

With LangKit, you can monitor LLM behavior and performance with a data-centric approach to LLMOps. Identify and mitigate malicious prompts, toxicity, problematic topics, hallucinations, and jailbreak attempts for reliable, safe, and effective LLMs.

Data Validation at Scale – Detecting and Responding to Data Misbehavior

In this tutorial, we’ll introduce the concept of data logging and discuss how to validate data at scale by creating metric constraints and generating reports based on the data’s statistical profiles using whylogs.
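
A minimal sketch of the constraint pattern the tutorial builds on, assuming a numeric "age" column (column names and thresholds are illustrative):

```python
import pandas as pd
import whylogs as why
from whylogs.core.constraints import ConstraintsBuilder
from whylogs.core.constraints.factories import greater_than_number, smaller_than_number

df = pd.DataFrame({"age": [25, 41, 37]})
profile_view = why.log(df).view()  # statistical profile of the dataset

# Declare expectations against the profile, not the raw rows.
builder = ConstraintsBuilder(profile_view)
builder.add_constraint(greater_than_number(column_name="age", number=0))
builder.add_constraint(smaller_than_number(column_name="age", number=120))
constraints = builder.build()

print(constraints.validate())                     # True only if every constraint passes
print(constraints.generate_constraints_report())  # per-constraint pass/fail details
```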

Mind Your Models: 5 Ways to Implement ML Monitoring in Production

We’ve outlined five easy ways to monitor your ML models in production and keep them robust and responsible by tracking concept drift, data drift, data quality, AI explainability and more.

Get Early Access to the First Purpose-Built Monitoring Solution for LLMs

We’re excited to announce our private beta release of LangKit, the first purpose-built large language model monitoring solution! Join the responsible LLM revolution by signing up for early access.

Simplifying ML Deployment: A Conversation with BentoML's Founder & CEO Chaoyu Yang

A summary of the live interview with Chaoyu Yang, Founder & CEO at BentoML, on putting machine learning models in production and BentoML's role in simplifying deployment.

Data Drift vs. Concept Drift and Why Monitoring for Them is Important

Data drift and concept drift are two common challenges that can impact ML models in production. In this blog, we'll explore the differences between these two types of drift and why monitoring for them is crucial.

Robust & Responsible AI Newsletter - Issue #5

Every quarter we send out a roundup of the hottest MLOps and Data-Centric AI news including industry highlights, what’s brewing at WhyLabs, and more.

Detecting Financial Fraud in Real-Time: A Guide to ML Monitoring

Fraud is a significant challenge for financial institutions and businesses. As fraudsters constantly adapt their tactics, it’s essential to implement a robust ML monitoring system to ensure that models effectively detect fraud and minimize false positives.

How to Troubleshoot Embeddings Without Eye-balling t-SNE or UMAP Plots

WhyLabs' scalable approach to monitoring high dimensional embeddings data means you don’t have to eye-ball pretty UMAP plots to troubleshoot embeddings!

Achieving Ethical AI with Model Performance Tracing and ML Explainability

With Model Performance Tracing and ML Explainability, we’ve accelerated our customers’ journey toward achieving the three goals of ethical AI - fairness, accountability and transparency.

Detecting and Fixing Data Drift in Computer Vision

In this tutorial, Magdalena Konkiewicz from Toloka focuses on the practical side of detecting and fixing data drift, using a computer vision example.

BigQuery Data Monitoring with WhyLabs

We’re excited to announce the release of a no-code solution for data monitoring in Google BigQuery, making it simple to monitor your data quality without writing a single line of code.

Robust & Responsible AI Newsletter - Issue #4

Every quarter we send out a roundup of the hottest MLOps and Data-Centric AI news including industry highlights, what’s brewing at WhyLabs, and more.

WhyLabs Private Beta: Real-time Data Monitoring on Prem

We’re excited to announce our Private Beta release of an extension service for the Profile Store, enabling production use cases of whylogs on customers' premises.

Understanding Kolmogorov-Smirnov (KS) Tests for Data Drift on Profiled Data

We experiment with statistical tests, Kolmogorov-Smirnov (KS) specifically, applied to full datasets and dataset profiles and compare the results.
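
For reference, the classic two-sample KS test on raw data looks like this with SciPy (the drift in the "current" sample is simulated):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # e.g., training data
current = rng.normal(loc=0.3, scale=1.0, size=5_000)    # production data with a shifted mean

statistic, p_value = stats.ks_2samp(reference, current)
if p_value < 0.05:
    print(f"Drift suspected (KS statistic={statistic:.3f}, p={p_value:.4f})")
```

The experiments in the post compare this full-dataset version against the same test applied to whylogs profiles, which approximate the distributions from sketches rather than raw values.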

Re-imagine Data Monitoring with whylogs and Apache Spark

An overview of how the whylogs integration with Apache Spark achieves large scale data profiling, and how users can apply this integration into existing data and ML pipelines.
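
A minimal sketch of the integration pattern, assuming a Spark DataFrame as input (the dataset path is hypothetical, and the pyspark module is experimental, so the import path may vary by whylogs version):

```python
from pyspark.sql import SparkSession
from whylogs.api.pyspark.experimental import collect_dataset_profile_view

spark = SparkSession.builder.appName("whylogs-profiling").getOrCreate()
df = spark.read.parquet("s3://my-bucket/events/")  # hypothetical dataset

# Profiling runs as a distributed pass over the executors; only the compact
# statistical profile is collected back to the driver.
profile_view = collect_dataset_profile_view(input_df=df)
print(profile_view.to_pandas())  # per-column metrics
```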

ML Monitoring in Under 5 Minutes

A quick guide to using whylogs and WhyLabs to monitor common issues with your ML models to surface data drift, concept drift, data quality, and performance issues.

AIShield and WhyLabs: Threat Detection and Monitoring for AI

The seamless integration of AIShield’s security insights into the WhyLabs AI observability platform delivers comprehensive insights into ML workloads and brings security hardening to AI-powered enterprises.

Large Scale Data Profiling with whylogs and Fugue on Spark, Ray or Dask

Profiling large-scale data for use cases such as anomaly detection, drift detection, and data validation with Fugue on Spark, Ray or Dask.

Monitoring Image Data with whylogs v1

When operating computer vision systems, data quality and data drift issues always pose the risk of model performance degradation. WhyLabs provides a simple yet highly customizable solution for maintaining observability into your data, so you can detect issues and take action sooner.

WhyLabs Private Beta: Real-time, No-code, Cloud Storage Data Profiling

We’re excited to announce our Private Beta release for a no-code integration option for WhyLabs, allowing users to bypass the need to integrate whylogs into their data pipeline.

Data and ML Monitoring is Easier with whylogs v1.1

The release of whylogs v1.1 brings many features to the whylogs data logging API, making it even easier to monitor your data and ML models!

Model Monitoring for Financial Fraud Classification

Model monitoring is helping the financial services industry avoid huge losses caused by performance degradation in their fraud classification models.

Robust & Responsible AI Newsletter - Issue #3

Every quarter we send out a roundup of the hottest MLOps and Data-Centric AI news including industry highlights, what’s brewing at WhyLabs, and more.

Data Quality Monitoring in Apache Airflow with whylogs

To make the most of whylogs within your existing Apache Airflow pipelines, we’ve created the whylogs Airflow provider. Using an example, we’ll show how you can use whylogs and Airflow to make your workflow more responsible, scalable, and efficient.

Data Logging with whylogs: Profiling for Efficiency and Speed

Rather than sampling data, whylogs captures statistical snapshots of the data, making data logging fast and efficient even as your datasets scale to larger sizes.
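
The core logging call is a one-liner; a minimal sketch (the CSV path is a placeholder):

```python
import pandas as pd
import whylogs as why

df = pd.read_csv("transactions.csv")  # placeholder dataset

# why.log computes a statistical profile (counts, distributions, cardinality)
# in a single pass; the profile is tiny relative to the raw data.
profile = why.log(df)
print(profile.view().to_pandas())  # per-column summary of the collected metrics
```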

Data Quality Monitoring for Kafka, Beyond Schema Validation

Data quality mapped to a schema registry or data type validation is a good start, but there are a few things most data application owners don’t think about. We explore error scenarios beyond schema validation and how to mitigate them.

Data + Model Monitoring with WhyLabs: simple, customizable, actionable

The new monitoring system maximizes the helpfulness of alerts and minimizes alert fatigue, so users can focus on improving their models instead of worrying about them in production...

A Solution for Monitoring Image Data

A breakdown of how to monitor unstructured data such as images, the types of problems that threaten computer vision systems, and a solution for these challenges.

How to Validate Data Quality for ML Monitoring

Data quality is one of the most important considerations for machine learning applications—and it's one of the most frequently overlooked. We explore why it’s an essential step in the MLOps process and how to check your data quality with whylogs.

Small Changes for Big SQLite Performance Increases

A behind-the-scenes look at how the WhyLabs engineering team improved SQLite performance to make monitoring data and machine learning models faster and easier for whylogs users.

5 Ways to Inspect Data & Models with whylogs Profile Visualizer

Understand what’s happening in your data, identify and correct issues quickly, and maintain the quality and relevance of high-performing data and ML models with whylogs profile visualizer.
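
A quick sketch of the visualizer in a notebook, assuming you already have target and reference profile views from why.log(...).view() (variable and feature names are illustrative):

```python
from whylogs.viz import NotebookProfileVisualizer

viz = NotebookProfileVisualizer()
viz.set_profiles(target_profile_view=target_view, reference_profile_view=reference_view)

viz.summary_drift_report()                   # side-by-side drift summary of both profiles
viz.double_histogram(feature_name="amount")  # overlaid distributions for one feature
```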

Visually Inspecting Data Profiles for Data Distribution Shifts

This short tutorial shows how to inspect data for distribution shift issues by comparing distribution metrics and applying statistical tests to calculate drift values.

Robust & Responsible AI Newsletter - Issue #2

Your roundup of the hottest in MLOps and Data-Centric AI including industry highlights, what’s brewing at WhyLabs, and more.

Data Logging with whylogs

Users can detect data drift, prevent ML model performance degradation, validate the quality of their data, and more in a single, lightning-fast, easy-to-use package. The v1 release brings a simpler API, new data constraints, new profile visualizations, faster performance, and a usability refresh.

Choosing the Right Data Quality Monitoring Solution

In the second article in this series, we break down what to look for in a data quality monitoring solution, the open-source and SaaS tools available, and how to decide on the best one for your organization.

A Comprehensive Overview Of Data Quality Monitoring

In the first article in this series, we provide a detailed overview of why data quality monitoring is crucial for building successful data and machine learning systems and how to approach it.

Robust & Responsible AI Newsletter - Issue #1

Your roundup of the hottest in MLOps and Data-Centric AI including industry highlights, what’s brewing at WhyLabs, and more.

WhyLabs Now Available in AWS Marketplace

AWS customers worldwide can now quickly deploy the WhyLabs AI Observatory to monitor, understand, and debug their machine learning models deployed in AWS.

Deploying and Monitoring Made Easy with TeachableHub and WhyLabs

Deploying a model into production and maintaining its performance can be harrowing for many Data Scientists, especially without specialized expertise and equipment. Fortunately, TeachableHub and WhyLabs make it easy to get models out of the sandbox and into a production-ready environment.

How Observability Uncovers the Effects of ML Technical Debt

Many teams test their machine learning models offline but conduct little to no online evaluation after initial deployment. These teams are flying blind—running production systems with no insight into their ongoing performance.

Deploy your ML model with UbiOps and monitor it with WhyLabs

Machine learning models can only provide value for a business when they are brought out of the sandbox and into the real world... Fortunately, UbiOps and WhyLabs have partnered together to make deploying and monitoring machine learning models easy.

AI Observability for All

We’re excited to announce our new Starter edition: a free tier of our model monitoring solution that allows users to access all of the features of the WhyLabs AI observability platform. It is entirely self-service, meaning that users can sign up for an account and get started right away.

WhyLabs Achieves SOC 2 Type 2 Certification!

We are very happy to announce that we successfully completed our SOC 2 Type 2 examination with zero exceptions. WhyLabs is committed to ensuring our current and future customers are well informed about the robust capabilities and security of the WhyLabs AI Observatory platform.

Observability in Production: Monitoring Data Drift with WhyLabs and Valohai

What works today might not work tomorrow. And when a model is in real-world use, serving the faulty predictions can lead to catastrophic consequences...

Why You Need ML Monitoring

Machine learning models are increasingly becoming key to businesses of all shapes and sizes, performing myriad functions... If a machine learning model is providing value to a business, it’s essential that the model remains performant.

Data Labeling Meets Data Monitoring with Superb AI and WhyLabs

Data quality is the key to a performant machine learning model. That’s why WhyLabs and Superb AI are on a mission to ensure that data scientists and machine learning engineers have access to tools designed specifically for their needs and workflows.