The WhyLabs Blog
Our ideas and thoughts on how to run AI with certainty
Learn how the NIST AI Risk Management Framework (RMF) guides AI security and governance and discover how WhyLabs guardrails can help implement and manage AI risks effectively.
Rich Young
Dec 10, 2024
- AI risk management
- AI Observability
- AI security
- NIST RMF implementation
- AI compliance
- AI risk mitigation
- AI risk management
- AI Observability
- AI security
- NIST RMF implementation
- AI compliance
- AI risk mitigation
OTHER POSTS
Filter blog posts:
Best Practicies for Monitoring and Securing RAG Systems in Production
Rich Young
| Oct 8, 2024
Retrieval-augmented generation (RAG) systems combine advanced retrieval techniques with large language models (LLMs) to improve the responses they generate...
- Retrival-Augmented Generation (RAG)
- LLM Security
- Generative AI
- ML Monitoring
- LangKit
How to Evaluate and Improve RAG Applications for Safe Production Deployment
Rich Young
| Jul 17, 2024
Learn how to evaluate and improve RAG applications using LangKit and WhyLabs AI Control Center. Develop secure and reliable RAG applications.
- AI Observability
- LLMs
- LLM Security
- LangKit
- RAG
- Open Source
WhyLabs Integrates with NVIDIA NIM to Deliver GenAI Applications with Security and Control
WhyLabs Team
| Jun 2, 2024
With WhyLabs and NVIDIA NIM, enterprises can accelerate GenAI application deployment and help ensure the safety of end-user experiences
WhyLabs has been on a mission to empower enterprises with tools that ensure safe and responsible AI adoption. With its integration with NVIDIA NIM inference microservices, WhyLabs is helping make responsible AI adoption more accessible. Customers can now maintain better security and control of GenAI applications with self-hosted deployment of the most powerfu
- AI Observability
- Generative AI
- Integrations
- LLM Security
- LLMs
- Partnerships
OWASP Top 10 Essential Tips for Securing LLMs: Guide to Improved LLM Safety
Alessya Visnjic
| May 21, 2024
Discover strategies for safeguarding your large language models (LLMs). Learn how to protect your AI technologies effectively based on OWASP's top 10 security tips.
- LLMs
- LLM Security
- Generative AI
7 Ways to Evaluate and Monitor LLMs
WhyLabs Team
| May 13, 2024
Learn about 7 techniques for evaluating & monitoring LLMs, including LLM-as-a-Judge, ML-model-as-a-Judge, and embedding-as-a-source. Improve your understanding of LLMs with these strategies.
- LLMs
- Generative AI
How to Distinguish User Behavior and Data Drift in LLMs
Bernease Herman
| May 7, 2024
Large Language Models (LLMs) rarely provide consistent responses for the same prompts over time. In this blog we’ll demonstrate how identify and monitor data changes using a few common scenarios.
- LLMs
- Generative AI
AI Observability is Dead, Long Live AI Observability! Introducing WhyLabs AI Control Center for Generative and Predictive AI
Alessya Visnjic
| Apr 24, 2024
Today, we release the new WhyLabs AI Control Platform! The new iteration of WhyLabs offers teams real-time control over their AI applications as AI Observability becomes insufficient in the world of generative AI.
- WhyLabs
- News
- Generative AI
Preparing for the EU AI Act: Insights, Impact, and What It Means for You
Alessya Visnjic
| Feb 28, 2024
This article provides a practical understanding of the EU AI Act, analyzes its impact on various stakeholders, discusses the EU and non-EU relevance, advises on compliance, and recommends staying ahead.
- WhyLabs
- News
- AI Observability
- Generative AI
- LangKit
- LLM Security
Step-by-Step Guide to Selecting a Data Quality Monitoring Solution in 2024
WhyLabs Team
| Feb 16, 2024
Learn from our complete guide to choosing a data quality monitoring tool. Learn the dimensions of an ideal data quality monitoring tool, including top 5 the open source and paid options in the market with notes of the key features and costs analysis.
- ML Monitoring
A Comprehensive Overview Of Data Quality Monitoring
WhyLabs Team
| Feb 2, 2024
In the first article in this series, we provide a detailed overview of why data quality monitoring is crucial for building successful data and machine learning systems and how to approach it.
- ML Monitoring
- Data Quality
Best Practices for Monitoring Large Language Models
Kelsey Olmeim
| Jan 15, 2024
Learn key strategies for effectively monitoring LLMs with our comprehensive guide. Understand essential metrics, alert systems, and scalability considerations for ensuring the highest quality and security in your NLP applications.
- LLMs
- Generative AI
- LLM Security
- LangKit
A Guide to Large Language Model Operations (LLMOps)
WhyLabs Team
| Jan 10, 2024
A comprehensive guide to Large Model Operations (LLMOps), covering the management, deployment, and optimization of large language models. Learn how to effectively handle large models and enhance their performance.
- LLMs
- LangKit
- Generative AI
- LLM Security
Data Drift vs. Concept Drift and Why Monitoring for Them is Important
Kelsey Olmeim
| Jan 1, 2024
Learn the critical differences between data drift and concept drift in machine learning models. Learn why monitoring these shifts is vital for maintaining model accuracy and performance.
- ML Monitoring
- Data Quality
Navigating Threats: Detecting LLM Prompt Injections and Jailbreaks
Felipe Adachi
| Dec 19, 2023
We explore how to detect and mitigate large language model (LLM) prompt injection and jailbreak attacks with LangKit, an open-source package for LLM and NLP applications.
- LLMs
- LangKit
- Open Source
- LLM Security
- WhyLabs
- Generative AI
WhyLabs Announces SCA with AWS to Accelerate Responsible Generative AI Adoption
WhyLabs Team
| Nov 14, 2023
WhyLabs announces a Strategic Collaboration Agreement (SCA) with AWS to help enterprises accelerate the development of AI-powered applications.
- WhyLabs
- Partnerships
- News
- Generative AI
Understanding and Mitigating LLM Hallucinations
Felipe Adachi
| Oct 18, 2023
LLMs are known for their ability to generate non-factual or nonsensical statements, more commonly known as “hallucinations.” This blog post will cover the challenges and possible solutions for hallucination detection.
- LLMs
- AI Observability
- LangKit
- LLM Security
- Generative AI
- WhyLabs
Understanding and Monitoring Embeddings in Amazon SageMaker with WhyLabs
Andre Elizondo,
Shun Mao,
James Yi
| Sep 11, 2023
WhyLabs and Amazon Web Services (AWS) explore the various ways embeddings are used, issues that can impact your ML models, how to identify those issues and set up monitors to prevent them in the future!
- WhyLabs
- ML Monitoring
- AI Observability
- Partnerships
- Integrations
Data Drift Monitoring and Its Importance in MLOps
Sage Elliott
| Aug 29, 2023
It's important to continuously monitor and manage ML models to ensure ML model performance. We explore the role of data drift management and why it's crucial in your MLOps pipeline.
- ML Monitoring
- Data Quality
- WhyLabs
- Whylogs
Glassdoor Decreases Latency Overhead and Improves Data Monitoring with WhyLabs
Jamie Broomall,
Lanqi Fei,
Natalia Skaczkowska-Drabczyk
| Aug 17, 2023
The Glassdoor team describes their integration latency challenges and how they were able to decrease latency overhead and improve data monitoring with WhyLabs.
- WhyLabs
- News
- Data Quality
Ensuring AI Success in Healthcare: The Vital Role of ML Monitoring
Kelsey Olmeim
| Aug 10, 2023
Discover how ML monitoring plays a crucial role in the Healthcare industry to ensure the reliability, compliance, and overall safety of AI-driven systems.
- ML Monitoring
WhyLabs Recognized by CB Insights GenAI 50 among the Most Innovative Generative AI Startups
WhyLabs Team
| Aug 8, 2023
WhyLabs has been named on CB Insights’ first annual GenAI 50 list, named as one of the world’s top 50 most innovative companies developing generative AI applications and infrastructure across industries.
- WhyLabs
- News
- Generative AI
- LLM Security
- LangKit
- LLMs
Hugging Face and LangKit: Your Solution for LLM Observability
Sage Elliott
| Jul 26, 2023
See how easy it is to generate out-of-the-box text metrics for Hugging Face LLMs and monitor them in WhyLabs to identify how model performance and user interaction are changing over time.
- LLMs
- Integrations
- LLM Security
- LangKit
- Generative AI
7 Ways to Monitor Large Language Model Behavior
Felipe Adachi
| Jul 20, 2023
Discover seven ways to track and monitor Large Language Model behavior using metrics for ChatGPT’s responses for a fixed set of 200 prompts across 35 days.
- LLMs
- Generative AI
- LangKit
- LLM Security
Safeguarding and Monitoring Large Language Model (LLM) Applications
Felipe Adachi
| Jul 11, 2023
We explore the concept of observability and validation in the context of language models, and demonstrate how to effectively safeguard them using guardrails.
- LLMs
- LangKit
- LLM Security
- WhyLabs
- AI Observability
- Generative AI
Robust & Responsible AI Newsletter - Issue #6
WhyLabs Team
| Jul 10, 2023
A quarterly roundup of the hottest LLM, ML and Data-Centric AI news, including industry highlights, what’s brewing at WhyLabs, and more.
- WhyLabs
Monitoring LLM Performance with LangChain and LangKit
Sage Elliott
| Jul 10, 2023
In this blog post, we dive into the significance of monitoring Large Language Models (LLMs) and show how to gain insights and effectively monitor a LangChain application with LangKit and WhyLabs.
- LLMs
- Integrations
- LangKit
- Open Source
- Generative AI
WhyLabs Weekly: Monitor LangChain LLM Applications
WhyLabs Team
| Jul 7, 2023
Monitor LangChain applications, UDFs in whylogs, best practices for monitoring LLMs, and more!
BYOF: Bring Your Own Functions - Announcing UDFs in whylogs
Andre Elizondo
| Jun 30, 2023
With the release of whylogs 1.2.0, UDFs are available out-of-the-box. UDFs are the foundation for monitoring complex data, allowing you to craft custom metrics that fit your unique business or research objectives.
- Whylogs
- Open Source
- Product Updates
Production-Ready Models with Databricks and WhyLabs
Andre Elizondo,
Alessya Visnjic
| Jun 23, 2023
Databricks and WhyLabs partner to enable a unique integration that makes it possible to compute all key telemetry data necessary for AI monitoring directly in Apache Spark.
- WhyLabs
- Integrations
- AI Observability
- Whylogs
- News
WhyLabs Recognized as a Leading AI Company on CB Insights' 2023 AI100 List
Kelsey Olmeim
| Jun 21, 2023
WhyLabs was included on the highly regarded CB Insights AI100 List of Most Innovative AI Companies Worldwide for 2023.
- WhyLabs
- AI Observability
- News