Monitor your SageMaker model with WhyLabs
- AI Observability
- Integrations
- WhyLabs
- ML Monitoring
Nov 18, 2021
As the real world changes, machine learning models degrade in their ability to accurately represent it, and model performance suffers as a result. That's why it's important for data scientists and machine learning engineers to support models with tools that provide ML monitoring and observability, catching that degradation before it hurts the business. In this blog post, we will dive into the WhyLabs AI Observatory, a data and ML monitoring and observability platform, and show how it complements Amazon SageMaker.
Amazon SageMaker is incredibly powerful for training and deploying machine learning models at scale. WhyLabs allows you to monitor and observe your machine learning model, ensuring that it doesn't suffer from performance degradation and continues to provide value to your business. In this blog post, we're going to demonstrate how to use WhyLabs to identify training-serving skew in a computer vision example for a model trained and deployed with SageMaker. WhyLabs is unique in its ability to monitor computer vision models and image data; the whylogs library is able to extract features and metadata from images, as described in "Detecting Semantic Drift within Image Data". The ability to create profiles based on images means that users can identify differences between training data and serving data and understand whether they need to retrain their models...
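The profile-comparison idea behind training-serving skew detection can be sketched in plain Python. This is an illustration of the concept, not the whylogs API: the feature (mean image brightness), the sample values, and the threshold are all hypothetical, and whylogs profiles capture far richer statistics than a mean and standard deviation.

```python
# Minimal sketch of training-serving skew detection via profile comparison.
# Illustrative only -- whylogs builds much richer statistical profiles.

def profile(values):
    """Summarize a feature as (mean, std): a tiny stand-in for a data profile."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return mean, var ** 0.5

def skew_detected(train, serve, z_threshold=3.0):
    """Flag skew when the serving mean drifts beyond z_threshold training stds."""
    t_mean, t_std = profile(train)
    s_mean, _ = profile(serve)
    return abs(s_mean - t_mean) > z_threshold * t_std

# Hypothetical feature extracted from images: mean brightness per batch.
train_brightness = [0.52, 0.48, 0.50, 0.51, 0.49]
serve_brightness = [0.20, 0.22, 0.19, 0.21, 0.23]  # serving images are much darker
print(skew_detected(train_brightness, serve_brightness))  # True -> retraining candidate
```

In practice the profiles are computed by whylogs at training time and at serving time, and the WhyLabs platform compares them and raises alerts, so no raw data ever needs to leave your environment.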
Continue reading on the AWS Startup Blog website