Running and Monitoring Distributed ML with Ray and whylogs
- open source
- whylogs
- integration
- AI Observability
- Logging
Nov 23, 2021
Running and monitoring distributed ML systems can be challenging. Managing multiple servers, each emitting its own logs, adds significant overhead when scaling up a distributed ML system. Fortunately, Ray makes parallelizing Python processes easy, and the open source whylogs enables users to monitor ML models in production, even if those models are running in a distributed environment.
Ray is an exciting project that allows you to parallelize pretty much anything written in Python. One of the advantages of the whylogs architecture is that it operates on mergeable profiles: profiles can be generated independently across a distributed system and then collected into a single profile downstream for analysis, which makes monitoring distributed workloads straightforward. This post will review some options that Ray users have for integrating whylogs into their architectures as a monitoring solution.
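To make the merge-downstream pattern concrete, here is a minimal sketch of profiling data partitions in parallel with Ray tasks and merging the resulting whylogs profiles afterward. It assumes the whylogs v1 Python API and a synthetic pandas DataFrame; it is an illustration of the general pattern, not the exact integration described in the linked post.

```python
import pandas as pd
import ray
import whylogs as why  # whylogs v1 API

ray.init()

@ray.remote
def profile_partition(df: pd.DataFrame):
    # Each Ray worker profiles its own partition of the data
    # and returns a serializable whylogs profile view.
    return why.log(df).view()

# Hypothetical example data, split into four partitions across workers.
data = pd.DataFrame({"feature": range(1000), "label": [i % 2 for i in range(1000)]})
partitions = [data.iloc[i::4] for i in range(4)]

# Profile all partitions in parallel, then merge the profile views
# into a single profile downstream for analysis.
views = ray.get([profile_partition.remote(p) for p in partitions])
merged = views[0]
for view in views[1:]:
    merged = merged.merge(view)

print(merged.to_pandas())
```

Because whylogs profiles are mergeable, the merge step can happen anywhere downstream, on the driver, in a separate aggregation task, or in a scheduled batch job, without re-reading the raw data.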
Continue reading on the Anyscale Ray Blog