
Running and Monitoring Distributed ML with Ray and whylogs

Running and monitoring distributed ML systems can be challenging. Managing multiple servers, each emitting its own logs, adds significant overhead when scaling up a distributed ML system. Fortunately, Ray makes parallelizing Python processes easy, and the open source whylogs library enables users to monitor ML models in production, even when those models run in a distributed environment.

Ray is an exciting project that allows you to parallelize pretty much anything written in Python. One of the advantages of the whylogs architecture is that it operates on mergeable profiles: each worker can generate its own profile independently, and those profiles can be collected into a single profile downstream for analysis, which makes monitoring distributed systems straightforward. This post reviews some options that Ray users have for integrating whylogs into their architectures as a monitoring solution.
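To make the merge-downstream idea concrete, here is a minimal sketch of profiling data shards in parallel with Ray and combining the results into one profile. It assumes the whylogs v1 Python API; the example DataFrame, column names, and four-way split are placeholders, not part of the original post.

```python
# Minimal sketch: profile shards in parallel Ray tasks, merge the views downstream.
# Assumes whylogs v1 and Ray are installed; the data and split are hypothetical.
import pandas as pd
import ray
import whylogs as why

ray.init()

@ray.remote
def profile_shard(shard: pd.DataFrame):
    # Each Ray task profiles its own shard and returns a serializable profile view.
    return why.log(shard).view()

# Hypothetical example data split across four workers.
df = pd.DataFrame({"feature_a": range(1000), "feature_b": range(1000)})
shards = [df.iloc[i::4] for i in range(4)]

# Profile all shards in parallel, then merge the views into a single profile.
views = ray.get([profile_shard.remote(s) for s in shards])
merged = views[0]
for view in views[1:]:
    merged = merged.merge(view)

print(merged.to_pandas())
```

Because profiles are mergeable, the merge step can happen anywhere downstream, on the driver, in a separate aggregation job, or after writing per-worker profiles to storage.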

Continue reading on the Anyscale Ray Blog
