
Robust & Responsible AI Newsletter - Issue #1

Every quarter we send out a roundup of the hottest MLOps and Data-Centric AI news, including industry highlights, what’s brewing at WhyLabs, and more.

ISSUE: March 2022

🕚 TL;DR

Trying to keep up with MLOps, but only have 10 minutes? Here is your shortlist:

Read: Real-time ML is all the rage! Chip Huyen’s Real-time Machine Learning: Challenges and Solutions outlines what it takes to implement online inference and how to move towards continuous learning. It's a very technical yet accessible guide.

Watch: Enabling monitoring for ML models is on everyone’s roadmap this year. A team from Loka gave a practical talk exploring several monitoring solutions available to practitioners today, including SageMaker and WhyLabs.

Attend: MLOps experts are getting together IRL in Austin for the Data Council 2022 event. If you are attending in person, make sure to meet up with WhyLabs’ Bernease Herman to chat about data-centric AI and all things MLOps.

💡 Open Source Spotlight

There's a lot going on in the world of open source tooling! Here is what's new:

Reproducible ML pipelines on any cloud? Yes please! ZenML is an extensible, open-source MLOps framework for creating production-ready machine learning pipelines. With their latest release, you can enjoy a cloud-agnostic pipeline in no time!

ML logging, monitoring, and unit testing all at once? Yes we can! The latest release of whylogs gives you the power to capture all of the vitals of your data pipeline locally, build constraints for data unit tests, and monitor for data drift, all in a Jupyter notebook.
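For a taste of what that looks like, here is a minimal sketch using the whylogs v1 Python API (the dataframe and column names are made up for illustration):

```python
# Minimal whylogs sketch: profile a dataframe and inspect the captured vitals.
# Assumes `pip install whylogs pandas`; the data is made up for illustration.
import pandas as pd
import whylogs as why

df = pd.DataFrame({"price": [10.5, 12.0, 9.9], "quantity": [1, 3, 2]})

# whylogs records lightweight statistical profiles (counts, types, distributions)
# instead of raw rows, so logging stays cheap even on large datasets.
results = why.log(df)

# Inspect the profile locally, e.g. in a Jupyter notebook.
print(results.view().to_pandas())
```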

ML workflows + Kubernetes giving you headaches? Flyte has been simplifying highly concurrent, scalable, and maintainable ML & data workflows since 2019. Recent notable feature highlights: BigQuery plugin and AWS Batch support.
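If you haven't seen Flyte's programming model before, here is a minimal sketch using flytekit decorators (the task and workflow names are made up for illustration):

```python
# Minimal Flyte sketch: typed tasks composed into a workflow.
# Assumes `pip install flytekit`; the steps are stand-ins for real logic.
from flytekit import task, workflow

@task
def featurize(x: int) -> int:
    # Stand-in for a real feature-engineering step.
    return x * x

@task
def train(feature: int) -> float:
    # Stand-in for a real training step.
    return feature / 2.0

@workflow
def training_wf(x: int = 3) -> float:
    return train(feature=featurize(x=x))

if __name__ == "__main__":
    # Runs locally as plain Python; the same code compiles to a DAG
    # when registered against a Flyte backend on Kubernetes.
    print(training_wf(x=4))
```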

📚 What MLOps experts are reading

Keeping up with the latest on MLOps can be a full-time job. Here are the highlights:

Most ML teams are implementing monitoring right now. In Shreya Shankar’s mini-series on the current state of monitoring, our favorite is Categorizing Post-Deployment Issues, where she breaks down monitoring problems along two axes: statefulness (stateless/stateful) and components (single-component/cross-component).

Data-centric AI and MLOps philosophies are converging. Andrew Ng launched a resource hub focused on data-centric mechanisms across the AI lifecycle. D. Sculley's updated view on the technical debt of data in deployment is a must-read. WhyLabs contributed an article on how observability helps tackle data technical debt.

Responsible AI begins at the design phase! Chip Huyen teaches the fundamentals at Stanford in a not-your-typical undergrad course, bringing in industry leaders to present real-world views. Must-reads: Learnings from Booking.com’s 150 models, Stitch Fix’s ML deployment architecture, and ML telemetry design (by WhyLabs).

Academic research continues to push the state of the art of what it means to monitor ML and data systems. A Stanford team released Mandoline, a method for computing reweighted performance estimates that hold up under distribution shift when labels are unavailable. This approach is similar to what we call “segments” at WhyLabs.
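To make the idea concrete, here is a toy sketch of slice-based importance reweighting in the same spirit. This is our illustrative simplification, not the Mandoline algorithm itself (which handles multiple noisy, correlated slice functions), and the slices and numbers are made up:

```python
# Toy sketch: estimate target-distribution accuracy from labeled source data
# plus unlabeled target data, using a single discrete slice function.
# This simplifies Mandoline's approach; the numbers below are made up.
import numpy as np

def reweighted_accuracy(slice_src, correct_src, slice_tgt):
    """Weight per-slice source accuracy by how often each slice occurs in the target."""
    est = 0.0
    for s in np.unique(slice_tgt):
        p_tgt = np.mean(slice_tgt == s)              # slice frequency in the target
        mask = slice_src == s
        if mask.any():
            est += p_tgt * correct_src[mask].mean()  # per-slice source accuracy
    return est

# Example: the model is weak on slice 1, which is over-represented in the target.
rng = np.random.default_rng(0)
slice_src = rng.choice([0, 1], size=5000, p=[0.8, 0.2])
correct_src = np.where(slice_src == 0,
                       rng.random(5000) < 0.95,     # ~95% accurate on slice 0
                       rng.random(5000) < 0.60)     # ~60% accurate on slice 1

slice_tgt = rng.choice([0, 1], size=5000, p=[0.3, 0.7])  # shifted slice mix

print("naive source accuracy:", correct_src.mean())            # ~0.88
print("reweighted estimate:", reweighted_accuracy(slice_src, correct_src, slice_tgt))  # ~0.70
```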

☕️ What’s brewing at WhyLabs

At WhyLabs, we are focused on making AI observability as easy and intuitive as your coffee machine. Here are our latest releases:

AI Observatory is now on the AWS Marketplace! For those already on AWS, enabling observability for SageMaker models has never been easier. If you are wondering why you'd use the WhyLabs AI Observatory with SageMaker, this AWS blog has answers.

We’re SOC 2 Type 2 certified: Our successful audit completion makes it even easier for our customers to evaluate the WhyLabs solution with their security teams. Here is how we’re going above and beyond to keep data safe.

Can root cause analysis feel good and look beautiful? Check out the latest interactive features inside the AI Observatory profile viewer: compare histogram data across multiple profiles, discover anomalies, and find outliers within distributions. Learn more through our short videos on profile comparisons for continuous features and discrete features.

🎟️ Robust & Responsible AI, at an event near you

If you're looking for high-quality events, we've got you covered. As a perk, you'll always have a friend there: somebody from WhyLabs is either speaking or attending!

Hands-On Data Monitoring Workshop | March 29 | Virtual

DataTalks.Club is organizing a practical workshop focused on monitoring. Danny Leybzon will be walking through monitoring batch Python or Spark data pipelines and Kafka streaming pipelines with whylogs.

MLOps World: Machine Learning in Production | March 30 | Virtual

The inaugural NYC summit for MLOps practitioners with workshops and talks. Alessya Visnjic will be speaking about designing ML telemetry, building monitoring on top of telemetry, and enabling transparency in ML pipelines.

ODSC East | April 19-23 | Boston, MA

A data science conference focused on the latest language and infrastructure advancements. Danny Leybzon is speaking there too, on his favorite topic: fixing ML models. If you are attending in person and want to meet, connect with Danny! Register now, while tickets are 40% off.

Join the Community

Join the Robust & Responsible AI (Rsqrd) Community on Slack to connect with other practitioners, share ideas, and learn about exciting new techniques. Attend the community live chats or check out YouTube to see all the recordings.

If you want to help support whylogs, the open-source standard for data logging, check out our GitHub and give us a star.

📬 Subscribe to the Robust & Responsible AI newsletter to get the latest Data-Centric AI and MLOps news delivered quarterly to your inbox!
