
Robust & Responsible AI Newsletter - Issue #1

Every quarter we send out a roundup of the hottest MLOps and Data-Centric AI news, including industry highlights, what’s brewing at WhyLabs, and more.

ISSUE: March 2022

🕚  TL;DR

Trying to keep up with MLOps, but only have 10 minutes? Here is your shortlist:

Read: Real-time ML is all the rage! Chip Huyen’s Real-time Machine Learning: Challenges and Solutions outlines what it takes to implement online inference and how to move towards continuous learning - a very technical yet accessible guide.

Watch: Enabling monitoring for ML models is on everyone’s roadmap this year. A team from Loka gave a practical talk that explores a number of monitoring solutions that are available to practitioners today, including SageMaker and WhyLabs.

Attend: MLOps experts are getting together IRL in Austin for the Data Council 2022 event. If you are attending in person, make sure to meet up with WhyLabs’ Bernease Herman to chat about data-centric AI and all things MLOps.

💡 Open Source Spotlight

There's a lot going on in the world of open source tooling! Here is what's new:

Reproducible ML pipelines on any cloud? Yes please! ZenML is an extensible, open-source MLOps framework for creating production-ready machine learning pipelines. With their latest release you can enjoy a cloud-agnostic pipeline in no time!

ML logging, monitoring, and unit testing all at once? Yes we can! The latest release of whylogs gives you the power to capture all of the vitals of your data pipeline locally, build constraints for data unit tests, and monitor for data drifts, all in a Jupyter notebook.
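The workflow described above - capture lightweight statistics about a batch of data, then assert constraints on them as data unit tests - can be sketched in plain Python. This is a stdlib-only illustration of the idea, not the whylogs API itself, and the column values are made up:

```python
import statistics

# Hypothetical batch of values from one column of a data pipeline.
batch = [12.0, 9.5, 11.2, None, 10.8, 13.1]

# Capture the batch's "vitals": summary statistics, not raw data.
values = [v for v in batch if v is not None]
vitals = {
    "count": len(batch),
    "null_count": batch.count(None),
    "mean": statistics.fmean(values),
    "min": min(values),
    "max": max(values),
}

# Data unit tests: constraints every batch must satisfy.
assert vitals["null_count"] / vitals["count"] <= 0.2, "too many nulls"
assert 0.0 <= vitals["min"] and vitals["max"] <= 100.0, "value out of range"
```

Because only the summary dictionary needs to be kept per batch, the same pattern scales to drift checks: compare each new batch's vitals against a baseline profile instead of re-reading the raw data.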

ML workflows + Kubernetes giving you headaches? Flyte has been simplifying highly concurrent, scalable, and maintainable ML & data workflows since 2019. Recent notable feature highlights: BigQuery plugin and AWS Batch support.

📚 What MLOps experts are reading

Keeping up with the latest on MLOps can be a full-time job. Here are the highlights:

Most ML teams are implementing monitoring right now. In Shreya Shankar’s mini-series on the current state of monitoring, our favorite is Categorizing Post-Deployment Issues, where she breaks down monitoring problems along two axes: statefulness (stateless/stateful) and components (single component/cross-component).

Data-centric AI and MLOps philosophies are converging. Andrew Ng launched a resource hub focused on data-centric mechanisms across the AI lifecycle. D. Sculley's updated view on the technical debt of data in deployment is a must-read. WhyLabs contributed an article on how observability helps tackle data technical debt.

Responsible AI begins at the design phase! Chip Huyen teaches the fundamentals at Stanford in a not-your-typical undergrad course, bringing industry leaders in to present real-world views. Must-reads: learnings from Booking.com’s 150 models, Stitch Fix’s ML deployment architecture, and ML telemetry design (by WhyLabs).

Academic research continues to push the state of the art in monitoring ML and data systems. The Stanford team released a method called Mandoline, which computes reweighted performance estimates that hold up under distribution shift when labels are not available. This approach is similar to what we call “segments” at WhyLabs.
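The core idea behind this family of methods - measure per-slice performance on labeled source data, then reweight by how much mass each slice carries in the unlabeled target data - can be sketched with numpy. This is a toy illustration on synthetic data with a single hypothetical slice function, not the Mandoline implementation itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# A user-defined "slice" (segment) function, e.g. "feature is positive".
def in_slice(x):
    return x > 0

# Source data: features plus whether the model's prediction was correct.
# The model is (synthetically) more accurate inside the slice.
x_src = rng.normal(0.0, 1.0, 1000)
correct_src = rng.random(1000) < np.where(in_slice(x_src), 0.9, 0.6)

# Target data: the feature distribution has shifted; labels are unavailable.
x_tgt = rng.normal(1.0, 1.0, 1000)

# Importance weight per source example: p_target(slice) / p_source(slice).
src_rate = in_slice(x_src).mean()
tgt_rate = in_slice(x_tgt).mean()
w = np.where(in_slice(x_src),
             tgt_rate / src_rate,
             (1 - tgt_rate) / (1 - src_rate))

naive_acc = correct_src.mean()            # ignores the shift
reweighted_acc = (w * correct_src).mean() # estimate of target accuracy
```

Since the target puts more mass on the slice where the model does well, the reweighted estimate comes out higher than the naive source accuracy - which is exactly the correction the labels-free setting needs.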

☕️ What’s brewing at WhyLabs

At WhyLabs, we are focused on making AI observability as easy and intuitive as your coffee machine. Here are our latest releases:

AI Observatory is now on the AWS Marketplace! For those already on AWS, enabling observability for SageMaker models has never been easier. If you are wondering why to pair WhyLabs AI Observatory with SageMaker, this AWS blog has answers.

We’re SOC 2 Type 2 certified: Our successful audit completion makes it even easier for our customers to evaluate the WhyLabs solution with their security teams. Here is how we’re going above and beyond to keep data safe.

Can root cause analysis feel good and look beautiful? Check out the latest interactive features inside the AI Observatory profile viewer: compare histogram data across multiple profiles, discover anomalies, and find outliers within distributions. Learn more through our short videos on profile comparisons for continuous features and discrete features.

🎟️ Robust & Responsible AI, at an event near you

If you're looking for high-quality events, we've got you covered. As a perk, you will always have a friend there, because somebody from WhyLabs is either speaking or attending!

Hands-On Data Monitoring Workshop | March 29 | Virtual

DataTalks.Club is organizing a practical workshop focused on monitoring. Danny Leybzon will be walking through monitoring batch Python or Spark data pipelines and Kafka streaming pipelines with whylogs.

MLOps World: Machine Learning in Production | March 30 | Virtual

The inaugural NYC summit for MLOps practitioners with workshops and talks. Alessya Visnjic will be speaking about designing ML telemetry, building monitoring on top of telemetry, and enabling transparency in ML pipelines.

ODSC East | April 19-23 | Boston, MA

A data science conference focused on the latest language and infrastructure advancements. Danny Leybzon is also speaking there on his favorite topic: fixing ML models. If you are attending in person and want to meet, connect with Danny! Register now, while tickets are 40% off.

Join the Community

Join the Robust & Responsible AI (Rsqrd) Community on Slack to connect with other practitioners, share ideas, and learn about exciting new techniques. Attend the community live chats or check out YouTube to see all the recordings.

If you want to help support whylogs, the open-source standard for data logging, check out our GitHub and give us a star.

📬 Subscribe to the Robust & Responsible AI newsletter to get the latest Data-Centric AI and MLOps news delivered quarterly to your inbox!
