
Robust & Responsible AI Newsletter - Issue #1

Every quarter we send out a roundup of the hottest MLOps and Data-Centric AI news, including industry highlights, what’s brewing at WhyLabs, and more.

ISSUE: March 2022

🕚  TL;DR

Trying to keep up with MLOps, but only have 10 minutes? Here is your shortlist:

Read: Real-time ML is all the rage! Chip Huyen’s Real-time Machine Learning: Challenges and Solutions outlines what it takes to implement online inference and how to move towards continuous learning. It is a very technical, yet accessible, guide.

Watch: Enabling monitoring for ML models is on everyone’s roadmap this year. A team from Loka gave a practical talk that explores a number of monitoring solutions that are available to practitioners today, including SageMaker and WhyLabs.

Attend: MLOps experts are getting together IRL in Austin for the Data Council 2022 event. If you are attending in person, make sure to meet up with WhyLabs’ Bernease Herman to chat about data-centric AI and all things MLOps.

💡 Open Source Spotlight

There's a lot going on in the world of open source tooling! Here is what's new:

Reproducible ML pipelines on any cloud? Yes please! ZenML is an extensible, open-source MLOps framework for creating production-ready machine learning pipelines. With their latest release you can enjoy a cloud-agnostic pipeline in no time!

ML logging, monitoring, and unit testing all at once? Yes we can! The latest release of whylogs gives you the power to capture all of the vitals of your data pipeline locally, build constraints for data unit tests, and monitor for data drift, all in a Jupyter notebook.
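
The idea behind this style of data logging can be sketched in plain Python: rather than storing raw data, capture a lightweight statistical profile of each batch and assert constraints against it. This is an illustration of the concept only, not the whylogs API; all names and thresholds below are hypothetical.

```python
def profile(column):
    """Capture lightweight 'vitals' of a column: count, nulls, min, max, mean."""
    values = [v for v in column if v is not None]
    return {
        "count": len(column),
        "null_count": len(column) - len(values),
        "min": min(values),
        "max": max(values),
        "mean": sum(values) / len(values),
    }

def check_constraints(prof, max_null_fraction=0.2, value_range=(0, 120)):
    """A data 'unit test': fail the batch if the profile violates expectations."""
    failures = []
    if prof["null_count"] / prof["count"] > max_null_fraction:
        failures.append("too many nulls")
    if prof["min"] < value_range[0] or prof["max"] > value_range[1]:
        failures.append("values out of range")
    return failures

batch = [34, 29, None, 41, 57, 23]
print(check_constraints(profile(batch)))  # → [] (batch passes)
```

Because the profile is tiny compared to the raw data, it can be computed locally and shipped off for monitoring without the data ever leaving your pipeline.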

ML workflows + Kubernetes giving you headaches? Flyte has been simplifying highly concurrent, scalable, and maintainable ML & data workflows since 2019. Recent notable feature highlights: BigQuery plugin and AWS Batch support.

📚 What MLOps experts are reading

Keeping up with the latest on MLOps can be a full-time job. Here are the highlights:

Most ML teams are implementing monitoring right now. In Shreya Shankar’s mini-series on the current state of monitoring, our favorite is Categorizing Post-Deployment Issues, where she breaks down monitoring problems along two axes: statefulness (stateless/stateful) and components (single-component/cross-component).
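
The statefulness axis can be made concrete in code: a stateless check inspects only the current batch, while a stateful check carries history across batches. A minimal sketch, with hypothetical thresholds:

```python
def stateless_check(batch, lo=0.0, hi=1.0):
    """Stateless: needs only the current batch (e.g., a range check)."""
    return all(lo <= x <= hi for x in batch)

class StatefulDriftCheck:
    """Stateful: compares each batch's mean against a running baseline."""
    def __init__(self, tolerance=0.2):
        self.tolerance = tolerance
        self.baseline_mean = None

    def update(self, batch):
        mean = sum(batch) / len(batch)
        if self.baseline_mean is None:
            self.baseline_mean = mean  # first batch establishes the baseline
            return True
        drifted = abs(mean - self.baseline_mean) > self.tolerance
        # fold the new batch into the baseline so it adapts slowly
        self.baseline_mean = 0.9 * self.baseline_mean + 0.1 * mean
        return not drifted

check = StatefulDriftCheck()
check.update([0.5, 0.6, 0.4])          # baseline mean = 0.5
print(check.update([0.9, 1.0, 0.8]))   # → False: mean 0.9 drifted from 0.5
```

Stateful checks are the harder kind to operate: they need storage, a baseline strategy, and a policy for when the baseline itself should move.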

Data-centric AI and MLOps philosophies are converging. Andrew Ng launched a resource hub focused on data-centric mechanisms across the AI lifecycle. D. Sculley's updated view on the technical debt of data in deployment is a must-read. WhyLabs contributed an article on how observability helps tackle data technical debt.

Responsible AI begins at the design phase! Chip Huyen teaches the fundamentals at Stanford in a not-your-typical undergrad course, bringing industry leaders in to present real-world views. Must-reads: Learnings from Booking.com’s 150 models, Stitch Fix’s ML deployment architecture, and ML telemetry design (by WhyLabs).

Academic research continues to push the state of the art of what it means to monitor ML and data systems. The Stanford team released a method called Mandoline, which can be used to compute reweighted performance estimates that hold under distribution shift when labels are not available. This approach is similar to what we call “segments” at WhyLabs.
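
The core reweighting idea (a simplification, not Mandoline's full method, which uses user-defined slicing functions and handles correlated slices) can be illustrated: if labeled source data can be grouped into slices that also appear in the unlabeled target data, then an estimate of target accuracy is the per-slice source accuracy, reweighted by target slice frequencies. A sketch with hypothetical data:

```python
from collections import Counter, defaultdict

def reweighted_accuracy(source, target_slices):
    """Estimate target accuracy from labeled source data under slice shift.

    source: list of (slice_id, correct) pairs from labeled source data
    target_slices: slice_ids observed in the unlabeled target data
    """
    per_slice = defaultdict(list)
    for slice_id, correct in source:
        per_slice[slice_id].append(correct)
    target_freq = Counter(target_slices)
    total = len(target_slices)
    estimate = 0.0
    for slice_id, freq in target_freq.items():
        slice_acc = sum(per_slice[slice_id]) / len(per_slice[slice_id])
        estimate += (freq / total) * slice_acc
    return estimate

# source: slice "a" is 100% correct, slice "b" is 50% correct
source = [("a", 1), ("a", 1), ("b", 1), ("b", 0)]
# target distribution shifts heavily toward the harder slice "b"
print(reweighted_accuracy(source, ["a", "b", "b", "b"]))  # → 0.625
```

The naive source accuracy here would be 0.75; reweighting by the shifted slice frequencies lowers the estimate, which is exactly the correction segments-style monitoring is after.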

☕️ What’s brewing at WhyLabs

At WhyLabs, we are focused on making AI observability as easy and intuitive as your coffee machine. Here are our latest releases:

AI Observatory is now on the AWS Marketplace! For those who are already on AWS, enabling observability for SageMaker models has never been easier. If you are wondering why you should use WhyLabs AI Observatory with SageMaker, this AWS blog has answers.

We’re SOC 2 Type 2 certified: Our successful audit completion makes it even easier for our customers to evaluate the WhyLabs solution with their security teams. Here is how we’re going above and beyond to keep data safe.

Can root cause analysis feel good and look beautiful? Check out the latest interactive features inside the AI Observatory profile viewer: compare histogram data across multiple profiles, discover anomalies, and find outliers within distributions. Learn more through our short videos on profile comparisons for continuous features and discrete features.
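
Under the hood, comparing histogram data across profiles boils down to a distance between binned distributions. A minimal sketch using total variation distance, with hypothetical bin counts (this illustrates the general idea, not the metric the profile viewer uses):

```python
def tv_distance(hist_a, hist_b):
    """Total variation distance between two histograms over the same bins."""
    total_a, total_b = sum(hist_a), sum(hist_b)
    return 0.5 * sum(abs(a / total_a - b / total_b)
                     for a, b in zip(hist_a, hist_b))

# bin counts from two hypothetical profiles of the same feature
baseline = [10, 40, 40, 10]
current = [30, 20, 20, 30]
print(round(tv_distance(baseline, current), 2))  # → 0.4
```

A distance of 0 means identical binned distributions and 1 means fully disjoint ones, so a threshold on this value is one simple way to flag which features to inspect first.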

🎟️ Robust & Responsible AI, at an event near you

If you're looking for high quality events, we've got you covered. As a perk, you will always have a friend, because somebody from WhyLabs is either speaking or attending!

Hands-On Data Monitoring Workshop | March 29 | Virtual

DataTalks.Club is organizing a practical workshop focused on monitoring. Danny Leybzon will be walking through monitoring batch Python or Spark data pipelines and Kafka streaming pipelines with whylogs.

MLOps World: Machine Learning in Production | March 30 | Virtual

The inaugural NYC summit for MLOps practitioners with workshops and talks. Alessya Visnjic will be speaking about designing ML telemetry, building monitoring on top of telemetry, and enabling transparency in ML pipelines.

ODSC East | April 19-23 | Boston, MA

A data science conference that focuses on the latest language and infrastructure advancements. Danny Leybzon is also speaking there, on his favorite topic: fixing ML models. If you are attending in person and want to meet, connect with Danny! Register now while tickets are 40% off.

Join the Community

Join the Robust & Responsible AI (Rsqrd) Community on Slack to connect with other practitioners, share ideas, and learn about exciting new techniques. Attend the community live chats or check out YouTube to see all the recordings.

If you want to help support whylogs, the open-source standard for data logging, check out our GitHub and give us a star.

📬 Subscribe to the Robust & Responsible AI newsletter to get the latest Data-Centric AI and MLOps news delivered quarterly to your inbox!
