Robust & Responsible AI Newsletter - Issue #1
- WhyLabs
Mar 22, 2022
Every quarter we send out a roundup of the hottest MLOps and Data-Centric AI news including industry highlights, what’s brewing at WhyLabs, and more.
ISSUE: March 2022
🕚 TL;DR
Trying to keep up with MLOps, but only have 10 minutes? Here is your shortlist:
Read: Real-time ML is all the rage! Chip Huyen’s Real-time Machine Learning: Challenges and Solutions outlines what it takes to implement online inference and how to move towards continuous learning - a very technical, yet accessible guide.
Watch: Enabling monitoring for ML models is on everyone’s roadmap this year. A team from Loka gave a practical talk that explores a number of monitoring solutions that are available to practitioners today, including SageMaker and WhyLabs.
Attend: MLOps experts are getting together IRL in Austin for the Data Council 2022 event. If you are attending in person, make sure to meet up with WhyLabs’ Bernease Herman, to chat about data-centric AI and all-things MLOps.
💡 Open Source Spotlight
There's a lot going on in the world of open source tooling! Here is what's new:
Reproducible ML pipelines on any cloud? Yes please! ZenML is an extensible, open-source MLOps framework to create production-ready machine learning pipelines. With their latest release you can enjoy a cloud agnostic pipeline in no time!
ML logging, monitoring, and unit testing all at once? Yes we can! The latest release of whylogs gives you the power to capture all of the vitals of your data pipeline locally, build constraints for data unit tests, and monitor for data drifts, all in a Jupyter notebook.
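The whylogs workflow described above - profile a batch, assert constraints against the profile, compare profiles to catch drift - can be illustrated with a minimal pure-Python sketch. This is not the whylogs API; the function names and the simple mean-shift drift score are illustrative stand-ins for whylogs profiles, constraints, and drift metrics.

```python
from statistics import mean, stdev

def profile(batch):
    """Capture lightweight 'vitals' for a numeric column: count, mean, stdev."""
    return {"count": len(batch), "mean": mean(batch), "stdev": stdev(batch)}

def check_constraint(prof, max_mean=10.0):
    """A data 'unit test': fail the batch if its mean exceeds a threshold."""
    return prof["mean"] <= max_mean

def drift_score(ref, cur):
    """Score drift as how many reference stdevs the current mean has moved."""
    return abs(cur["mean"] - ref["mean"]) / ref["stdev"]

reference = profile([5.0, 5.5, 6.0, 5.2, 5.8])
current = profile([9.0, 9.5, 10.0, 9.2, 9.8])

print(check_constraint(reference))          # True: constraint passes
print(drift_score(reference, current) > 3)  # True: drift detected
```

In whylogs itself, the profile is a rich statistical summary (distributions, cardinality, missing values) rather than three numbers, but the batch-profile-then-compare pattern is the same.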
ML workflows + Kubernetes giving you headaches? Flyte has been simplifying highly concurrent, scalable, and maintainable ML & data workflows since 2019. Recent notable feature highlights: BigQuery plugin and AWS Batch support.
📚 What MLOps experts are reading
Keeping up with the latest on MLOps can be a full-time job. Here are the highlights:
Most ML teams are implementing monitoring right now. In Shreya Shankar’s mini-series on the current state of monitoring, our favorite is Categorizing Post-Deployment Issues, where she breaks down monitoring problems along two axes: statefulness (stateless/stateful) and components (single component/cross-component).
Data-centric AI and MLOps philosophies are converging. Andrew Ng launched a resource hub focused on data-centric mechanisms across the AI lifecycle. D. Sculley's updated view on the technical debt of data in deployment is a must-read. WhyLabs contributed an article on how observability helps tackle data technical debt.
Responsible AI begins at the design phase! Chip Huyen teaches the fundamentals at Stanford in a not-your-typical undergrad course, bringing industry leaders in to present real-world views. Must-reads: Learnings from Booking.com’s 150 models, Stitch Fix’s ML deployment architecture, and ML telemetry design (by WhyLabs).
Academic research continues to push the state of the art of what it means to monitor ML and data systems. The Stanford team released a method called Mandoline that computes reweighted performance estimates, which hold up under distribution shift even when labels are not available. This approach is similar to what we call “segments” at WhyLabs.
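The core reweighting idea behind such estimates can be shown in a few lines. This is a simplified sketch, not the Mandoline algorithm itself (which also learns how to weight noisy, correlated slices): measure accuracy per slice (“segment”) on labeled source data, then reweight by each slice’s prevalence in the shifted production traffic. The slice names and numbers below are invented for illustration.

```python
def reweighted_accuracy(slice_acc, target_frac):
    """Estimate target-domain accuracy from per-slice source accuracy,
    reweighted by the slice mix observed in the (unlabeled) target data."""
    assert abs(sum(target_frac.values()) - 1.0) < 1e-9
    return sum(target_frac[s] * slice_acc[s] for s in slice_acc)

# Per-slice accuracy measured on labeled source data (hypothetical numbers).
slice_acc   = {"mobile": 0.95, "desktop": 0.80}
source_frac = {"mobile": 0.7,  "desktop": 0.3}   # slice mix at training time
target_frac = {"mobile": 0.2,  "desktop": 0.8}   # shifted production mix

naive = sum(source_frac[s] * slice_acc[s] for s in slice_acc)
print(naive)                                         # 0.905
print(reweighted_accuracy(slice_acc, target_frac))   # 0.83
```

The naive estimate (0.905) overstates production accuracy because the harder desktop slice now dominates traffic; the reweighted estimate (0.83) reflects the shift without needing a single production label.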
☕️ What’s brewing at WhyLabs
At WhyLabs, we are focused on making AI observability as easy and intuitive as your coffee machine. Here are our latest releases:
AI Observatory is now on the AWS Marketplace! For those who are already on AWS, enabling observability for SageMaker models has never been easier. If you are wondering why use WhyLabs AI Observatory with SageMaker, this AWS blog has answers.
We’re SOC 2 Type 2 certified: Our successful audit completion makes it even easier for our customers to evaluate the WhyLabs solution with their security teams. Here is how we’re going above and beyond to keep data safe.
Can root cause analysis feel good and look beautiful? Check out the latest interactive features inside the AI Observatory profile viewer: compare histogram data across multiple profiles, discover anomalies, and find outliers within distributions. Learn more through our short videos on profile comparisons for continuous features and discrete features.
🎟️ Robust & Responsible AI, at an event near you
If you're looking for high quality events, we've got you covered. As a perk, you will always have a friend, because somebody from WhyLabs is either speaking or attending!
Hands-On Data Monitoring Workshop | March 29 | Virtual
DataTalks.Club is organizing a practical workshop focused on monitoring. Danny Leybzon will be walking through monitoring batch Python or Spark data pipelines and Kafka streaming pipelines with whylogs.
MLOps World: Machine Learning in Production | March 30 | Virtual
The inaugural NYC summit for MLOps practitioners with workshops and talks. Alessya Visnjic will be speaking about designing ML telemetry, building monitoring on top of telemetry, and enabling transparency in ML pipelines.
ODSC East | April 19-23 | Boston, MA
A data science conference focused on the latest language and infrastructure advancements. Danny Leybzon is also speaking there on his favorite topic: fixing ML models. If you are attending in person and want to meet, connect with Danny! Register now, while tickets are 40% off.
Join the Community
Join the Robust & Responsible AI (Rsqrd) Community on Slack to connect with other practitioners, share ideas, and learn about exciting new techniques. Attend the community live chats or check out YouTube to see all the recordings.
If you want to help support whylogs, the open-source standard for data logging, check out our GitHub and give us a star.
📬 Subscribe to the Robust & Responsible AI newsletter to get the latest Data-Centric AI and MLOps news delivered quarterly to your inbox!