
Robust & Responsible AI Newsletter - Issue #4

Every quarter we send out a roundup of the hottest MLOps and Data-Centric AI news, including industry highlights, what’s brewing at WhyLabs, and more.

ISSUE: December 2022

🕚  TL;DR

Trying to keep up with MLOps, but only have 10 minutes? Here is your shortlist:

Attend: The Robust & Responsible AI Summit 2023! We're excited to announce the inaugural R2AI Summit. Join us on Jan 26 for a half-day event featuring data leaders pioneering responsible AI - including Andrew Ng, Founder of DeepLearning.AI!

Read: The State of AI Report 2022. Read about the biggest breakthroughs, business impacts, social trends, and safety concerns from this year and what's coming next.

Watch: It’s here! The R2AI podcasts of 2022. Enjoy an eclectic mix of interviews with engineers, scientists, and product managers working to ensure that AI is robust and responsible.

💡 Open Source Spotlight

There's a lot going on in the world of open source tooling! Here is what's new:

ZenML's been busy this quarter. After a year of community feedback, ten months of development effort, and tens of thousands of code changes, ZenML has unveiled two major releases - see what’s new in the release notes.

Simplified data science workflows. Incorporating extensive user feedback, MLflow 2.0 simplifies data science workflows and delivers innovative, first-class tools for MLOps. Dive into the details of the new release and learn how to get started!

Profiling large amounts of data just got easier. Users can now use Fugue with whylogs on top of Spark, Dask, or Ray to easily profile large-scale data for use cases such as anomaly detection, drift detection, and data validation.
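For context, here is a minimal sketch of what profiling looks like with the whylogs v1 API on a plain pandas DataFrame; the Fugue integration exposes a similar entry point for distributing the same step across Spark, Dask, or Ray (exact names may vary by version). The column names and data below are illustrative only:

```python
# A minimal sketch of data profiling with whylogs (v1-style API).
# The DataFrame contents here are purely illustrative.
import pandas as pd
import whylogs as why

df = pd.DataFrame({
    "feature_a": [1.0, 2.5, 3.1, 4.7],
    "feature_b": ["x", "y", "x", "z"],
})

results = why.log(df)            # build a statistical profile of the batch
profile_view = results.view()    # lightweight, mergeable summary
print(profile_view.to_pandas())  # per-column metrics: counts, types, distributions
```

Because profiles are mergeable summaries rather than raw data, the same logging step can be fanned out across partitions on a distributed backend and combined afterward.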

📚 What MLOps experts are reading

Keeping up with the latest on MLOps can be a full-time job. Here are the highlights:

The future of NLP is bright. Check out a new guide to the high-impact, fast-changing technology driving huge growth in AI research, applications, and investment.

Technology readiness levels for ML systems. Nature Communications published a framework that defines a principled process for maturing Machine Learning systems from research to production, across a variety of domains and data scenarios.

The gap between aspiration and reality. 84% of organizations view responsible AI as a top management issue, but only a quarter have mature RAI programs. Learn more about the research highlighting RAI aspiration vs. reality in the report.

☕️ What’s brewing at WhyLabs

At WhyLabs, we are focused on making AI observability as easy and intuitive as your coffee machine. Here are our latest releases:

When it comes to an ML monitoring solution - should you build or buy? We’ve written a guide to help you make the right decision with detailed discussions of both options. Download your copy now!

WhyLabs integration highlights. With the whylogs and Apache Spark integration, users can profile data at large scale and easily integrate profiling into existing data and ML pipelines. Also, AIShield and WhyLabs have partnered to make it trivial for companies relying on AI to maintain the security and reliability of their models.
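As a rough illustration of the Spark integration (hedged: at the time of this issue the entry point lived in an experimental whylogs module, and names may have changed since; the dataset path below is hypothetical):

```python
# Hedged sketch: profiling a Spark DataFrame with whylogs.
# The module path was experimental when this issue went out,
# and the parquet path is a hypothetical placeholder.
from pyspark.sql import SparkSession
from whylogs.api.pyspark.experimental import collect_dataset_profile_view

spark = SparkSession.builder.appName("whylogs-profiling").getOrCreate()
spark_df = spark.read.parquet("s3://my-bucket/my-dataset/")  # hypothetical path

# Profiles are computed per partition and merged, so this scales with the cluster.
profile_view = collect_dataset_profile_view(input_df=spark_df)
print(profile_view.to_pandas())
```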

ML monitoring in under 5 minutes. It only takes a few minutes and a few lines of code to monitor your ML models and data pipelines. This short post shows how to monitor your models for common issues such as data drift, concept drift, data quality problems, and performance degradation!
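To make that concrete, here is a hedged sketch of the "few lines of code" flow, assuming the standard WhyLabs environment variables are set for authentication and a hypothetical CSV of production data:

```python
# Minimal sketch: log a batch of production data and ship the profile to WhyLabs.
# Assumes WHYLABS_API_KEY, WHYLABS_DEFAULT_ORG_ID, and WHYLABS_DEFAULT_DATASET_ID
# are set in the environment; the CSV filename is a hypothetical placeholder.
import pandas as pd
import whylogs as why

df = pd.read_csv("production_batch.csv")  # hypothetical batch of inference data

results = why.log(df)              # profile the batch locally
results.writer("whylabs").write()  # upload; drift and quality monitors run in the platform
```

Only the statistical profile leaves your environment, so raw data never needs to be shipped to the monitoring platform.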

🎟️ Robust & Responsible AI, at an event near you

If you're looking for high-quality events, we've got you covered. As a perk, you will always have a friend there, because somebody from WhyLabs is either speaking or attending!

Robust & Responsible AI Summit | Jan 26, 2023 | Virtual

This half-day event includes talks, fireside chats, panels, AMAs, and more featuring data leaders pioneering the technologies, processes, and standards shaping Responsible AI!

Live Interview: Why Graph Query Language Matters | Jan 12, 2023 | Virtual

Jason Koo, Developer Advocate at Neo4j, will be joining the Rsqrd AI Community podcast to discuss why Graph Query Language (GQL) and graph databases matter.

PyData Seattle | April 26-28, 2023 | Seattle, WA

Three days of talks, tutorials, and discussions bringing attendees the latest project features along with cutting-edge use cases.

ODSC East | May 9-11, 2023 | Boston, MA

Over the course of 3 days, ODSC East will provide expert-led instruction in machine learning, deep learning, NLP, MLOps, and more through hands-on training sessions, immersive workshops, and talks. Register now for early bird pricing!

Join the Community

Join the Robust & Responsible AI (Rsqrd) Community on Slack to connect with other practitioners, share ideas, and learn about exciting new techniques. Attend the community live chats or check out YouTube to see all the recordings.

If you want to help support whylogs, the open-source standard for data logging, check out our GitHub and give us a star.

📬  Subscribe to the Robust & Responsible AI newsletter to get the latest Data-Centric AI and MLOps news delivered quarterly to your inbox!
