
Robust & Responsible AI Newsletter - Issue #5

Every quarter we send out a roundup of the hottest MLOps and Data-Centric AI news, including industry highlights, what's brewing at WhyLabs, and more.

ISSUE: March 2023

📬  Subscribe to get the latest Data-Centric AI and MLOps news delivered to your inbox!

🕙 TL;DR

Trying to keep up with MLOps, but only have 10 minutes? Here is your shortlist:

Attend: 4 ML Monitoring workshops. Join us for a series of hands-on workshops to learn the basics of ML monitoring, AI observability, and the tools and techniques to effectively manage ML models and AI systems. Register for one or all of the sessions!

Read: The industry-wide neglect of data design and data quality. Cassie Kozyrkov's post explores how the art of making good data is terribly neglected, and how even when you do have data, there's a good chance you're missing something: data quality.

Watch: R2AI Summit. Andrew Ng on the Data-centric AI toolchain and innovations; Mailchimp's Maya Wilson on using GPT-3 at scale; Shopify's Alicia Bargar on feature stores; and more. Check out all the on-demand sessions from the Robust and Responsible AI Summit!

☕ What's brewing at WhyLabs

At WhyLabs, we're focused on making AI observability as easy and intuitive as your coffee machine. Here are our latest releases:

Embeddings: Stop eyeballing pretty t-SNE or UMAP plots to troubleshoot! WhyLabs' scalable approach to monitoring high-dimensional embedding data means you don't have to explore it by hand. Read how it's easier than ever to troubleshoot embeddings!
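Curious what that looks like in code? Here's a minimal sketch of one lightweight approach: reduce each embedding to scalar summaries and profile them with whylogs. The reductions (L2 norm, distance to the batch centroid) and the column names are illustrative assumptions, not the platform's exact method.

```python
import numpy as np
import pandas as pd
import whylogs as why

# Toy batch of 512-dimensional embeddings.
embeddings = np.random.rand(1000, 512)

# Reduce each embedding to scalar summaries that whylogs can profile:
# the L2 norm and the distance to the batch centroid.
centroid = embeddings.mean(axis=0)
df = pd.DataFrame({
    "embedding_norm": np.linalg.norm(embeddings, axis=1),
    "distance_to_centroid": np.linalg.norm(embeddings - centroid, axis=1),
})

# Log a whylogs profile of those summaries; comparing profiles across
# batches is one lightweight way to catch embedding drift.
results = why.log(df)
print(results.view().to_pandas())
```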

Accelerate your ethical AI journey. We've expanded our platform with Performance Tracing and Model Explainability to speed customers' journey toward the three goals of ethical AI: fairness, accountability, and transparency.

📚 What MLOps experts are reading

Keeping up with the latest on MLOps can be a full-time job. Here are the highlights:

Reinforcement Learning from Human Feedback (RLHF). An exciting innovation behind the success of ChatGPT and InstructGPT, RLHF has been the subject of several blog posts and explainers. Here's one of our favorites from Hugging Face.
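To make the idea concrete, here's a toy sketch of the reward-modeling step at the heart of RLHF: a Bradley-Terry pairwise loss trains a reward function to score human-preferred responses above rejected ones. The linear reward model and synthetic preference data below are illustrative assumptions, not how ChatGPT was actually trained.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Hidden "human preference" direction, used only to generate toy labels.
w_true = rng.normal(size=dim)

# Toy preference data: pairs of response feature vectors, ordered so the
# first element is the human-preferred one.
pairs = []
for _ in range(200):
    a, b = rng.normal(size=dim), rng.normal(size=dim)
    chosen, rejected = (a, b) if a @ w_true > b @ w_true else (b, a)
    pairs.append((chosen, rejected))

# Linear reward model trained with the Bradley-Terry pairwise loss:
# loss = -log(sigmoid(r(chosen) - r(rejected))).
w = np.zeros(dim)
lr = 0.1
for _ in range(50):
    for chosen, rejected in pairs:
        margin = (chosen - rejected) @ w
        grad_coeff = -1.0 / (1.0 + np.exp(margin))  # d(loss)/d(margin)
        w -= lr * grad_coeff * (chosen - rejected)

# The learned reward should now rank preferred responses higher; in full
# RLHF this reward signal then drives policy optimization (e.g., PPO).
accuracy = np.mean([(c - r) @ w > 0 for c, r in pairs])
print(f"pairwise ranking accuracy: {accuracy:.2f}")
```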

Advancing trustworthy AI systems. NIST released an AI Risk Management Framework to equip organizations and individuals with approaches that help foster the responsible design, development, deployment, and use of AI systems over time.

💡 Open source spotlight

There's a lot going on in the world of open source tooling! Here is what's new:

TensorFlow Decision Forests is production ready. The library promises fast training and improved prediction performance on tabular datasets. Read about all the new features, including distributed training and hyperparameter tuning.
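Want to kick the tires? Here's a minimal sketch of training a gradient-boosted trees model with TensorFlow Decision Forests; the toy dataset and column names are made up for illustration.

```python
import numpy as np
import pandas as pd
import tensorflow_decision_forests as tfdf

# Toy tabular dataset: 300 rows, numeric + categorical features, binary label.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "feature_a": rng.normal(size=300),
    "feature_b": rng.choice(["x", "y", "z"], size=300),
})
df["label"] = (df["feature_a"] > 0).astype(int)

# Convert the DataFrame into a TensorFlow dataset TF-DF can consume.
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(df, label="label")

# Train a gradient-boosted trees model with default hyperparameters.
model = tfdf.keras.GradientBoostedTreesModel()
model.fit(train_ds)

# The summary includes the model structure and variable importances.
model.summary()
```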

The rise and regulation of ChatGPT. OpenAI has released a new AI classifier to distinguish AI-written from human-written text, and the creator of ChatGPT explains why we should regulate AI.

🎟️ Robust & Responsible AI, at an event near you

If you're looking for high-quality events, we've got you covered. As a perk, you'll always have a friend there, because somebody from WhyLabs is either speaking or attending!

PyData Seattle | April 26 - 28, 2023 | Seattle, WA

Three days of talks, tutorials, and discussions bringing attendees the latest project features along with cutting-edge use cases. Register to join the PyData community in Seattle with this 10% discount code!

ODSC East | May 9 - 11, 2023 | Boston, MA

Over the course of 3 days, ODSC East will provide expert-led instruction in machine learning, deep learning, NLP, MLOps, and more through hands-on training sessions, immersive workshops, and talks. Register now for 50% off!

ML Monitoring Fundamentals Workshop Series | March 2023 | Virtual

A series of hands-on workshops to learn the basics of ML monitoring, AI observability, and tools and techniques to effectively manage ML models and AI systems. Register for one or all of the workshops!

Join the Community

Join the Robust & Responsible AI (R2AI) Community on Slack to connect with other practitioners, share ideas, and learn about exciting new techniques. Attend the community live chats or check out YouTube to see all the recordings.

If you want to help support whylogs, the open-source standard for data logging, check out our GitHub and give us a star.

📬  Subscribe to the Robust & Responsible AI newsletter to get the latest Data-Centric AI and MLOps news delivered quarterly to your inbox!
