Robust & Responsible AI Newsletter - Issue #5
- WhyLabs
Mar 10, 2023
Every quarter we send out a roundup of the hottest MLOps and Data-Centric AI news, including industry highlights, what’s brewing at WhyLabs, and more.
ISSUE: March 2023
📬 Subscribe to get the latest Data-Centric AI and MLOps news delivered to your inbox!
🕙 TL;DR
Trying to keep up with MLOps, but only have 10 minutes? Here is your shortlist:
Attend: 4 ML Monitoring workshops. Join us for a series of hands-on workshops to learn the basics of ML monitoring, AI observability, and the tools and techniques to effectively manage ML models and AI systems. Register for one or all of the sessions!
Read: The industry-wide neglect of data design and data quality. Cassie Kozyrkov’s post argues that the art of making good data is terribly neglected, and that even when you do have data, there’s a chance you’re missing something: data quality.
Watch: R2AI Summit. Andrew Ng on the Data-centric AI toolchain and innovations; Mailchimp’s Maya Wilson on using GPT-3 at scale; Shopify’s Alicia Bargar on feature stores, and more. Check out all the on-demand sessions from the Robust and Responsible AI Summit!
☕ What's brewing at WhyLabs
At WhyLabs, we're focused on making AI observability as easy and intuitive as your coffee machine. Here are our latest releases:
Embeddings: Stop eyeballing pretty t-SNE or UMAP plots to troubleshoot! WhyLabs’ scalable approach to monitoring high-dimensional embeddings data means you don’t have to explore it by hand. Read how it’s easier than ever to troubleshoot embeddings!
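Under the hood, the idea is to collapse each high-dimensional vector into a few scalar signals, such as distance to reference points, that are cheap to profile and compare across batches. Here’s a minimal sketch of that idea using plain NumPy plus whylogs’ core `why.log` API; the centroids, shapes, and column name are illustrative, not the exact WhyLabs embeddings API:

```python
import numpy as np
import pandas as pd
import whylogs as why

# Illustrative setup: centroids computed offline from a reference
# (training) set of embeddings. Shapes and names are hypothetical.
rng = np.random.default_rng(seed=0)
reference_centroids = rng.normal(size=(5, 384))   # 5 clusters, 384-dim
batch_embeddings = rng.normal(size=(1000, 384))   # a production batch

# Collapse each high-dimensional embedding to a scalar signal: its
# distance to the nearest reference centroid. Drift in this scalar
# distribution is a cheap proxy for drift in the embedding space.
dists = np.linalg.norm(
    batch_embeddings[:, None, :] - reference_centroids[None, :, :], axis=-1
)
nearest = dists.min(axis=1)

# Profile the scalar column with whylogs; profiles from successive
# batches can then be compared or sent to an observability platform.
profile = why.log(pd.DataFrame({"dist_to_nearest_centroid": nearest}))
print(profile.view().to_pandas())
```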
Accelerate your ethical AI journey. We’ve expanded our platform with Performance Tracing and Model Explainability to move customers closer to the three goals of ethical AI: fairness, accountability, and transparency.
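For a flavor of what segment-level fairness checks look for, here’s a toy demographic-parity sketch in plain NumPy: positive-prediction rates that diverge across a sensitive segment are a signal worth tracing. The data and the alert threshold are synthetic and illustrative, not the WhyLabs feature itself:

```python
import numpy as np

# Synthetic production batch: binary model outputs plus a segment column.
rng = np.random.default_rng(seed=1)
preds = rng.random(10_000) > 0.5
segment = rng.choice(["A", "B"], size=10_000)

# Demographic parity: positive-prediction rates should be comparable
# across segments; a large gap is a fairness signal worth investigating.
rate_a = preds[segment == "A"].mean()
rate_b = preds[segment == "B"].mean()
gap = abs(rate_a - rate_b)
print(f"rate(A)={rate_a:.3f}  rate(B)={rate_b:.3f}  gap={gap:.3f}")
if gap > 0.10:  # illustrative alert threshold
    print("Fairness alert: segment positive rates diverge")
```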
📚 What MLOps experts are reading
Keeping up with the latest on MLOps can be a full-time job. Here are the highlights:
Reinforcement Learning from Human Feedback (RLHF). An exciting innovation behind the success of ChatGPT and InstructGPT, RLHF has been the subject of several blog posts and explanations. Here’s one of our favorites from Hugging Face.
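If you want the core of RLHF’s reward-modeling step in a few lines: the reward model is trained on human preference pairs so that the chosen response scores above the rejected one. A minimal PyTorch sketch, with a toy linear model standing in for a real transformer reward model and synthetic tensors in place of real preference data:

```python
import torch
import torch.nn as nn

# Toy stand-in for a reward model (in practice, a transformer that
# maps a prompt + response to a scalar reward).
reward_model = nn.Linear(128, 1)
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Hypothetical features for responses a human labeler preferred
# ("chosen") vs. rejected, for the same prompts.
chosen = torch.randn(32, 128)
rejected = torch.randn(32, 128)

# Pairwise ranking loss used in reward modeling:
# maximize log sigmoid(r_chosen - r_rejected).
r_chosen = reward_model(chosen)
r_rejected = reward_model(rejected)
loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

opt.zero_grad()
loss.backward()
opt.step()
print(f"pairwise ranking loss: {loss.item():.4f}")
```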
Advancing trustworthy AI systems. NIST released an AI Risk Management Framework to equip organizations and individuals with approaches that help foster the responsible design, development, deployment, and use of AI systems over time.
💡 Open source spotlight
There's a lot going on in the world of open source tooling! Here is what's new:
TensorFlow Decision Forests is production ready. The library promises fast training and strong prediction performance on tabular datasets. Read about all the new features, including distributed training and hyperparameter tuning.
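If you haven’t tried it, the quickstart really is short. A minimal sketch that trains a gradient-boosted trees model straight from a pandas DataFrame, with no feature preprocessing; the toy data and column names are made up:

```python
import pandas as pd
import tensorflow_decision_forests as tfdf

# Toy tabular dataset; columns and values are illustrative.
df = pd.DataFrame({
    "age": [25, 38, 47, 52, 29, 61],
    "income": [40_000, 72_000, 88_000, 95_000, 51_000, 120_000],
    "purchased": [0, 1, 1, 1, 0, 1],
})

# Convert the DataFrame to a tf.data.Dataset and train a
# gradient-boosted trees model directly on the raw columns.
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(df, label="purchased")
model = tfdf.keras.GradientBoostedTreesModel()
model.fit(train_ds)

model.summary()  # per-feature importances, tree statistics, etc.
```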
The rise and regulation of ChatGPT. OpenAI has released a new AI classifier to distinguish between AI-written and human-written text, and ChatGPT’s creator explains why we should regulate AI.
🎟️ Robust & Responsible AI, at an event near you
If you’re looking for high-quality events, we’ve got you covered. As a perk, you’ll always have a friend there, because somebody from WhyLabs is either speaking or attending!
PyData Seattle | April 26 - 28, 2023 | Seattle, WA
Three days of talks, tutorials, and discussions to bring attendees the latest project features along with cutting-edge use cases. Register to join the PyData Community in Seattle with this 10% discount code!
ODSC East | May 9 - 11, 2023 | Boston, MA
Over the course of 3 days, ODSC East will provide expert-led instruction in machine learning, deep learning, NLP, MLOps, and more through hands-on training sessions, immersive workshops, and talks. Register now for 50% off!
ML Monitoring Fundamentals Workshop Series | March 2023 | Virtual
A series of hands-on workshops to learn the basics of ML monitoring, AI observability, and tools and techniques to effectively manage ML models and AI systems. Register for one or all of the workshops!
Join the Community
Join the Robust & Responsible AI (R2AI) Community on Slack to connect with other practitioners, share ideas, and learn about exciting new techniques. Attend the community live chats or check out YouTube to see all the recordings.
If you want to help support whylogs, the open-source standard for data logging, check out our GitHub and give us a star.
📬 Subscribe to the Robust & Responsible AI newsletter to get the latest Data-Centric AI and MLOps news delivered quarterly to your inbox!