Robust & Responsible AI Newsletter - Issue #4
- WhyLabs
Dec 22, 2022
Every quarter we send out a roundup of the hottest MLOps and Data-Centric AI news, including industry highlights, what’s brewing at WhyLabs, and more.
ISSUE: December 2022
🕚 TL;DR
Trying to keep up with MLOps, but only have 10 minutes? Here is your shortlist:
Attend: The Robust & Responsible AI Summit 2023! We're excited to announce the inaugural R2AI Summit. Join us on Jan 26 for a half-day event featuring data leaders pioneering responsible AI - including Andrew Ng, Founder of DeepLearning.AI!
Read: The State of AI Report 2022. Read about the biggest breakthroughs, business impacts, social trends, and safety concerns from this year and what's coming next.
Watch: It’s here! The R2AI podcasts of 2022. Enjoy an eclectic mix of interviews with engineers, scientists, and product managers working to ensure that AI is robust and responsible.
💡 Open Source Spotlight
There's a lot going on in the world of open source tooling! Here is what's new:
ZenML's been busy this quarter. After a year of community feedback, ten months of development effort, and tens of thousands of code changes, ZenML has unveiled two major releases - see what’s new in the release notes.
Simplified data science workflows. Incorporating extensive user feedback, MLflow 2.0 simplifies data science workflows and delivers innovative, first-class tools for MLOps. Dive into the details of the new release and learn how to get started!
Profiling large amounts of data just got easier. Users can now use Fugue with whylogs on top of Spark, Dask, or Ray to easily profile large-scale data for use cases such as anomaly detection, drift detection, and data validation.
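To give a feel for the workflow, here's a minimal sketch of distributed profiling with whylogs on Spark. It assumes the `fugue_profile` helper from `whylogs.api.fugue` (installed via the whylogs Fugue extra) and an existing SparkSession; check the whylogs docs for the exact API, and swap the engine for Dask or Ray as needed.

```python
# Minimal sketch: profiling a Spark DataFrame with whylogs via Fugue.
# Assumes `pip install "whylogs[fugue]"` and pyspark; names here follow the
# integration described above, but verify against the whylogs documentation.
import pandas as pd
from pyspark.sql import SparkSession
from whylogs.api.fugue import fugue_profile

spark = SparkSession.builder.getOrCreate()

# Toy data standing in for a large production table.
pdf = pd.DataFrame({"feature_a": [1.0, 2.5, 3.1], "feature_b": ["x", "y", "y"]})
sdf = spark.createDataFrame(pdf)

# Profile the whole DataFrame on the cluster; returns a dataset profile view
# that can feed anomaly detection, drift detection, or data validation.
profile_view = fugue_profile(sdf, engine=spark)

# Inspect the summary statistics locally.
print(profile_view.to_pandas())
```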
📚 What MLOps experts are reading
Keeping up with the latest on MLOps can be a full-time job. Here are the highlights:
The future of NLP is bright. Check out a new guide to the high-impact, fast-changing technology driving huge growth in AI research, applications, and investment.
Technology readiness levels for ML systems. Nature Communications published a framework that defines a principled process for maturing machine learning systems from research to production, across a variety of domains and data scenarios.
The gap between aspiration and reality. 84% of organizations view responsible AI as a top management issue, but only a quarter have mature RAI programs. Learn more about the research highlighting RAI aspiration vs. reality in the report.
☕️ What’s brewing at WhyLabs
At WhyLabs, we are focused on making AI observability as easy and intuitive as your coffee machine. Here are our latest releases:
When it comes to an ML monitoring solution - should you build or buy? We’ve written a guide to help you make the right decision with detailed discussions of both options. Download your copy now!
WhyLabs integration highlights. With the whylogs and Apache Spark integration, users can profile data at large scale and easily integrate profiling into existing data and ML pipelines. Also, AIShield and WhyLabs have partnered to make it trivial for companies relying on AI to maintain the security and reliability of their models.
ML monitoring in under 5 minutes. It only takes a few minutes and a few lines of code to monitor your ML models and data pipelines. This short post will show you how to monitor common issues with your ML models, such as data drift, concept drift, data quality, and performance!
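Here's a minimal sketch of that "few lines of code" flow, assuming whylogs v1 and a WhyLabs account; the environment variable values and model ID are illustrative placeholders, and the full post walks through the details.

```python
# Minimal sketch: profile a batch of data and upload the profile to WhyLabs,
# where drift, data-quality, and performance monitors run on top of it.
# Credential values below are placeholders.
import os
import pandas as pd
import whylogs as why

os.environ["WHYLABS_API_KEY"] = "<your-api-key>"
os.environ["WHYLABS_DEFAULT_ORG_ID"] = "<your-org-id>"
os.environ["WHYLABS_DEFAULT_DATASET_ID"] = "<your-model-id>"

# A batch of inference data; in practice this comes from your pipeline.
batch = pd.DataFrame({"age": [34, 52, 27], "prediction": [0.8, 0.1, 0.6]})

# Profile the batch (only statistical summaries are captured, not raw rows)...
results = why.log(batch)

# ...and send the profile to WhyLabs for monitoring.
results.writer("whylabs").write()
```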
🎟️ Robust & Responsible AI, at an event near you
If you're looking for high-quality events, we've got you covered. As a perk, you will always have a friend, because somebody from WhyLabs is either speaking or attending!
Robust & Responsible AI Summit | Jan 26, 2023 | Virtual
This half-day event includes talks, fireside chats, panels, AMAs, and more featuring data leaders pioneering the technologies, processes, and standards shaping Responsible AI!
Live Interview: Why Graph Query Language Matters | Jan 12, 2023 | Virtual
Jason Koo, Developer Advocate at Neo4j, will be joining the Rsqrd AI Community podcast to discuss why Graph Query Language (GQL) and graph databases matter.
PyData Seattle | April 26-28, 2023 | Seattle, WA
Three days of talks, tutorials, and discussions bringing attendees the latest project features along with cutting-edge use cases.
ODSC East | May 9-11, 2023 | Boston, MA
Over the course of 3 days, ODSC East will provide expert-led instruction in machine learning, deep learning, NLP, MLOps, and more through hands-on training sessions, immersive workshops, and talks. Register now for early bird pricing!
Join the Community
Join the Robust & Responsible AI (Rsqrd) Community on Slack to connect with other practitioners, share ideas, and learn about exciting new techniques. Attend the community live chats or check out YouTube to see all the recordings.
If you want to help support whylogs, the open-source standard for data logging, check out our GitHub and give us a star.
📬 Subscribe to the Robust & Responsible AI newsletter to get the latest Data-Centric AI and MLOps news delivered quarterly to your inbox!