Robust & Responsible AI Newsletter - Issue #6
- WhyLabs
Jul 10, 2023
ISSUE: July 2023
Keeping up with LLMs: Risks, Acquisitions, Integrations & More!
📬 Subscribe to get the latest LLM, ML and Data-Centric AI news delivered to your inbox!
🕙 TL;DR
Attend: Intro to LLM Monitoring Workshop. Join us on July 19th for a hands-on workshop on effective techniques for evaluating, troubleshooting, and monitoring large language models using the open-source text metrics toolkit - LangKit!
Read: Databricks picks up MosaicML. The acquisition showcases Databricks’ commitment to democratizing AI and reinforcing the company’s Lakehouse platform as a leading environment for building generative AI and LLMs. Read the press release to learn more.
Watch: Data + AI & Snowflake Summit on-demand. Both Snowflake and Databricks held their summits last week, and now you can watch the sessions on-demand. Catch up on the latest innovations and advancements announced at Snowflakes’ Summit and Databricks’ Data + AI Summit.
☕ What's brewing at WhyLabs
Safeguarding LLMs has never been more important. That's why earlier this month, WhyLabs unveiled LangKit, a powerful open-source library designed to detect and prevent malicious prompts, toxicity, hallucinations, and jailbreak attempts. Check out the official launch announcement and hear what industry luminary Andrew Ng had to say!
BYOF: Bring Your Own Functions. With the release of whylogs 1.2.0, UDFs, the foundation for monitoring complex data, are now available out-of-the-box. Learn more about crafting custom metrics to fit your unique business or research objectives.
Databricks + WhyLabs. Databricks and WhyLabs have partnered to enable a unique integration that makes it possible to compute all key telemetry data necessary for AI monitoring directly in Apache Spark. Learn more about the integration!
📚 What MLOps experts are reading
Training on generated data makes models forget. A significant proportion of people paid to train AI models may be themselves outsourcing the work to AI. Read about what researchers are calling 'Model Collapse' and the importance of using data collected from genuine human interactions to sustain the benefits of training from large-scale data scraped from the web.
Language models are changing AI: the need for holistic evaluation. The Center for Research on Foundation Models (CRFM) introduces a benchmarking approach to provide transparency by evaluating language models across a broad range of scenarios and metrics. Read the paper to learn about the three key elements of holistic evaluation.
The false promise of imitating proprietary LLMs. Recent experiments have shown that while imitation models initially appear to perform competitively with ChatGPT, closer evaluation reveals that they fail to bridge the capabilities gap on tasks that are not heavily represented in the imitation data. Read their detailed findings.
💡 Open source spotlight
5 green flags to look for in open-source. According to a GitHub report, 90% of companies use open-source in some way - but not all solutions and providers are created equal. Read why it’s important to scrutinize your options carefully!
BentoML launches OpenLLM. Run inference with any open-source LLM, deploy models to the cloud or on-premises, and build powerful AI applications. OpenLLM supports a range of open-source LLMs and offers flexible API. Learn more about it!
Introducing the FlyteCallback for Hugging Face. The newly introduced FlyteCallback for Hugging Face's Trainer was developed to address the previous challenges around GPU training. Read about their practical approach to balancing cost efficiency and usability.
🎟️ Robust & Responsible AI, at an event near you
Data Science Salon Meetup | July 26 | Seattle, WA
Connect with local practitioners and managers over food and drinks while hearing about opportunities and trends of using generative AI and ML in retail and e-commerce. Save your spot now!
Intro to ML Monitoring: Data Drift, Quality, Bias and Explainability | Aug 2 | Virtual
In this workshop, we’ll cover how to ensure model reliability and performance to implement your own AI observability solution from start to finish. Register to attend now!
Combining the Power of LLMs with Computer Vision | Aug 9 | Virtual
Jacob Marks, ML Engineer at Voxel51 will be a guest on the R2AI podcast to discuss the applications of LLMs with computer vision. Register to join the live chat!
Current 2023 | Sept 26-27 | San Jose, California
Join the event that’s all about Apache Kafka and real-time streaming to keep up with what’s hot and what’s next! Don’t miss WhyLabs’ Senior SDE, Anthony Naddeo’s session on going beyond data type validation for data quality monitoring.
Join 1,000+ AI practitioners
Join the R2AI community on Slack - connect with other practitioners, share ideas and learn. Catch up on workshops and talks on the WhyLabs YouTube channel.
Other posts
Best Practicies for Monitoring and Securing RAG Systems in Production
Oct 8, 2024
- Retrival-Augmented Generation (RAG)
- LLM Security
- Generative AI
- ML Monitoring
- LangKit
How to Evaluate and Improve RAG Applications for Safe Production Deployment
Jul 17, 2024
- AI Observability
- LLMs
- LLM Security
- LangKit
- RAG
- Open Source
WhyLabs Integrates with NVIDIA NIM to Deliver GenAI Applications with Security and Control
Jun 2, 2024
- AI Observability
- Generative AI
- Integrations
- LLM Security
- LLMs
- Partnerships
OWASP Top 10 Essential Tips for Securing LLMs: Guide to Improved LLM Safety
May 21, 2024
- LLMs
- LLM Security
- Generative AI
7 Ways to Evaluate and Monitor LLMs
May 13, 2024
- LLMs
- Generative AI
How to Distinguish User Behavior and Data Drift in LLMs
May 7, 2024
- LLMs
- Generative AI