WhyLabs Announces SCA with AWS to Accelerate Responsible Generative AI Adoption
- WhyLabs
- Partnerships
- News
- Generative AI
Nov 14, 2023
Today we announced a Strategic Collaboration Agreement (SCA) with Amazon Web Services (AWS) to help enterprises accelerate the development of AI-powered applications! The SCA expands AI observability for AWS customers and reflects a shared commitment to helping enterprises adopt generative and predictive AI technologies safely and responsibly.
The WhyLabs Platform gives customers the power to control the health of AI-enabled applications by surfacing and preventing undesirable AI behavior, including sensitive data leakage, malicious prompts, toxic responses, problematic topics, hallucinations, and jailbreak attempts. The ability to observe and control the health of AI applications is critical to the success and ROI of every experience powered by predictive or generative AI models.
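For a concrete feel for these kinds of checks, here is a minimal sketch using WhyLabs' open-source LangKit and whylogs libraries to extract text-quality and safety signals from a single prompt/response pair. The exact metric columns produced depend on the installed LangKit version; this is an illustrative local example, not the managed WhyLabs Platform workflow described in this announcement.

```python
# Minimal sketch: extract LLM text metrics with LangKit + whylogs.
# Assumes `pip install langkit whylogs`; metric columns vary by LangKit version.
import whylogs as why
from langkit import llm_metrics  # registers prompt/response metrics (toxicity, patterns, etc.)

# Initialize a whylogs schema with LangKit's LLM metrics enabled.
schema = llm_metrics.init()

# Profile a single prompt/response pair; in production, every interaction would be logged.
profile = why.log(
    {
        "prompt": "What is the refund policy?",
        "response": "Our refund policy allows returns within 30 days of purchase.",
    },
    schema=schema,
)

# Inspect the computed metrics locally (sentiment, toxicity, regex/pattern hits, etc.).
print(profile.view().to_pandas().T)
```

The same profiles can be uploaded to the WhyLabs Platform to drive the monitoring and guardrail behavior described above.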
WhyLabs will be at AWS re:Invent - contact us to schedule a meeting.
The SCA between WhyLabs and AWS gives shared customers an easy way to switch on observability for AI applications through the WhyLabs Platform on AWS. WhyLabs has already delivered significant value to some of the most advanced AI organizations:
“Glassdoor is actively integrating large language models (LLMs) into a range of products that power our community experience. ML Observability is fundamental to our ability to track model performance and deliver consistently positive user experiences,” said Rolland He, Manager, Machine Learning Science at Glassdoor. “Partnering with WhyLabs helps us ensure that observability is enabled across our portfolio of AI applications.”
“In order for Snappt to create a scalable solution for fraud detection and maintain 99.8% accuracy, AI is essential,” said Mahmoud Lababidi, Director of Machine Learning at Snappt. “With WhyLabs Observability tools, we have critical visibility into the health of the AI applications, ensuring robustness and reliability across our product portfolio.”
To further expand the value proposition and improve the customer experience, WhyLabs is also building on AI capabilities from AWS. In particular, WhyLabs uses foundation models (FMs) available through Amazon Bedrock, a fully managed service that makes FMs from leading AI companies accessible via an API for building and scaling generative AI applications, to power LLM-based evaluation and hallucination detection.
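As a rough illustration of what an LLM-powered evaluation step on AWS can look like, the sketch below calls a foundation model through Amazon Bedrock's runtime API (via boto3) and asks it to judge whether an answer is supported by a source passage. The model ID, prompt format, and simple YES/NO grading scheme are illustrative assumptions, not the specific evaluators WhyLabs has built.

```python
# Illustrative sketch: a support/consistency check using a foundation model on Amazon Bedrock.
# Assumes AWS credentials with Bedrock model access; the model ID and request format below
# follow the Anthropic Claude v2 invoke_model convention and are illustrative only.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def is_supported(source: str, answer: str) -> bool:
    """Ask the model whether `answer` is fully supported by `source` (hypothetical grader)."""
    prompt = (
        "\n\nHuman: Does the ANSWER contain only claims supported by the SOURCE? "
        "Reply with exactly YES or NO.\n"
        f"SOURCE: {source}\nANSWER: {answer}\n\nAssistant:"
    )
    response = bedrock.invoke_model(
        modelId="anthropic.claude-v2",
        contentType="application/json",
        accept="application/json",
        body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 5, "temperature": 0.0}),
    )
    completion = json.loads(response["body"].read())["completion"]
    return completion.strip().upper().startswith("YES")

print(is_supported("Returns are accepted within 30 days.", "You can return items within 30 days."))
```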
WhyLabs is available in AWS Marketplace, providing companies with accelerated implementation, streamlined procurement, and consolidated billing.
“WhyLabs is helping to accelerate AI adoption, making it even easier for our customers to observe and control the quality of AI models they customize and build using Amazon Bedrock and Amazon SageMaker,” said Alessya Visnjic, Chief Executive Officer, WhyLabs. “We are leveraging the power of AWS to help pave the way for every organization to harness the power of generative AI and push the boundaries of customer experience, while prioritizing safety and security.”
Read the full press release here.
To learn more about AWS and WhyLabs' partnership, visit our partner page.