
WhyLabs Achieves SOC 2 Type 2 Certification!

AI observability is mission-critical for production ML applications. At WhyLabs, we are committed to making AI observability ubiquitous and available to every AI practitioner by removing the barriers to adopting this essential technology. One of these barriers is the need for data privacy and security assurance.

We are very happy to announce that we have successfully completed our SOC 2 Type 2 examination with zero exceptions. WhyLabs is committed to ensuring that our current and future customers are well informed about the robust capabilities and security of the WhyLabs AI Observatory platform. Part of that commitment is our guarantee to have our business policies and practices evaluated and validated by independent third parties.

What is SOC 2 Compliance?

System and Organization Controls (SOC) reports are issued to organizations that provide services like WhyLabs, and whose controls have been evaluated by a third party against defined standards. SOC 2 is one of the most comprehensive certifications within SOC and is broadly considered the most trusted third-party security verification.

WhyLabs’ successful SOC 2 Type 2 examination focused on controls as they relate to security. This designation recognizes that WhyLabs meets all the infrastructure and data control policy requirements to regularly monitor for malicious or unrecognized activity, monitor user access levels, and document system configuration changes. The results show that our information and systems are thoroughly protected against unauthorized access, disclosure of information, and damage to systems. The report is available to customers and prospects evaluating the effectiveness of WhyLabs’ policies and procedures for controlling our services.

Our relentless commitment to your security

We know that security and data privacy are critical for all our customers and users. We designed our SaaS platform, the WhyLabs AI Observatory, from first principles with privacy built in. The raw data never leaves the customer perimeter. Our approach is to profile model inputs and outputs continuously but capture only statistical profiles of the underlying data. These statistical profiles do not contain proprietary information or PII, and for added security, all statistical profiles are encrypted during transfer and at rest.
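To make the profiling idea concrete, here is a minimal sketch in plain Python of what "capture only statistical profiles" means. This is an illustration of the concept, not the actual WhyLabs implementation; the function and field names are hypothetical.

```python
import statistics

def profile_column(values):
    """Build a statistical profile of a numeric column.

    Only aggregate statistics are retained; the raw values are
    discarded, so the profile contains no row-level data or PII.
    """
    return {
        "count": len(values),
        "mean": statistics.fmean(values),
        "min": min(values),
        "max": max(values),
        "stdev": statistics.pstdev(values),
    }

# Example: profile one batch of a model's input feature.
latencies = [12.0, 15.5, 11.2, 18.9, 14.3]
profile = profile_column(latencies)
```

Only the handful of aggregates in `profile` would ever be transmitted for monitoring; the raw `latencies` values stay inside the customer perimeter.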

WhyLabs was designed to remove barriers for organizations to adopt and optimize ML applications - with peace of mind that their data is secure. We want customers to focus on achieving healthy models and healthy data without worrying about threats to data and privacy. Our successful SOC 2 Type 2 certification is only one of the stepping stones in our commitment to security.

To learn more about security at WhyLabs, visit our security page or join our Slack. To request the WhyLabs SOC 2 Type 2 report, please contact your account manager or email [email protected].

Run AI with Certainty!
