
AIShield and WhyLabs: Threat Detection and Monitoring for AI

AI Security, its impact, and challenges

Around the world, the adoption of artificial intelligence (AI) and its impact on businesses and society stand at a turning point. For AI-first companies, the cybersecurity of AI is mission-critical, yet security is typically an afterthought in ML systems. Would TikTok succeed in a highly competitive attention economy without its AI recommendation engine working properly? What if it were attacked? Would Grammarly succeed without its AI engine? What if it were compromised? What if AI-powered security systems themselves are attacked?

The reality is that AI can be attacked, and existing cybersecurity measures are insufficient to protect against such attacks. Gartner reported that two in five organizations have had AI security incidents or privacy breaches, and its study suggested that many more are going unreported or undetected. AI systems sit at the epicenter of security, safety, and privacy concerns.

Fortunately, AIShield and WhyLabs are partnering to make it trivial for companies relying on AI to maintain the security and reliability of their models. Using AIShield and WhyLabs, users can prevent both AI attacks and failures, ensuring that their models drive value for the business.

AIShield – Providing a one-stop AI Security Solution

AIShield is an AI-security solution designed to protect AI systems in the face of emerging security threats. AIShield brings vulnerability assessment and security hardening to the consumer’s AI-based devices and cloud solutions. It has been developed to natively support automation with microservice-based REST-API offerings for organizations to achieve scale rapidly.

Distinctive features deliver affordable security at scale:

  • Vulnerability scanning - Analysis for various types of AI/ML models against different attack types such as theft, poisoning, evasion, and inference
  • Endpoint protection - Threat-informed defense generation
  • Intrusion detection prevention - Real-time prevention and monitoring of new attacks in the cloud and on devices
  • Threat intelligence feed - Active threat hunting and incident report triggers

AIShield is available in cloud-native SaaS configurations designed with an API-first approach with detailed dashboards available for various stakeholders across all industries.

To learn more about AIShield, visit their website.

WhyLabs – AI Observability

The WhyLabs AI Observatory helps data scientists and machine learning engineers prevent AI failures, building reliability and trust in their machine learning models. WhyLabs offers model performance monitoring to prevent model performance degradation, with features such as drift detection, intelligent anomaly detection, and a series of dashboards to help users continuously improve the performance of their models.

WhyLabs uses whylogs, the open standard for data logging, to gather telemetry about the data flowing through customers' models. whylogs generates profiles (statistical summaries of the data) that can be uploaded to WhyLabs to monitor deployed models. Because the platform relies on these compact profiles rather than on raw data, it is uniquely scalable, secure, and easy to use.

To learn more about WhyLabs AI Observatory, click here.

Real-time defense and threat insights

“You can’t mitigate what you can’t detect.”

To respond to a security threat against an AI system, you must first detect that the system is under attack, then send that information to your Security Operations Center (SOC) for response and analysis. AIShield and the WhyLabs AI Observatory deliver a solution that does exactly this.

Enterprises can generate a threat-informed endpoint defense model by integrating AIShield's vulnerability and defense APIs into their AI development workflow. AIShield analyzes a model's vulnerabilities against an exhaustive attack repository and creates a threat-informed endpoint defense model that is deployed alongside the original model in the target environment, providing real-time protection against attacks on the AI model.
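Since AIShield exposes its analysis as REST APIs, the workflow integration amounts to an authenticated HTTP call. The sketch below is a hedged illustration only: the base URL, endpoint path, header name, and payload fields are assumptions for demonstration, not AIShield's documented contract.

```python
import json
import urllib.request

# Placeholders -- consult AIShield's API documentation for the real values.
API_BASE = "https://aishield.example.com/api/v1"
API_KEY = "YOUR_AISHIELD_API_KEY"

def analysis_payload(model_url: str) -> dict:
    """Build a (hypothetical) vulnerability-analysis request body."""
    return {
        "model_url": model_url,
        "task_type": "image_classification",
        "attack_types": ["extraction", "evasion", "poisoning", "inference"],
    }

def request_vulnerability_analysis(model_url: str) -> dict:
    """Submit a model artifact for analysis against the attack repository."""
    req = urllib.request.Request(
        f"{API_BASE}/vulnerability-analysis",
        data=json.dumps(analysis_payload(model_url)).encode("utf-8"),
        headers={"Content-Type": "application/json", "x-api-key": API_KEY},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())
```

The response would reference the generated defense model artifact, which is then packaged next to the original model for deployment.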

Figure 1: Securing AI workflows with AIShield

AIShield’s threat-informed endpoint defense model can be integrated with the whylogs logging agent for real-time telemetry of attacks on the AI model. The defense model can be configured to send telemetry to the WhyLabs AI Observatory as soon as an attack is detected. If there are any observed anomalies with respect to the baseline data, automated alerts are generated in the WhyLabs AI Observatory-enabled dashboards.

The AIShield-WhyLabs integration gives AI asset owners comprehensive security insights for ML models alongside the observability signals already tracked in the WhyLabs AI Observatory, such as model performance metrics and drift detection.

Find more about the technical integration here.

Figure 2: Threat-informed endpoint defense deployment along with whylogs agent integration sending telemetry to WhyLabs AI Observatory

Imagine this: a healthcare medtech provider spent millions of dollars over four years bringing an AI-powered, non-invasive software solution to market to combat the challenge of early cancer detection. The plan was set: launch a one-of-a-kind product and capture a large market share with an innovative pay-per-use API business model. Attackers and hacktivists, however, showed them a very different reality. Attackers launched novel model extraction attacks without breaching traditional cybersecurity controls, while hacktivists used poisoning and evasion attacks to expose biases and performance shortfalls. The result: the provider's core differentiator was extracted for financial gain, and its reputation suffered. To add insult to injury, the organization then observed drift in its data, and the drift's impact on the model, and had to work to assure that the product was still safe to use. The story took another turn when the provider wanted to redeploy the model: upcoming cybersecurity requirements demanded adequate security controls, along with assurance that the algorithm was not harming patients and could be monitored. This story illustrates why a one-stop solution for monitoring and security is a matter of survival first and growth second.


AIShield's AI model security solution gives enterprises real-time insight into the security posture of their AI assets. The seamless integration of AIShield's security insights into the WhyLabs AI observability platform delivers strong value for enterprises: a single platform with comprehensive insight into ML workloads, and security hardening against novel risks both present and emerging.

To learn more about WhyLabs, contact us to schedule a demo or sign-up for a WhyLabs starter account to monitor datasets and ML models for free, no credit card required.

To learn more about AIShield, visit their website.
