
Achieving Ethical AI with Model Performance Tracing and ML Explainability

In today’s world of omnipresent AI applications, the ethical side of the technology is receiving ever more attention. At WhyLabs we are big proponents of Robust & Responsible AI, which is why we’ve expanded our platform with Performance Tracing and Model Explainability. These new capabilities accelerate our customers’ journey toward the three goals of ethical AI: fairness, accountability, and transparency.

See how WhyLabs can help you achieve ethical AI and enable ML performance tracing and explainability - sign up for a free starter account or request a demo!

Why should you care?  

According to an article from the Harvard Business Review, “failing to operationalize data and AI ethics is a threat to the bottom line”. Numerous cases in recent years have shown how a lack of proper consideration for fairness, transparency, and privacy led to public scandals, entire projects being scrapped, and even lawsuits being filed. The repercussions of those cases still resonate in the AI community, driving the development of best practices for ensuring ethical AI and leading to the inclusion of these topics in the curricula of AI-oriented courses (e.g. Managing Machine Learning Projects on Coursera.org). The consensus is that the best ethical risk mitigation policy is to design AI products that anticipate issues before they occur and to implement tools that detect them from day one.

How can you leverage WhyLabs to ensure AI ethics in your projects?

The WhyLabs platform can address the three aspects of ethical AI with the following functionalities:

Fairness

  • Segments - aggregating data into groups based on the model’s input features or additional attributes is key to detecting fairness and bias issues in your model, as it allows you to track group-specific metrics (see the sketch below this list).
  • Tracing dashboard - in this view you can inspect the performance metrics and volume of the overall or segmented dataset over time, juxtapose the metrics of one segment against another or against the overall dataset, and compare those metrics across different time ranges or profiles. This breadth of options enables fine-grained analysis and the detection of potential fairness issues.
WhyLabs Tracing Dashboard
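
To make this concrete, here is a minimal sketch of segmented profiling with the open-source whylogs library. The dataframe and the `age_group` column are invented for illustration, and the exact API may differ between whylogs versions:

```python
import pandas as pd
import whylogs as why
from whylogs.core.schema import DatasetSchema
from whylogs.core.segmentation_partition import segment_on_column

# Toy batch of model inputs and outputs (illustrative columns only).
df = pd.DataFrame({
    "age_group": ["18-25", "26-40", "41-65", "26-40"],
    "credit_score": [620, 710, 690, 750],
    "prediction": [0, 1, 1, 1],
})

# Partition profiling by a sensitive attribute so that
# group-specific metrics can be tracked and compared.
schema = DatasetSchema(segments=segment_on_column("age_group"))

# Each segment gets its own statistical profile.
results = why.log(df, schema=schema)

# Ship the segmented profiles to WhyLabs (assumes WHYLABS_API_KEY
# and org/dataset IDs are configured in the environment).
results.writer("whylabs").write()
```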

Transparency

  • Explainability dashboard - in this tab you can inspect which features have contributed most to your model’s predictions, helping you understand what drives its decisions - and whether the most influential features are actually relevant. For example, you wouldn’t want a model performing mortgage eligibility assessments to have its predictions influenced by the applicant’s gender. One way to populate this dashboard is sketched below.
WhyLabs Explainability Dashboard
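
A common way to feed such a dashboard is to compute global feature importances from your model and upload them as feature weights. Below is a hedged sketch assuming a scikit-learn model; the toy data and column names are invented, and the `FeatureWeights` upload follows the pattern in the whylogs documentation, which may change between versions:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

from whylogs.core.feature_weights import FeatureWeights

# Invented toy data; substitute your real features and labels.
X = pd.DataFrame({
    "income":         [40_000, 85_000, 60_000, 120_000, 55_000, 95_000],
    "credit_score":   [620, 710, 690, 750, 640, 720],
    "gender_encoded": [0, 1, 0, 1, 1, 0],  # a feature to scrutinize
})
y = [0, 1, 0, 1, 0, 1]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Global importances via permutation: how much does shuffling
# each feature degrade the model's performance?
perm = permutation_importance(model, X, y, n_repeats=10, random_state=0)
weights = {col: float(w) for col, w in zip(X.columns, perm.importances_mean)}

# Upload so the values appear in the Explainability dashboard
# (assumes WhyLabs credentials are set in the environment).
FeatureWeights(weights).writer("whylabs").write()
```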

Accountability

  • Monitoring - this core capability of the WhyLabs platform is crucial to maintaining accountability over an AI product, as it keeps the responsible team informed about any concerning behavior in their system (a sketch of the production loop follows this list).
  • User-friendly UI - the WhyLabs platform serves a variety of user groups, providing insights into the AI system’s health for technical and non-technical audiences alike and democratizing awareness of AI solutions across the organization.
  • Notifications - the monitors tracking the telemetry of your models and data can trigger alerts which, depending on their severity, can reach not only the ML/DS engineering teams but also product managers and stakeholders, ensuring heightened attention whenever the rules of ethical AI are breached.
WhyLabs Monitor Manager
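
For completeness, here is a minimal sketch of the production side of that loop: profiling each batch of inference data and shipping it to WhyLabs, where the configured monitors evaluate it and fire notifications. The file path and function name are hypothetical:

```python
import pandas as pd
import whylogs as why

# Assumes WhyLabs credentials in the environment:
#   WHYLABS_API_KEY, WHYLABS_DEFAULT_ORG_ID, WHYLABS_DEFAULT_DATASET_ID

def profile_and_upload(batch: pd.DataFrame) -> None:
    """Profile one batch of production data and send it to WhyLabs,
    where monitors evaluate it and can trigger alerts."""
    results = why.log(batch)
    results.writer("whylabs").write()

# E.g., invoked from a daily batch job or a streaming micro-batch.
profile_and_upload(pd.read_parquet("todays_inferences.parquet"))
```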

What ethical questions will you be able to answer if you monitor your AI solution with WhyLabs?

  • Is my model fair with respect to different user groups?
  • Are there any differences among the error rates for different user groups?
  • Is my model making predictions based on features that may be introducing bias?
  • Am I monitoring for model drift to ensure my software remains fair over time?
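
The second question, for instance, can be answered directly from logged predictions joined with ground-truth labels. A minimal pandas sketch with invented column names:

```python
import pandas as pd

# Logged predictions joined with ground truth (illustrative data).
df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "label":      [1, 0, 1, 1, 1, 0],
    "prediction": [1, 0, 1, 0, 1, 1],
})

# Per-group error rate: fraction of predictions disagreeing with labels.
error_rates = (
    df.assign(error=df["label"] != df["prediction"])
      .groupby("group")["error"]
      .mean()
)
print(error_rates)  # group A: 0.0, group B: ~0.67
# A persistent gap between groups is a fairness signal to investigate.
```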

To see how WhyLabs can help you achieve ethical AI and enable ML performance Tracing and Explainability, sign up for a free account or request a demo.

Check out our Performance Tracing and Model Explainability documentation to learn more, or if you’re interested in learning how you can apply data and/or model monitoring to your organization, please contact us, and we would be happy to talk!

Resources

  1. https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai
  2. https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4t6dA
  3. https://resources.sei.cmu.edu/asset_files/FactSheet/2019_010_001_636622.pdf
