
WhyLabs Private Beta: Real-time, No-code, Cloud Storage Data Profiling

Developers are busy enough as it is. For some, devoting development time to integrating DevOps tools and monitoring solutions can be enough of a deterrent that it ends up being perpetually kicked to the next sprint. We've talked to enough users at this point to understand that having no-code integration options is mandatory for certain use cases.

We've started working on a no-code integration option for WhyLabs that lets certain users bypass the need to integrate whylogs into their data pipeline. We already have a container-based solution that lets you treat the integration as a black box, but it requires hosting your own container, which some users find more burdensome than just writing some code.

The first iteration of this solution is aimed at users who already have most of their data in cloud storage (starting with AWS S3), don't want to invest any development time into profiling their data with whylogs, and don't mind permitting WhyLabs to ingest from their S3 bucket (without permanently storing any data). In the future, we'll support other cloud storage providers and give parts of this solution on-prem capabilities, so it can be deployed using tools like Terraform.

How it works

We're starting simple, essentially hosting our existing whylogs container ourselves and hooking it up to user S3 accounts via S3 events. The primary use case is real-time data monitoring; we won't be addressing historical data imports just yet.

The onboarding experience will consist of a few updates to your bucket:

  • Adding a policy to your S3 bucket that lets our AWS account download data.
  • Adding a special tag to your S3 bucket that we check so we know you own it.
  • Enabling S3 events directed at our SNS topic.
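As a rough illustration, the three bucket updates boil down to documents like the ones below. Everything here is hypothetical: the real account ID, SNS topic ARN, and tag key come from the onboarding flow, not from this sketch.

```python
import json

# Hypothetical values -- the real ones are provided during onboarding.
WHYLABS_ACCOUNT_ID = "123456789012"
WHYLABS_SNS_TOPIC = "arn:aws:sns:us-west-2:123456789012:whylabs-ingest"
BUCKET = "my-data-bucket"

# 1. A bucket policy granting the WhyLabs AWS account read access to objects.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowWhyLabsRead",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{WHYLABS_ACCOUNT_ID}:root"},
            "Action": ["s3:GetObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

# 2. An ownership tag (hypothetical key) checked before any ingestion happens.
bucket_tags = {"TagSet": [{"Key": "whylabs-owner", "Value": "my-org-id"}]}

# 3. An event notification configuration pointing object-created events
#    at the WhyLabs SNS topic.
notification_config = {
    "TopicConfigurations": [
        {
            "TopicArn": WHYLABS_SNS_TOPIC,
            "Events": ["s3:ObjectCreated:*"],
        }
    ]
}

print(json.dumps(bucket_policy, indent=2))
```

These are the same documents you would pass to the S3 console or CLI when setting up the bucket; only read access to objects is granted, never write or delete.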

And then a few steps in our UI:

  • Specifying which column in your dataset represents time, or deciding to use the upload time instead.
  • Specifying the types of your columns if the inferred whylogs types aren't what you want.
  • Specifying the WhyLabs org and model ID that you want to import the data into.

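Conceptually, the UI steps above amount to a small per-dataset configuration. The field names here are hypothetical, invented for illustration; the real UI may organize these settings differently.

```python
# Hypothetical sketch of the settings chosen in the UI.
dataset_config = {
    # Which column represents time; None means "use the upload time".
    "timestamp_column": "event_time",
    # Overrides for columns where the inferred whylogs type isn't what you want.
    "column_types": {"zip_code": "string", "score": "fractional"},
    # Where the resulting profiles should land in WhyLabs.
    "org_id": "org-123",
    "model_id": "model-7",
}

# Falling back to upload time when no timestamp column is chosen:
timestamp_source = dataset_config["timestamp_column"] or "upload_time"
```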
At that point, you'll be able to upload CSV and Parquet files to your bucket and we'll automatically download and profile the data. The rough sequence of events is below.

  1. Your application will periodically upload data to S3.
  2. S3 will send an event referencing that file to our SNS topic.
  3. SNS invokes our Lambda function.
  4. The Lambda function validates the bucket tags and reads any associated metadata from our back end (which org/model this belongs to, etc.).
  5. The event is piped to Kinesis.
  6. Kinesis delivers the event to our whylogs container.
  7. The container downloads the data from S3, profiles it with whylogs, and deletes it from the container.
  8. The profile is uploaded to WhyLabs for monitoring.
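Steps 3 and 4 of the sequence above can be sketched as a small handler. This is a simplified illustration, not our actual Lambda code: the tag key is hypothetical, and the payloads follow the standard shape of S3 notifications delivered through SNS.

```python
import json

def handle_sns_event(event: dict, bucket_tags: dict) -> dict:
    """Unwrap the S3 notification from its SNS envelope and validate the
    ownership tag before forwarding the reference onward (steps 3-4)."""
    s3_event = json.loads(event["Records"][0]["Sns"]["Message"])
    record = s3_event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # Step 4: the tag must identify a known org; otherwise the event is dropped.
    org_id = bucket_tags.get("whylabs-owner")  # hypothetical tag key
    if org_id is None:
        raise ValueError(f"bucket {bucket} is not tagged for WhyLabs ingestion")

    # Steps 5-8 (not shown): forward {bucket, key, org_id} through Kinesis to
    # the whylogs container, which downloads, profiles, and deletes the data.
    return {"bucket": bucket, "key": key, "org_id": org_id}

# An SNS payload roughly as Lambda would receive it.
sns_event = {
    "Records": [
        {"Sns": {"Message": json.dumps({
            "Records": [{"s3": {
                "bucket": {"name": "my-data-bucket"},
                "object": {"key": "uploads/2021-06-01.csv"},
            }}]
        })}}
    ]
}

result = handle_sns_event(sns_event, {"whylabs-owner": "org-123"})
```

Note that only a reference to the file moves through SNS and Kinesis; the data itself is downloaded once, at the container, and deleted as soon as the profile is built.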

One of WhyLabs' biggest strengths is that we don't need (or want) the raw data; we only need the profiles we generate from it with whylogs. The only reason we're accessing data here is that we have a strong signal that some users would rather we "just do it" if it means they don't have to do any dev work. That said, we still have no interest in retaining the raw data, and we drop it as soon as we profile it. In the future, we'll have better self-hosting/on-prem options for this system for people who can't share any data and who don't want to integrate whylogs into their architecture manually.

Wrapping up

We offer several other integration options depending on your needs. Integrating our open source whylogs library manually will always give you the most flexibility, and our whylogs container is a happy middle ground between manual and no-code integration if you're willing to host it.

Early access and feedback

If you're interested in early access or have any feedback on the feature, reach out to us on Slack or email and mention this post, or fill out this Google form. We'll be working on releasing this over the coming weeks, and we'll eventually follow up with support for Google Cloud and Azure as well.
