
BigQuery Data Monitoring with WhyLabs

You can now monitor the quality of your data in Google BigQuery with whylogs without writing any code. This is the first truly no-code data monitoring solution that WhyLabs offers, and we started with BigQuery because of its popularity, managed infrastructure, and integration options. Data quality monitoring is a key process for ensuring that the data your analytics and machine learning applications rely on is sound. This whylogs integration is a good fit for anyone who stores data in BigQuery and wants to monitor its quality on an ongoing basis without writing any code.

The core of the integration is an Apache Beam template that we publish to a public GCS bucket. The template can be used to create a Dataflow job that consumes from BigQuery in a few different ways, depending on how you configure it.

How to use it

Before starting, you’ll need to head over to WhyLabs and create a free account to get your organization ID, model ID, and API key. API keys can be generated from the settings menu after you log in. You’ll supply these parameters to the Dataflow job below.

To use the integration, you'll need a GCP account that has access to the BigQuery and Dataflow services. This section will have examples that use the Google Cloud console. Start by opening the Dataflow service and creating a job from a template.

Next, select the Custom Template option.

For the template location, enter whylabs-dataflow-templates/batch_bigquery_template/latest/batch_bigquery_template.json. You'll see the form automatically expand, highlighting the additional parameters you have to supply.

In this example we'll profile one of the public datasets hosted by Google using the following configuration options.

  • Output GCS path - gs://template_test_bucket/my_job/profile. Pick a bucket you own here. This determines where the whylogs profiles are written.
  • Input Mode - BIGQUERY_TABLE. This tells the template to consume an entire BigQuery table.
  • Date column - block_timestamp. This is the column in the dataset that should be used for time. It should have a type of TIMESTAMP in the BigQuery schema. The dataset we'll be using happens to use this name. It will be different for your data.
  • Organization ID - Something like org-abc123. This is the organization id of your WhyLabs account. You can get a free one by signing up at hub.whylabsapp.com.
  • Model ID - The model id that you'll upload these whylogs profiles to. You can create one for free by signing up at hub.whylabsapp.com.
  • API Key - An API key from your WhyLabs account. You can generate one from the settings menu.

There are a few additional parameters to set as well. You'll find them under the optional parameters section. They're considered optional because they're conditionally required depending on which input mode you select.
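One way to picture the conditional requirement is a small lookup from input mode to the extra parameters that mode needs. This is an illustrative sketch, not the template's actual validation code; aside from BIGQUERY_TABLE and input-bigquery-table, which appear in this post, the mode and parameter names are assumptions.

```python
# Hypothetical mapping of input mode -> conditionally required parameters.
# Only BIGQUERY_TABLE / input-bigquery-table come from this post; the
# BIGQUERY_SQL entry is a made-up placeholder for a second mode.
REQUIRED_BY_MODE = {
    "BIGQUERY_TABLE": ["input-bigquery-table"],
    "BIGQUERY_SQL": ["input-bigquery-sql"],
}

def missing_params(input_mode: str, supplied: dict) -> list:
    """Return the conditionally required parameters not present in `supplied`."""
    required = REQUIRED_BY_MODE.get(input_mode, [])
    return [p for p in required if p not in supplied]

# In BIGQUERY_TABLE mode, omitting the table parameter would be an error:
print(missing_params("BIGQUERY_TABLE", {"date-column": "block_timestamp"}))
```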

We only need to supply two of them for this example.

  • Input BigQuery Table - bigquery-public-data.crypto_bitcoin_cash.transactions. It's a reasonably sized free dataset with a time column.
  • Pandas Grouper frequency - Y. The job uses pandas to split by time under the hood. This tells it to split by year. In this example we'll generate roughly 15 whylogs profiles, one for each year of data.
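To make the Grouper frequency concrete, here is roughly what the time split looks like in pandas, using toy data: freq="Y" buckets rows by calendar year, and each bucket becomes one whylogs profile. The column name matches the date column above, but the data is made up for illustration.

```python
import pandas as pd

# Toy data standing in for the BigQuery rows the job reads.
df = pd.DataFrame({
    "block_timestamp": pd.to_datetime(
        ["2019-03-01", "2019-07-15", "2020-01-02", "2021-06-30"]
    ),
    "value": [1, 2, 3, 4],
})

# freq="Y" corresponds to the template's "Pandas Grouper frequency" of Y:
# one group (and therefore one profile) per calendar year.
groups = df.groupby(pd.Grouper(key="block_timestamp", freq="Y"))

for period, chunk in groups:
    print(period.year, len(chunk))
```

A frequency of D would produce one group per day instead, which matches the daily-profile use case described below.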

Create the job, and Dataflow will display the pipeline's execution graph in the console.

Once the job finishes, you'll see the profiles in GCS and your WhyLabs account. We created profiles for several years of data here, but the normal use case would be to create a profile for every day or hour as you receive new data.

All of this could have been run from the gcloud command line too with a command like the following.

gcloud dataflow flex-template run "my-job" \
        --template-file-gcs-location gs://whylabs-dataflow-templates/batch_bigquery_template/latest/batch_bigquery_template.json \
        --parameters input-mode=BIGQUERY_TABLE \
        --parameters input-bigquery-table='bigquery-public-data.crypto_bitcoin_cash.transactions' \
        --parameters date-column=block_timestamp \
        --parameters date-grouping-frequency=Y \
        --parameters org-id=MY_ORG_ID \
        --parameters dataset-id=MY_MODEL_ID \
        --parameters output=gs://my-bucket/dataset-timestamp-test/dataset_profile \
        --parameters api-key=MY_KEY \
        --region us-central1 

How it works

The job is designed to generate a handful of whylogs profiles. For a model that you're monitoring on a daily basis, the job will produce one profile per day of data. As the number of days covered by a single job grows, so does the time it takes to generate the profiles. While you could use this job to generate a profile for every day of data across several years, it would not perform well.
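A quick back-of-the-envelope illustration of why bucket count matters: a yearly grouping over a multi-year range yields a handful of profiles, while a daily grouping over the same range yields thousands. The date range here is arbitrary.

```python
import pandas as pd

# An arbitrary 15-year range, similar in scale to the example dataset above.
start, end = "2010-01-01", "2024-12-31"

yearly = pd.period_range(start, end, freq="Y")  # one bucket per year
daily = pd.period_range(start, end, freq="D")   # one bucket per day

print(len(yearly))  # a handful of profiles
print(len(daily))   # thousands of profiles, one per day
```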

If you have use cases that would benefit from generating many profiles per job then reach out to us. We have several Dataflow pipeline configurations that we didn't publish as templates and one of them might suit your needs.

Once you have everything set up, the next thing you'll want to do is set up monitors on your data.

What's next

We'll be adding a few features to this integration over time. The most exciting one is support for streaming mode, which will allow real-time profiling of Dataflow-based data pipelines.

We'll also be adding additional data sources. This is technically a Dataflow integration that happens to support BigQuery as an input right now, but Dataflow can consume more services than just BigQuery.

Get started by creating a free WhyLabs account.
