
BigQuery Data Monitoring with WhyLabs

You can now monitor the quality of your data in Google BigQuery with whylogs without writing any code. This is the first truly no-code data monitoring solution that WhyLabs offers, and we started with BigQuery because of its popularity, managed infrastructure, and integration options. Data quality monitoring is a key process for ensuring that the data your analytics and machine learning applications rely on is sound. This whylogs integration is a good fit for anyone who stores data in BigQuery and wants to monitor its quality on an ongoing basis without writing any code.

The core of the integration is an Apache Beam template that we publish to a public GCS bucket. The template can be used to create a Dataflow job that consumes from BigQuery in a few different ways, depending on how you configure it.

How to use it

Before starting, you’ll need to head over to WhyLabs and create a free account to get your organization ID, model ID, and API key. API keys can be generated from the settings menu after you log in. You’ll supply these parameters to the Dataflow job below.

To use the integration, you'll need a GCP account that has access to the BigQuery and Dataflow services. This section will have examples that use the Google Cloud console. Start by opening the Dataflow service and creating a job from a template.

Next, select the Custom Template option.

For the template location, enter whylabs-dataflow-templates/batch_bigquery_template/latest/batch_bigquery_template.json. The form will automatically expand to highlight the additional parameters you have to supply.

In this example we'll profile one of the public datasets hosted by Google using the following configuration options.

  • Output GCS path - gs://template_test_bucket/my_job/profile. Pick a bucket you own here. This determines where the whylogs profiles are written to.
  • Input Mode - BIGQUERY_TABLE. This tells the template to consume an entire BigQuery table.
  • Date column - block_timestamp. This is the column in the dataset that should be used for time. It should have a type of TIMESTAMP in the BigQuery schema. The dataset we'll be using happens to use this name. It will be different for your data.
  • Organization ID - Something like org-abc123. This is the organization ID of your WhyLabs account. You can get one for free by signing up for a WhyLabs account.
  • Model ID - The model ID that you'll upload these whylogs profiles to. You can create one for free from your WhyLabs account.
  • API Key - An API key from your WhyLabs account. You can generate one from the settings menu.

There are a few additional parameters to set as well. You'll find them under the optional parameters section. They're considered optional because they're conditionally required depending on which input mode you select.

We only need to supply two of them for this input mode.

  • Input BigQuery Table - bigquery-public-data.crypto_bitcoin_cash.transactions. It's a reasonably sized free dataset with a time column.
  • Pandas Grouper frequency - Y. The job uses pandas to split by time under the hood. This tells it to split by year. In this example we'll generate roughly 15 whylogs profiles, one for each year of data.
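The splitting described above can be sketched with plain pandas. This is an illustration of the grouping behavior, not the template's actual code; the sample rows and values are hypothetical, with block_timestamp standing in for the configured date column.

```python
import pandas as pd

# Hypothetical rows standing in for data read from BigQuery;
# block_timestamp plays the role of the configured date column.
df = pd.DataFrame({
    "block_timestamp": pd.to_datetime(
        ["2020-03-01", "2020-07-15", "2021-01-02", "2022-06-30"]
    ),
    "fee": [0.10, 0.20, 0.15, 0.30],
})

# A Grouper frequency of "Y" splits the rows into yearly batches,
# each of which would become its own whylogs profile.
groups = df.groupby(pd.Grouper(key="block_timestamp", freq="Y"))
batch_sizes = {ts.year: len(batch) for ts, batch in groups}
print(batch_sizes)  # {2020: 2, 2021: 1, 2022: 1}
```

For the bitcoin cash dataset, the same yearly split produces one batch per year of transaction history, which is where the roughly 15 profiles come from.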

Create the job and Dataflow will display the pipeline's execution graph in the console.

Once the job finishes, you'll see the profiles in GCS and your WhyLabs account. We created profiles for several years of data here, but the normal use case would be to create a profile for every day or hour as you receive new data.

All of this could have been run from the gcloud command line too with a command like the following.

gcloud dataflow flex-template run "my-job" \
        --template-file-gcs-location gs://whylabs-dataflow-templates/batch_bigquery_template/latest/batch_bigquery_template.json \
        --parameters input-mode=BIGQUERY_TABLE \
        --parameters input-bigquery-table='bigquery-public-data.crypto_bitcoin_cash.transactions' \
        --parameters date-column=block_timestamp \
        --parameters date-grouping-frequency=Y \
        --parameters org-id=MY_ORG_ID \
        --parameters dataset-id=MY_MODEL_ID \
        --parameters output=gs://my-bucket/dataset-timestamp-test/dataset_profile \
        --parameters api-key=MY_KEY \
        --region us-central1 

How it works

The job is designed to generate a handful of whylogs profiles. For a model that you're monitoring on a daily basis, the job will produce one profile per day of data. The more days the job covers, the longer profile generation takes. While you could use this job to generate a profile for every day of several years of data, it would not perform well.
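To see why, count the batches: at daily granularity, a multi-year table fans out into a profile for every calendar day. A quick, hypothetical pandas sketch (the column names are made up) shows the batch count for three years of data:

```python
import pandas as pd

# Hypothetical three-year span profiled at daily granularity.
days = pd.date_range("2020-01-01", "2022-12-31", freq="D")
df = pd.DataFrame({"block_timestamp": days, "fee": range(len(days))})

# With a Grouper frequency of "D", every calendar day becomes its
# own batch, and therefore its own whylogs profile.
daily = df.groupby(pd.Grouper(key="block_timestamp", freq="D"))
print(daily.ngroups)  # 1096
```

Over a thousand profiles in a single batch job is well past the handful the template is tuned for, which is why long backfills at daily granularity are a poor fit.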

If you have use cases that would benefit from generating many profiles per job then reach out to us. We have several Dataflow pipeline configurations that we didn't publish as templates and one of them might suit your needs.

Once you have everything set up, the next thing you'll want to do is set up monitors on your data.

What's next

We'll be adding a few features to this integration over time. The most exciting one is support for streaming mode, which will allow real-time profiling of Dataflow-based data pipelines.

We'll also be adding additional data sources. This is technically a Dataflow integration that happens to support BigQuery as an input right now, but Dataflow can consume more services than just BigQuery.

Get started by creating a free WhyLabs account.

