
Integrating whylogs into your Kafka ML Pipeline

It is well known that machine learning models consume vast amounts of data, both during training and in production. Kafka is an important part of many event-driven machine learning platforms because it decouples data consumers from producers, which makes it easier to scale horizontally and to reconfigure pipelines. A Kafka topic might deliver events containing multiple feature elements, such as web access logs, point-of-sale transactions, or call duration records. Any continuous stream of real-time data is a potential source of Kafka events.
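To make this concrete, here is a minimal sketch of what a producer for such a stream might look like. It is not part of the pipeline described in this post; it simply assumes kafka-python, a broker at localhost:9092, and the JSON-encoded whylogs-stream topic used by the consumer below, with made-up feature fields.

# Minimal producer sketch (illustrative, not part of the pipeline below):
# publishes JSON-encoded feature rows to the topic the consumer subscribes to.
from kafka import KafkaProducer
import json

producer = KafkaProducer(bootstrap_servers='localhost:9092',
                         value_serializer=lambda d: json.dumps(d).encode('utf-8'))

event = {"fico_range_high": 715, "loan_amnt": 12000}  # one row of made-up features
producer.send('whylogs-stream', event)
producer.flush()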

Data streams typically power data-driven decisions, either through BI applications or ML/AI applications. In either case, ensuring data quality is critical to keeping these applications trustworthy and reliable. Evaluating the quality of data in a Kafka stream is a non-trivial task because of the large data volumes and latency requirements. This is an ideal job for whylogs, an open-source package for Python and Java that uses Apache DataSketches to monitor and detect statistical anomalies in streaming data. Whylogs produces compact statistical profiles of time-series data that can help detect data drift and distribution changes over time. Most importantly for Kafka integration, whylogs profiles can be merged, so your monitoring pipeline can scale horizontally and still provide a continuous statistical profile of your entire data stream.
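Before wiring whylogs into Kafka, it helps to see how little code basic profiling takes. The sketch below uses the same session API as the examples later in this post; the DataFrame contents are made up.

# Minimal profiling sketch; the data is made up.
from whylogs import get_or_create_session
import pandas as pd

session = get_or_create_session()
df = pd.DataFrame({"fico_range_high": [680, 715, 750],
                   "loan_amnt": [5000, 12000, 20000]})

with session.logger(dataset_name="dataset") as logger:
    logger.log_dataframe(df)  # records per-column statistics, not the raw rows
# on exit, the logger writes a compact profile using the session's configured writers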

In addition to Kafka, whylogs can be integrated into a variety of data pipelines, including MLflow, SageMaker, and Spark. This article will discuss how to use whylogs to monitor streams of data supplied by Kafka.

Monitoring Events through Kafka

Shown below is a simple Python shim that consumes events from a Kafka topic and processes them through whylogs. Each Kafka event represents a row of JSON-encoded features in a stream of training data. The example consumes up to 100 events at a time because processing batches is more efficient than processing individual events. The session.logger in this example is configured to produce a new statistical profile every minute, as long as data is flowing.

from kafka import KafkaConsumer
from whylogs import get_or_create_session
import json
import pandas as pd

# Kafka delivers raw bytes; decode each event as a JSON object
deserializer = lambda x: json.loads(x.decode('utf-8'))
consumer = KafkaConsumer(bootstrap_servers='localhost:9092',
                         value_deserializer=deserializer)
consumer.subscribe(['whylogs-stream'])

session = get_or_create_session()
# rotate the logger every minute so each profile covers a one-minute window
with session.logger(dataset_name="dataset", with_rotation_time="1m") as logger:
    while True:
        # consume up to 100 events per poll; batches are cheaper to profile
        record = consumer.poll(timeout_ms=10000,
                               max_records=100,
                               update_offsets=True)
        for partition, messages in record.items():
            df = pd.DataFrame([msg.value for msg in messages])
            logger.log_dataframe(df)

In production, this consumer might be considerably more complicated.  This example automatically updates the partition offset as soon as the events are consumed.  To avoid losing data due to service disruptions, it would be best to advance the offset pointer only after the whylogs profile covering that event has been persisted to long-term storage.
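One way to arrange that with kafka-python, sketched below, is to disable auto-commit, close a logger (which flushes its profile) for each batch, and only then commit the offsets. The group id is illustrative, and a production pipeline using time-based rotation would need a more careful hand-off between profile rotation and offset commits.

# Hedged sketch of at-least-once profiling: offsets are committed only after
# the profile covering the consumed events has been flushed to storage.
from kafka import KafkaConsumer
from whylogs import get_or_create_session
import json
import pandas as pd

consumer = KafkaConsumer(bootstrap_servers='localhost:9092',
                         enable_auto_commit=False,        # commit offsets manually
                         group_id='whylogs-profilers',    # illustrative group id
                         value_deserializer=lambda x: json.loads(x.decode('utf-8')))
consumer.subscribe(['whylogs-stream'])
session = get_or_create_session()

while True:
    records = consumer.poll(timeout_ms=10000, max_records=100)
    if not records:
        continue
    # one profile per poll batch; leaving the `with` block flushes it to disk
    with session.logger(dataset_name="dataset") as logger:
        for _, messages in records.items():
            logger.log_dataframe(pd.DataFrame([m.value for m in messages]))
    consumer.commit()  # safe to advance offsets now that the profile is persisted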

This example also subscribes to a single Kafka topic and processes events for all partitions of that topic. However, topics are often divided into multiple partitions to allow horizontal scaling of consumers. Whylogs can monitor each partition separately, and the resulting profiles can be merged into a single profile that covers all events for the topic.
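With kafka-python there are two common ways to spread that work across workers, sketched below with illustrative names: let a consumer group assign partitions automatically, or pin each profiling worker to an explicit partition.

# Two hedged options for scaling consumers across partitions (names illustrative).
from kafka import KafkaConsumer, TopicPartition

# Option 1: consumers sharing a group_id divide the topic's partitions automatically
consumer = KafkaConsumer(bootstrap_servers='localhost:9092',
                         group_id='whylogs-profilers')
consumer.subscribe(['whylogs-stream'])

# Option 2: pin this worker to a specific partition explicitly
worker = KafkaConsumer(bootstrap_servers='localhost:9092')
worker.assign([TopicPartition('whylogs-stream', 0)])  # this instance profiles partition 0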

After running this example for some time, we will begin to accumulate profiles for batches of events.  We could examine those profiles individually, but our view would be limited to a single batch.  To graphically display the distribution of feature values over time, we can plot several whylogs profiles at once.

from whylogs import DatasetProfile
from whylogs.viz import ProfileVisualizer
import glob

def from_file(fname) -> DatasetProfile:
    # deserialize a single whylogs profile from its protobuf file
    with open(fname, 'rb') as fp:
        return DatasetProfile.from_protobuf_string(fp.read())

# load the whylogs profiles written by the consumer from disk
files = "output/dataset/*/protobuf/*.bin"
profiles = [from_file(fname) for fname in glob.glob(files)]

# plot the distribution of one feature across the profiled time windows
viz = ProfileVisualizer()
viz.set_profiles(profiles)
viz.plot_distribution("fico_range_high", ts_format="00:%M:%S")

Merging profiles

Any monitoring solution is likely to fall behind a real-time stream if events are produced at too great a rate. The usual solution for Kafka consumers that fall behind is to run more consumers!

Kafka event streams can be partitioned so each consumer sees only a portion of the events in the stream.  Whylogs can monitor multiple partitions of a topic and later merge the profiles from the same time period without losing statistical power.

This code fragment shows the basics of merging whylogs profiles to take advantage of horizontal scaling. The merge operation consolidates profiles from smaller time periods into a single profile that covers the entire time range. The same merge operation will also consolidate profiles that monitor different features over the same time period, which is useful if separate Kafka topics stream events containing distinct model features.

from whylogs import DatasetProfile
import glob

# glob pattern matching the profiles written by the consumer above
profiles = "output/dataset/*/protobuf/*.bin"

merged = None
for fname in glob.glob(profiles):
    print(f'open {fname}')
    with open(fname, 'rb') as fp:
        p = DatasetProfile.from_protobuf_string(fp.read())
        # DatasetProfile.merge() returns a new, combined profile
        merged = p if merged is None else merged.merge(p)

# `merged` is now a single profile that accumulates all the statistical
# measures from the individual profiles.

Conclusion

Whylogs can help monitor your ML data pipeline no matter how you structure it, and it is particularly easy to monitor Kafka event streams. The resulting profiles are available in protobuf, JSON, and CSV formats, and they can be used for manual analysis or continuous monitoring.

If you are considering using whylogs for your projects, join the Slack community to discuss ideas and share feedback on the library.

If you are looking for a monitoring solution for Kafka data streams, WhyLabs offers a SaaS platform built on top of whylogs. The platform helps consolidate and visualize whylogs profiles over extended time periods and across many features. Monitoring and alerting can be enabled on any metric with a configuration-free setup. Once set up, configurable thresholds send alerts when data quality metrics deviate in the data stream, catching issues like data distribution drift, data corruption, and data loss. A graphically rich dashboard helps you quickly zero in on the time frame when problems started. Check out the WhyLabs Platform sandbox to see these features in action.
