Data distribution debugging in machine learning pipelines

Stefan Grafberger, Paul Groth, Julia Stoyanovich, Sebastian Schelter

    Research output: Contribution to journal › Article › peer-review

    Abstract

    Machine learning (ML) is increasingly used to automate impactful decisions, and the risks arising from this widespread use are garnering attention from policy makers, scientists, and the media. ML applications are often brittle with respect to their input data, which leads to concerns about their correctness, reliability, and fairness. In this paper, we describe mlinspect, a library that helps diagnose and mitigate technical bias that may arise during preprocessing steps in an ML pipeline. We refer to these problems collectively as data distribution bugs. The key idea is to extract a directed acyclic graph representation of the dataflow from a preprocessing pipeline and to use this representation to automatically instrument the code with predefined inspections. These inspections are based on a lightweight annotation propagation approach to propagate metadata such as lineage information from operator to operator. In contrast to existing work, mlinspect operates on declarative abstractions of popular data science libraries like estimator/transformer pipelines and does not require manual code instrumentation. We discuss the design and implementation of the mlinspect library and give a comprehensive end-to-end example that illustrates its functionality.
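
    To make the declarative, no-instrumentation workflow described above concrete, the following is a minimal sketch of how the mlinspect inspector might be invoked, following the usage pattern shown in the library's documentation. The pipeline file path and the sensitive column name are illustrative placeholders, and exact class and attribute names may differ across library versions.

        # Minimal usage sketch (assumptions: file path and column name are placeholders).
        from mlinspect import PipelineInspector
        from mlinspect.checks import NoBiasIntroducedFor, NoIllegalFeatures
        from mlinspect.inspections import MaterializeFirstOutputRows

        PIPELINE_FILE = "example_pipeline.py"  # hypothetical, unmodified pandas/sklearn pipeline script

        inspector_result = (
            PipelineInspector
            .on_pipeline_from_py_file(PIPELINE_FILE)           # no manual code instrumentation required
            .add_check(NoBiasIntroducedFor(["race"]))          # flag preprocessing steps that skew group proportions
            .add_check(NoIllegalFeatures())
            .add_required_inspection(MaterializeFirstOutputRows(5))
            .execute()
        )

        extracted_dag = inspector_result.dag                   # the extracted dataflow DAG
        check_results = inspector_result.check_to_check_results
        inspection_results = inspector_result.dag_node_to_inspection_results

    The result object exposes the extracted dataflow graph together with per-operator inspection annotations, which is how lineage and distribution metadata propagated from operator to operator can be examined after a run.
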

    Original language: English (US)
    Pages (from-to): 1103-1126
    Number of pages: 24
    Journal: VLDB Journal
    Volume: 31
    Issue number: 5
    State: Published - Sep 2022

    Keywords

    • Data debugging
    • Data preparation for machine learning
    • Machine learning pipelines

    ASJC Scopus subject areas

    • Information Systems
    • Hardware and Architecture
