Interpreting black-box classifiers using instance-level visual explanations

Paolo Tamagnini, Josua Krause, Aritra Dasgupta, Enrico Bertini

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    To realize the full potential of machine learning in diverse real-world domains, it is necessary for model predictions to be readily interpretable and actionable for the human in the loop. Analysts, who are the users but not the developers of machine learning models, often do not trust a model because of the lack of transparency in associating predictions with the underlying data space. To address this problem, we propose Rivelo, a visual analytics interface that enables analysts to understand the causes behind predictions of binary classifiers by interactively exploring a set of instance-level explanations. These explanations are model-agnostic, treating a model as a black box, and help analysts interactively probe the high-dimensional binary data space to detect features relevant to predictions. We demonstrate the utility of the interface with a case study analyzing a random forest model on the sentiment of Yelp reviews about doctors.
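    The abstract describes model-agnostic, instance-level explanations: the classifier is queried as a black box to find which features of a single instance drive its prediction. A minimal sketch of one such approach, feature ablation on binary data, is shown below; the random forest, the toy data, and the `explain_instance` helper are illustrative assumptions (scikit-learn is used here), not the authors' exact method.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Toy binary data: rows are instances, columns are binary features
    # (e.g., presence/absence of a word in a review); labels are sentiment.
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(200, 10))
    y = (X[:, 0] | X[:, 3]).astype(int)  # hypothetical ground-truth rule

    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    def explain_instance(model, x):
        """Model-agnostic ablation explanation for one instance:
        for each active feature, measure how much the predicted
        probability of the positive class drops when that feature is
        turned off. Larger drops indicate more relevant features.
        Only the model's predict_proba output is used (black box)."""
        base = model.predict_proba([x])[0, 1]
        relevance = {}
        for j in np.flatnonzero(x):
            x_off = x.copy()
            x_off[j] = 0  # ablate feature j
            relevance[j] = base - model.predict_proba([x_off])[0, 1]
        return relevance

    scores = explain_instance(model, X[0])
    ```

    An interface like Rivelo would aggregate such per-instance relevance scores across many instances, letting the analyst rank features and drill down to the instances they explain.
    
    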

    Original language: English (US)
    Title of host publication: Proceedings of the 2nd Workshop on Human-In-the-Loop Data Analytics, HILDA 2017
    Publisher: Association for Computing Machinery, Inc
    ISBN (Electronic): 9781450350297
    DOIs: https://doi.org/10.1145/3077257.3077260
    State: Published - May 14 2017
    Event: 2nd Workshop on Human-In-the-Loop Data Analytics, HILDA 2017 - Chicago, United States
    Duration: May 14 2017 → …

    Publication series

    Name: Proceedings of the 2nd Workshop on Human-In-the-Loop Data Analytics, HILDA 2017

    Other

    Other: 2nd Workshop on Human-In-the-Loop Data Analytics, HILDA 2017
    Country: United States
    City: Chicago
    Period: 5/14/17 → …

    Keywords

    • Classification
    • Explanation
    • Machine learning
    • Visual analytics

    ASJC Scopus subject areas

    • Computational Theory and Mathematics
    • Computer Science Applications
    • Information Systems


    Cite this

    Tamagnini, P., Krause, J., Dasgupta, A., & Bertini, E. (2017). Interpreting black-box classifiers using instance-level visual explanations. In Proceedings of the 2nd Workshop on Human-In-the-Loop Data Analytics, HILDA 2017 [3077260] (Proceedings of the 2nd Workshop on Human-In-the-Loop Data Analytics, HILDA 2017). Association for Computing Machinery, Inc. https://doi.org/10.1145/3077257.3077260