Peter Xenopoulos, Claudio Silva, Gromit Chan, Harish Doraiswamy, Luis Gustavo Nonato, Brian Barr

Research output: Contribution to journal › Conference article › peer-review


Local explainability methods - those which seek to generate an explanation for each prediction - are increasingly prevalent. However, results from different local explainability methods are difficult to compare, since they may be parameter-dependent, unstable due to sampling variability, or expressed in different scales and dimensions. We propose GALE, a topology-based framework to extract a simplified representation from a set of local explanations. GALE models the relationship between the explanation space and model predictions to generate a topological skeleton, which we use to compare local explanation outputs. We demonstrate that GALE can not only reliably identify differences between explainability techniques but also provides stable representations. Then, we show how our framework can be used to identify appropriate parameters for local explainability methods. Our framework is simple, does not require complex optimizations, and can be broadly applied to most local explanation methods.
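The abstract describes a Mapper-style construction: cover a lens function (the model predictions) with overlapping intervals, cluster the explanation vectors falling in each interval, and connect clusters that share points to form a topological skeleton. As a rough, hedged illustration of that general idea - not GALE's actual algorithm or code - here is a minimal sketch; the function names and parameters (`n_bins`, `overlap`, `eps`) are illustrative assumptions:

```python
import numpy as np

def _cluster(X, idx, eps):
    """Single-linkage clustering by distance threshold, via union-find."""
    parent = {int(i): int(i) for i in idx}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for a in idx:
        for b in idx:
            if a < b and np.linalg.norm(X[a] - X[b]) <= eps:
                parent[find(int(a))] = find(int(b))
    groups = {}
    for i in idx:
        groups.setdefault(find(int(i)), set()).add(int(i))
    return [frozenset(g) for g in groups.values()]

def mapper_skeleton(explanations, lens, n_bins=5, overlap=0.25, eps=1.0):
    """Sketch of a Mapper-style skeleton over local explanations.

    explanations: (n, d) array of local explanation vectors (e.g. attributions)
    lens: (n,) array used as the lens, e.g. model predicted probabilities
    Returns (nodes, edges): nodes are frozensets of point indices; edges
    connect nodes that share at least one point.
    """
    lo, hi = float(lens.min()), float(lens.max())
    width = (hi - lo) / n_bins
    nodes = []
    for i in range(n_bins):
        # overlapping cover interval for bin i
        a = lo + i * width - overlap * width
        b = lo + (i + 1) * width + overlap * width
        idx = np.where((lens >= a) & (lens <= b))[0]
        if idx.size:
            nodes.extend(_cluster(explanations, idx, eps))
    edges = [(u, v)
             for u in range(len(nodes))
             for v in range(u + 1, len(nodes))
             if nodes[u] & nodes[v]]
    return nodes, edges
```

Because adjacent cover intervals overlap, a point near a bin boundary lands in two clusters, which is what produces the connecting edges of the skeleton; comparing two explanation methods then amounts to comparing the skeletons they induce.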

Original language: English (US)
Pages (from-to): 322-331
Number of pages: 10
Journal: Proceedings of Machine Learning Research
State: Published - 2022
Event: ICML Workshop on Topology, Algebra, and Geometry in Machine Learning, TAG:ML 2022 - Virtual, Online, United States
Duration: Jul 20 2022 → …

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability


GALE: Globally Assessing Local Explanations
