Classifier-agnostic saliency map extraction

Konrad Zolna, Krzysztof J. Geras, Kyunghyun Cho

Research output: Contribution to journal › Article › peer-review

Abstract

Currently available methods for extracting saliency maps identify the parts of the input that are most important to a specific fixed classifier. We show that this strong dependence on a given classifier hinders their performance. To address this problem, we propose classifier-agnostic saliency map extraction, which finds all parts of the image that any classifier could use, not just the one given in advance. We observe that the proposed approach extracts higher-quality saliency maps than prior work while being conceptually simple and easy to implement. The method sets a new state-of-the-art result for the localization task on ImageNet, outperforming all existing weakly-supervised localization techniques, despite not using ground-truth labels at inference time. The code reproducing the results is available at https://github.com/kondiz/casme.
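The core idea can be illustrated with a toy sketch. This is not the paper's actual architecture (which alternates training of a convolutional classifier and a mask extractor on ImageNet); it is a hypothetical, minimal analogue using logistic regression on synthetic data, showing the alternating scheme: repeatedly retrain a fresh classifier on the input with the current saliency mask zeroed out, then grow the mask to cover whatever evidence that classifier found, so the mask ends up covering everything any classifier could use:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8-pixel "images". The label depends on pixels 0-1, and
# pixels 2-3 carry a redundant copy of the same evidence; 4-7 are noise.
n, d = 256, 8
X = rng.normal(size=(n, d))
X[:, 2:4] = X[:, 0:2] + 0.1 * rng.normal(size=(n, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_classifier(X, y, steps=500, lr=0.5):
    # Logistic regression fit by gradient descent (toy stand-in for a CNN).
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Classifier-agnostic loop (illustrative): alternate between retraining a
# classifier on the masked-out input and absorbing the pixels it relied on
# into the saliency mask. A saliency map tied to one fixed classifier would
# only reflect the evidence that single classifier happens to use.
mask = np.zeros(d)  # 1 = pixel judged salient
for _ in range(3):
    w = train_classifier(X * (1.0 - mask), y)
    mask = np.maximum(mask, (np.abs(w) > 0.5).astype(float))

print(mask)
```

In this sketch the mask converges once no freshly trained classifier can find further usable evidence: the signal pixels and their redundant copies are covered, while pure-noise pixels, on which retrained classifiers learn only negligible weights, are left out. The threshold 0.5 and the linear model are arbitrary choices for the illustration.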

Original language: English (US)
Article number: 102969
Journal: Computer Vision and Image Understanding
Volume: 196
DOIs
State: Published - Jul 2020

Keywords

  • Convolutional neural networks
  • Image classification
  • Saliency map
  • Weakly supervised localization

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
