Abstract
Currently available methods for extracting saliency maps identify the parts of the input that are most important to a specific, fixed classifier. We show that this strong dependence on a given classifier hinders their performance. To address this problem, we propose classifier-agnostic saliency map extraction, which finds all parts of the image that any classifier could use, not just a single classifier given in advance. We observe that the proposed approach extracts higher-quality saliency maps than prior work while being conceptually simple and easy to implement. The method sets a new state-of-the-art result for the localization task on ImageNet, outperforming all existing weakly-supervised localization techniques despite not using ground-truth labels at inference time. The code reproducing the results is available at https://github.com/kondiz/casme.
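The core idea lends itself to a short sketch: rather than explaining one fixed classifier, the mask generator is trained against a classifier that keeps adapting to the masked-out images, so the mask must cover every region any classifier could exploit. The PyTorch-style code below is a minimal, illustrative sketch only; the `classifier` and `masker` networks, losses, and hyperparameters are stand-ins I introduce for exposition, not the authors' exact implementation (see the linked repository for that).

```python
# Illustrative sketch of classifier-agnostic saliency training.
# NOT the authors' implementation; see https://github.com/kondiz/casme.
import torch
import torch.nn as nn
import torch.nn.functional as F

classifier = nn.Sequential(  # stand-in for a real backbone such as a ResNet
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1000))
masker = nn.Sequential(      # stand-in mask generator, one output channel in [0, 1]
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

opt_c = torch.optim.SGD(classifier.parameters(), lr=0.01)
opt_m = torch.optim.SGD(masker.parameters(), lr=0.01)

def training_step(images, labels, lam=1e-3):
    """One alternating update: the classifier learns from masked-out images,
    while the masker learns to hide all class evidence (with a sparsity penalty)."""
    mask = masker(images)                  # [B, 1, H, W], values in [0, 1]
    masked_out = images * (1.0 - mask)     # remove the salient regions

    # (1) Classifier update: keep it able to classify whatever remains,
    #     so the masker cannot rely on fooling one fixed classifier.
    loss_c = F.cross_entropy(classifier(masked_out.detach()), labels)
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

    # (2) Masker update: make the masked-out image uninformative for the
    #     current classifier while keeping the mask small.
    loss_m = -F.cross_entropy(classifier(masked_out), labels) + lam * mask.mean()
    opt_m.zero_grad(); loss_m.backward(); opt_m.step()
    return mask.detach()

# Example usage on a dummy batch:
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 1000, (4,))
saliency = training_step(images, labels)   # [4, 1, 224, 224] saliency mask
```

The alternation is the point of the sketch: because the classifier continually re-learns from the masked-out images, the masker cannot succeed by hiding only the evidence that one particular classifier happens to use, which is the classifier-agnostic property the abstract describes.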
| Original language | English (US) |
| --- | --- |
| Article number | 102969 |
| Journal | Computer Vision and Image Understanding |
| Volume | 196 |
| DOIs | |
| State | Published - Jul 2020 |
Keywords
- Convolutional neural networks
- Image classification
- Saliency map
- Weakly supervised localization
ASJC Scopus subject areas
- Software
- Signal Processing
- Computer Vision and Pattern Recognition