Classifier-Agnostic Saliency Map Extraction

Konrad Zolna, Krzysztof J. Geras, Kyunghyun Cho

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Extracting saliency maps, which indicate the parts of an image that are important for classification, requires many tricks to achieve satisfactory performance when using classifier-dependent methods. Instead, we propose classifier-agnostic saliency map extraction. This allows us to find all parts of the image that any classifier could use, not just the parts used by one classifier given in advance. In this way, we extract saliency maps of much higher quality.
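The abstract only names the idea, so a brief illustration may help. Below is a minimal, hedged sketch (PyTorch, not the authors' released code) of one way classifier-agnostic training could be organised: a saliency mapper is optimised against a classifier that keeps being retrained on masked images, rather than against a single classifier fixed in advance. All names here (Mapper, Classifier, sparsity_weight, training_step) and the loss weighting are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of classifier-agnostic saliency training (assumed setup,
# not the authors' exact procedure): the mapper must keep the image
# classifiable from the masked-in region while the classifier is a moving
# target, so the mask cannot overfit to one classifier's quirks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Mapper(nn.Module):
    """Toy saliency mapper: outputs a per-pixel mask in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

class Classifier(nn.Module):
    """Toy classifier; in practice a standard CNN such as a ResNet."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(16, num_classes)
    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

mapper, clf = Mapper(), Classifier()
opt_m = torch.optim.Adam(mapper.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(clf.parameters(), lr=1e-4)
sparsity_weight = 0.1  # assumed hyperparameter trading mask size for accuracy

def training_step(x, y):
    # 1) Mapper step: keep the label predictable from the masked-in region
    #    while penalising mask area, so only genuinely useful evidence survives.
    mask = mapper(x)
    loss_m = F.cross_entropy(clf(mask * x), y) + sparsity_weight * mask.mean()
    opt_m.zero_grad(); loss_m.backward(); opt_m.step()

    # 2) Classifier step: keep training the classifier on masked images, so the
    #    mapper never faces a single fixed classifier it could exploit.
    mask = mapper(x).detach()
    loss_c = F.cross_entropy(clf(mask * x), y)
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    return loss_m.item(), loss_c.item()

# Usage with dummy data: a batch of 8 RGB 32x32 images and 10 classes.
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
print(training_step(x, y))
```

Because the classifier in step 2 is continually retrained, a mask that merely exploits one classifier's idiosyncrasies stops working, which is the intuition behind extracting all parts of the image that any classifier could use.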

Original language: English (US)
Title of host publication: 33rd AAAI Conference on Artificial Intelligence, AAAI 2019, 31st Innovative Applications of Artificial Intelligence Conference, IAAI 2019 and the 9th AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019
Publisher: AAAI Press
Pages: 10087-10088
Number of pages: 2
ISBN (Electronic): 9781577358091
State: Published - 2019
Event: 33rd AAAI Conference on Artificial Intelligence, AAAI 2019, 31st Annual Conference on Innovative Applications of Artificial Intelligence, IAAI 2019 and the 9th AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019 - Honolulu, United States
Duration: Jan 27, 2019 - Feb 1, 2019

Publication series

Name: 33rd AAAI Conference on Artificial Intelligence, AAAI 2019, 31st Innovative Applications of Artificial Intelligence Conference, IAAI 2019 and the 9th AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019

Conference

Conference: 33rd AAAI Conference on Artificial Intelligence, AAAI 2019, 31st Annual Conference on Innovative Applications of Artificial Intelligence, IAAI 2019 and the 9th AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019
Country/Territory: United States
City: Honolulu
Period: 1/27/19 - 2/1/19

ASJC Scopus subject areas

  • Artificial Intelligence
