arxiv:2011.08891

Use HiResCAM instead of Grad-CAM for faithful explanations of convolutional neural networks

Published on Nov 17, 2020
Authors: Rachel Lea Draelos, Lawrence Carin

Abstract

Explanation methods facilitate the development of models that learn meaningful concepts and avoid exploiting spurious correlations. We illustrate a previously unrecognized limitation of the popular neural network explanation method Grad-CAM: as a side effect of the gradient averaging step, Grad-CAM sometimes highlights locations the model did not actually use. To solve this problem, we propose HiResCAM, a novel class-specific explanation method that is guaranteed to highlight only the locations the model used to make each prediction. We prove that HiResCAM is a generalization of CAM and explore the relationships between HiResCAM and other gradient-based explanation methods. Experiments on PASCAL VOC 2012, including crowd-sourced evaluations, illustrate that while HiResCAM's explanations faithfully reflect the model, Grad-CAM often expands the attention to create bigger and smoother visualizations. Overall, this work advances convolutional neural network explanation approaches and may aid in the development of trustworthy models for sensitive applications.
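To make the difference concrete, below is a minimal NumPy sketch of the two attention maps, assuming `A` holds the feature maps of the final convolutional layer and `dA` the gradient of a class score with respect to `A` (both synthetic here; all names are hypothetical, not from the paper's code). Grad-CAM first averages the gradient over the spatial axes, which is the step the abstract identifies as the source of unfaithful highlights; HiResCAM instead multiplies gradient and activation element-wise before summing over channels.

```python
import numpy as np

# Synthetic stand-ins for a real network's tensors, shape (C, H, W).
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 7, 7))   # feature maps of the last conv layer
dA = rng.standard_normal((8, 7, 7))  # gradient of the class score w.r.t. A

# Grad-CAM: average the gradient over the spatial axes to get one weight
# per channel, then take a ReLU of the weighted sum of feature maps.
alpha = dA.mean(axis=(1, 2))                                  # (C,)
gradcam_map = np.maximum((alpha[:, None, None] * A).sum(axis=0), 0.0)

# HiResCAM: skip the spatial averaging; multiply gradient and activation
# element-wise and sum over channels, so each location is weighted by
# exactly the gradient the model produced there.
hirescam_map = (dA * A).sum(axis=0)

print(gradcam_map.shape, hirescam_map.shape)  # (7, 7) (7, 7)
```

Because Grad-CAM spreads each channel's averaged gradient uniformly across all spatial positions of that channel, it can assign weight to locations whose per-position gradients were zero or negative; HiResCAM's element-wise product avoids this by construction.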
