TY  - CONF
AU  - Hiller, Bjarne C.
AU  - Bader, Sebastian
AU  - Singh, Devesh
AU  - Kirste, Thomas
AU  - Becker, Martin
AU  - Dyrba, Martin
A3  - Palm, Christoph
A3  - Breininger, Katharina
A3  - Deserno, Thomas
A3  - Handels, Heinz
A3  - Maier, Andreas
A3  - Maier-Hein, Klaus H.
A3  - Tolxdorff, Thomas M.
TI  - Evaluating the Fidelity of Explanations for Convolutional Neural Networks in Alzheimer’s Disease Detection
CY  - Wiesbaden
PB  - Springer Fachmedien Wiesbaden
M1  - DZNE-2025-00484
SN  - 978-3-658-47421-8 (print)
T3  - Informatik aktuell
SP  - 76
EP  - 81
PY  - 2025
AB  - The black-box nature of deep learning still prevents its widespread clinical use due to the high risk of hidden biases and prediction errors. Over the last decade, various explanation methods have been proposed to reveal the latent mechanisms of neural networks and support their decisions. However, interpreting the explanations themselves can be challenging, and there is still little consensus on how to evaluate the quality of explanations. To investigate the fidelity of explanations provided by prominent feature attribution methods for Convolutional Neural Networks in Alzheimer’s Disease (AD) detection, this paper applies relevance-guided perturbation to the Magnetic Resonance Imaging (MRI) input images. According to the fidelity metric, the AD class probability showed the steepest decline when the perturbation was guided by Integrated Gradients or DeepLift. We conclude by highlighting the role of the reference image in feature attribution with regard to AD detection from MRI scans. The source code for the experiments is publicly available on GitHub at https://github.com/bckrlab/ad-fidelity.
T2  - German Conference on Medical Image Computing
Y2  - 9 Mar 2025 - 11 Mar 2025
M2  - Regensburg, Germany
DO  - 10.1007/978-3-658-47422-5_18
UR  - https://pub.dzne.de/record/277812
ER  -