Conference Presentation (Other) DZNE-2020-00970

Generalizability vs. Robustness: Investigating Medical Imaging Networks Using Adversarial Examples



2018

MICCAI 2018, Granada, Spain, 16 Sep 2018

Abstract: In this paper we propose, for the first time, an evaluation method for deep learning models that assesses performance not only in an unseen test scenario but also in extreme cases of noise, outliers and ambiguous input data. To this end, we use adversarial examples (images that fool machine learning models while looking imperceptibly different from the original data) as a measure of the robustness of a variety of medical imaging models. Through extensive experiments on skin lesion classification and whole brain segmentation with state-of-the-art networks such as Inception and UNet, we show that models achieving comparable generalizability may differ significantly in their perception of the underlying data manifold, leading to an extensive gap in their robustness.
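The abstract does not specify how the adversarial examples were generated; the sketch below is a minimal, hypothetical illustration (not the paper's code) of the standard Fast Gradient Sign Method on a toy logistic model: the input is perturbed in the direction that increases the loss, scaled by a small epsilon so the change stays visually imperceptible.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, epsilon=0.05):
    """Return an FGSM-perturbed version of input x for a logistic model.

    The perturbation is epsilon * sign(dL/dx), where L is the
    binary cross-entropy loss; each input component moves by at
    most epsilon, keeping the change small.
    """
    p = sigmoid(x @ w + b)   # model prediction in (0, 1)
    grad_x = (p - y) * w     # gradient of the loss w.r.t. the input
    return x + epsilon * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=4)       # toy model weights (illustrative only)
b = 0.1
x = rng.normal(size=4)       # a clean input
y = 1.0                      # its true label

x_adv = fgsm(x, y, w, b)     # adversarial input, within epsilon of x
```

In the evaluation the abstract describes, such perturbed inputs probe how far a model's decision can be pushed by changes that leave the image essentially unchanged to a human observer.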


Contributing Institute(s):
  1. Image Analysis (AG Reuter)
Research Program(s):
  1. 345 - Population Studies and Genetics (POF3-345)

Appears in the scientific report 2018

The record appears in these collections:
Document types > Presentations > Conference Presentations
Institute Collections > BN DZNE > BN DZNE-AG Reuter
Public records
Publications Database

 Record created 2020-08-14, last modified 2020-09-25

