<?xml version="1.0" encoding="UTF-8"?>
<collection>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd http://purl.org/dc/terms/ http://dublincore.org/schemas/xmls/qdc/dcterms.xsd"><dc:language>eng</dc:language><dc:creator>Chatterjee, Soumick</dc:creator><dc:creator>Yassin, Hadya</dc:creator><dc:creator>Dubost, Florian</dc:creator><dc:creator>Nürnberger, Andreas</dc:creator><dc:creator>Speck, Oliver</dc:creator><dc:title>Weakly-supervised segmentation using inherently-explainable classification models and their application to brain tumour classification</dc:title><dc:subject>info:eu-repo/classification/ddc/610</dc:subject><dc:description>Deep learning has demonstrated significant potential in medical imaging; however, the opacity of “black-box” models hinders clinical trust, while segmentation tasks typically necessitate laborious, hard-to-obtain pixel-wise annotations. To address these challenges simultaneously, this paper introduces a framework of three inherently explainable classifiers (GP-UNet, GP-ShuffleUNet, and GP-ReconResNet). By integrating a global pooling mechanism, these networks generate localisation heatmaps that directly drive their classification decisions, offering inherent interpretability without relying on potentially unreliable post-hoc methods. These heatmaps are subsequently thresholded to achieve weakly-supervised segmentation, requiring only image-level classification labels for training. Validated on two datasets for multi-class brain tumour classification, the proposed models achieved a peak F1-score of 0.93. For the weakly-supervised segmentation task, a median Dice score of 0.728 (95% CI: 0.715–0.739) was recorded. Notably, on a subset of tumour-only images, the best model achieved an accuracy of 98.7%, outperforming state-of-the-art binary glioma-grading classifiers. Furthermore, comparative precision-recall analysis confirmed the framework’s robustness against severe class imbalance and established a direct correlation between diagnostic confidence and segmentation fidelity. These results demonstrate that the proposed framework successfully combines high diagnostic accuracy with essential transparency, offering a promising direction for trustworthy clinical decision support.</dc:description><dc:source>Neurocomputing 682, 133460 (2026). doi:10.1016/j.neucom.2026.133460</dc:source><dc:type>info:eu-repo/semantics/article</dc:type><dc:type>info:eu-repo/semantics/publishedVersion</dc:type><dc:publisher>Elsevier</dc:publisher><dc:date>2026</dc:date><dc:rights>info:eu-repo/semantics/closedAccess</dc:rights><dc:coverage>DE</dc:coverage><dc:identifier>https://pub.dzne.de/record/286092</dc:identifier><dc:identifier>https://pub.dzne.de/search?p=id:%22DZNE-2026-00388%22</dc:identifier><dc:audience>Researchers</dc:audience><dc:relation>info:eu-repo/semantics/altIdentifier/issn/1872-8286</dc:relation><dc:relation>info:eu-repo/semantics/altIdentifier/doi/10.1016/j.neucom.2026.133460</dc:relation><dc:relation>info:eu-repo/semantics/altIdentifier/issn/0925-2312</dc:relation></oai_dc:dc>

</collection>