%0 Journal Article
%A Arco, Juan E.
%A Jimenez-Mesa, Carmen
%A Ortiz, Andrés
%A Ramírez, Javier
%A Levin, Johannes
%A Górriz, Juan M.
%T Explainable Intermodality Medical Information Transfer Using Siamese Autoencoders
%J IEEE Transactions on Radiation and Plasma Medical Sciences
%V 10
%N 2
%@ 2469-7311
%C New York, NY
%I IEEE
%M DZNE-2026-00155
%P 192 - 209
%D 2026
%X Medical imaging fusion combines complementary information from multiple modalities to enhance diagnostic accuracy. However, evaluating the quality of fused images remains challenging, with many studies relying solely on classification performance, which may lead to incorrect conclusions. We introduce a novel framework for improving image fusion, focusing on preserving fine-grained details. Our model uses a Siamese autoencoder to process T1-MRI and FDG-PET images in the context of Alzheimer’s disease (AD). The framework optimizes fusion by minimizing the reconstruction error between generated and input images while maximizing differences between modalities via cosine distance. Additionally, we propose a supervised variant incorporating a binary cross-entropy loss between diagnostic labels and predicted probabilities. Fusion quality is rigorously assessed through three tests: 1) classification of AD patients and controls using fused images; 2) an atlas-based occlusion test for identifying regions relevant to cognitive decline; and 3) analysis of structural–functional relationships via Euclidean distance. Results show an AUC of 0.92 for AD detection, reveal the involvement of brain regions linked to preclinical AD stages, and demonstrate preserved structural–functional brain networks, indicating that subtle differences are successfully captured by our fusion approach.
%F PUB:(DE-HGF)16
%9 Journal Article
%R 10.1109/TRPMS.2025.3577309
%U https://pub.dzne.de/record/285030