Abstract/Journal Article DZNE-2025-01506

AI Superresolution: Converting T1-weighted MRI from 3T to 7T resolution toward enhanced imaging biomarkers for Alzheimer’s disease


2025

Alzheimer’s Association International Conference, AAIC 25, Toronto, Canada, 27 Jul 2025 - 31 Jul 2025. Alzheimer's and dementia 21(Suppl 8), e109817 [10.1002/alz70862_109817]


Please use a persistent id in citations: doi:10.1002/alz70862_109817

Abstract: High-resolution (7T) MRI facilitates in vivo imaging of fine anatomical structures selectively affected in Alzheimer's disease (AD), including medial temporal lobe subregions. However, 7T data are challenging to acquire and largely unavailable in clinical settings. Here, we use deep learning to synthesize 7T-resolution T1-weighted MRI images from lower-resolution (3T) images.

Paired 7T and 3T T1-weighted images were acquired from 178 participants (134 clinically unimpaired, 48 impaired) in the Swedish BioFINDER-2 study. To synthesize 7T-resolution images from 3T images, we trained two models on 80% of the data: a specialized U-Net, and a U-Net combined with a generative adversarial network (U-Net-GAN). We evaluated model performance on the remaining 20% against models from the literature (V-Net, WATNet), using image-based performance metrics and a survey of five blinded MRI professionals rating subjective quality. For n = 11 participants, amygdalae were automatically segmented with FastSurfer on 3T and synthetic-7T images and compared to a manually segmented 'ground truth'. To assess downstream performance, FastSurfer was run on n = 3,168 triplets of matched 3T and AI-generated synthetic-7T images, and a multi-class random forest model classifying clinical diagnosis was trained on both datasets.

Synthetic-7T images were generated for the test set (Figure 1A). Image metrics suggested the U-Net was the top-performing model (Figure 1B), though blinded experts qualitatively rated the U-Net-GAN images as the best looking, exceeding even real 7T images (Figure 1C). Automated segmentations of amygdalae from the synthetic U-Net-GAN images were more similar to the manually segmented amygdalae than the original 3T images they were synthesized from in 9/11 cases (Figure 2). Classification achieved modest performance (accuracy ~60%) but did not differ between real and synthetic images (Figure 3A). Models trained on synthetic images used slightly different features for classification (Figure 3B).

Synthetic T1-weighted images approaching 7T resolution can be generated from 3T images, potentially improving image quality and segmentation without compromising performance in downstream tasks. This approach holds promise for better measurement of deep cortical and subcortical structures relevant to AD. Work is ongoing to improve performance, generalizability, and clinical utility.
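The evaluation described above rests on two kinds of quantitative comparison: image-based metrics between synthetic and real 7T images, and overlap between automated and manual segmentations. The abstract does not name the specific metrics used, so the sketch below shows two standard, commonly used choices — peak signal-to-noise ratio (PSNR) for image similarity and the Dice coefficient for segmentation overlap — on toy data, purely as illustration (pure Python, not the study's pipeline):

```python
import math

def psnr(reference, generated, max_value=1.0):
    """Peak signal-to-noise ratio between two same-length intensity sequences."""
    mse = sum((r - g) ** 2 for r, g in zip(reference, generated)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_value ** 2 / mse)

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary masks given as sets of voxel coordinates."""
    if not mask_a and not mask_b:
        return 1.0  # both empty: perfect agreement by convention
    intersection = len(mask_a & mask_b)
    return 2.0 * intersection / (len(mask_a) + len(mask_b))

# Toy example: a manual 'ground truth' mask vs. an automated segmentation.
manual = {(0, 0), (0, 1), (1, 0), (1, 1)}
automated = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice_coefficient(manual, automated))  # 2*3 / (4+4) = 0.75
```

A higher Dice score for a synthetic-7T segmentation than for the matched 3T segmentation (against the same manual labels) is the kind of comparison the 9/11 result above reports.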

Keyword(s): Humans (MeSH) ; Alzheimer Disease: diagnostic imaging (MeSH) ; Magnetic Resonance Imaging: methods (MeSH) ; Female (MeSH) ; Male (MeSH) ; Aged (MeSH) ; Deep Learning (MeSH) ; Neuroimaging: methods (MeSH) ; Image Processing, Computer-Assisted: methods (MeSH) ; Brain: diagnostic imaging (MeSH) ; Sweden (MeSH)


Contributing Institute(s):
  1. Clinical Cognitive Neuroscience (AG Berron)
Research Program(s):
  1. 353 - Clinical and Health Care Research (POF4-353) (POF4-353)

Appears in the scientific report 2025
Database coverage:
Medline ; OpenAccess ; Clarivate Analytics Master Journal List ; Current Contents - Clinical Medicine ; DEAL Wiley ; Essential Science Indicators ; IF >= 10 ; JCR ; SCOPUS ; Science Citation Index Expanded ; Web of Science Core Collection

The record appears in these collections:
Document types > Articles > Journal Article
Document types > Presentations > Abstracts
Institute Collections > MD DZNE > MD DZNE-AG Berron
Full Text Collection
Public records
Publications Database

 Record created 2025-12-30, last modified 2025-12-30


OpenAccess: fulltext PDF available (also as PDF/A); external fulltext via PubMed Central.