000283149 001__ 283149
000283149 005__ 20260108145323.0
000283149 0247_ $$2doi$$a10.1002/alz70856_106600
000283149 0247_ $$2ISSN$$a1552-5260
000283149 0247_ $$2ISSN$$a1552-5279
000283149 037__ $$aDZNE-2026-00045
000283149 082__ $$a610
000283149 1001_ $$aGicquel, Malo$$b0
000283149 1112_ $$aAlzheimer’s Association International Conference$$cToronto$$d2025-07-27 - 2025-07-31$$gAAIC 25$$wCanada
000283149 245__ $$aAI Superresolution: Converting T1‐weighted MRI from 3T to 7T resolution toward enhanced imaging biomarkers for Alzheimer's disease
000283149 260__ $$c2025
000283149 3367_ $$0PUB:(DE-HGF)1$$2PUB:(DE-HGF)$$aAbstract$$babstract$$mabstract$$s1767880284_14469
000283149 3367_ $$033$$2EndNote$$aConference Paper
000283149 3367_ $$2BibTeX$$aINPROCEEDINGS
000283149 3367_ $$2DRIVER$$aconferenceObject
000283149 3367_ $$0PUB:(DE-HGF)16$$2PUB:(DE-HGF)$$aJournal Article$$mjournal
000283149 3367_ $$2DataCite$$aOutput Types/Conference Abstract
000283149 3367_ $$2ORCID$$aOTHER
000283149 520__ $$aBackground: High-resolution (7T) MRI facilitates in vivo imaging of fine anatomical structures selectively affected in Alzheimer's disease (AD), including medial temporal lobe subregions. However, 7T data are challenging to acquire and largely unavailable in clinical settings. Here, we use deep learning to synthesize 7T-resolution T1-weighted MRI images from lower-resolution (3T) images. Method: Paired 7T and 3T T1-weighted images were acquired from 178 participants (134 clinically unimpaired, 48 impaired) from the Swedish BioFINDER-2 study. To synthesize 7T-resolution images from 3T images, we trained two models on 80% of the data: a specialized U-Net and a U-Net combined with a generative adversarial network (U-Net-GAN). We evaluated model performance on the remaining 20%, comparing against models from the literature (V-Net, WATNet), using image-based performance metrics and a survey of five blinded MRI professionals rating subjective quality. For n = 11 participants, amygdalae were automatically segmented with FastSurfer on 3T and synthetic-7T images and compared to a manually segmented “ground truth”. To assess downstream performance, FastSurfer was run on n = 3,168 triplets of matched 3T and AI-generated synthetic-7T images, and a multi-class random forest model classifying clinical diagnosis was trained on both datasets. Result: Synthetic-7T images were generated for images in the test set (Figure 1A). Image metrics suggested the U-Net was the top-performing model (Figure 1B), though blinded experts qualitatively rated the U-Net-GAN images as the best looking, exceeding even real 7T images (Figure 1C). Automated segmentations of amygdalae from the synthetic U-Net-GAN images were more similar to the manually segmented amygdalae than those from the original 3T images they were synthesized from, in 9/11 cases (Figure 2). Classification achieved modest performance (accuracy ~60%) but did not differ between real and synthetic images (Figure 3A). Models trained on synthetic images used slightly different features for classification (Figure 3B). Conclusion: Synthetic T1-weighted images approaching 7T resolution can be generated from 3T images, which may improve image quality and segmentation without compromising performance in downstream tasks. This approach holds promise for better measurement of deep cortical or subcortical structures relevant to AD. Work is ongoing toward improving performance, generalizability, and clinical utility.
000283149 536__ $$0G:(DE-HGF)POF4-353$$a353 - Clinical and Health Care Research (POF4-353)$$cPOF4-353$$fPOF IV$$x0
000283149 588__ $$aDataset connected to CrossRef, Journals: pub.dzne.de
000283149 7001_ $$aFlood, Gabrielle$$b1
000283149 7001_ $$aZhao, Ruoyi$$b2
000283149 7001_ $$aWuestefeld, Anika$$b3
000283149 7001_ $$aSpotorno, Nicola$$b4
000283149 7001_ $$aStrandberg, Olof$$b5
000283149 7001_ $$aXiao, Yu$$b6
000283149 7001_ $$aÅström, Kalle$$b7
000283149 7001_ $$aWisse, Laura E. M.$$b8
000283149 7001_ $$avan Westen, Danielle$$b9
000283149 7001_ $$0P:(DE-2719)2812972$$aBerron, David$$b10$$udzne
000283149 7001_ $$aHansson, Oskar$$b11
000283149 7001_ $$aVogel, Jacob W.$$b12
000283149 773__ $$0PERI:(DE-600)2201940-6$$a10.1002/alz70856_106600$$gVol. 21, no. S2, p. e106600$$nS2$$pe106600$$tAlzheimer's and dementia$$v21$$x1552-5260$$y2025
000283149 8564_ $$uhttps://pub.dzne.de/record/283149/files/DZNE-2026-00045.pdf$$yRestricted
000283149 8564_ $$uhttps://pub.dzne.de/record/283149/files/DZNE-2026-00045.pdf?subformat=pdfa$$xpdfa$$yRestricted
000283149 9101_ $$0I:(DE-588)1065079516$$6P:(DE-2719)2812972$$aDeutsches Zentrum für Neurodegenerative Erkrankungen$$b10$$kDZNE
000283149 9131_ $$0G:(DE-HGF)POF4-353$$1G:(DE-HGF)POF4-350$$2G:(DE-HGF)POF4-300$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$aDE-HGF$$bGesundheit$$lNeurodegenerative Diseases$$vClinical and Health Care Research$$x0
000283149 915__ $$0StatID:(DE-HGF)3001$$2StatID$$aDEAL Wiley$$d2025-01-06$$wger
000283149 915__ $$0StatID:(DE-HGF)0100$$2StatID$$aJCR$$bALZHEIMERS DEMENT : 2022$$d2025-01-06
000283149 915__ $$0StatID:(DE-HGF)0200$$2StatID$$aDBCoverage$$bSCOPUS$$d2025-01-06
000283149 915__ $$0StatID:(DE-HGF)0300$$2StatID$$aDBCoverage$$bMedline$$d2025-01-06
000283149 915__ $$0StatID:(DE-HGF)0199$$2StatID$$aDBCoverage$$bClarivate Analytics Master Journal List$$d2025-01-06
000283149 915__ $$0StatID:(DE-HGF)0160$$2StatID$$aDBCoverage$$bEssential Science Indicators$$d2025-01-06
000283149 915__ $$0StatID:(DE-HGF)1110$$2StatID$$aDBCoverage$$bCurrent Contents - Clinical Medicine$$d2025-01-06
000283149 915__ $$0StatID:(DE-HGF)0113$$2StatID$$aWoS$$bScience Citation Index Expanded$$d2025-01-06
000283149 915__ $$0StatID:(DE-HGF)0150$$2StatID$$aDBCoverage$$bWeb of Science Core Collection$$d2025-01-06
000283149 915__ $$0StatID:(DE-HGF)9910$$2StatID$$aIF >= 10$$bALZHEIMERS DEMENT : 2022$$d2025-01-06
000283149 9201_ $$0I:(DE-2719)5000070$$kAG Berron$$lClinical Cognitive Neuroscience$$x0
000283149 980__ $$aabstract
000283149 980__ $$aEDITORS
000283149 980__ $$aVDBINPRINT
000283149 980__ $$ajournal
000283149 980__ $$aI:(DE-2719)5000070
000283149 980__ $$aUNRESTRICTED
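
The abstract's Method section describes training a U-Net and an adversarial U-Net-GAN on paired 3T/7T T1-weighted images. The following is a minimal sketch of that setup, assuming PyTorch; the architecture sizes, losses, patch shapes, and hyperparameters are illustrative assumptions, not the authors' actual implementation.

# Minimal sketch of 3T -> 7T super-resolution training, assuming PyTorch.
# All names, shapes, and loss weights below are illustrative guesses.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3D convolutions with ReLU, the usual U-Net building block.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level 3D U-Net mapping a 3T patch to a synthetic-7T patch."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool3d(2)
        self.up = nn.ConvTranspose3d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.out = nn.Conv3d(16, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.out(d1)

class PatchDiscriminator(nn.Module):
    """Small 3D discriminator for the adversarial (U-Net-GAN) variant."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(32, 1, 3, padding=1),  # per-patch real/fake logits
        )

    def forward(self, x):
        return self.net(x)

# One illustrative training step on a random paired batch, standing in for
# matched 3T/7T patches after the 80/20 split mentioned in the abstract.
gen, disc = TinyUNet(), PatchDiscriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
l1, bce = nn.L1Loss(), nn.BCEWithLogitsLoss()

x3t = torch.randn(2, 1, 32, 32, 32)   # low-resolution (3T) input patches
y7t = torch.randn(2, 1, 32, 32, 32)   # paired high-resolution (7T) targets

# Discriminator step: distinguish real 7T from synthetic-7T patches.
fake = gen(x3t).detach()
d_real, d_fake = disc(y7t), disc(fake)
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: L1 reconstruction plus a small adversarial term.
fake = gen(x3t)
d_out = disc(fake)
loss_g = l1(fake, y7t) + 0.01 * bce(d_out, torch.ones_like(d_out))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

The adversarial term is weighted well below the L1 reconstruction loss here, a common heuristic in paired image-to-image GANs; the weighting actually used in the study is not stated in the abstract.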
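The downstream check described in the abstract trains a multi-class random forest on FastSurfer-derived features from matched 3T and synthetic-7T images and compares classification accuracy. A minimal sketch, assuming scikit-learn and using random stand-in features, since the real feature set is not specified:

# Minimal sketch of the downstream diagnosis classification, assuming
# scikit-learn. Features below are random stand-ins for FastSurfer-derived
# regional volumes; labels stand in for multi-class clinical diagnoses.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_scans, n_regions = 300, 20                        # illustrative sizes
X_3t = rng.normal(size=(n_scans, n_regions))        # features from 3T images
X_syn7t = rng.normal(size=(n_scans, n_regions))     # features from synthetic-7T
y = rng.integers(0, 3, size=n_scans)                # diagnosis labels (3 classes)

# Train one random forest per dataset and compare cross-validated accuracy,
# mirroring the abstract's check that performance does not differ.
for name, X in [("3T", X_3t), ("synthetic-7T", X_syn7t)]:
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")

With real FastSurfer volumes in place of the random arrays, per-feature importances from the fitted forests could also be compared, which is the kind of analysis behind the abstract's observation that synthetic-image models used slightly different features.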