%0 Journal Article
%A Bendella, Zeynep
%A Wichtmann, Barbara Daria
%A Clauberg, Ralf
%A Keil, Vera C
%A Lehnen, Nils C
%A Haase, Robert
%A Sáez, Laura C
%A Wiest, Isabella C
%A Kather, Jakob Nikolas
%A Endler, Christoph
%A Radbruch, Alexander
%A Paech, Daniel
%A Deike, Katerina
%T ChatGPT-4 shows high agreement in MRI protocol selection compared to board-certified neuroradiologists.
%J European journal of radiology
%V 193
%@ 0720-048X
%C Amsterdam [et al.]
%I Elsevier Science
%M DZNE-2025-01116
%P 112416
%D 2025
%X The aim of this study was to determine whether ChatGPT-4 can correctly suggest MRI protocols and additional MRI sequences based on real-world Radiology Request Forms (RRFs), as well as to investigate the ability of ChatGPT-4 to suggest time-saving protocols. Retrospectively, 1,001 RRFs from our Department of Neuroradiology (in-house dataset), 200 RRFs from an independent Department of General Radiology (independent dataset), and 300 RRFs from an external, foreign Department of Neuroradiology (external dataset) were included. Patients' age, sex, and clinical information were extracted from the RRFs and used to prompt ChatGPT-4 to choose an adequate MRI protocol from predefined institutional lists. Four independent raters then assessed its performance. Additionally, ChatGPT-4 was tasked with creating case-specific protocols aimed at saving time. Two and seven of the 1,001 protocol suggestions of ChatGPT-4 were rated 'unacceptable' in the in-house dataset by readers 1 and 2, respectively. No protocol suggestions were rated 'unacceptable' in either the independent or the external dataset. Inter-reader agreement, assessed with Cohen's weighted κ, ranged from 0.88 to 0.98 (each p < 0.001). ChatGPT-4's freely composed protocols were approved in 766/1,001 (76.5 %)
%K ChatGPT-4 (Other)
%K Large language model (LLM) (Other)
%K MRI protocol (Other)
%K Radiology request form (Other)
%F PUB:(DE-HGF)16
%9 Journal Article
%$ pmid:40961911
%R 10.1016/j.ejrad.2025.112416
%U https://pub.dzne.de/record/281369