Journal Article DZNE-2022-01674

CerebNet: A fast and reliable deep-learning pipeline for detailed cerebellum sub-segmentation.


2022
Academic Press, Orlando, Fla.

NeuroImage 264, 119703 (2022) [10.1016/j.neuroimage.2022.119703]

Please use a persistent id in citations: doi:10.1016/j.neuroimage.2022.119703

Abstract: Quantifying the volume of the cerebellum and its lobes is of profound interest in various neurodegenerative and acquired diseases. Especially for the most common spinocerebellar ataxias (SCA), for which the first antisense oligonucleotide-based gene-silencing trial has recently started, there is an urgent need for quantitative, sensitive imaging markers at pre-symptomatic stages for stratification and treatment assessment. This work introduces CerebNet, a fully automated, extensively validated, deep-learning method for the lobular segmentation of the cerebellum, including the separation of gray and white matter. For training, validation, and testing, T1-weighted images from 30 participants were manually annotated into cerebellar lobules and vermal sub-segments, as well as cerebellar white matter. CerebNet combines FastSurferCNN, a UNet-based 2.5D segmentation network, with extensive data augmentation, e.g., realistic non-linear deformations to increase anatomical variety, eliminating additional preprocessing steps such as spatial normalization or bias field correction. CerebNet demonstrates high accuracy (on average 0.87 Dice and 1.742 mm robust Hausdorff distance across all structures), outperforming state-of-the-art approaches. Furthermore, it shows high test-retest reliability (average ICC > 0.97 on OASIS and Kirby) as well as high sensitivity to disease effects, including the pre-ataxic stage of spinocerebellar ataxia type 3 (SCA3). CerebNet is compatible with FreeSurfer and FastSurfer and can analyze a 3D volume within seconds on a consumer GPU in an end-to-end fashion, thus providing an efficient and validated solution for assessing cerebellum sub-structure volumes. We make CerebNet available as source code (https://github.com/Deep-MI/FastSurfer).
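For readers who want to reproduce the evaluation metric quoted above, the following is a minimal Python/NumPy sketch of the per-structure Dice overlap (the reported "0.87 Dice" is the mean over all structures). It is illustrative only and not taken from the CerebNet/FastSurfer codebase; the function names and the integer-label-map convention are assumptions.

    import numpy as np

    def dice_score(pred: np.ndarray, target: np.ndarray, label: int) -> float:
        # Dice overlap for one structure in two integer label maps of equal shape.
        p = pred == label
        t = target == label
        denom = p.sum() + t.sum()
        if denom == 0:
            return 1.0  # structure absent from both maps: treat as perfect agreement
        return 2.0 * np.logical_and(p, t).sum() / denom

    def mean_dice(pred: np.ndarray, target: np.ndarray, labels) -> float:
        # Average Dice across all structures, as reported in the abstract.
        return float(np.mean([dice_score(pred, target, l) for l in labels]))

A score of 1.0 indicates perfect voxel-wise overlap; the reported average of 0.87 would correspond to mean_dice evaluated over all cerebellar lobule, vermal sub-segment, and white-matter labels.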

Keyword(s): Humans (MeSH) ; Image Processing, Computer-Assisted: methods (MeSH) ; Deep Learning (MeSH) ; Magnetic Resonance Imaging: methods (MeSH) ; Reproducibility of Results (MeSH) ; Cerebellum: diagnostic imaging (MeSH) ; CerebNet ; Cerebellum ; Computational neuroimaging ; Deep learning

Contributing Institute(s):
  1. Patient Studies (Bonn)
  2. Artificial Intelligence in Medicine (AG Reuter)
  3. Clinical Neuroimaging (AG Radbruch)
  4. Clinical Research Coordination (Clinical Research (Bonn))
Research Program(s):
  1. 353 - Clinical and Health Care Research (POF4-353)
  2. 354 - Disease Prevention and Healthy Aging (POF4-354)

Appears in the scientific report 2022
Database coverage:
Medline ; Creative Commons Attribution CC BY 4.0 ; DOAJ ; OpenAccess ; Article Processing Charges ; BIOSIS Previews ; Biological Abstracts ; Clarivate Analytics Master Journal List ; Current Contents - Life Sciences ; DOAJ Seal ; Ebsco Academic Search ; Essential Science Indicators ; Fees ; IF >= 5 ; JCR ; Nationallizenz ; SCOPUS ; Science Citation Index Expanded ; Web of Science Core Collection

The record appears in these collections:
Institute Collections > BN DZNE > BN DZNE-Clinical Research (Bonn)
Institute Collections > BN DZNE > BN DZNE-Patient Studies (Bonn)
Document types > Articles > Journal Article
Institute Collections > BN DZNE > BN DZNE-AG Radbruch
Institute Collections > BN DZNE > BN DZNE-AG Reuter
Full Text Collection
Public records
Publications Database

 Record created 2022-11-24, last modified 2024-09-18


OpenAccess: Fulltext PDF; Fulltext PDF (PDF/A)
External link: Fulltext via PubMed Central