000268456 001__ 268456
000268456 005__ 20240303002617.0
000268456 0247_ $$2doi$$a10.3390/jimaging10020045
000268456 0247_ $$2pmid$$apmid:38392093
000268456 0247_ $$2pmc$$apmc:PMC10889835
000268456 0247_ $$2altmetric$$aaltmetric:83453229
000268456 037__ $$aDZNE-2024-00208
000268456 041__ $$aEnglish
000268456 082__ $$a004
000268456 1001_ $$00000-0001-7594-1188$$aChatterjee, Soumick$$b0
000268456 245__ $$aExploration of Interpretability Techniques for Deep COVID-19 Classification Using Chest X-ray Images.
000268456 260__ $$aBasel$$bMDPI$$c2024
000268456 3367_ $$2DRIVER$$aarticle
000268456 3367_ $$2DataCite$$aOutput Types/Journal article
000268456 3367_ $$0PUB:(DE-HGF)16$$2PUB:(DE-HGF)$$aJournal Article$$bjournal$$mjournal$$s1709035489_12958
000268456 3367_ $$2BibTeX$$aARTICLE
000268456 3367_ $$2ORCID$$aJOURNAL_ARTICLE
000268456 3367_ $$00$$2EndNote$$aJournal Article
000268456 520__ $$aThe outbreak of COVID-19 has shocked the entire world with its fairly rapid spread, and has challenged different sectors. One of the most effective ways to limit its spread is the early and accurate diagnosis of infected patients. Medical imaging, such as X-ray and computed tomography (CT), combined with the potential of artificial intelligence (AI), plays an essential role in supporting medical personnel in the diagnosis process. Thus, in this article, five different deep learning models (ResNet18, ResNet34, InceptionV3, InceptionResNetV2, and DenseNet161) and their ensemble, using majority voting, have been used to classify COVID-19, pneumonia, and healthy subjects using chest X-ray images. Multilabel classification was performed to predict multiple pathologies for each patient, if present. Firstly, the interpretability of each of the networks was thoroughly studied using local interpretability methods (occlusion, saliency, input X gradient, guided backpropagation, integrated gradients, and DeepLIFT) and a global technique (neuron activation profiles). The mean micro F1 score of the models for COVID-19 classification ranged from 0.66 to 0.875, and was 0.89 for the ensemble of the network models. The qualitative results showed that the ResNets were the most interpretable models. This research demonstrates the importance of using interpretability methods to compare different models before making a decision regarding the best-performing model.
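Note: the abstract above mentions combining five networks by majority voting for multilabel classification and reporting a micro F1 score. A minimal sketch of that idea follows; it is not the authors' exact pipeline, and the random arrays merely stand in for real model outputs on chest X-ray images.

# Minimal sketch (assumed, illustrative only): per-label majority voting over
# multilabel predictions from several networks, evaluated with micro F1.
import numpy as np
from sklearn.metrics import f1_score

def majority_vote(pred_list):
    """Combine binary multilabel predictions (each [n_samples, n_labels])
    by strict per-label majority voting across models."""
    stacked = np.stack(pred_list, axis=0)              # [n_models, n_samples, n_labels]
    votes = stacked.sum(axis=0)                        # number of models voting 1 per label
    return (votes > stacked.shape[0] / 2).astype(int)  # label is 1 if a strict majority agrees

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=(8, 3))           # 3 hypothetical labels, e.g. COVID-19 / pneumonia / healthy
    # stand-ins for ResNet18, ResNet34, InceptionV3, InceptionResNetV2, DenseNet161 outputs
    preds = [rng.integers(0, 2, size=y_true.shape) for _ in range(5)]
    y_ens = majority_vote(preds)
    print("micro F1:", f1_score(y_true, y_ens, average="micro"))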
000268456 536__ $$0G:(DE-HGF)POF4-353$$a353 - Clinical and Health Care Research (POF4-353)$$cPOF4-353$$fPOF IV$$x0
000268456 588__ $$aDataset connected to CrossRef, PubMed, Journals: pub.dzne.de
000268456 650_7 $$2Other$$aCOVID-19
000268456 650_7 $$2Other$$achest X-ray
000268456 650_7 $$2Other$$adeep learning
000268456 650_7 $$2Other$$ainterpretability analysis
000268456 650_7 $$2Other$$amodel ensemble
000268456 650_7 $$2Other$$amultilabel image classification
000268456 650_7 $$2Other$$apneumonia
000268456 7001_ $$00000-0002-9732-4292$$aSaad, Fatima$$b1
000268456 7001_ $$00000-0003-4760-2263$$aSarasaen, Chompunuch$$b2
000268456 7001_ $$aGhosh, Suhita$$b3
000268456 7001_ $$aKrug, Valerie$$b4
000268456 7001_ $$aKhatun, Rupali$$b5
000268456 7001_ $$aMishra, Rahul$$b6
000268456 7001_ $$aDesai, Nirja$$b7
000268456 7001_ $$aRadeva, Petia$$b8
000268456 7001_ $$aRose, Georg$$b9
000268456 7001_ $$00000-0002-1717-4133$$aStober, Sebastian$$b10
000268456 7001_ $$0P:(DE-2719)2810706$$aSpeck, Oliver$$b11
000268456 7001_ $$00000-0003-4311-0624$$aNürnberger, Andreas$$b12
000268456 773__ $$0PERI:(DE-600)2824270-1$$a10.3390/jimaging10020045$$gVol. 10, no. 2, p. 45 -$$n2$$p45$$tJournal of imaging$$v10$$x2313-433X$$y2024
000268456 8564_ $$uhttps://pub.dzne.de/record/268456/files/DZNE-2024-00208.pdf$$yOpenAccess
000268456 8564_ $$uhttps://pub.dzne.de/record/268456/files/DZNE-2024-00208.pdf?subformat=pdfa$$xpdfa$$yOpenAccess
000268456 909CO $$ooai:pub.dzne.de:268456$$pdnbdelivery$$pdriver$$pVDB$$popen_access$$popenaire
000268456 9101_ $$0I:(DE-588)1065079516$$6P:(DE-2719)2810706$$aDeutsches Zentrum für Neurodegenerative Erkrankungen$$b11$$kDZNE
000268456 9131_ $$0G:(DE-HGF)POF4-353$$1G:(DE-HGF)POF4-350$$2G:(DE-HGF)POF4-300$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$aDE-HGF$$bGesundheit$$lNeurodegenerative Diseases$$vClinical and Health Care Research$$x0
000268456 9141_ $$y2024
000268456 915__ $$0StatID:(DE-HGF)0200$$2StatID$$aDBCoverage$$bSCOPUS$$d2023-08-23
000268456 915__ $$0LIC:(DE-HGF)CCBY4$$2HGFVOC$$aCreative Commons Attribution CC BY 4.0
000268456 915__ $$0StatID:(DE-HGF)0112$$2StatID$$aWoS$$bEmerging Sources Citation Index$$d2023-08-23
000268456 915__ $$0StatID:(DE-HGF)0100$$2StatID$$aJCR$$bJ IMAGING : 2022$$d2023-08-23
000268456 915__ $$0StatID:(DE-HGF)0501$$2StatID$$aDBCoverage$$bDOAJ Seal$$d2023-04-12T14:59:23Z
000268456 915__ $$0StatID:(DE-HGF)0500$$2StatID$$aDBCoverage$$bDOAJ$$d2023-04-12T14:59:23Z
000268456 915__ $$0StatID:(DE-HGF)0700$$2StatID$$aFees$$d2023-08-23
000268456 915__ $$0StatID:(DE-HGF)0150$$2StatID$$aDBCoverage$$bWeb of Science Core Collection$$d2023-08-23
000268456 915__ $$0StatID:(DE-HGF)9900$$2StatID$$aIF < 5$$d2023-08-23
000268456 915__ $$0StatID:(DE-HGF)0510$$2StatID$$aOpenAccess
000268456 915__ $$0StatID:(DE-HGF)0030$$2StatID$$aPeer Review$$bDOAJ : Anonymous peer review$$d2023-04-12T14:59:23Z
000268456 915__ $$0StatID:(DE-HGF)0561$$2StatID$$aArticle Processing Charges$$d2023-08-23
000268456 915__ $$0StatID:(DE-HGF)0300$$2StatID$$aDBCoverage$$bMedline$$d2023-08-23
000268456 915__ $$0StatID:(DE-HGF)0320$$2StatID$$aDBCoverage$$bPubMed Central$$d2023-08-23
000268456 915__ $$0StatID:(DE-HGF)0199$$2StatID$$aDBCoverage$$bClarivate Analytics Master Journal List$$d2023-08-23
000268456 9201_ $$0I:(DE-2719)1340009$$kAG Speck$$lLinking imaging projects iNET$$x0
000268456 980__ $$ajournal
000268456 980__ $$aVDB
000268456 980__ $$aUNRESTRICTED
000268456 980__ $$aI:(DE-2719)1340009
000268456 9801_ $$aFullTexts