Publication Server of the Robert Koch Institute (edoc)
2022-12-15 · Journal article (Zeitschriftenartikel)
Analysing cerebrospinal fluid with explainable deep learning: From diagnostics to insights
Schweizer, Leonille
Seegerer, Philipp
Kim, Hee-yeong
Saitenmacher, René
Muench, Amos
Barnick, Liane
Osterloh, Anja
Dittmayer, Carsten
Jödicke, Ruben
Pehl, Deborah
Reinhardt, Annekathrin
Ruprecht, Klemens
Stenzel, Werner
Wefers, Annika K.
Harter, Patrick N.
Schüller, Ulrich
Heppner, Frank L.
Alber, Maximilian
Müller, Klaus-Robert
Klauschen, Frederick
Aim: Analysis of cerebrospinal fluid (CSF) is essential for the diagnostic workup of patients with neurological diseases and includes differential cell typing. The current gold standard is based on microscopic examination by specialised technicians and neuropathologists, which is time-consuming, labour-intensive and subjective.
Methods: We therefore developed an image analysis approach based on expert annotations of 123,181 digitised CSF objects from 78 patients, corresponding to 15 clinically relevant categories, and trained a multiclass convolutional neural network (CNN).
Results: The CNN classified the 15 categories with high accuracy (mean AUC 97.3%). Using explainable artificial intelligence (XAI), we demonstrate that the CNN identified meaningful cellular substructures in CSF cells, recapitulating human pattern recognition. Based on the evaluation of 511 cells selected from 12 different CSF samples, we validated the CNN by comparing it with seven board-certified neuropathologists blinded to clinical information. Inter-rater agreement between the CNN and the ground truth (Krippendorff's alpha 0.79) was non-inferior to the agreement between the seven human raters and the ground truth (mean Krippendorff's alpha 0.72, range 0.56–0.81). The CNN assigned the correct diagnostic label (inflammatory, haemorrhagic or neoplastic) in 10 of 11 clinical samples, compared with 7–11 of 11 by the human raters.
Conclusions: Our approach provides the basis for overcoming current limitations in automated cell classification for routine diagnostics and demonstrates how a visual explanation framework can connect machine decision-making with cell properties, providing a novel, versatile and quantitative method for investigating CSF manifestations of various neurological diseases.
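The rater comparison above relies on Krippendorff's alpha, a chance-corrected agreement coefficient defined as alpha = 1 - D_o/D_e, the ratio of observed to expected disagreement over label pairs. As a rough illustration only (not the authors' code; the rating matrix in the usage example is invented for demonstration), a minimal NumPy sketch for nominal labels with complete ratings could look like this:

```python
import numpy as np

def krippendorff_alpha_nominal(ratings: np.ndarray) -> float:
    """Krippendorff's alpha for nominal labels with no missing values.

    ratings has shape (n_raters, n_units); each entry is an integer
    category label (e.g. one of the 15 CSF cell classes).
    """
    n_raters, n_units = ratings.shape
    categories = np.unique(ratings)
    cat_index = {c: i for i, c in enumerate(categories)}
    k = len(categories)

    # Coincidence matrix: for every unit, count ordered pairs of labels
    # assigned by different raters, weighted by 1 / (raters per unit - 1).
    o = np.zeros((k, k))
    for u in range(n_units):
        unit = ratings[:, u]
        for i in range(n_raters):
            for j in range(n_raters):
                if i != j:
                    o[cat_index[unit[i]], cat_index[unit[j]]] += 1.0 / (n_raters - 1)

    n_c = o.sum(axis=1)        # marginal label totals
    n_total = n_c.sum()
    delta = 1.0 - np.eye(k)    # nominal distance: 0 if same label, else 1

    d_observed = (o * delta).sum() / n_total
    d_expected = (np.outer(n_c, n_c) * delta).sum() / (n_total * (n_total - 1))
    return 1.0 - d_observed / d_expected

# Illustrative only: three raters labelling six cells with categories 0-2.
ratings = np.array([
    [0, 1, 2, 0, 1, 2],
    [0, 1, 2, 0, 2, 2],
    [0, 1, 1, 0, 1, 2],
])
print(round(krippendorff_alpha_nominal(ratings), 3))
```

Perfect agreement yields alpha = 1 and chance-level agreement yields values near 0, which is the scale on which the reported 0.79 (CNN vs. ground truth) and mean 0.72 (human raters vs. ground truth) should be read.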
Files in this item
Neuropathology Appl Neurobio - 2022 - Schweizer - Analysing cerebrospinal fluid with explainable deep learning From.pdf — Adobe PDF — 6.517 MB
MD5: 6bf34e52850f12f76759c1d5586f68a6
License: (CC BY 3.0 DE) Namensnennung 3.0 Deutschland (Attribution 3.0 Germany)