2022-12-15 Journal article
Analysing cerebrospinal fluid with explainable deep learning: From diagnostics to insights
Schweizer, Leonille
Seegerer, Philipp
Kim, Hee-yeong
Saitenmacher, René
Muench, Amos
Barnick, Liane
Osterloh, Anja
Dittmayer, Carsten
Jödicke, Ruben
Pehl, Deborah
Reinhardt, Annekathrin
Ruprecht, Klemens
Stenzel, Werner
Wefers, Annika K.
Harter, Patrick N.
Schüller, Ulrich
Heppner, Frank L.
Alber, Maximilian
Müller, Klaus-Robert
Klauschen, Frederick
Aim
Analysis of cerebrospinal fluid (CSF) is essential for the diagnostic workup of patients with neurological diseases and includes differential cell typing. The current gold standard is based on microscopic examination by specialised technicians and neuropathologists, which is time-consuming, labour-intensive and subjective.
Methods
We therefore developed an image analysis approach based on expert annotations of 123,181 digitised CSF objects from 78 patients, covering 15 clinically relevant categories, and trained a multiclass convolutional neural network (CNN).
Results
The CNN classified the 15 categories with high accuracy (mean AUC 97.3%). By using explainable artificial intelligence (XAI), we demonstrate that the CNN identified meaningful cellular substructures in CSF cells, recapitulating human pattern recognition. Based on the evaluation of 511 cells selected from 12 different CSF samples, we validated the CNN by comparing it with seven board-certified neuropathologists blinded to clinical information. Inter-rater agreement between the CNN and the ground truth was non-inferior (Krippendorff's alpha 0.79) compared with the agreement between the seven human raters and the ground truth (mean Krippendorff's alpha 0.72, range 0.56–0.81). The CNN assigned the correct diagnostic label (inflammatory, haemorrhagic or neoplastic) in 10 out of 11 clinical samples, compared with 7–11 out of 11 by human raters.
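The agreement statistic reported here, Krippendorff's alpha for nominal data, can be computed from a coincidence matrix of label pairs within each rated unit. The sketch below is purely illustrative and is not the authors' code; the function name and the toy data are assumptions, and the implementation covers only nominal categories (the variant relevant to categorical cell labels).

```python
from collections import Counter
from itertools import product

def krippendorff_alpha_nominal(data):
    """Krippendorff's alpha for nominal data.

    data: list of units; each unit is a list of category labels, one per
    rater who labelled that unit (raters with missing ratings are omitted).
    """
    # Build the coincidence matrix from ordered label pairs within each unit.
    coincidence = Counter()
    for unit in data:
        m = len(unit)
        if m < 2:
            continue  # units with fewer than 2 ratings carry no pairing information
        counts = Counter(unit)
        for c, k in product(counts, repeat=2):
            pairs = counts[c] * counts[k] if c != k else counts[c] * (counts[c] - 1)
            coincidence[(c, k)] += pairs / (m - 1)

    # Marginal totals n_c and grand total n.
    n_c = Counter()
    for (c, _k), v in coincidence.items():
        n_c[c] += v
    n = sum(n_c.values())

    # Observed vs. expected disagreement (off-diagonal mass).
    observed = sum(v for (c, k), v in coincidence.items() if c != k)
    expected = sum(n_c[c] * n_c[k] for c, k in product(n_c, repeat=2) if c != k) / (n - 1)
    return 1.0 - observed / expected
```

With this convention, alpha is 1 for perfect agreement, 0 for agreement at chance level, and negative for systematic disagreement; an alpha of 0.79, as reported for the CNN versus the ground truth, indicates substantial agreement.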
Conclusions
Our approach provides the basis for overcoming current limitations in automated cell classification for routine diagnostics, and demonstrates how a visual explanation framework can connect machine decision-making with cell properties, thereby providing a novel, versatile and quantitative method for investigating CSF manifestations of various neurological diseases.