edoc — Publication Server of the Robert Koch Institute
2024-10-24 · Journal article
AI Readiness in Healthcare through Storytelling XAI
Dubey, Akshat
Yang, Zewen
Hattab, Georges
Artificial Intelligence (AI) is advancing rapidly and radically impacting everyday life, driven by the increasing availability of computing power. Despite this trend, the adoption of AI in real-world healthcare remains limited. One of the main reasons is the limited trustworthiness of AI models and the resulting hesitation of domain experts to rely on model predictions. Explainable Artificial Intelligence (XAI) techniques aim to address these issues. However, explainability can mean different things to people with different backgrounds, expertise, and goals. To address a target audience with diverse needs, we develop storytelling XAI. In this research, we developed an approach that combines multi-task distillation with interpretability techniques to enable audience-centric explainability. Multi-task distillation allows the model to exploit the relationships between tasks: each task supports the others, which can improve interpretability from the perspective of a domain expert. The distillation process also allows us to extend this research to large, highly complex deep models. We focus on both model-agnostic and model-specific interpretability methods, supported by textual justification of the results in a healthcare use case. Our methods increase the trust of both domain experts and machine learning experts, enabling responsible AI.
Files in this item
2410.18725v2.pdf — Adobe PDF — 4.442 MB
MD5: e55404275a05cd491e9108c989a4d042
License: (CC BY 3.0 DE) Attribution 3.0 Germany

The Robert Koch Institute is a Federal Institute within the portfolio of the Federal Ministry of Health.

© Robert Koch Institute. All rights reserved unless explicitly granted.