What do we need to build explainable AI systems for the medical domain?

Black box, Obligation, Representation
DOI: 10.48550/arxiv.1712.09923
Publication Date: 2017-01-01
ABSTRACT
Artificial intelligence (AI) generally and machine learning (ML) specifically demonstrate impressive practical success in many different application domains, e.g. autonomous driving, speech recognition, or recommender systems. Deep learning approaches, trained on extremely large data sets or using reinforcement learning methods, have even exceeded human performance in visual tasks, particularly in playing games such as Atari and mastering the game of Go. Even in the medical domain there are remarkable results. The central problem of such models is that they are regarded as black-boxes: even if we understand the underlying mathematical principles, they lack an explicit declarative knowledge representation and hence have difficulty in generating the underlying explanatory structures. This calls for systems that make decisions transparent, understandable and explainable. A huge motivation for our approach are rising legal and privacy aspects. The new European General Data Protection Regulation, entering into force on May 25th, 2018, will make black-box approaches difficult to use in business. This does not imply a ban on automatic learning approaches or an obligation to explain everything all the time; however, there must be a possibility to make the results re-traceable on demand. In this paper we outline some of our research topics in the context of the relatively new area of explainable-AI, with a focus on medicine, which is a very special domain. This is due to the fact that medical professionals work mostly with distributed, heterogeneous and complex sources of data. We concentrate on three sources: images, *omics data and text. We argue that research in explainable-AI would help to facilitate the implementation of AI/ML in the medical domain and, in particular, foster transparency and trust.
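
Purely as an illustration of what "making a black-box result re-traceable on demand" can mean in practice, the sketch below (not from the paper; the dataset, model, and LIME-style local-surrogate technique are all illustrative assumptions) trains a black-box classifier with scikit-learn and then fits a distance-weighted linear surrogate around a single prediction to expose which features drove that decision locally.

```python
# Minimal sketch: post-hoc local explanation of a black-box classifier.
# All dataset/model/parameter choices are illustrative assumptions, not the paper's method.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

data = load_breast_cancer()
X, y, names = data.data, data.target, data.feature_names
mu, sigma = X.mean(axis=0), X.std(axis=0)

# The "black box": accurate, but with no explicit declarative knowledge representation.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def explain_locally(model, x, n_samples=2000, kernel_width=None, seed=0):
    """Fit a proximity-weighted linear surrogate around one instance (LIME-style idea)."""
    rng = np.random.default_rng(seed)
    if kernel_width is None:
        kernel_width = 0.75 * np.sqrt(x.shape[0])  # heuristic default
    # Perturb the instance and query the black box for class-1 probabilities.
    Z = x + rng.normal(0.0, sigma, size=(n_samples, x.shape[0]))
    p = model.predict_proba(Z)[:, 1]
    # Weight perturbed samples by proximity to x (RBF kernel on scaled distance).
    d = np.linalg.norm((Z - x) / sigma, axis=1)
    w = np.exp(-(d ** 2) / (kernel_width ** 2))
    # Linear surrogate on standardized features, so the weights are comparable.
    surrogate = Ridge(alpha=1.0).fit((Z - mu) / sigma, p, sample_weight=w)
    return surrogate.coef_

x = X[0]
coefs = explain_locally(black_box, x)
print(f"Black-box P({data.target_names[1]}) = {black_box.predict_proba(x.reshape(1, -1))[0, 1]:.3f}")
for i in np.argsort(np.abs(coefs))[::-1][:5]:
    print(f"  {names[i]:>25s}: local weight {coefs[i]:+.4f}")
```

Such post-hoc local surrogates are only one possible route to re-traceability, and the resulting weights depend strongly on the chosen perturbation scale and kernel width; they approximate the black box's behaviour around one case rather than providing a full declarative explanation.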