Explainable AI in Healthcare: Enhancing Trust through Interpretable Machine Learning Models
DOI: 10.63665/ijmlaidse.v1i1.03
Publication Date: 2025-04-18
AUTHORS (1)
ABSTRACT
As artificial intelligence continues to reshape the healthcare industry, a growing concern among professionals and patients is the "black-box" nature of many machine learning models. While accuracy remains important, trust in AI decisions is equally vital, especially in critical areas such as diagnosis and treatment planning. This paper explores the role of Explainable Artificial Intelligence (XAI) in building that trust by making model outputs more transparent and understandable. Using real-world datasets and a case study on cardiovascular disease prediction, we evaluate how interpretable models and explanation techniques such as SHAP and LIME improve clinician acceptance and decision-making. A structured questionnaire reveals insights from clinicians on their comfort with and reliance on these tools. This work contributes to the understanding that for AI to be truly effective in healthcare, it must not only be smart but also able to explain itself.
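The SHAP technique named in the abstract attributes a model's prediction to individual features via Shapley values. As a dependency-free illustration of that underlying idea (not the paper's actual experimental code, which presumably uses the `shap` library on the cardiovascular dataset), the sketch below computes exact Shapley values by subset enumeration for a hypothetical toy linear risk model:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for prediction function f at instance x.

    v(S) evaluates f with features in S taken from x and the remaining
    features taken from the baseline — the "missing feature" convention
    that KernelSHAP approximates by sampling.
    """
    n = len(x)

    def v(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size k
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (v(set(S) | {i}) - v(set(S)))
        phi.append(total)
    return phi

# Hypothetical additive "risk model": for linear models the Shapley value
# of feature i reduces to w_i * (x_i - baseline_i).
weights = [0.5, -0.2, 0.8]
model = lambda z: sum(w * zi for w, zi in zip(weights, z))
phi = shapley_values(model, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
# phi ≈ [0.5, -0.4, 2.4]; the values sum to f(x) - f(baseline)
```

Exact enumeration costs O(2^n) model evaluations, which is why practical tools like SHAP and LIME rely on sampling-based approximations for the higher-dimensional clinical feature sets studied here.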