Improving Patient Outcomes Through Explainable AI: A Path Towards Transparent and Accountable Healthcare Systems

Authors

  • Dr. B.M. Rajesh, Associate Professor, Department of Computer Science, Dr. N.G.P. Arts and Science College, Kalapatti, Coimbatore.
  • Dr. G. Sumalatha, Assistant Professor, Computer Technology & Data Science, Sri Krishna Arts and Science College.
  • Kavipriya K, Assistant Professor, Department of Computer Science and Applications, Christ Academy Institute for Advanced Studies, Bangalore.
  • Dr. Arijit Chakraborty, Assistant Professor, Bachelor of Computer Applications, The Heritage Academy, Chowbaga Road, Anandapur, Kolkata 700107, West Bengal.
  • Dr. R. Balakrishnan, Associate Professor, School of Information Science, Presidency University, Rajanukunte, Yelahanka, Bengaluru, Karnataka.
  • Dr. A. Kaliappan, Associate Professor, School of Computing, SRM Institute of Science and Technology, Tiruchirappalli, India.

DOI:

https://doi.org/10.4238/1ke27m44

Abstract

Modern artificial intelligence (AI) models have proven effective in healthcare systems at automating patient monitoring, diagnostics, and treatment recommendation. An important obstacle to their use, however, is that their outputs are not self-explanatory; combined with the high stakes of clinical decision-making, this opacity erodes accountability and trust in healthcare AI models. Explainable AI (XAI) systems emerged to furnish a rational justification for an algorithm’s output by describing its reasoning process. The approach presented here hybridizes SHAP (SHapley Additive exPlanations) feature attribution and case-level LIME (Local Interpretable Model-Agnostic Explanations) interpretability with clinical reasoning and rule-driven verification, yielding dual explanations with both global and local character. For practitioners, confidence in an AI model hinges on how faithfully these global and local explanations align with commonsense medical understanding, and the rule-based component tempers over-reliance on clinical heuristics alone. The framework is complemented by fairness-sensitive auditing that monitors performance disparities across population cohorts. Early indications show improved clinically oriented predictive performance with fewer barriers to interpretability, a balance that aims to combine effectiveness with explainability and so enhances the understanding and use of AI models in the healthcare domain.
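
As a hedged illustration of the pipeline sketched in the abstract, the following minimal Python fragment combines global SHAP attributions, a case-level LIME explanation, a rule-driven agreement check, and a cohort-level fairness audit. It is not the authors' implementation: the stand-in dataset, model, rule feature, threshold, and cohort split are all placeholder assumptions.

# Illustrative sketch (not the authors' implementation): hybrid global/local
# explanations with a rule-driven check, on a public stand-in dataset.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import shap
from lime.lime_tabular import LimeTabularExplainer

# Stand-in clinical data; a real deployment would use cohort-specific EHR features.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Global view: mean |SHAP| per feature. TreeExplainer returns a list per class
# in older shap versions and a 3-D array in newer ones; handle both.
sv = shap.TreeExplainer(model).shap_values(X_test)
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]
global_importance = np.abs(sv_pos).mean(axis=0)
top = np.argsort(global_importance)[::-1][:5]
print("Global (SHAP) top features:", [data.feature_names[i] for i in top])

# Local view: case-level LIME explanation for one patient.
lime_exp = LimeTabularExplainer(
    X_train, feature_names=list(data.feature_names),
    class_names=list(data.target_names), mode="classification")
patient = X_test[0]
exp = lime_exp.explain_instance(patient, model.predict_proba, num_features=5)
print("Local (LIME) explanation:", exp.as_list())

# Rule-driven verification: a *hypothetical* clinical rule; real rules would
# come from domain guidelines. Flag cases where model and rule disagree.
RULE_FEATURE = list(data.feature_names).index("worst radius")
RULE_THRESHOLD = 18.0  # placeholder cut-off, not a validated clinical value
rule_says_malignant = patient[RULE_FEATURE] > RULE_THRESHOLD
model_says_malignant = model.predict(patient.reshape(1, -1))[0] == 0
if rule_says_malignant != model_says_malignant:
    print("Model/rule disagreement: route case to clinician review.")

# Fairness-sensitive audit sketch: compare accuracy across cohorts (here a
# synthetic split, since the stand-in dataset carries no demographics).
cohort = (X_test[:, 0] > np.median(X_test[:, 0])).astype(int)
for g in (0, 1):
    acc = model.score(X_test[cohort == g], y_test[cohort == g])
    print(f"Cohort {g} accuracy: {acc:.3f}")

Run as-is, the sketch prints the top global features, the per-case LIME weights, any model/rule disagreement, and per-cohort accuracy; in the framework described above, a disagreement between the learned model and the clinical rule would be escalated for clinician review rather than silently resolved.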

Published

2026-01-06

Issue

Vol. 25 No. 1 (2026)

Section

Articles

How to Cite

Improving Patient Outcomes Through Explainable AI: A Path Towards Transparent and Accountable Healthcare Systems. (2026). Genetics and Molecular Research, 25(1), 1-13. https://doi.org/10.4238/1ke27m44