INTERPRETABLE FEDERATED LEARNING FOR PRIVACY-CENTRIC GENOMIC DIAGNOSTICS: A MULTI-INSTITUTIONAL FRAMEWORK
DOI: https://doi.org/10.4238/z0azdg35

Keywords: Federated Learning, Explainable Artificial Intelligence, Privacy-Preserving Healthcare, Attention-Based Deep Learning, Secure Model Aggregation, Medical Image Diagnostics, Distributed Clinical AI

Abstract
The rapid digitization of healthcare systems has generated large volumes of sensitive medical data spread across diagnostic centers and hospitals. Deep learning delivers impressive diagnostic accuracy, but centralized training raises privacy, security, and compliance concerns. This study therefore presents the Hybrid Explainable Federated Attention Framework (HEFAF), a privacy-preserving framework for healthcare diagnostics. HEFAF combines explainable federated deep learning with a dual-level attention-based convolutional neural network and adaptive weighted aggregation, improving both performance and explainability without sharing raw patient data. The framework makes three main contributions. First, an adaptive privacy-aware federated optimizer preserves data privacy by adjusting each federated participant's local learning rate according to the degree of its data heterogeneity. Second, an explainability-weighted aggregation scheme aggregates client updates selectively according to their attribution quality, reducing both the volume of aggregated data and the computational load. Third, an explainability module combines SHAP (SHapley Additive exPlanations) with attention maps to provide clinically meaningful explanations of diagnostic outputs. The model was evaluated on a distributed medical-imaging dataset of 18,500 diagnostic samples contributed by five medical centers. In the experiments, HEFAF achieves a diagnostic accuracy of 95.1%, 4.6% higher than independent local models and 2.3% higher than traditional federated averaging. In addition, the adaptive aggregation mechanism reduced communication cost by 31%, and in the explainability validation 93% of the model-identified regions aligned with expert-annotated regions.
The results confirm that the proposed framework achieves privacy preservation, model robustness, and interpretability together. This research bridges secure distributed learning and explainable clinical decision support, providing a scalable, regulation-compliant framework for intelligent healthcare systems.
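The abstract's two server-side ideas, per-client learning rates scaled by data heterogeneity and relevance-weighted aggregation of client updates, can be illustrated with a minimal sketch. The function names, the `1/(1+h)` scaling rule, and the toy scores below are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def adaptive_learning_rates(base_lr, heterogeneity):
    """Scale each client's local learning rate down as its data
    heterogeneity (e.g. label-distribution divergence) grows.
    The 1/(1+h) rule is an assumed, illustrative schedule."""
    h = np.asarray(heterogeneity, dtype=float)
    return base_lr / (1.0 + h)

def weighted_aggregate(client_weights, scores):
    """Average client model parameters, weighting each client by a
    normalized relevance/explainability score (hypothetical scoring)."""
    s = np.asarray(scores, dtype=float)
    s = s / s.sum()                              # normalize to a convex combination
    stacked = np.stack(client_weights)           # shape: (n_clients, n_params)
    return (s[:, None] * stacked).sum(axis=0)    # score-weighted mean

# Toy run with three clients.
lrs = adaptive_learning_rates(0.01, [0.0, 0.5, 1.0])
global_w = weighted_aggregate(
    [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])],
    scores=[1.0, 1.0, 2.0],
)
print(lrs)       # [0.01, ~0.00667, 0.005]
print(global_w)  # [3.5, 4.5]
```

With uniform scores this reduces to plain federated averaging; skewing the scores toward clients with better attribution quality is the behavior the abstract's selective aggregation describes.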
License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

