Comparison of Machine Learning Models with Explainable AI: Application to Dementia
Abstract
In the healthcare domain, clinical practice requires models that are both accurate and interpretable. For conditions such as dementia, a diagnosis must be accompanied by a clear explanation, so an accurate model with effective interpretability is essential. Implementing Machine Learning (ML) models in medical practice is difficult because, even when their outcomes are accurate, it is often unclear how particular results are derived. This study compares traditional ML models with and without Explainable Artificial Intelligence (XAI) on the Open Access Series of Imaging Studies (OASIS) dementia dataset to assess which approach offers greater interpretability. SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) provide the explanations, while Accuracy, Recall, Precision, F1-score, and Area Under the Curve (AUC) are used to evaluate the base models; the results are then compared to gauge the importance of interpretability in bridging the gap between ML models and their use in clinical practice. The traditional ML models achieve good predictive accuracy, with an AUC of up to 0.94, but combining ML with XAI yields more clinically useful results and enables medical professionals to trust model predictions. It clarifies the decision-making of ML models and reduces risk. This study therefore shows that effective disease diagnosis requires not only models with high accuracy but also models that provide interpretability and clarity about their predictions, an analysis that helps address the gaps in applying ML models to the medical domain.
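The evaluation pipeline described above can be sketched as follows. This is a minimal, hypothetical illustration, not the study's actual code: synthetic data stands in for the OASIS dementia dataset, logistic regression stands in for the traditional ML models, and the XAI step (SHAP/LIME) is only indicated in a comment to keep the sketch dependency-light.

```python
# Hedged sketch of the described pipeline: fit a traditional ML model,
# then score it with Accuracy, Recall, Precision, F1, and AUC.
# Synthetic features stand in for OASIS variables (assumption).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, recall_score, precision_score,
                             f1_score, roc_auc_score)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))  # stand-in clinical/imaging features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

pred = model.predict(X_te)
prob = model.predict_proba(X_te)[:, 1]
metrics = {
    "accuracy": accuracy_score(y_te, pred),
    "recall": recall_score(y_te, pred),
    "precision": precision_score(y_te, pred),
    "f1": f1_score(y_te, pred),
    "auc": roc_auc_score(y_te, prob),
}
print(metrics)
# An XAI layer would follow here, e.g. shap.Explainer(model, X_tr) or a
# lime.lime_tabular.LimeTabularExplainer, attributing each prediction to
# its input features so clinicians can inspect why the model decided as it did.
```

The point of the comparison in the study is that the metric dictionary above is necessary but not sufficient: only the explanation step makes the model's decision process inspectable in clinical practice.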