5th International Congress of Applied Statistics (UYIK-2024), İstanbul, Turkey, 21 - 23 May 2024, pp.98
Artificial Intelligence (AI) is becoming increasingly involved in daily human life, and the diversity of its application areas continues to grow. The healthcare sector is one of the areas where AI has recently been widely adopted. For example, AI helps doctors diagnose diseases and provides preliminary information, and it is also used to predict and classify illnesses that individuals may experience in the future. Classification of heart attack risk, the subject of this study, is among the problems to which AI can be applied. Such studies employ techniques such as machine learning, artificial neural networks, and deep learning, and can achieve high-accuracy results. Despite this, many of the methods used are black-box methods, meaning that the results obtained from the model cannot be explained retrospectively, which prevents AI from being fully trusted. Thus, studies on explainable AI (XAI) methods, which aim to provide local explanations for the predictions of globally complex models, have increased rapidly in recent years. Explaining the models and their results will increase confidence in AI applications. On this basis, methods defined as glass-box may be preferred. In that case, however, prediction accuracy is expected to be lower than that of models with a complex structure. It is therefore crucial to choose a model that balances prediction accuracy and explainability, especially in AI applications in critical sectors such as healthcare. Moreover, although explaining the models in general is essential, the ability to provide individual explanations for each prediction is also of great importance, especially in health applications.
In this study, we will try to determine whether patients are at risk of experiencing a heart attack, considering characteristics such as age and gender, as well as medical history variables that may affect heart attack risk, such as chest pain type, cholesterol, and blood sugar levels. In addition to logistic regression, naive Bayes, and decision tree models, black-box methods such as multi-layer perceptrons and gradient boosting will be used for prediction. The methods will be examined and compared in terms of explainability as well as prediction accuracy. In addition, the individual explanatory power of the models is another performance indicator to be examined. The results obtained in this context will guide healthcare stakeholders in choosing the most appropriate model among those created to assist them in their decision-making processes.
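The comparison described above can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: it uses scikit-learn implementations of the five named model families on synthetic stand-in data, since the real patient dataset is not given here; the feature semantics (age, sex, chest pain type, cholesterol, blood sugar) are assumed placeholders.

```python
# Hedged sketch of the model comparison: glass-box vs black-box classifiers
# evaluated on held-out accuracy. Data is synthetic, standing in for the
# (hypothetical) patient features named in the abstract.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

# Synthetic binary heart-attack-risk labels; 8 numeric features stand in for
# age, sex, chest pain type, cholesterol, blood sugar, etc.
X, y = make_classification(n_samples=500, n_features=8, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),         # glass-box
    "naive_bayes": GaussianNB(),                                      # glass-box
    "decision_tree": DecisionTreeClassifier(max_depth=4, random_state=0),  # glass-box
    "mlp": MLPClassifier(max_iter=1000, random_state=0),              # black-box
    "gradient_boosting": GradientBoostingClassifier(random_state=0),  # black-box
}

# Fit each model and report test accuracy; in the study itself this table
# would be read alongside explainability criteria, not accuracy alone.
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: {accuracy_score(y_te, model.predict(X_te)):.3f}")
```

In practice, the accuracy figures from such a loop would be weighed against each model's explainability (e.g., inspectable coefficients or tree paths for the glass-box models) rather than used in isolation.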