Cervical spine diseases, particularly neck flatness, pose significant diagnostic and treatment challenges due to the complexity of spinal structures. This study explores the application of Explainable Artificial Intelligence (XAI) techniques, specifically Random Forest and Decision Tree algorithms, to classify and assess the severity of cervical spine diseases. The dataset consists of cervical spine curvature measurements, demographic information, and clinical features. To enhance model interpretability, SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-Agnostic Explanations) methods were integrated. These techniques provide a transparent framework for decision-making, allowing medical professionals to understand the reasoning behind AI-driven predictions. The study highlights the impact of feature selection and hyperparameter tuning on model performance, optimizing the classification process. Experimental results indicate that the Random Forest algorithm achieved the highest classification accuracy at 88%, demonstrating robust predictive capability. The Decision Tree algorithm provided an interpretable alternative with an accuracy of 83%, enabling clear visualization of feature importance. A comparative analysis with the existing literature suggests that XAI-powered models significantly improve diagnostic reliability. Additionally, application images from the dataset were incorporated into the findings section to give a more comprehensive representation of the study, and results from testing the models on independent data are also reported. This research underscores the importance of integrating explainable AI into medical diagnosis, offering trustworthy, transparent, and clinically relevant insights for cervical spine disease assessment.
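The abstract does not include implementation details, but the workflow it describes (feature-based classification with Random Forest and Decision Tree models, hyperparameter tuning, and SHAP/LIME explanations) can be sketched as follows. This is a minimal illustration assuming scikit-learn and the shap package; the file name, column names, and parameter grid are hypothetical and not taken from the study.

```python
# Minimal sketch of a tabular XAI classification pipeline, assuming a dataset of
# cervical curvature measurements and clinical features. File and column names
# ("cervical_spine_features.csv", "severity") are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
import shap

df = pd.read_csv("cervical_spine_features.csv")   # hypothetical file
X = df.drop(columns=["severity"])                 # hypothetical label column
y = df["severity"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Hyperparameter tuning for the Random Forest, as the abstract mentions.
rf_search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 5, 10]},
    cv=5, scoring="accuracy")
rf_search.fit(X_train, y_train)
rf = rf_search.best_estimator_
print("Random Forest accuracy:", accuracy_score(y_test, rf.predict(X_test)))

# Interpretable baseline: a single, shallow Decision Tree.
dt = DecisionTreeClassifier(max_depth=4, random_state=42).fit(X_train, y_train)
print("Decision Tree accuracy:", accuracy_score(y_test, dt.predict(X_test)))

# Global feature attributions for the tuned forest via SHAP; per-patient local
# explanations could be produced analogously with lime.lime_tabular.
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)
```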
Explainable Artificial Intelligence, Random Forest, Decision Tree, Hyperparameter Optimization, Medical Image Analysis, SHAP, LIME
Explainable Artificial Intelligence, Cervical Spine Diseases, Random Forest, Decision Tree, LIME, Medical Diagnosis, Hyperparameter Optimization, Neck Disorders, Feature Importance, SHAP
| Primary Language | English |
| --- | --- |
| Subjects | Planning and Decision Making |
| Journal Section | Research Articles |
| Authors | |
| Early Pub Date | May 31, 2025 |
| Publication Date | May 31, 2025 |
| Submission Date | January 23, 2025 |
| Acceptance Date | May 30, 2025 |
| Published in Issue | Year 2025, Volume: 7, Issue: 1 |
This work is licensed under a Creative Commons Attribution 4.0 International License