Accurate forecasting of ICU patient outcomes is essential for clinical decision support, yet most high-performing machine learning models function as black boxes, limiting their interpretability and clinical adoption. This study introduces a graph-based, explainable risk-prediction framework in which patient–patient relations are modeled through a diagnosis-based similarity network. An undirected graph was derived from the eICU-CRD demo subset (PhysioNet, v2.0) by linking patients who share a three-digit ICD-9 category, and a graph convolutional network (GCN) was trained for in-hospital mortality prediction. Despite the dataset's modest size and class imbalance, the model achieved meaningful discrimination (AUROC = 0.708; AUPRC = 0.308). A two-level explainability analysis was applied to clarify the model's decision process: GNNExplainer showed that each prediction was driven by a combination of patient-specific clinical attributes and signals from a small number of influential neighbors, while SubgraphX, a Shapley-value-based motif-discovery method, identified compact, clinically coherent subgraphs with strong influence on the prediction. The consistency between edge-level and motif-level explanations indicates that the model relies on stable, clinically relevant relational patterns. These findings suggest that integrating GNNs with structured explainability methods can transform a single risk score into a transparent, data-driven decision-support mechanism that offers interpretable, hypothesis-generating insights to clinicians.
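The abstract outlines a three-step pipeline: build a patient–patient graph from shared three-digit ICD-9 categories, train a GCN for node-level mortality prediction, and explain predictions with GNNExplainer. The sketch below illustrates how such a pipeline could be assembled; it is not the authors' released code. It assumes PyTorch Geometric, and the `patient_dx` dictionary, feature dimension, labels, and hyperparameters are invented placeholders for illustration only.

```python
# Minimal sketch of the pipeline described in the abstract (assumes PyTorch
# Geometric; all data below are toy placeholders, not eICU-CRD values).
import itertools
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv
from torch_geometric.explain import Explainer, GNNExplainer

# --- Step 1: link patients who share a three-digit ICD-9 category ---
# Hypothetical mapping: patient index -> set of three-digit ICD-9 prefixes.
patient_dx = {0: {"428", "584"}, 1: {"428"}, 2: {"038"}, 3: {"584", "038"}}
edges = [(i, j)
         for i, j in itertools.combinations(patient_dx, 2)
         if patient_dx[i] & patient_dx[j]]            # shared diagnosis category
edge_index = torch.tensor(edges + [(j, i) for i, j in edges]).t().contiguous()

x = torch.randn(len(patient_dx), 16)                  # placeholder clinical features
y = torch.tensor([0, 1, 0, 1])                        # in-hospital mortality labels
data = Data(x=x, edge_index=edge_index, y=y)

# --- Step 2: two-layer GCN for per-patient mortality prediction ---
class MortalityGCN(torch.nn.Module):
    def __init__(self, in_dim, hidden=32, classes=2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        h = F.dropout(h, p=0.5, training=self.training)
        return self.conv2(h, edge_index)              # raw logits

model = MortalityGCN(in_dim=16)
opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
for _ in range(100):
    opt.zero_grad()
    loss = F.cross_entropy(model(data.x, data.edge_index), data.y)
    loss.backward()
    opt.step()

# --- Step 3: edge-level explanation with GNNExplainer ---
explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=200),
    explanation_type="model",
    node_mask_type="attributes",
    edge_mask_type="object",
    model_config=dict(mode="multiclass_classification",
                      task_level="node", return_type="raw"),
)
explanation = explainer(data.x, data.edge_index, index=0)  # explain patient 0
print(explanation.edge_mask)  # importance score per patient-patient edge
```

High edge-mask scores flag the influential neighbors the abstract refers to; a motif-level pass (e.g., SubgraphX, also available in PyTorch Geometric) would then be run over the same trained model to extract the compact subgraphs described above.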
| Primary Language | English |
|---|---|
| Subjects | Computing Applications in Health |
| Journal Section | Research Article |
| Authors | |
| Submission Date | December 4, 2025 |
| Acceptance Date | December 23, 2025 |
| Publication Date | December 31, 2025 |
| DOI | https://doi.org/10.26650/acin.1835775 |
| IZ | https://izlik.org/JA38PB99AH |
| Published in Issue | Year 2025 Volume: 9 Issue: 2 |