Research Article

Explainable Graph Neural Networks in Intensive Care Unit Mortality Prediction: Edge-Level and Motif-Level Analysis

Year 2025, Volume: 9 Issue: 2, 770 - 789, 31.12.2025
https://doi.org/10.26650/acin.1835775
https://izlik.org/JA38PB99AH

Abstract

Accurate forecasting of intensive care unit (ICU) patient outcomes is essential for clinical decision support. However, most high-performing machine learning models function as black boxes, limiting their interpretability and clinical adoption. This study introduces a graph-based explainable risk-prediction framework in which patient–patient relations are modeled through a diagnosis-based similarity network. An undirected graph was derived from the eICU-CRD demo subset (PhysioNet v2.0) by linking patients who share three-digit ICD-9 categories, and a graph convolutional network (GCN) was trained for in-hospital mortality prediction. Despite the dataset’s modest size and class imbalance, the model achieved meaningful discrimination (AUROC = 0.708; AUPRC = 0.308). A two-layer explainability analysis was applied to clarify the model’s decision process. At the edge level, GNNExplainer showed that each prediction was driven by a combination of patient-specific clinical attributes and signals from a small number of influential neighbors. At the motif level, SubgraphX, a Shapley-value-based motif discovery method, identified compact and clinically coherent subgraphs with strong causal influence on the prediction. Consistency between the edge- and motif-level explanations indicated that the model relies on stable, clinically relevant relational patterns. These findings suggest that integrating graph neural networks (GNNs) with structured explainability methods can transform a single risk score into a transparent, data-driven decision-support mechanism that provides interpretable, hypothesis-generating insights to clinicians.
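To make the pipeline described above concrete, the sketch below is a minimal illustration, not the authors’ released code. It assumes PyTorch Geometric as the GNN library and substitutes toy patient data (hypothetical ICD-9 sets, random features, placeholder labels, arbitrary layer sizes and hyperparameters) for the eICU-CRD demo subset. It shows (1) construction of the undirected diagnosis-similarity graph by linking patients who share a three-digit ICD-9 category, (2) a two-layer GCN for node-level mortality prediction, and (3) an edge-level explanation with GNNExplainer. The motif-level SubgraphX step is not sketched here; an implementation is available, for example, in the DIG (Dive into Graphs) library.

# Minimal, illustrative sketch of the described pipeline (assumptions noted above).
import itertools
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv
from torch_geometric.explain import Explainer, GNNExplainer

# 1. Diagnosis-based patient similarity graph (toy stand-in for the eICU-CRD demo).
#    Each patient has a set of three-digit ICD-9 categories (hypothetical values).
patient_dx = {
    0: {"428", "584"},   # heart failure, acute kidney failure
    1: {"428", "038"},   # heart failure, septicemia
    2: {"038", "584"},
    3: {"486"},          # pneumonia
}
edges = [
    (i, j)
    for i, j in itertools.combinations(patient_dx, 2)
    if patient_dx[i] & patient_dx[j]          # shared ICD-9 category => undirected edge
]
edge_index = torch.tensor(edges + [(j, i) for i, j in edges],
                          dtype=torch.long).t().contiguous()

x = torch.randn(len(patient_dx), 16)          # placeholder clinical feature vectors
y = torch.tensor([1, 1, 0, 0])                # placeholder in-hospital mortality labels
data = Data(x=x, edge_index=edge_index, y=y)

# 2. Two-layer GCN for node-level (per-patient) mortality prediction.
class MortalityGCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim=32, n_classes=2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, n_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return F.log_softmax(self.conv2(h, edge_index), dim=-1)

model = MortalityGCN(in_dim=data.num_node_features)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(100):                          # abbreviated training loop
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    F.nll_loss(out, data.y).backward()
    optimizer.step()

# 3. Edge-level explanation with GNNExplainer for one patient (node 0).
explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=200),
    explanation_type="model",
    node_mask_type="attributes",
    edge_mask_type="object",
    model_config=dict(mode="multiclass_classification",
                      task_level="node", return_type="log_probs"),
)
explanation = explainer(data.x, data.edge_index, index=0)
print(explanation.edge_mask)                  # importance of each neighboring edge
print(explanation.node_mask)                  # importance of each input feature

In a sketch like this, the edge mask plays the role of the paper’s edge-level analysis (which neighbors influenced a given patient’s prediction), while a motif-level method such as SubgraphX would instead score whole connected subgraphs via Shapley values.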

References

  • Adadi, A., & Berrada, M. (2018). Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access, 6, 52138–52160. https://doi.org/10.1109/ACCESS.2018.2870052
  • Breiman, L. (2001). Statistical Modeling: The Two Cultures. Statistical Science, 16(3), 199–215. https://doi.org/10.1214/ss/1009213726
  • Buchanan, B. G., & Shortliffe, E. H. (Eds.). (1984). Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Addison-Wesley. Retrieved from https://www.shortliffe.net/Buchanan-Shortliffe-1984/MYCIN%20Book.htm
  • Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., & Elhadad, N. (2015). Intelligible models for healthcare: Predicting the risk of pneumonia and 30-day hospital readmission. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1721–1730. https://doi.org/10.1145/2783258.2788613
  • Goldberger, A. L., Amaral, L. A., Glass, L., Hausdorff, J. M., Ivanov, P. C., Mark, R. G., … Stanley, H. E. (2000). PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation, 101(23). https://doi.org/10.1161/01.CIR.101.23.e215
  • Lin, K.-W., Kuo, Y.-C., Wang, H.-Y., & Tseng, Y.-J. (2025). KAT-GNN: A Knowledge-Augmented Temporal Graph Neural Network for Risk Prediction in Electronic Health Records. arXiv preprint. Retrieved from https://arxiv.org/pdf/2511.01249
  • Lu, H., & Uddin, S. (2021). A weighted patient network-based framework for predicting chronic diseases using graph neural networks. Scientific Reports, 11(1), 22607. https://doi.org/10.1038/s41598-021-01964-2
  • Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., … Lee, S.-I. (2020). From local explanations to global understanding with explainable AI for trees. Nature Machine Intelligence, 2(1), 56–67. https://github.com/suinleelab/treeexplainer-study
  • Mao, C., Yao, L., & Luo, Y. (2022). MedGCN: Medication recommendation and lab test imputation via graph convolutional networks. Journal of Biomedical Informatics, 127, 104000. https://doi.org/10.1016/j.jbi.2022.104000
  • Pollard, T. J., Johnson, A. E. W., Raffa, J. D., Celi, L. A., Mark, R. G., & Badawi, O. (2018). The eICU Collaborative Research Database, a freely available multi-center database for critical care research. Scientific Data, 5, 180178. https://doi.org/10.1038/sdata.2018.178
  • Sağıroğlu, Ş., & Demirezen, M. U. (2022). Yapay Zekâ ve Büyük Veri Kitap Serisi 4: Yorumlanabilir ve Açıklanabilir Yapay Zekâ ve Güncel Konular [Artificial Intelligence and Big Data Book Series 4: Interpretable and Explainable Artificial Intelligence and Current Topics].
  • Sahu, M. K., & Roy, P. (2025). Similarity-Based Self-Construct Graph Model for Predicting Patient Criticalness Using Graph Neural Networks and Electronic Health Record Data. arXiv preprint. Retrieved from https://arxiv.org/pdf/2508.00615
  • Shapley, L. S. (1952). A value for n-person games. The Shapley Value, 307–317. https://doi.org/10.1017/CBO9780511528446.003
  • Tjoa, E., & Guan, C. (2021). A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI. IEEE Transactions on Neural Networks and Learning Systems, 32(11), 4793–4813. https://doi.org/10.1109/TNNLS.2020.3027314
  • Tong, C., Rocheteau, E., Veličković, P., Lane, N., & Liò, P. (2021). Predicting Patient Outcomes Using Graph Representation Learning. Studies in Computational Intelligence, 1013, 281–293. https://doi.org/10.1007/978-3-030-93080-6_20
  • Van Noorden, R., & Perkel, J. M. (2023). AI and science: What 1,600 researchers think. Nature, 621(7980), 672–675. https://doi.org/10.1038/d41586-023-02980-0
  • Watson, D. S. (2022). Statistics of Interpretable Machine Learning. Digital Ethics Lab Yearbook, 133–155. https://doi.org/10.1007/978-3-031-09846-8_10
  • Ying, R., Bourgeois, D., You, J., Zitnik, M., & Leskovec, J. (2019). GNNExplainer: Generating Explanations for Graph Neural Networks. Advances in Neural Information Processing Systems, 32. Retrieved from https://arxiv.org/pdf/1903.03894
  • Yuan, H., Yu, H., Wang, J., Li, K., & Ji, S. (2021). On Explainability of Graph Neural Networks via Subgraph Explorations. Proceedings of the 38th International Conference on Machine Learning (ICML 2021).

There are 19 citations in total.

Details

Primary Language English
Subjects Computing Applications in Health
Journal Section Research Article
Authors

Şebnem Akal (ORCID: 0000-0001-8239-2957)

Submission Date December 4, 2025
Acceptance Date December 23, 2025
Publication Date December 31, 2025
DOI https://doi.org/10.26650/acin.1835775
IZ https://izlik.org/JA38PB99AH
Published in Issue Year 2025 Volume: 9 Issue: 2

Cite

APA Akal, Ş. (2025). Explainable Graph Neural Networks in Intensive Care Unit Mortality Prediction: Edge-Level and Motif-Level Analysis. Acta Infologica, 9(2), 770-789. https://doi.org/10.26650/acin.1835775
AMA 1. Akal Ş. Explainable Graph Neural Networks in Intensive Care Unit Mortality Prediction: Edge-Level and Motif-Level Analysis. ACIN. 2025;9(2):770-789. doi:10.26650/acin.1835775
Chicago Akal, Şebnem. 2025. “Explainable Graph Neural Networks in Intensive Care Unit Mortality Prediction: Edge-Level and Motif-Level Analysis”. Acta Infologica 9 (2): 770-89. https://doi.org/10.26650/acin.1835775.
EndNote Akal Ş (December 1, 2025) Explainable Graph Neural Networks in Intensive Care Unit Mortality Prediction: Edge-Level and Motif-Level Analysis. Acta Infologica 9 2 770–789.
IEEE [1] Ş. Akal, “Explainable Graph Neural Networks in Intensive Care Unit Mortality Prediction: Edge-Level and Motif-Level Analysis”, ACIN, vol. 9, no. 2, pp. 770–789, Dec. 2025, doi: 10.26650/acin.1835775.
ISNAD Akal, Şebnem. “Explainable Graph Neural Networks in Intensive Care Unit Mortality Prediction: Edge-Level and Motif-Level Analysis”. Acta Infologica 9/2 (December 1, 2025): 770-789. https://doi.org/10.26650/acin.1835775.
JAMA 1. Akal Ş. Explainable Graph Neural Networks in Intensive Care Unit Mortality Prediction: Edge-Level and Motif-Level Analysis. ACIN. 2025;9:770–789.
MLA Akal, Şebnem. “Explainable Graph Neural Networks in Intensive Care Unit Mortality Prediction: Edge-Level and Motif-Level Analysis”. Acta Infologica, vol. 9, no. 2, Dec. 2025, pp. 770-89, doi:10.26650/acin.1835775.
Vancouver 1. Akal Ş. Explainable Graph Neural Networks in Intensive Care Unit Mortality Prediction: Edge-Level and Motif-Level Analysis. ACIN [Internet]. 2025 Dec. 1;9(2):770-89. Available from: https://izlik.org/JA38PB99AH