Research Article

Hermai: A Hermeneutic Framework for Context-Aware and Narrative Explainable AI

Volume: 6, Number: 1, May 3, 2026


Abstract

The increasing complexity of machine learning models has created a critical need for explainable artificial intelligence (XAI). While prominent post-hoc explanation frameworks like LIME and SHAP have provided foundational tools for model-agnostic interpretation, they often produce atomic feature-attribution scores that lack contextual realism and fail to provide holistic, actionable narratives. This paper introduces Hermai, a novel open-source Python framework designed to bridge this gap. Hermai is founded on the philosophical principles of hermeneutics, positing that true understanding arises from a dialogue between the parts (local predictions) and the whole (global model behavior). The framework makes three primary contributions: (1) a context-aware perturbation engine that generates plausible data points by respecting feature correlations, moving beyond the feature independence assumption of simpler methods; (2) a dual-mode explanatory system that generates both local, counterfactual-driven narratives for single predictions and general, rule-based explanations of the model's overall logic using surrogate models; and (3) the synthesis of these elements into human-readable, narrative explanations. We demonstrate Hermai's architecture, its practical application on a classic dataset, and argue that its hermeneutic approach offers a more intuitive, coherent, and trustworthy method for interpreting complex models.
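The abstract's first two contributions, correlation-respecting perturbation and surrogate-based global explanation, can be illustrated with a minimal sketch. This is not Hermai's actual API; it is a hypothetical illustration of the underlying ideas, assuming NumPy and scikit-learn, with a multivariate Gaussian standing in for the framework's context-aware perturbation engine and a shallow decision tree standing in for its rule-based surrogate.

```python
# Hypothetical sketch (not Hermai's API): correlation-aware perturbation
# plus a shallow surrogate tree, assuming numpy and scikit-learn.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# (1) Context-aware perturbation: sample neighbours from a multivariate
# Gaussian whose covariance is estimated from the training data, so that
# feature correlations are preserved, rather than perturbing each
# feature independently as simpler methods do.
rng = np.random.default_rng(0)
cov = np.cov(X, rowvar=False)          # empirical feature covariance
instance = X[0]                        # the prediction to explain
neighbours = rng.multivariate_normal(instance, 0.25 * cov, size=500)

# (2) Surrogate explanation: fit an interpretable tree to the black box's
# predictions on the perturbed sample, approximating its local logic
# with human-readable rules.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(neighbours, black_box.predict(neighbours))

# Fidelity: how closely the surrogate mimics the black box here.
fidelity = surrogate.score(neighbours, black_box.predict(neighbours))
print(f"surrogate fidelity: {fidelity:.2f}")
```

In this sketch the scaled covariance (`0.25 * cov`) and the sample size are arbitrary choices; the paper's actual perturbation engine and narrative-generation step are not reproduced here.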

References

[1] Adadi, A., & Berrada, M. (2018). Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access, 6, 52138-52160. https://doi.org/10.1109/ACCESS.2018.2870052
[2] Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why Should I Trust You?": Explaining the Predictions of Any Classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135-1144. https://doi.org/10.1145/2939672.2939778
[3] Lundberg, S. M., & Lee, S.-I. (2017). A Unified Approach to Interpreting Model Predictions. Advances in Neural Information Processing Systems, 30 (NIPS 2017).
[4] Molnar, C. (2020). Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Lulu.com.
[5] Gadamer, H.-G. (1989). Truth and Method (2nd, rev. ed.). (J. Weinsheimer & D. G. Marshall, Trans.). Sheed & Ward.
[6] European Union. (2024). Artificial Intelligence Act. Official Journal of the European Union.
[7] Sundararajan, M., Taly, A., & Yan, Q. (2017). Axiomatic Attribution for Deep Networks. Proceedings of the 34th International Conference on Machine Learning, 70, 3319-3328.
[8] Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31(2), 841-887.

Details

Primary Language

English

Subjects

Knowledge Representation and Reasoning

Journal Section

Research Article

Publication Date

May 3, 2026

Submission Date

October 3, 2025

Acceptance Date

April 24, 2026

Published in Issue

Year 2026, Volume 6, Number 1

APA
Şeker, Ş. E., & Arslanhan, Z. (2026). Hermai: A Hermeneutic Framework for Context-Aware and Narrative Explainable AI. Artificial Intelligence Theory and Applications, 6(1), 56-74. https://izlik.org/JA79HS46RD
AMA
1. Şeker ŞE, Arslanhan Z. Hermai: A Hermeneutic Framework for Context-Aware and Narrative Explainable AI. AITA. 2026;6(1):56-74. https://izlik.org/JA79HS46RD
Chicago
Şeker, Şadi Evren, and Zümra Arslanhan. 2026. “Hermai: A Hermeneutic Framework for Context-Aware and Narrative Explainable AI”. Artificial Intelligence Theory and Applications 6 (1): 56-74. https://izlik.org/JA79HS46RD.
EndNote
Şeker ŞE, Arslanhan Z (May 1, 2026) Hermai: A Hermeneutic Framework for Context-Aware and Narrative Explainable AI. Artificial Intelligence Theory and Applications 6 1 56–74.
IEEE
[1] Ş. E. Şeker and Z. Arslanhan, “Hermai: A Hermeneutic Framework for Context-Aware and Narrative Explainable AI”, AITA, vol. 6, no. 1, pp. 56–74, May 2026, [Online]. Available: https://izlik.org/JA79HS46RD
ISNAD
Şeker, Şadi Evren - Arslanhan, Zümra. “Hermai: A Hermeneutic Framework for Context-Aware and Narrative Explainable AI”. Artificial Intelligence Theory and Applications 6/1 (May 1, 2026): 56-74. https://izlik.org/JA79HS46RD.
JAMA
1. Şeker ŞE, Arslanhan Z. Hermai: A Hermeneutic Framework for Context-Aware and Narrative Explainable AI. AITA. 2026;6:56–74.
MLA
Şeker, Şadi Evren, and Zümra Arslanhan. “Hermai: A Hermeneutic Framework for Context-Aware and Narrative Explainable AI”. Artificial Intelligence Theory and Applications, vol. 6, no. 1, May 2026, pp. 56-74, https://izlik.org/JA79HS46RD.
Vancouver
1. Şadi Evren Şeker, Zümra Arslanhan. Hermai: A Hermeneutic Framework for Context-Aware and Narrative Explainable AI. AITA [Internet]. 2026 May 1;6(1):56-74. Available from: https://izlik.org/JA79HS46RD