Research Article

Explainability Burden and Accountability of Organizational AI Decisions: A Blockchain Based Governance Model

Year 2026, Issue 89, 343-366, 30.04.2026
https://izlik.org/JA75RG97MP

Abstract

The rapid proliferation of artificial intelligence (AI) in organizational decision-making has sharpened the accountability crisis. In finance, healthcare, and human resources, AI systems increasingly shape high-risk decision outcomes, and stakeholders demand transparency about how decisions are made and stronger accountability for failures. Yet prevailing approaches to explainability remain limited by siloed responsibility structures, weak audit trails, and an under-specified "explainability burden": the labor of producing, maintaining, verifying, and assuring explanations for AI decisions. This conceptual article builds a framework for distributing that burden among key stakeholders (AI developers, data providers, process owners, and auditors), treating blockchain as an immutable governance infrastructure. Drawing on research in algorithmic accountability, institutional theory, and distributed governance, it proposes a formal model that quantifies explanation expectations as a function of risk exposure and capacity and makes the resulting allocations enforceable through smart contracts. The model regards explainability as a measurable burden that must be strategically allocated to secure transparency and legitimacy. I derive five testable propositions linking blockchain-based auditability to accountability outcomes, stakeholder trust, regulatory compliance, and organizational learning, and offer practical guidance for organizations navigating the EU AI Act, GDPR Article 22, and emerging AI governance regimes.
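To make the allocation idea concrete, the following minimal Python sketch shows one way the model's core mechanism could be expressed: the explainability burden is split across stakeholders in proportion to risk-weighted capacity, and each allocation record is chained to its predecessor by a hash, imitating an immutable ledger entry. The role names, example values, the multiplicative weighting rule, and the hash-chaining scheme are all illustrative assumptions, not the article's formal specification.

```python
# Illustrative sketch only: the weighting rule (risk_exposure * capacity),
# the role names, and the hash-chaining scheme are assumptions made for
# demonstration, not the formal model proposed in the article.
import hashlib
import json
from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str             # e.g., "AI developer", "data provider"
    risk_exposure: float  # share of decision risk borne by this role (0..1)
    capacity: float       # relative ability to produce/verify explanations (0..1)

def allocate_burden(stakeholders: list[Stakeholder], total_burden: float) -> dict[str, float]:
    """Split a total explainability burden in proportion to risk-weighted capacity."""
    weights = {s.name: s.risk_exposure * s.capacity for s in stakeholders}
    norm = sum(weights.values())
    return {name: total_burden * w / norm for name, w in weights.items()}

def audit_record(decision_id: str, allocation: dict[str, float], prev_hash: str) -> str:
    """Chain an allocation record to its predecessor, mimicking an append-only ledger entry."""
    payload = json.dumps(
        {"decision": decision_id, "allocation": allocation, "prev": prev_hash},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

roles = [
    Stakeholder("AI developer", risk_exposure=0.5, capacity=0.8),
    Stakeholder("data provider", risk_exposure=0.2, capacity=0.6),
    Stakeholder("process owner", risk_exposure=0.2, capacity=0.4),
    Stakeholder("auditor", risk_exposure=0.1, capacity=0.9),
]
shares = allocate_burden(roles, total_burden=100.0)  # e.g., 100 units of explanation work
print(shares)
print(audit_record("decision-0001", shares, prev_hash="0" * 64))
```

In the article's framing, such records would live on a blockchain, with smart contracts enforcing the agreed allocations; the stand-alone hash chain above only imitates that immutability.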

Ethics Statement

Not applicable.

Supporting Institution

Not applicable.

Acknowledgments

Not applicable.

References

  • Argote, L., & Miron-Spektor, E. (2011). Organizational learning: From experience to knowledge. Organization Science, 22(5), 1123-1137. https://doi.org/10.1287/orsc.1100.0621
  • Asif, R., Hassan, S. R., & Parr, G. (2023). Integrating a blockchain-based governance framework for responsible AI. Future Internet, 15(3), 97, 2-21. https://doi.org/10.3390/fi15030097
  • Atzei, N., Bartoletti, M., & Cimoli, T. (2017). A survey of attacks on Ethereum smart contracts (SoK). In M. Maffei & M. Ryan (Eds.), Principles of security and trust. POST 2017 (Lecture Notes in Computer Science, Vol. 10204, pp. 164-186). Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-54455-6_8
  • Banerjee, G., Dhar, S., Roy, S., Syed, R., & Das, A. (2024). Explainability and transparency in designing responsible AI applications in the enterprise. In N. Naik, P. Jenkins, S. Prajapat, & P. Grace (Eds.), Contributions presented at The International Conference on Computing, Communication, Cybersecurity and AI, July 3-4, 2024, London, UK. C3AI 2024 (Lecture Notes in Networks and Systems, Vol. 884, pp. 420-431). Springer, Cham. https://doi.org/10.1007/978-3-031-74443-3_25
  • Beck, R., Müller-Bloch, C., & King, J. L. (2018). Governance in the blockchain economy: A framework and research agenda. Journal of the Association for Information Systems, 19(10). https://doi.org/10.17705/1jais.00518
  • Bovens, M. (2007). Analysing and assessing accountability: A conceptual framework. European Law Journal, 13, 447-468. https://doi.org/10.1111/j.1468-0386.2007.00378.x
  • Buterin, V. (2014). A next-generation smart contract and decentralized application platform. Ethereum White Paper. https://ethereum.org/en/whitepaper/
  • Butt, U. A., Amin, R., Aldabbas, H., Mehmood, M., Shaukat, M. W., & Raza, S. M. (2023). Deploying blockchains to simplify AI algorithm auditing. In 2023 IEEE 8th International Conference on Engineering Technologies and Applied Sciences (ICETAS), Bahrain (pp. 1-6). https://doi.org/10.1109/ICETAS59148.2023.10346420
  • Christidis, K., & Devetsikiotis, M. (2016). Blockchains and smart contracts for the Internet of Things. IEEE Access, 4, 2292-2303. https://doi.org/10.1109/ACCESS.2016.2566339
  • Croman, K., Decker, C., Eyal, I., Gencer, A. E., Juels, A., Kosba, A., Miller, A., Saxena, P., Shi, E., Sirer, E. G., Song, D., & Wattenhofer, R. (2016). On scaling decentralized blockchains. In J. Clark, S. Meiklejohn, P. Ryan, D. Wallach, M. Brenner, & K. Rohloff (Eds.), Financial cryptography and data security. FC 2016 (Lecture Notes in Computer Science, Vol. 9604, pp. 106-125). Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-53357-4_8
  • De Bruijn, H., Warnier, M., & Janssen, M. (2022). The perils and pitfalls of explainable AI: Strategies for explaining algorithmic decision-making. Government Information Quarterly, 39(2), 101666, 1-8. https://doi.org/10.1016/j.giq.2021.101666
  • De Filippi, P., & Loveluck, B. (2016). The invisible politics of Bitcoin: Governance crisis of a decentralized infrastructure. Internet Policy Review, 5(3), 1-28. https://doi.org/10.14763/2016.3.427
  • Diakopoulos, N. (2016). Accountability in algorithmic decision making. Communications of the ACM, 59(2), 56-62. https://doi.org/10.1145/2844110
  • DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48(2), 147-160. https://doi.org/10.2307/2095101
  • Ehsan, U., & Riedl, M. O. (2020). Human-centered explainable AI: Towards a reflective sociotechnical approach. In C. Stephanidis, M. Kurosu, H. Degen, & L. Reinerman-Jones (Eds.), HCI International 2020—Late breaking papers: Multimodality and intelligence. HCII 2020 (Lecture Notes in Computer Science, Vol. 12424, pp. 449-466). Springer, Cham. https://doi.org/10.1007/978-3-030-60117-1_33
  • European Commission. (2021). Proposal for a regulation laying down harmonized rules on artificial intelligence (Artificial Intelligence Act). COM(2021) 206 final. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
  • European Parliament & Council. (2016). Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation). Official Journal of the European Union, L119/1. https://eur-lex.europa.eu/eli/reg/2016/679/oj
  • Gebru, T., Morgenstern, J., Vecchione, B., Wortman Vaughan, J., Wallach, H., Daumé III, H., & Crawford, K. (2021). Datasheets for datasets. Communications of the ACM, 64(12), 86-92. https://doi.org/10.1145/3458723
  • Gerlings, J. (2025). The relevance of explainable artificial intelligence (xAI) in high-risk decisions [PhD dissertation, Copenhagen Business School]. PhD Series No. 35.2025. https://doi.org/10.22439/phd.35.2025
  • Jarsania, P., Kumar, S., & Patel, R. (2025). TranspareGov-AI: A multi-stakeholder framework for auditable algorithmic decision-making in business processes. In 2025 IEEE International Conference on Artificial Intelligence for Learning and Optimization (ICoAILO), Bali, Indonesia (pp. 332-337). https://doi.org/10.1109/ICoAILO66760.2025.11156056
  • Kroll, J. A., Huey, J., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2017). Accountable algorithms. University of Pennsylvania Law Review, 165, 633-705. https://scholarship.law.upenn.edu/penn_law_review/vol165/iss3/3
  • Lage, I., Chen, E., He, J., Narayanan, M., Kim, B., Gershman, S. J., & Doshi-Velez, F. (2019). Human evaluation of models built for interpretability. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 7(1), 59-67. https://doi.org/10.1609/hcomp.v7i1.5280
  • Lipton, Z. C. (2018). The mythos of model interpretability. Communications of the ACM, 61(10), 36-43. https://doi.org/10.1145/3233231
  • Lumineau, F., Wang, W., & Schilke, O. (2021). Blockchain governance—A new way of organizing collaborations? Organization Science, 32(2), 500-521. https://doi.org/10.1287/orsc.2020.1379
  • Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30, 4765-4774. https://proceedings.neurips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html
  • Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38. https://doi.org/10.1016/j.artint.2018.07.007
  • Nakamoto, S. (2008). Bitcoin: A peer-to-peer electronic cash system. Bitcoin.org (pp. 1-9). https://bitcoin.org/bitcoin.pdf
  • Nissenbaum, H. (1996). Accountability in a computerized society. Science and Engineering Ethics, 2(1), 25-42. https://doi.org/10.1007/BF02639315
  • Ostrom, E. (2010). Beyond markets and states: Polycentric governance of complex economic systems. American Economic Review, 100(3), 641-672. https://doi.org/10.1257/aer.100.3.641
  • Parlak, B. (2025). Blockchain-assisted explainable decision traces (BAXDT): An approach for transparency and accountability in artificial intelligence systems. Knowledge-Based Systems, 307, 114402, 1-17. https://doi.org/10.1016/j.knosys.2025.114402
  • Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., & Barnes, P. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT’20). Association for Computing Machinery, New York, NY, USA (pp. 33-44). https://doi.org/10.1145/3351095.3372873
  • Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). Why should I trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD’16). Association for Computing Machinery, New York, NY, USA (pp. 1135-1144). https://doi.org/10.1145/2939672.2939778
  • Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1, 206-215. https://doi.org/10.1038/s42256-019-0048-x
  • Scott, W. R. (2014). Institutions and organizations: Ideas, interests, and identities (4th ed.). SAGE Publications.
  • Sculley, D., Holt, G., Golovin, D., Davydov, E., Phillips, T., Ebner, D., Chaudhary, V., Young, M., Crespo, J.-F., & Dennison, D. (2015). Hidden technical debt in machine learning systems. Advances in Neural Information Processing Systems, 28, 2503-2511. Curran Associates, Inc. https://proceedings.neurips.cc/paper/2015/hash/86df7dcfd896fcaf2674f757a2463eba-Abstract.html
  • Shokri, R., Stronati, M., Song, C., & Shmatikov, V. (2017). Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA (pp. 3-18). https://doi.org/10.1109/SP.2017.41
  • Suchman, M. C. (1995). Managing legitimacy: Strategic and institutional approaches. The Academy of Management Review, 20(3), 571-610. https://doi.org/10.2307/258788
  • Szabo, N. (1997). Formalizing and securing relationships on public networks. First Monday, 2(9). https://doi.org/10.5210/fm.v2i9.548
  • Tramèr, F., Zhang, F., Juels, A., Reiter, M. K., & Ristenpart, T. (2016). Stealing machine learning models via prediction APIs. In 25th USENIX Security Symposium (pp. 601-618). USENIX Association. https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/tramer
  • Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76-99. https://doi.org/10.1093/idpl/ipx005
  • Wieringa, M. (2020). What to account for when accounting for algorithms: A systematic literature review on algorithmic accountability. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT ’20). Association for Computing Machinery, New York, NY, USA (pp. 1-18). https://doi.org/10.1145/3351095.3372833

Details

Primary Language: English
Subjects: Technology Management, Business Administration, Organization and Management Theory, Organizational Culture
Section: Research Article
Authors

Arif Yıldırım (ORCID: 0000-0002-4446-4865)

Submission Date: 1 February 2026
Acceptance Date: 29 April 2026
Publication Date: 30 April 2026
IZ: https://izlik.org/JA75RG97MP
Published Issue: Year 2026, Issue 89

How to Cite

APA Yıldırım, A. (2026). Explainability Burden and Accountability of Organizational AI Decisions: A Blockchain Based Governance Model. Dumlupınar Üniversitesi Sosyal Bilimler Dergisi, 89, 343-366. https://izlik.org/JA75RG97MP
AMA 1. Yıldırım A. Explainability Burden and Accountability of Organizational AI Decisions: A Blockchain Based Governance Model. Dumlupınar Üniversitesi Sosyal Bilimler Dergisi. 2026;(89):343-366. https://izlik.org/JA75RG97MP
Chicago Yıldırım, Arif. 2026. "Explainability Burden and Accountability of Organizational AI Decisions: A Blockchain Based Governance Model". Dumlupınar Üniversitesi Sosyal Bilimler Dergisi, no. 89: 343-66. https://izlik.org/JA75RG97MP.
EndNote Yıldırım A (01 April 2026) Explainability Burden and Accountability of Organizational AI Decisions: A Blockchain Based Governance Model. Dumlupınar Üniversitesi Sosyal Bilimler Dergisi 89 343–366.
IEEE [1] A. Yıldırım, "Explainability Burden and Accountability of Organizational AI Decisions: A Blockchain Based Governance Model", Dumlupınar Üniversitesi Sosyal Bilimler Dergisi, no. 89, pp. 343–366, Apr. 2026. [Online]. Available: https://izlik.org/JA75RG97MP
ISNAD Yıldırım, Arif. "Explainability Burden and Accountability of Organizational AI Decisions: A Blockchain Based Governance Model". Dumlupınar Üniversitesi Sosyal Bilimler Dergisi 89 (01 April 2026): 343-366. https://izlik.org/JA75RG97MP.
JAMA 1. Yıldırım A. Explainability Burden and Accountability of Organizational AI Decisions: A Blockchain Based Governance Model. Dumlupınar Üniversitesi Sosyal Bilimler Dergisi. 2026;(89):343–366.
MLA Yıldırım, Arif. "Explainability Burden and Accountability of Organizational AI Decisions: A Blockchain Based Governance Model". Dumlupınar Üniversitesi Sosyal Bilimler Dergisi, no. 89, April 2026, pp. 343-66, https://izlik.org/JA75RG97MP.
Vancouver 1. Yıldırım A. Explainability Burden and Accountability of Organizational AI Decisions: A Blockchain Based Governance Model. Dumlupınar Üniversitesi Sosyal Bilimler Dergisi [Internet]. 01 April 2026;(89):343-66. Available from: https://izlik.org/JA75RG97MP

Our journal is an international peer-reviewed journal indexed in EBSCOhost, the ULAKBİM Social Sciences Database, SOBİAD, and the Turkish Education Index.