Review

Technical, Ethical, And Governance Approaches in Artificial Intelligence-Based Decision Support Systems: A Multi-Sectoral Review

Year 2025, Volume 9, Issue 1, 29–48, 30.06.2025
https://doi.org/10.33461/uybisbbd.1691218

Abstract

This review systematically examines the technical, ethical, and governance dimensions of artificial intelligence-based decision support systems (AI-DSS). Applications of AI-DSS across critical sectors such as healthcare, finance, manufacturing/logistics, and environmental management are analyzed in terms of data quality, explainability, fairness, and accountability. The bibliometric foundation of the study rests on 4,824 bibliographic records retrieved from the Web of Science, Scopus, and TR Dizin databases using the open-source BiBLoX software developed by Kesgin and Ozer (2025). Following the PRISMA protocol, a multi-stage screening process was applied, yielding a refined corpus of 1,273 publications for detailed analysis. The findings highlight explainable AI (XAI) frameworks, federated learning-based privacy solutions, and fairness-aware decision mechanisms as key approaches to enhancing system reliability. Interdisciplinary collaboration, ethical oversight, and institutional governance structures are likewise emphasized as critical enablers of the responsible and sustainable development of AI-DSS. The study thus contributes a conceptual and practical perspective to the design of future AI-based decision support systems.
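The multi-stage screening described in the abstract (4,824 retrieved records narrowed to 1,273 included publications) can be thought of as a deduplicate-then-filter pipeline. The sketch below is a minimal illustration of that idea only; the record fields, filter terms, and toy data are assumptions for demonstration, not the authors' actual BiBLoX workflow or the review's inclusion criteria.

```python
def deduplicate(records):
    """Drop records sharing a DOI (fallback: normalized title), keeping the first."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or rec["title"].strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

def screen(records, required_terms):
    """Keep records whose title or abstract mentions any required term."""
    kept = []
    for rec in records:
        text = (rec.get("title", "") + " " + rec.get("abstract", "")).lower()
        if any(term in text for term in required_terms):
            kept.append(rec)
    return kept

# Toy corpus: one duplicate record and one off-topic record.
records = [
    {"doi": "10.1/a", "title": "XAI for clinical decision support", "abstract": ""},
    {"doi": "10.1/a", "title": "XAI for clinical decision support", "abstract": ""},
    {"doi": "10.1/b", "title": "Crop yield forecasting", "abstract": "weather models"},
]
unique = deduplicate(records)                      # duplicate DOI removed
included = screen(unique, ["decision support"])    # off-topic record excluded
print(len(records), len(unique), len(included))    # prints: 3 2 1
```

In a real PRISMA workflow, the keyword screen would be followed by manual title/abstract and full-text review stages, with exclusion counts recorded at each step.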

References

  • Antoniadi, A. M., Nam, J., Bae, J., & Hwang, T. J. (2021). Current challenges and future opportunities for explainable artificial intelligence in medical imaging. Applied Sciences, 11(11), 5088. https://doi.org/10.3390/app11115088
  • Brnabic, A., & Hess, L. (2021). Systematic literature review of machine learning methods used in the analysis of real-world data for patient-provider decision making. BMC Medical Informatics and Decision Making, 21(1), Article 54. https://doi.org/10.1186/s12911-021-01403-2
  • Challen, R., Denny, J., Pitt, M., Gompels, L., Edwards, T., & Tsaneva-Atanasova, K. (2019). Artificial intelligence, bias and clinical safety. BMJ Quality & Safety, 28(3), 231–237. https://doi.org/10.1136/bmjqs-2018-008370
  • Chen, Y., Zhang, X., & Zhang, J. (2023). Adversarial fairness-aware federated learning for personalized healthcare. Applied Sciences, 13(18), 10258. https://doi.org/10.3390/app131810258
  • Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20, 273–297. https://doi.org/10.1007/BF00994018
  • Donati, O., Macario, M., ve Karim, M. H. (2024). Event-Driven AI Workflows in Serverless Computing: Enabling Real-Time Data Processing and Decision-Making. Preprints. https://www.preprints.org/frontend/manuscript/2022188b96eaebc099930f462849fb5f/download_pub
  • Durkin, J. (1994). Expert systems: Design and development. Macmillan.
  • Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
  • Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press.
  • Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2019). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), Article 93. https://doi.org/10.1145/3236009
  • Holzinger, A., Biemann, C., Pattichis, C. S., & Kell, D. B. (2019). What do we need to build explainable AI systems for the medical domain? arXiv preprint. https://arxiv.org/abs/1712.09923
  • Javaid, M., Haleem, A., Singh, R. P., & Suman, R. (2022). Role of artificial intelligence in smart manufacturing and logistics. International Journal of Industrial Engineering, 29(3), 355–367. https://doi.org/10.1142/S2424862221300040
  • Jurafsky, D., & Martin, J. H. (2020). Speech and language processing (3rd ed.). Pearson.
  • Kaur, H., Sharma, A., & Singh, R. (2022). Addressing fairness and bias in AI-based clinical DSS. ACM Transactions on Management Information Systems, 13(2), 1–23. https://doi.org/10.1145/3491209
  • Kesgin, K., & Ozer, D. (2025). BiBLoX: A flask-based automatic bibliometrics and machine learning system for scientific trend prediction. Authorea Preprint. https://www.authorea.com/doi/full/10.22541/au.174106668.81840735
  • Kose, U., Deperlioglu, O., Alzubi, J., & Patrut, B. (2021). Deep learning for medical decision support systems. Springer. https://doi.org/10.1007/978-981-15-6325-6
  • Kostopoulos, G., Giannopoulos, V., & Sarigiannidis, P. (2024). Explainability and transparency in deep learning DSS: A healthcare perspective. Electronics, 13(14), 2842. https://doi.org/10.3390/electronics13142842
  • Liao, Q. V., & Varshney, K. R. (2021). Human-centered explainable AI (XAI): From algorithms to user experiences. arXiv preprint. https://arxiv.org/abs/2110.10790
  • Liu, B. (2012). Sentiment analysis and opinion mining. Morgan & Claypool Publishers.
  • Rajkomar, A., Dean, J., & Kohane, I. (2018). Machine learning in medicine. npj Digital Medicine, 1, 18. https://doi.org/10.1038/s41746-018-0029-1
  • Rieke, N., Hancox, J., Li, W., Milletari, F., Roth, H. R., Albarqouni, S., … Cardoso, M. J. (2020). The future of digital health with federated learning. npj Digital Medicine, 3, 119. https://doi.org/10.1038/s41746-020-00323-1
  • Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1, 206–215. https://doi.org/10.1038/s42256-019-0048-x
  • Samek, W., Wiegand, T., & Müller, K.-R. (2017). Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv preprint. https://arxiv.org/abs/1708.08296
  • Sharda, R., Delen, D., & Turban, E. (2020). Analytics, data science, & artificial intelligence: Systems for decision support (11th ed.). Pearson.
  • Shortliffe, E. H. (1976). Computer-based medical consultations: MYCIN. Elsevier.
  • Topol, E. (2019). High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25, 44–56. https://doi.org/10.1038/s41591-018-0300-7
  • Turban, E., Sharda, R., & Delen, D. (2011). Decision support and business intelligence systems (9th ed.). Pearson Education.
  • Zhou, Y., Li, H., Xiao, Z., & Qiu, J. (2023). A user-centered explainable artificial intelligence approach for financial fraud detection. Finance Research Letters, 58, 104309. https://doi.org/10.1016/j.frl.2023.104309

Yapay Zekâ Tabanlı Karar Destek Sistemlerinde Teknik, Etik ve Yönetişimsel Yaklaşımlar: Çok Sektörlü Bir Derleme Çalışması



Details

Primary Language Turkish
Subjects Decision Support and Group Support Systems; Machine Learning (Other)
Section Review Articles
Authors

Kadir Kesgin 0000-0001-5973-8622

Publication Date 30 June 2025
Submission Date 4 May 2025
Acceptance Date 9 June 2025
Published Issue Year 2025, Volume 9, Issue 1

Cite

APA Kesgin, K. (2025). Yapay Zekâ Tabanlı Karar Destek Sistemlerinde Teknik, Etik ve Yönetişimsel Yaklaşımlar: Çok Sektörlü Bir Derleme Çalışması. International Journal of Management Information Systems and Computer Science, 9(1), 29-48. https://doi.org/10.33461/uybisbbd.1691218