Research Article

From Invisible Threat to Social Problem: Assessing the Economic and Societal Impacts of Artificial Intelligence Bias

Year 2025, Volume: 11, Issue: 2, 17-28, 31.12.2025

Abstract

Artificial intelligence-based systems are firmly embedded in our daily lives. AI technologies are increasingly used in decision-making processes across many sectors, including economic and financial transactions, healthcare, education, justice, and law enforcement. The hidden biases these systems carry harm public life and erode public trust in technology. This study aims to clarify the causes of the biases produced by artificial intelligence systems, the effects they create, and the technical and sociopolitical measures that can be taken against them. The findings of the study, which examines AI bias in theory and practice, are as follows: AI bias leads to social injustice and discrimination in the decisions taken. An effective policy for combating AI bias requires a holistic approach encompassing both technical and ethical considerations. The quality and representativeness of data should be improved through high-quality, diversified datasets. At the same time, ethical guidelines and regulatory standards governing AI systems should be established. Interdisciplinary cooperation at the national and international levels should be pursued in setting standards and evaluating algorithms.
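The abstract's recommendation to evaluate algorithms for bias can be made concrete with a small illustration. The sketch below is not taken from the article: the metric shown (demographic parity difference, one common group-fairness measure) and the toy loan-decision data are assumptions chosen purely for illustration.

```python
# Illustrative only: one simple way to quantify bias in an automated
# decision system. The metric and data here are assumptions of this
# sketch, not methods or results from the article.

def demographic_parity_difference(decisions, groups):
    """Absolute gap in positive-decision rates between two groups.

    decisions: list of 0/1 outcomes (1 = favourable decision)
    groups:    list of group labels ("A" or "B"), aligned with decisions
    """
    rates = {}
    for g in ("A", "B"):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return abs(rates["A"] - rates["B"])

# Hypothetical loan decisions: group A approved 3/4, group B approved 1/4.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, groups))  # 0.5
```

A value of 0 would mean both groups receive favourable decisions at the same rate; the larger the gap, the stronger the disparity an audit would need to explain or correct.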

References

  • Baeza-Yates, R. and Murgai, L. (2024), "Bias and the Web", Introduction to Digital Humanism, Editors: Hannes Werthner, Carlo Ghezzi, Jeff Kramer, Julian Nida-Rümelin, Bashar Nuseibeh, Erich Prem, Allison Stanger, (435-462), Springer, India.
  • Bai, T., Luo, J., Zhao, J., Wen, B. and Wang, Q. (2021), "Recent Advances in Adversarial Training for Adversarial Robustness", Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21), https://www.ijcai.org/proceedings/2021/0591.pdf (accessed 05.11.2025).
  • Bansal, C., Pandey, K. K., Goel, R., Jangirala, S. and Sharma, A. (2023), "Artificial Intelligence (AI) Bias Impacts: Classification Framework for Effective Mitigation", Issues in Information Systems, 24(4): 367-389.
  • Butvinik, D. (25.07.2022), "Bias and Fairness of AI-Based Systems within Financial Crime", NICE Actimize, https://www.niceactimize.com/blog/fraud-bias-and-fairness-of-ai-based-systems-within-financial-crime/ (accessed 05.09.2025).
  • Chadha, K. S. (2024), "Bias and Fairness in Artificial Intelligence: Methods and Mitigation Strategies", International Journal for Research Publication and Seminar, 15(3): 36-49.
  • Chen, Y., Clayton, E. W., Novak, L. L., Anders, S. and Malin, B. (2023), "Human-Centered Design to Address Biases in Artificial Intelligence", Journal of Medical Internet Research, 25: 1-10.
  • Danaher, J. (2020), "Freedom in an Age of Algocracy", Oxford Handbook of Philosophy of Technology, Editor: Shannon Vallor, (250-272), Oxford University Press, Oxford, UK.
  • De, S., Jangra, S., Agarwal, V., Johnson, J. and Sastry, N. (2023), "Biases and Ethical Considerations for Machine Learning Pipelines in the Computational Social Sciences", Ethics in Artificial Intelligence: Bias, Fairness and Beyond, Editors: Animesh Mukherjee, Juhi Kulshrestha, Abhijnan Chakraborty, Srijan Kumar, (99-113), Springer.
  • Diana, E. (17.07.2025), "Building AI Fairness by Reducing Algorithmic Bias", Tepperspectives, Tepper School of Business at Carnegie Mellon University, https://tepperspectives.cmu.edu/all-articles/building-ai-fairness-by-reducing-algorithmic-bias/ (accessed 05.10.2025).
  • Dube, R. and Shafana, J. N. (2021), "Bias in Artificial Intelligence and Machine Learning", Bioscience Biotechnology Research Communications, Special Issue 14(9): 227-234.
  • Ferrara, E. (2024), "Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies", Sci, 6(1): 1-15.
  • FRA (European Union Agency for Fundamental Rights) (2022), Bias in Algorithms - Artificial Intelligence and Discrimination, Report, https://fra.europa.eu/sites/default/files/fra_uploads/fra-2022-bias-in-algorithms_en.pdf (accessed 05.10.2025).
  • González Sendino, R., Serrano, E., Bajo, J. and Novais, P. (2024), "A Review of Bias and Fairness in Artificial Intelligence", International Journal of Interactive Multimedia and Artificial Intelligence, 9(1): 5-17.
  • Gray, M., Samala, R., Liu, Q., Skiles, D., Xu, J., Tong, W. and Wu, L. (2024), "Measurement and Mitigation of Bias in Artificial Intelligence: A Narrative Literature Review for Regulatory Science", Clinical Pharmacology & Therapeutics, 115(4): 687-697.
  • Gulsoy, M., Yalcin, E., Tacli, Y. and Bilge, A. (2025), "DUoR: Dynamic User-oriented re-Ranking Calibration Strategy for Popularity Bias Treatment of Recommendation Algorithms", International Journal of Human-Computer Studies, 203: 103578.
  • Hanna, M. G., Pantanowitz, L., Jackson, B., Palmer, O., Visweswaran, S., Pantanowitz, J., Deebajah, M. and Rashidi, H. H. (2025), "Ethical and Bias Considerations in Artificial Intelligence/Machine Learning", Modern Pathology, 38: 1-13.
  • Heisler, N. and Grossman, M. R. (2024), Standards for the Control of Algorithmic Bias: The Canadian Administrative Context, CRC Press, USA.
  • Iddenden, G. (20.03.2025), "Algorithmic Gatekeepers: The Hidden Bias in AI Payments", The Payments Association, https://thepaymentsassociation.org/article/algorithmic-gatekeepers-the-hidden-bias-in-ai-payments/ (accessed 02.10.2025).
  • Kharitonova, Y., Savina, V. S. and Pagnini, F. (2021), "Artificial Intelligence's Algorithmic Bias: Ethical and Legal Issues", Perm University Herald. Juridical Sciences, 53: 488-515.
  • Krishnan, M. (2020), "Against Interpretability: A Critical Examination of the Interpretability Problem in Machine Learning", Philosophy & Technology, 33: 487-502.
  • Li, Y., Arteaga, M. and Saar-Tsechansky, M. (2024), "Label Bias: A Pervasive and Invisibilized Problem", Notices of the American Mathematical Society, 71(8): 1069-1077.
  • Mihan, A., Pandey, A. and Van Spall, H. (2024), "Mitigating the Risk of Artificial Intelligence Bias in Cardiovascular Care", Lancet Digital Health, 6: 749-754.
  • Mitchell, M. (2019), Artificial Intelligence: A Guide for Thinking Humans, Farrar, Straus and Giroux, USA.
  • Moya, G. and Le, V. (18.02.2021), "Algorithmic Bias: How Automated Decision-Making Becomes Automated Discrimination", The Greenlining Institute, https://greenlining.org/wp-content/uploads/2021/04/Greenlining-Institute-Algorithmic-Bias-Explained-Report-Feb-2021.pdf (accessed 12.09.2025).
  • NIPNLG (National Immigration Project) (2024), Bias in the Criminal Legal System: A Report on Racial Bias in the Criminal Process and its Impact on Noncitizens of Color in Removal Proceedings, Immigrants' Rights Clinic, Stanford Law School, USA, https://law.stanford.edu/wp-content/uploads/2024/06/2024-Bias-Criminal-Legal-System.pdf (accessed 09.08.2025).
  • Nikolić, M., Nikolić, D., Stefanović, M., Koprivica, S. and Stefanović, D. (2025), "Mitigating Algorithmic Bias Through Probability Calibration: A Case Study on Lead Generation Data", Mathematics, 13: 2183.
  • Orso, M. and Medeiros, A. (22.08.2023), "The Uses and Risks of AI in BSA/AML Compliance: Navigating the Future of Financial Crime Prevention", Troutman Pepper Locke's Financial Services Group, https://www.troutmanfinancialservices.com/2023/08/the-uses-and-risks-of-ai-in-bsa-aml-compliance-navigating-the-future-of-financial-crime-prevention/ (accessed 13.09.2025).
  • Pan, R. and Zhong, C. (2023), "Fairness First Clustering: A Multi-Stage Approach for Mitigating Bias", Electronics, 12(13): 1-16.
  • Saha, D., Agarwal, A., Hans, S. and Haldar, S. (2023), "Testing, Debugging, and Repairing Individual Discrimination in Machine Learning Models", Ethics in Artificial Intelligence: Bias, Fairness and Beyond, Studies in Computational Intelligence, Editors: A. Mukherjee et al., (1-30), Springer.
  • SAP (30.10.2024), What is AI Bias?, SAP Türkiye Yazılım Üretim ve Tic. A.Ş., https://www.sap.com/resources/what-is-ai-bias#emerging-trends-in-fair-ai-development (accessed 15.10.2025).
  • Schwartz, R., Vassilev, A., Greene, K., Perine, L., Burt, A. and Hall, P. (2022), "Towards a Standard for Identifying and Managing Bias in Artificial Intelligence", Special Publication (NIST SP), National Institute of Standards and Technology, Gaithersburg, MD, https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=934464 (accessed 07.10.2025).
  • Silberg, J. and Manyika, J. (06.06.2019), "Tackling Bias in Artificial Intelligence (and in Humans)", McKinsey Global Institute, https://www.mckinsey.com/~/media/mckinsey/featured%20insights/artificial%20intelligence/tackling%20bias%20in%20artificial%20intelligence%20and%20in%20humans/mgi-tackling-bias-in-ai-june-2019.pdf (accessed 10.01.2025).
  • Sweeney, L. (2013), "Discrimination in Online Ad Delivery", Communications of the ACM, 56(5): 44-54.
  • Ulnicane, I. and Aden, A. (2023), "Power and Politics in Framing Bias in Artificial Intelligence Policy", Review of Policy Research, 40: 665-687.
  • Upadhyay, S. (27.02.2023), Algorithmic Bias and its Impact on Society, Medium, https://medium.com/kigumi-group/algorithmic-bias-and-its-impact-on-society-df12edcfb303 (accessed 08.08.2025).
  • Vicente, L. and Matute, H. (2023), "Humans Inherit Artificial Intelligence Biases", Scientific Reports, 13: 15737.
  • World Economic Forum (30.06.2023), Why AI Bias May Be Easier to Fix Than Humanity's, https://www.weforum.org/stories/2023/06/why-ai-bias-may-be-easier-to-fix-than-humanity-s/ (accessed 15.09.2025).


Details

Primary Language: Turkish
Subjects: Theory of Public Finance
Section: Research Article
Authors

Deniz Turan 0000-0002-6697-2721

Mehmet Uğurlu 0009-0004-2396-9663

Ertuğrul Karatay 0009-0008-7593-3856

Submission Date: 18 November 2025
Acceptance Date: 28 December 2025
Publication Date: 31 December 2025
Published Issue: Year 2025, Volume: 11, Issue: 2

How to Cite

APA Turan, D., Uğurlu, M., & Karatay, E. (2025). Görünmez Tehditten Toplumsal Soruna: Yapay Zekâ Önyargısının Ekonomik ve Toplumsal Etkilerinin Değerlendirilmesi. Uluslararası Ekonomik Araştırmalar Dergisi, 11(2), 17-28.
AMA Turan D, Uğurlu M, Karatay E. Görünmez Tehditten Toplumsal Soruna: Yapay Zekâ Önyargısının Ekonomik ve Toplumsal Etkilerinin Değerlendirilmesi. UEAD. Aralık 2025;11(2):17-28.
Chicago Turan, Deniz, Mehmet Uğurlu, ve Ertuğrul Karatay. “Görünmez Tehditten Toplumsal Soruna: Yapay Zekâ Önyargısının Ekonomik ve Toplumsal Etkilerinin Değerlendirilmesi”. Uluslararası Ekonomik Araştırmalar Dergisi 11, sy. 2 (Aralık 2025): 17-28.
EndNote Turan D, Uğurlu M, Karatay E (01 Aralık 2025) Görünmez Tehditten Toplumsal Soruna: Yapay Zekâ Önyargısının Ekonomik ve Toplumsal Etkilerinin Değerlendirilmesi. Uluslararası Ekonomik Araştırmalar Dergisi 11 2 17–28.
IEEE D. Turan, M. Uğurlu, ve E. Karatay, “Görünmez Tehditten Toplumsal Soruna: Yapay Zekâ Önyargısının Ekonomik ve Toplumsal Etkilerinin Değerlendirilmesi”, UEAD, c. 11, sy. 2, ss. 17–28, 2025.
ISNAD Turan, Deniz vd. “Görünmez Tehditten Toplumsal Soruna: Yapay Zekâ Önyargısının Ekonomik ve Toplumsal Etkilerinin Değerlendirilmesi”. Uluslararası Ekonomik Araştırmalar Dergisi 11/2 (Aralık 2025), 17-28.
JAMA Turan D, Uğurlu M, Karatay E. Görünmez Tehditten Toplumsal Soruna: Yapay Zekâ Önyargısının Ekonomik ve Toplumsal Etkilerinin Değerlendirilmesi. UEAD. 2025;11:17–28.
MLA Turan, Deniz vd. “Görünmez Tehditten Toplumsal Soruna: Yapay Zekâ Önyargısının Ekonomik ve Toplumsal Etkilerinin Değerlendirilmesi”. Uluslararası Ekonomik Araştırmalar Dergisi, c. 11, sy. 2, 2025, ss. 17-28.
Vancouver Turan D, Uğurlu M, Karatay E. Görünmez Tehditten Toplumsal Soruna: Yapay Zekâ Önyargısının Ekonomik ve Toplumsal Etkilerinin Değerlendirilmesi. UEAD. 2025;11(2):17-28.