Research Article

THE TRANSFER OF INTERNATIONAL CASES OF ALGORITHMIC INJUSTICE TO TURKISH DIGITAL MEDIA: A DISCURSIVE REPRESENTATION ANALYSIS

Year 2026, Volume: 16, Issue: 2, 636-650, 01.04.2026
https://doi.org/10.7456/tojdac.1857152
https://izlik.org/JA33NJ57RN

Abstract

This study examines how internationally reported cases of bias and injustice related to artificial intelligence and algorithmic decision systems are translated and discursively reframed within Turkish digital media. It analyzes how global cases circulate through translation and external sourcing, shaping public understandings of algorithmic injustice. Drawing on critical algorithm studies, the research conceptualizes algorithmic bias not as a technical malfunction but as an expression of power relations embedded in data capitalism, racial inequality, and labor regimes. The analysis is based on 19 Turkish-language media texts selected through purposive sampling and examined using reflexive thematic analysis. Findings show that algorithmic injustice is predominantly framed as a technical error or an external issue associated with foreign institutions and global technology companies, while structural dimensions such as data capitalism and socio-economic inequality remain less visible. The study also suggests that reliance on internationally sourced news and translation-based reporting practices contributes to the technocentric and depoliticized language through which algorithmic harm is represented. By situating media discourse within the circulation of global narratives, the study demonstrates how algorithmic injustice is localized through media transfer processes and shaped by the dynamics of translation journalism and transnational information flows.

References

  • Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101. https://doi.org/10.1191/1478088706qp063oa
  • Buolamwini, J. (2018, July 4). When AI fails on Oprah, Serena Williams, and Michelle Obama, it’s time to face truth. Medium. https://medium.com/@Joy.Buolamwini/when-ai-fails-on-oprah-serena-williams-and-michelle-obama-its-time-to-face-truth-bf7c2c8a4119 Retrieved September 28, 2025.
  • Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 77-91.
  • Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
  • Filizler, F. Y. (2025, July 11). Veri biliminde sorun ve mücadele alanları [Problems and areas of struggle in data science]. Sendika.org. https://www.sendika.org/2025/07/veri-biliminde-sorun-ve-mucadele-alanlari-729459 Retrieved November 12, 2025.
  • Forrest, K. B. (2021). When machines can be judge, jury, and executioner: Justice in the age of artificial intelligence. World Scientific.
  • Gözükeleş, İ. (2023, April 1). Yapay zekânın politikliği [The political nature of artificial intelligence]. Bilim ve Gelecek. https://bilimvegelecek.com.tr/index.php/2023/04/01/yapay-zekanin-politikligi/ Retrieved September 1, 2025.
  • Kura, K. (2023). Algorithmic bias in artificial intelligence systems. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4370191
  • Lohmann, L. (2020). Interpretation machines: Contradictions of ‘artificial intelligence’ in 21st-century capitalism. Socialist Register, 57, 50-78.
  • Milner, Y., & Traub, A. (2021). Data capitalism + algorithmic racism. Demos; Data for Black Lives.
  • Moss, S. (2022). The prevalence of artificial intelligence, surveillance capitalism, disinformation, and biased algorithms amplify the need for critical skills applied to media. The Journal of Media Literacy, 69(1-2).
  • Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press. https://doi.org/10.2307/j.ctt1pwt9w5
  • Nowell, L. S., Norris, J. M., White, D. E., & Moules, N. J. (2017). Thematic analysis: Striving to meet the trustworthiness criteria. International Journal of Qualitative Methods, 16(1), 1-13. https://doi.org/10.1177/1609406917733847
  • O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
  • O’Shea, L. (2021). Geleceğin tarihleri: Ada Lovelace, Tom Paine ve Paris Komünü bize dijital teknoloji hakkında ne öğretebilir? [Future histories: What Ada Lovelace, Tom Paine, and the Paris Commune can teach us about digital technology] (A. Ay, Trans.). Metis Yayınları.
  • Parshley, L. (2023, November 15). Rage against the algorithm. The Lever. https://www.levernews.com/rage-against-the-algorithm/ Retrieved July 11, 2025.
  • Pasquinelli, M. (2025). Patronun gözü: Yapay zekânın sosyal tarihi [The eye of the master: A social history of artificial intelligence] (E. Gen, Trans.). Metis Yayınları.
  • Photopoulos, J. (2021). Fighting algorithmic bias in artificial intelligence. Physics World, 34(5), 42-47.
  • Saurwein, F., & Spencer-Smith, C. (2021). Automated trouble: The role of algorithmic selection in harms on social media platforms. Media and Communication, 9(4), 222-233. https://doi.org/10.17645/mac.v9i4.4062
  • The Greenlining Institute. (2021). Algorithmic bias explained: How automated decision-making becomes automated discrimination. https://greenlining.org/wp-content/uploads/2021/02/Greenlining-Algorithmic-Bias-Explained-Report-Feb-2021.pdf Retrieved June 11, 2025.
  • Williams, N. H. (2023). Artificial intelligence and healthcare: The impact of algorithmic bias on health disparities. Springer International Publishing. https://doi.org/10.1007/978-3-031-48261-8
  • Zajko, M. (2022). Artificial intelligence, algorithms, and social inequality: Sociological contributions to contemporary debates. Sociology Compass, 16(3), e12962. https://doi.org/10.1111/soc4.12962

ULUSLARARASI ALGORİTMİK ADALETSİZLİK VAKALARININ TÜRKİYE DİJİTAL MEDYASINA AKTARIMI: SÖYLEMSEL BİR TEMSİL ANALİZİ


Details

Primary Language English
Subjects New Media
Journal Section Research Article
Authors

Eren Efe (ORCID: 0000-0001-7500-4709)

Submission Date January 6, 2026
Acceptance Date March 27, 2026
Publication Date April 1, 2026
DOI https://doi.org/10.7456/tojdac.1857152
IZ https://izlik.org/JA33NJ57RN
Published in Issue Year 2026 Volume: 16 Issue: 2

Cite

APA Efe, E. (2026). THE TRANSFER OF INTERNATIONAL CASES OF ALGORITHMIC INJUSTICE TO TURKISH DIGITAL MEDIA: A DISCURSIVE REPRESENTATION ANALYSIS. The Turkish Online Journal of Design Art and Communication, 16(2), 636-650. https://doi.org/10.7456/tojdac.1857152


All site content, except where otherwise noted, is licensed under a Creative Commons Attribution-NonCommercial 4.0 International Licence (CC BY-NC 4.0).
