Research Article


Transparency and Accountability in Machine Translation: Are New Ethical Norms Emerging in the Age of AI?

Year 2026, Volume: 26, Issue: 1, 463–478, 28.03.2026
https://doi.org/10.18037/ausbd.1811507
https://izlik.org/JA37CE26SJ

Abstract

This study examines how transparency and accountability are articulated, operationalized, and transformed across policy documents, corporate communication, and academic discourse on Machine Translation (MT). The dataset consists of six international policy documents (e.g., UNESCO, OECD, EU AI Act drafts), twenty-three peer-reviewed academic articles, and nine corporate statements and technical guidelines issued by leading MT providers between 2018 and 2024. Documents were selected through purposive sampling based on explicit references to transparency, responsibility, or explainability in MT systems. The analysis follows Braun and Clarke’s thematic framework and was conducted through an iterative, inductive coding process supported by NVivo. Three overarching themes emerged: (1) the rhetorical expansion of transparency in policy discourse, where transparency is framed as a universal safeguard yet operational criteria remain vague; (2) the displacement of accountability in corporate communication, which locates evaluative labour and risk management at the user’s end; and (3) the narrowing of transparency in academic texts, where the concept is discussed conceptually but seldom integrated into empirical MT workflows. Building on these findings, the article proposes Hermeneutic Transparency as a relational model that integrates technical, ethical, and contextual layers of meaning-making in MT systems. The study highlights gaps between high-level discourse and actual implementation practices and argues that transparency should be reconceived not as a static requirement but as a situated, interpretive, and dialogic process. Implications for MT research, governance, and platform design are also discussed.

References

  • Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
  • Baker, M. (2018). Translation and conflict: A narrative account. Routledge.
  • Bietti, E. (2020). From ethics washing to ethics bashing: Regulating AI in Europe. Computer Law & Security Review, 35(6), 105–114. https://doi.org/10.1016/j.clsr.2020.105383
  • Boddington, P. (2017). Towards a code of ethics for artificial intelligence. Springer.
  • Bowen, G. A. (2009). Document analysis as a qualitative research method. Qualitative Research Journal, 9(2), 27–40. https://doi.org/10.3316/QRJ0902027
  • Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
  • Chesterman, A. (2001). Proposal for a hieronymic oath. The Translator, 7(2), 139–154. https://doi.org/10.1080/13556509.2001.10799099
  • Chesterman, A. (2016). Memes of translation: The spread of ideas in translation theory (Revised ed.). John Benjamins.
  • Crawford, K., & Paglen, T. (2021). Excavating AI: The politics of images in machine learning training sets. AI Now Institute. https://www.excavating.ai
  • Cronin, M. (2017). Eco-translation: Translation and ecology in the age of the Anthropocene. Routledge.
  • Cronin, M. (2022). Translation in the digital age: Ethics, agency, and posthumanism. Routledge.
  • European Union. (2024). Artificial Intelligence Act (Regulation (EU) 2024/1689). Official Journal of the European Union. https://eur-lex.europa.eu
  • Fairclough, N. (1995). Critical discourse analysis: The critical study of language. Longman.
  • Floridi, L. (2019). The logic of information: A theory of philosophy as conceptual design. Oxford University Press.
  • Floridi, L., & Cowls, J. (2021). A unified framework of five principles for AI in society. Harvard Data Science Review, 3(1). https://doi.org/10.1162/99608f92.8cd550d1
  • GenLaw Blog. (2023). Liability and post-editing in AI-assisted legal translation. https://blog.genlaw.org/Camera-Ready/16.pdf
  • Han, B.-C. (2021). Infocracy: Digitization and the crisis of democracy. Polity Press.
  • Hovy, D., & Spruit, S. L. (2023). The social impact of natural language processing. Computational Linguistics, 49(1), 1–27. https://doi.org/10.1162/coli_a_00468
  • Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
  • Kenny, D., & Doherty, S. (2020). Ethics in machine translation: A humanist perspective. Translation Studies, 13(1), 19–35. https://doi.org/10.1080/14781700.2019.1689878
  • Krüger, R. (2022). Translation ethics and the posthuman turn: Agency, responsibility, and technology. Target, 34(2), 301–324. https://doi.org/10.1075/target.21035.kru
  • Latour, B. (2005). Reassembling the social: An introduction to actor-network theory. Oxford University Press.
  • Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Sage.
  • Merriam, S. B., & Tisdell, E. J. (2016). Qualitative research: A guide to design and implementation (4th ed.). Jossey-Bass.
  • Mittelstadt, B. D. (2022). The value of AI ethics. AI and Ethics, 2(1), 1–9. https://doi.org/10.1007/s43681-021-00084-x
  • O’Hagan, M. (2022). Transparency in the age of neural MT: Revisiting translation ethics. Machine Translation, 36(1), 57–70. https://doi.org/10.1007/s10590-021-09317-0
  • OECD. (2019). OECD Principles on Artificial Intelligence. https://www.oecd.org/going-digital/ai/principles/
  • OECD. (2020). Implementing trustworthy AI: AI principles in practice. https://doi.org/10.1787/cb6b6e5f-en
  • OECD. (2023). AI, Data and Public Policy: Building More Trustworthy Artificial Intelligence Systems. https://doi.org/10.1787/1f0a2d20-en
  • Olohan, M. (2021). Technology, translation, and society: A sociological perspective. Routledge.
  • Palinkas, L. A., Horwitz, S. M., Green, C. A., Wisdom, J. P., Duan, N., & Hoagwood, K. (2015). Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Administration and Policy in Mental Health and Mental Health Services Research, 42(5), 533–544. https://doi.org/10.1007/s10488-013-0528-y
  • Pym, A. (2012). On translator ethics: Principles for mediation between cultures. John Benjamins.
  • Rescigno, A., & Monti, J. (2023). Gender bias in machine translation: A statistical evaluation of Google Translate and DeepL for English, Italian and German. In Proceedings of the International Conference on Human-Informed Translation and Interpreting Technology (HiT-IT 2023) (pp. 1–11). Incoma Ltd.
  • Ricoeur, P. (1976). Interpretation theory: Discourse and the surplus of meaning. Texas Christian University Press.
  • UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000380455
  • UNESCO. (2023). Guidance for the Ethical Use and Governance of Artificial Intelligence in Education. https://unesdoc.unesco.org/ark:/48223/pf0000385457
  • Venuti, L. (1995). The translator’s invisibility: A history of translation. Routledge.
  • Wolfe, C. (2010). What is posthumanism? University of Minnesota Press.
There are 38 references in total.

Details

Primary Language English
Subjects Machine Learning (Other), Knowledge Representation and Reasoning, Communication Systems
Section Research Article
Authors

Gülfidan Aytaş 0000-0003-1566-1592

Submission Date 27 October 2025
Acceptance Date 14 January 2026
Publication Date 28 March 2026
DOI https://doi.org/10.18037/ausbd.1811507
IZ https://izlik.org/JA37CE26SJ
Published Issue Year 2026, Volume: 26, Issue: 1

Cite

APA Aytaş, G. (2026). Transparency and Accountability in Machine Translation: Are New Ethical Norms Emerging in the Age of AI? Anadolu Üniversitesi Sosyal Bilimler Dergisi, 26(1), 463–478. https://doi.org/10.18037/ausbd.1811507