Research Article

Algorithmic Stereotypes: An Analysis of Turkish Identity and Cultural Representations in the ‘ChatGPT’ and ‘Gemini’ AI Models

Year 2026, Issue 17, 444-477, 30.04.2026
https://doi.org/10.32739/etkilesim.2026.9.17.345
https://izlik.org/JA62MB37RZ

Abstract

This study examines how artificial intelligence (AI), one of the defining technologies of the 21st century, represents Turkish identity and culture through its visual generative outputs. The research sample consists of ChatGPT and Gemini, generative AI models prominent for their advanced visual synthesis capabilities. The models were selected through purposive sampling in line with the scope of the research, and the visual outputs they produced were subjected to qualitative content analysis. The visual data were systematically coded within a framework of intersecting identity axes such as ‘modern–traditional, western–eastern, urban–rural, conservative–secular, nationalist–cosmopolitan, and female–male’. The findings critically analyze, within a thematic framework, the algorithms’ representational patterns and digital stereotypes of Turkish society. The research shows that AI systems reproduce Turkish social reality through reductive binary oppositions, distancing it from its pluralistic and heterogeneous structure. The analyses reveal that the westernized Turkish identity is confined to economic prosperity and consumer culture, while the easternized Turkish identity is limited to historical and authentic symbols. Moreover, secular and conservative identities are sharply segregated through dress codes and spatial preferences, and the social structure is presented through a narrowed, Istanbul-centered urban perspective. Gender roles, in turn, are largely associated with traditional patterns. Ultimately, by substantiating the processes through which AI systems reconstruct cultural representations in the case of Türkiye, the study contributes a contextual perspective to debates on algorithmic culture and representation.
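The coding step described above, assigning each generated image a label on intersecting binary identity axes and tallying the results, can be sketched roughly as follows. The axis names, category labels, and sample data here are illustrative assumptions for exposition only, not the study's actual codebook or data:

```python
from collections import Counter

# Hypothetical coding sheet: the six binary identity axes named in the
# abstract. Labels and sample observations are illustrative, not the
# authors' actual instrument.
AXES = {
    "modernity": ("modern", "traditional"),
    "orientation": ("western", "eastern"),
    "space": ("urban", "rural"),
    "religiosity": ("conservative", "secular"),
    "belonging": ("nationalist", "cosmopolitan"),
    "gender": ("female", "male"),
}

def code_image(observations: dict) -> dict:
    """Validate one image's codes against the axis scheme."""
    coded = {}
    for axis, label in observations.items():
        if axis not in AXES or label not in AXES[axis]:
            raise ValueError(f"unknown code {axis}={label}")
        coded[axis] = label
    return coded

# Tally codes across a small mock sample of generated images.
sample = [
    {"space": "urban", "orientation": "western"},
    {"space": "urban", "religiosity": "conservative"},
    {"space": "rural", "orientation": "eastern"},
]
tally = Counter(
    (axis, label)
    for image in sample
    for axis, label in code_image(image).items()
)
print(tally[("space", "urban")])  # prints 2: urban codes dominate this mock sample
```

In a qualitative content analysis, such tallies would only be a starting point; the interpretive work lies in reading the coded patterns against the theoretical framework.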

References

  • Abid, A., Farooqi, M., & Zou, J. (2021, May 19-21). Persistent anti-Muslim bias in large language models [Conference paper]. 2021 AAAI/ACM Conference on AI, Ethics, and Society. https://arxiv.org/pdf/2101.05783
  • Belenguer, L. (2022). AI bias: Exploring discriminatory algorithmic decision-making models and the application of possible machine-centric solutions adapted from the pharmaceutical industry. AI and Ethics, 2(4), 771–787. https://doi.org/10.1007/s43681-022-00138-8
  • Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March 3-10). On the dangers of stochastic parrots: Can language models be too big? [Conference paper]. 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT). https://doi.org/10.1145/3442188.3445922
  • Benjamin, R. (2019). Assessing risk, automating racism. Science, 366(6464), 421–422. https://doi.org/10.1126/science.aaz3873
  • Berkes, N. (1978). Türkiye'de çağdaşlaşma. Doğu-Batı Yayınları.
  • Bianchi, F., Kalluri, P., Durmus, E., Ladhak, F., Cheng, M., Nozza, D., Hashimoto, T., Jurafsky, D., Zou, J., & Caliskan, A. (2023). Easily accessible text-to-image generation amplifies demographic stereotypes at large scale [Conference paper]. 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT). https://doi.org/10.1145/3593013.3594095
  • Boyd, D., & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662–679.
  • Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. W. W. Norton & Company.
  • Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification [Conference paper]. Fairness, Accountability, and Transparency Conference.
  • Cho, J., Zala, A., & Bansal, M. (2022). DALL-Eval: Probing the reasoning skills and social biases of text-to-image generation models. arXiv preprint arXiv:2202.04053. https://arxiv.org/abs/2202.04053
  • Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
  • D’Ignazio, C., & Klein, L. F. (2023). Data feminism. MIT Press.
  • Danks, D., & London, A. J. (2017). Algorithmic bias in autonomous systems [Konferans Bildiri Özeti]. Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI), 4691–4697.
  • Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Edwards, A., Panteli, N., Peck, C., Al-Amri, S., Al-Mezaini, N. W., Baabdullah, A. M., Balakrishnan, V., Belk, R. W., Budhwar, P. S., Cheung, C. M. K., Cauldwell-French, E., Cheratit, A., Coombs, C. R., David, R., Dennehy, D., … Shah, J. (2023). Opinion paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 68, Article 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642
  • Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.
  • Floridi, L. (2023). The ethics of artificial intelligence: Principles, challenges, and opportunities. Oxford University Press.
  • Fuchs, C. (2021). Social media: A critical introduction. SAGE Publications.
  • Genç, F., Keyder, Ç., Keyman, E. F., & Badur, A. K. (2021). Kentlerin Türkiyesi: İmkanlar, sınırlar ve çatışmalar. İletişim Yayınları.
  • Hall, S. (1997). The spectacle of the other. In S. Hall (Ed.), Representation: Cultural representations and signifying practices (pp. 223–290). SAGE Publications.
  • Hobsbawm, E. J. (2010). Milletler ve milliyetçilik: Program, mit, gerçeklik (O. Akınhay, Trans.). Ayrıntı Yayınları.
  • Hobsbawm, E. J. (2014). Fractured times: Culture and society in the twentieth century. The New Press.
  • Karakaş, M. (2014). Türkiye’nin kimlikler siyaseti ve sosyolojisi. Akademik İncelemeler Dergisi, 8(2), 1-44.
  • Karataş, C. (2012). Türk kültürü ve milli kimlik bağlamında kozmopolitizm. Journal of History Culture and Art Research,
  • Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., ... & Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274.
  • Kress, G., & Van Leeuwen, T. (2020). Reading images: The grammar of visual design. Routledge.
  • Li, Z., Xia, L., Ren, X., Tang, J., Chen, T., Xu, Y., & Huang, C. (2025). Urban computing in the era of large language models. ACM Transactions on Intelligent Systems and Technology, 146, 1-43. https://doi.org/10.1145/3768163
  • Liang, C. X., Tian, P., Yin, C. H., Yua, Y., An-Hou, W., Ming, L., & Liu, M. (2024). A comprehensive survey and guide to multimodal large language models in vision-language tasks. arXiv preprint arXiv:2411.06284.
  • Luccioni, A. S., Akiki, C., Mitchell, M., & Jernite, Y. (2023). Stable bias: Analyzing societal representations in diffusion models. arXiv preprint arXiv:2303.11408.
  • Mardin, Ş. (1991). Türkiye'de din ve siyaset. İletişim Yayınları.
  • Markham, T. (2022). Media and everyday life. Bloomsbury Publishing.
  • Mayring, P. (2021). Qualitative content analysis: A step-by-step guide. SAGE Publications.
  • Mhlambi, S. (2020). From rationality to relationality: Ubuntu as an ethical and human rights framework for artificial intelligence governance. Carr Center for Human Rights Policy Discussion Paper Series, 9(31).
  • Mirzoeff, N. (1999). An introduction to visual culture. Routledge.
  • Neuman, W. L. (2006). Social research methods: Qualitative and quantitative approaches (6th ed.). Pearson/Allyn and Bacon.
  • Ng, A. (2018). AI is the new electricity. O'Reilly Media.
  • Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
  • O'Neil, C. (2017). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
  • Orlikowski, W. J. (1991). The duality of technology: Rethinking the concept of technology in organizations. Massachusetts Institute of Technology.
  • Özdal, M. A. (2024). 21. yüzyıl yapay zekâ destekli resimlerde Avrupa ve Türk kültüründen izler. Kültür Araştırmaları Dergisi, 22, 280-307.
  • Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
  • Putland, E., Chikodzore-Paterson, C., & Brookes, G. (2025). Artificial intelligence and visual discourse: A multimodal critical discourse analysis of AI-generated images of “dementia”. Social Semiotics, 35(2), 228–253. https://doi.org/10.1080/10350330.2023.2290555
  • Qadri, R., Diaz, M., Wang, D., & Madaio, M. (2025). The case for "thick evaluations" of cultural representation in AI. arXiv preprint arXiv:2503.19075.
  • Remitly. (2022). Dream jobs around the world study. Remitly. https://www.remitly.com/gb/en/landing/dream-jobs-around-the-world
  • Rose, G. (2022). Visual methodologies: An introduction to researching with visual materials. SAGE Publications.
  • Said, E. W. (2016). Orientalism. In C. Lemert (Ed.), Social theory: The multicultural and classic readings (pp. 402-417). Routledge.
  • Sheng, E., Chang, K. W., Natarajan, P., & Peng, N. (2019). The woman worked as a babysitter: On biases in language generation [Conference paper]. 2019 Conference on Empirical Methods in Natural Language Processing, 3407–3412.
  • Tao, Y., Viberg, O., Baker, R. S., & Kizilcec, R. F. (2024). Cultural bias and cultural alignment of large language models. PNAS Nexus, 3(9), pgae346. https://doi.org/10.1093/pnasnexus/pgae346
  • Tümertekin, E., & Özgüç, N. (1997). Beşerî coğrafya: İnsan, kültür, mekân. Çantay Kitabevi.
  • Uzun, Y., Akkuzu, B., & Kayrıcı, M. (2021). Yapay zeka’nın kültür ve sanatla olan ilişkisi. Avrupa Bilim ve Teknoloji Dergisi, (28), 753–757.
  • Verdegem, P. (2024). Dismantling AI capitalism: The commons as an alternative to the power concentration of Big Tech. AI & Society, 39(2), 727–737.
  • Webster, F. (2014). Theories of the information society (4th ed.). Routledge.
  • Winner, L. (1989). The whale and the reactor: A search for limits in an age of high technology. University of Chicago Press.
  • Yurdigül, A., & Erdinç, Y. A. (2024). Ne’likten kim’liğe: Google temsillerindeki yapay zekâ suretleri ve kimlik imgelemleri. Uluslararası İletişim ve Sanat Dergisi, 5(13), 194–212.
  • Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.


There are 54 references in total.

Details

Primary Language Turkish
Subjects Communication Studies
Section Research Article
Authors

Recep Altay 0000-0001-9250-3382

Submission Date 15 December 2025
Acceptance Date 31 March 2026
Publication Date 30 April 2026
DOI https://doi.org/10.32739/etkilesim.2026.9.17.345
IZ https://izlik.org/JA62MB37RZ
Published In Year 2026, Issue 17

Cite

APA Altay, R. (2026). Algoritmik Stereotipler: ‘ChatGPT’ ve ‘Gemini’ Yapay Zekâ Modellerinde Türk Kimlik-Kültür Temsillerinin Analizi. Etkileşim, 17, 444-477. https://doi.org/10.32739/etkilesim.2026.9.17.345