Research Article

Bias and Humorous Manipulation in AI-Generated Content

Year 2025, Volume: 2 Issue: 2, 187 - 227, 25.07.2025

Abstract

This study examines systematic bias in AI-generated content and analyzes how such bias can evolve into processes of humorous manipulation. The research explores the intersection of AI bias, political communication, and humor through Donald Trump's AI-generated Gaza video, which went viral in 2025. Existing research shows that AI models can learn and reinforce the social prejudices embedded in the datasets on which they are trained. Moreover, forecasts suggest that by 2025, 90% of internet content will be AI-generated. Using the TIBET methodology, the study subjects Trump's video to a comprehensive bias analysis. The findings show that AI's dependence on prompts constitutes the core mechanism of systematic bias production. The colonial legacy of training datasets causes the system, when given instructions such as "happy child under palm trees," to gravitate toward images of "exotic poverty" acquired from Western sources. The research reveals that AI has only a limited capacity to respond to prompts such as "a happy Palestinian child." The most dangerous aspect of AI-generated content is that, by fusing humor with serious political messages, it constrains viewers' capacity for critical thinking. This mechanism of humorous normalization distorts social reality by turning grave political propositions, such as ethnic cleansing, into visual entertainment. The results indicate that AI technologies can be transformed into instruments of ideological manipulation on a scale capable of threatening democratic discourse.
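The mechanism the abstract describes, in which counterfactual variants of a prompt elicit systematically different imagery, can be illustrated with a toy TIBET-style probe. The stub "model", the prompt pair, the concept lists, and the scoring below are hypothetical stand-ins for illustration, not the study's data or TIBET's actual implementation:

```python
from collections import Counter

def concept_distribution(captioned_images):
    """Relative frequency of each visual concept across a set of generations."""
    counts = Counter(concept for image in captioned_images for concept in image)
    total = sum(counts.values())
    return {concept: n / total for concept, n in counts.items()}

def bias_score(dist_a, dist_b):
    """Total variation distance between two concept distributions:
    0.0 = identical imagery, 1.0 = no overlap at all."""
    concepts = set(dist_a) | set(dist_b)
    return 0.5 * sum(abs(dist_a.get(c, 0.0) - dist_b.get(c, 0.0)) for c in concepts)

# Hypothetical stand-in for a text-to-image model plus a concept extractor:
# each prompt maps to the concepts "detected" in its generated images.
FAKE_GENERATIONS = {
    "happy child on a beach": [
        ["beach", "toys", "family"],
        ["beach", "ice cream"],
    ],
    "happy Palestinian child on a beach": [  # counterfactual identity variant
        ["beach", "rubble"],
        ["beach", "poverty"],
    ],
}

base = concept_distribution(FAKE_GENERATIONS["happy child on a beach"])
counterfactual = concept_distribution(FAKE_GENERATIONS["happy Palestinian child on a beach"])
score = bias_score(base, counterfactual)  # a large gap signals identity-conditioned bias
```

With this stub data the score works out to 0.6, reflecting how little imagery the two prompts share; in a real TIBET run the generations would come from the model under test and the concepts from an automated captioner rather than hand-written lists.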

References

  • Adorno, T. W. (2021). Kültür endüstrisi: Kültür yönetimi (N. Ülner, M. Tüzel, & E. Gen, Trans.). İletişim Yayınları.
  • Barthes, R. (2015). Göstergebilimsel serüven (5th ed., M. Rifat & S. Rifat, Trans.). Yapı Kredi Yayınları. (Original work published 1985)
  • Baum, J., & Villasenor, J. (2024). Rendering misrepresentation: Diversity failures in AI image generation. The Brookings Institution.
  • Berger, J. (2008). Ways of seeing. Penguin Classics.
  • Bower, A. H., & Steyvers, M. (2021). Perceptions of AI engaging in human expression. Scientific Reports.
  • Chinchure, A., Shukla, P., & Bhatt, G. (2024). TIBET: Identifying and Evaluating Biases in Text-to-Image Generative Models. In Computer Vision – ECCV 2024 (pp. 429–446).
  • Chomsky, N., & Herman, E. S. (2021). Rızanın imalatı (E. Abadoğlu, Trans.). BGST Yayınları.
  • DeLacey, P. (2023). Biases in large image-text AI model favor wealthier, Western perspectives. Michigan Engineering News. https://news.engin.umich.edu/2023/12/biases-in-large-image-text-ai-model-favor-wealthier-western-perspectives/
  • Festinger, L. (1957). A theory of cognitive dissonance. Stanford University Press.
  • Fitts, A. S., Rabinowitz, K., & Sadof, K. D. (2023). This is how AI image generators see the world. The Washington Post.
  • Foucault, M. (2000). Özne ve iktidar (I. Ergüden & O. Akınhay, Trans.). Ayrıntı Yayınları. (Original work published 1982)
  • Frank, A. L. (2025). Wit meets wisdom: The relationship between satire and AI communication. Journal of Science Communication, 24(1), 1-18. https://doi.org/10.22323/2.24010204
  • Garfinkle, A. (2023, January 13). 90% of online content could be ‘generated by AI by 2025,’ expert says. Yahoo Finance. https://finance.yahoo.com/
  • Gebru, T., & Mitchell, M. (2024). AI ethics beyond technical solutions: Toward democratic governance and social justice. Journal of Artificial Intelligence Research, 75(1), 118-142. https://doi.org/10.1093/oxfordhb/9780190067397.013.16
  • Girrbach, L., Alaniz, S., & Smith, G. (2025). A Large Scale Analysis of Gender Biases in Text-to-Image Generative Models. arXiv:2503.23398.
  • Guzman, C. d. (2022). Meta’s Facebook Algorithms ‘Proactively’ Promoted Violence Against the Rohingya, New Amnesty International Report Asserts. TIME USA, LLC.
  • Hall, R. (2025). ‘Trump Gaza’ AI video intended as political satire, says creator. The Guardian.
  • Hall, S. (Ed.). (1997). Representation: Cultural representations and signifying practices. Sage Publications, Inc; Open University Press.
  • Luo, H., et al. (2024). BIGbench: A Unified Benchmark for Evaluating Multi-dimensional Social Biases in Text-to-Image Models. arXiv. https://doi.org/10.48550/arXiv.2407.15240
  • Luccioni, A. S., Akiki, C., & Mitchell, M. (2023). Stable Bias: Analyzing Societal Representations in Diffusion Models. NIPS '23: Proceedings of the 37th International Conference on Neural Information Processing Systems.
  • Martinez, D. R., & Kifle, B. M. (2024). Artificial Intelligence: A Systems Approach from Architecture Principles to Deployment. MIT Press.
  • McLuhan, M. (1964). Understanding media: The extensions of man. McGraw-Hill.
  • Mollett, A., Brumley, C., Gilson, C., & Williams, S. (2017). Communicating Your Research with Social Media: A Practical Guide to Using Blogs, Podcasts, Data Visualisations and Video. SAGE Publications.
  • Naik, R., & Nushi, B. (2023). Social Biases through the Text-to-Image Generation Lens (pp. 786–808).
  • Nicoletti, L., & Bass, D. (2023). Humans Are Biased. Generative AI Is Even Worse. Bloomberg.
  • Perera, M. V., & Patel, V. M. (2023). Analyzing Bias in Diffusion-based Face Generation Models. 2023 IEEE International Joint Conference on Biometrics.
  • Postman, N. (2020). Televizyon: Öldüren eğlence (O. Akınhay, Trans.). Ayrıntı Yayınları.
  • Rayyash, H. A. (2024). AI meets comedy: Viewers' reactions to GPT-4 generated humor translation. Ampersand, 12.
  • Saharia, C., & Chan, W. (2022). Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. NIPS '22: Proceedings of the 36th International Conference on Neural Information Processing Systems, 36479–36494.
  • Said, E. W. (1978). Orientalism. Pantheon Books.
  • Saumure, R., De Freitas, J., & Puntoni, S. (2025). Humor as a window into generative AI bias. Scientific Reports, 15, 1326. https://doi.org/10.1038/s41598-024-83384-6
  • Son, G. B. G. (2021). Migration Memes and Social Gaze: An Analysis of Facebook. Social Transformations Journal of the Global South.
  • Vice, J., & Akhtar, N. (2025). Quantifying Bias in Text-to-Image Generative. Institute of Electrical and Electronics Engineers, 1-14.
  • Wiessner, D. (2024). Workday must face novel bias lawsuit over AI screening software. Reuters.
  • Zewe, A. (2022). Can machine-learning models overcome biased datasets? MIT News Office.
  • Zhou, M., Abhishek, V., & Derdenger, T. (2024). Bias in Generative AI. arXiv. https://doi.org/10.48550/arXiv.2403.02726

There are 32 citations in total.

Details

Primary Language Turkish
Subjects Communication Technology and Digital Media Studies
Journal Section Research Articles
Authors

Süleyman Duyar 0000-0002-5036-908X

Publication Date July 25, 2025
Submission Date June 14, 2025
Acceptance Date July 10, 2025
Published in Issue Year 2025 Volume: 2 Issue: 2

Cite

APA Duyar, S. (2025). Yapay Zekâ Üretimi İçeriklerde Yanlılık ve Mizahi Manipülasyon. Kronotop İletişim Dergisi, 2(2), 187-227.