Research Article

Artificial Intelligence on Journalism: Algorithmic Power Targeting the Crack in Memory in the Example of ‘Religious Wave’ Terrorist Attacks

Year 2024, Issue: Special Issue 1, 35-52, 28.11.2024
https://doi.org/10.47951/mediad.1523167

Abstract

Digital technology, which shapes the historical period we are living through, has become a partner in the creation of an unprecedentedly ambiguous world through its attachment to the forms of everyday life. This ambiguity into which the world is rapidly being drawn becomes most visible in journalism practices: as the sector encounters new forms of journalism, the news reader's responsibility to be literate in truth grows heavier. Indeed, artificial intelligence is now visible throughout the news production process, from access to sources to the production of the text. This study arises from the need to think the news-power-technology relationship together with the memory-distorting dynamics of global power structures. Aiming to point out the possible risks of AI-generated news in the context of the distortion of memory, the study focuses on ‘religious wave’ terrorist attacks, which are thought to have the potential to show this connection clearly. Critical discourse analysis was applied to a sample consisting of news texts produced by the ChatGPT algorithm about the 9/11, 7/7, 2015 Paris, and Christchurch attacks. The findings indicate that the discourse of global power is repeated in AI-generated news on religion-based terrorist attacks, and that modern technology targeting social memory has become an instrument that reproduces the ideology of power. On this basis, news audiences, who occupy the positions of both producer and consumer, may be advised to develop an attentive reading practice toward news texts generated by AI algorithms.

References

  • Anderson, C. (2011). Deliberative, agonistic, and algorithmic audiences: Journalism’s vision of its public in an age of audience transparency. International Journal of Communication, 5, 529-547.
  • Barocas, S., Hood, S., & Ziewitz, M. (2013). Governing algorithms: A provocation piece. In Governing Algorithms: A Conference on Computation, Automation, and Control. New York University. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2245322
  • Beer, D. (2009). Power through the algorithm? Participatory web cultures and the technological unconscious. New Media & Society, 11(6), 985-1002. https://doi.org/10.1177/1461444809336551
  • Blommaert, J., & Bulcaen, C. (2000). Critical discourse analysis. Annual Review of Anthropology, 29(1), 447-466. https://doi.org/10.1146/annurev.anthro.29.1.447
  • Chang, Y., Wang, X., Wang, J., Wu, Y., Yang, L., Zhu, K., Chen, H., Yi, X., Wang, C., Wang, Y., Ye, W., Zhang, Y., Chang, Y., Yu, P. S., Yang, Q., & Xie, X. (2024). A survey on evaluation of large language models. ACM Transactions on Intelligent Systems and Technology, 15(3), 1-45. https://doi.org/10.1145/3641289
  • Deleuze, G. (1992). Postscript on the societies of control. October, 59, 3-7.
  • Ferrara, E. (2023). Should ChatGPT be biased? Challenges and risks of bias in large language models. First Monday, 28(11). https://doi.org/10.5210/fm.v28i11.13346
  • Ferrara, E. (2024a). Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies. Sci, 6(1), 3. https://doi.org/10.3390/sci6010003
  • Ferrara, E. (2024b). GenAI against humanity: Nefarious applications of generative artificial intelligence and large language models. Journal of Computational Social Science, 7(1), 549-569. https://doi.org/10.1007/s42001-024-00250-1
  • Ferrara, E. (2024c). The butterfly effect in artificial intelligence systems: Implications for AI bias and fairness. Machine Learning with Applications, 15, 100525.
  • Fioriglio, G. (2015). Freedom, authority and knowledge on line: The dictatorship of the algorithm. Revista Internacional Pensamiento Politico, 10, 395-410.
  • Foucault, M. (1977). Discipline & punish: The birth of the prison (A. Sheridan, Trans.). Vintage Books.
  • Geiger, R. S. (2009). Does Habermas understand the internet? The algorithmic construction of the blogo/public sphere. Gnovis: A Journal of Communication, Culture, and Technology, 10(1), 1-29.
  • Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. Boczkowski, & K. Foot (Eds.), Media technologies. MIT Press.
  • Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6), 854-871. https://doi.org/10.1080/1369118X.2019.1573914
  • Hovy, D., & Prabhumoye, S. (2021). Five sources of bias in natural language processing. Language and Linguistics Compass, 15(8), e12432. https://doi.org/10.1111/lnc3.12432
  • Kutchel, D. (2023, November 29). Google, “gatekeeper of the internet”, under scrutiny. Law Society Journal. https://lsj.com.au/articles/google-gatekeeper-of-the-internet-under-scrutiny/
  • Lash, S. (2007). Power after hegemony: Cultural studies in mutation? Theory, Culture & Society, 24(3), 55-78. https://doi.org/10.1177/0263276407075956
  • Latour, B. (1993). The pasteurization of France. Harvard University Press.
  • Liu, Y., Yao, Y., Ton, J.-F., Zhang, X., Guo, R., Cheng, H., Klochkov, Y., Taufiq, M. F., & Li, H. (2023). Trustworthy LLMs: A survey and guideline for evaluating large language models’ alignment (arXiv:2308.05374). arXiv. http://arxiv.org/abs/2308.05374
  • Milosavljević, M., & Vobič, I. (2019). Human still in the Loop: Editors reconsider the ideals of professional journalism through automation. Digital Journalism, 7(8), 1098-1116. https://doi.org/10.1080/21670811.2019.1601576
  • Musiani, F. (2013). Governance by algorithms. Internet Policy Review, 2(3), 1-8.
  • OpenAI. (2024). ChatGPT. https://openai.com/chatgpt/
  • Rapoport, D. C. (2002). The four waves of rebel terror and September 11. Anthropoetics, 8(1), 1-11.
  • Rapoport, D. C. (2013). The four waves of modern terror: International dimensions and consequences. In J. M. Hanhimaki & B. Blumenau (Eds.), An international history of terrorism (pp. 282-310). Routledge.
  • van Dijk, T. A. (1988). News as discourse. Lawrence Erlbaum Associates, Inc.
  • Woolgar, S., & Neyland, D. (2013). Mundane governance: Ontology and accountability. OUP Oxford.



Details

Primary Language: English
Subjects: Communication Studies; Communication and Media Studies (Other)
Section: Research Articles
Authors

Fikriye Çelik 0000-0003-1633-0357

Publication Date: November 28, 2024
Submission Date: July 27, 2024
Acceptance Date: November 16, 2024
Published Issue: Year 2024, Issue: Special Issue 1

How to Cite

APA: Çelik, F. (2024). Artificial Intelligence on Journalism: Algorithmic Power Targeting the Crack in Memory in the Example of ‘Religious Wave’ Terrorist Attacks. Journal of Media and Religion Studies, (Special Issue 1), 35-52. https://doi.org/10.47951/mediad.1523167

Creative Commons License MEDYA VE DİN ARAŞTIRMALARI DERGİSİ (MEDİAD) - JOURNAL OF MEDIA AND RELIGION STUDIES

This journal is licensed under a Creative Commons Attribution 4.0 International License.