TY - JOUR
T1 - Artificial Intelligence on Journalism: Algorithmic Power Targeting the Crack in Memory in the Example of ‘Religious Wave’ Terrorist Attacks
TT - Habercilikte Yapay Zekâ: ‘Dini Dalga’ Terör Saldırıları Örnekleminde Algoritmik İktidarın Bellekteki Çatlağı Hedeflemesi
AU - Çelik, Fikriye
PY - 2024
DA - November
Y2 - 2024
DO - 10.47951/mediad.1523167
JF - Journal of Media and Religion Studies
JO - MEDİAD
PB - Hakan AYDIN
WT - DergiPark
SN - 2636-8811
SP - 35
EP - 52
IS - Special Issue 1
LA - en
AB - Digital technology, which shapes the historical process we are living through, is attached to the forms of everyday life and is complicit in the creation of an increasingly ambiguous world. The picture in which the world is trapped becomes clearest in journalism practices: as the sector becomes acquainted with new forms of journalism, the news reader’s responsibility to be a reader of truth grows. Artificial intelligence is now visible throughout news production, from access to the source to the writing of the text. This study emerged from the necessity of considering the news-power-technology relationship together with the memory-distorting dynamics of global power structures. Aiming to point out the risks of AI-generated news in the context of memory distortion, the study examines the ‘religious wave’ terrorist attacks. Critical discourse analysis was applied to a sample of news texts produced by ChatGPT on the 9/11, 7/7, 2015 Paris and Christchurch attacks. The findings show that the discourse of global power is repeated in AI-generated news on religion-based terrorist attacks and that a technology targeting social memory reproduces the ideology of power. Accordingly, an attentive reading practice can be recommended to the audiences of news emerging from AI algorithms.
KW - Artificial Intelligence Journalism
KW - Collective Memory
KW - ChatGPT
KW - Algorithmic Power
KW - AI Bias
N2 - Digital technology, which determines the shape of the historical process we are experiencing, is attached to the forms of everyday life and is complicit in the creation of a world more ambiguous than ever before. This ambiguous picture in which the world is rapidly becoming trapped grows more distinct in journalism practices; as the sector becomes acquainted with new forms of journalism, the responsibility resting on the news reader’s shoulders to be a literate reader of truth increases further. Indeed, the artificial intelligence factor has become visible in the news production process, from access to the source to the production of the text. This study emerged from the necessity of thinking the news-power-technology relationship together with the memory-distorting dynamics of global power structures. Aiming to point out the possible risks of AI-produced news in the context of the distortion of memory, the study considers the ‘religious wave’ terrorist attacks, which are thought to have the potential to demonstrate this connection clearly. Critical discourse analysis was employed in the research, whose sample consists of the news texts produced by the ChatGPT artificial intelligence algorithm on the 9/11, 7/7, 2015 Paris and Christchurch attacks. The findings lead to the conclusion that the discourse of global power is repeated in the practice of AI reporting on religion-based terrorist attacks, and that modern technology targeting social memory has taken up the position of an instrument that reproduces the ideology of power. Based on this conclusion, the addressees of the news, who occupy the positions of both production and consumption, may be advised to develop an attention-intensive reading practice toward news texts emerging from artificial intelligence algorithms.
CR - Anderson, C. (2011). Deliberative, agonistic, and algorithmic audiences: Journalism’s vision of its public in an age of audience transparency. International Journal of Communication, 5, 529-547.
CR - Barocas, S., Hood, S., & Ziewitz, M. (2013). Governing algorithms: A provocation piece. In Governing Algorithms: A Conference on Computation, Automation, and Control. New York University. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2245322
CR - Beer, D. (2009). Power through the algorithm? Participatory web cultures and the technological unconscious. New Media & Society, 11(6), 985-1002. https://doi.org/10.1177/1461444809336551
CR - Blommaert, J., & Bulcaen, C. (2000). Critical discourse analysis. Annual Review of Anthropology, 29(1), 447-466. https://doi.org/10.1146/annurev.anthro.29.1.447
CR - Chang, Y., Wang, X., Wang, J., Wu, Y., Yang, L., Zhu, K., Chen, H., Yi, X., Wang, C., Wang, Y., Ye, W., Zhang, Y., Chang, Y., Yu, P. S., Yang, Q., & Xie, X. (2024). A survey on evaluation of large language models. ACM Transactions on Intelligent Systems and Technology, 15(3), 1-45. https://doi.org/10.1145/3641289
CR - Deleuze, G. (1992). Postscript on the societies of control. October, 59, 3-7.
CR - Ferrara, E. (2023). Should ChatGPT be biased? Challenges and risks of bias in large language models. arXiv preprint arXiv:2304.03738. https://doi.org/10.5210/fm.v28i11.13346
CR - Ferrara, E. (2024a). Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies. Sci, 6(3), 1-15. https://doi.org/10.3390/sci6010003
CR - Ferrara, E. (2024b). GenAI against humanity: Nefarious applications of generative artificial intelligence and large language models. Journal of Computational Social Science, 7(1), 549-569. https://doi.org/10.1007/s42001-024-00250-1
CR - Ferrara, E. (2024c). The butterfly effect in artificial intelligence systems: Implications for AI bias and fairness. Machine Learning with Applications, 15, 100525.
CR - Fioriglio, G. (2015). Freedom, authority and knowledge on line: The dictatorship of the algorithm. Revista Internacional Pensamiento Politico, 10, 395-410.
CR - Foucault, M. (1977). Discipline & punish: The birth of the prison (A. Sheridan, Trans.). Vintage Books.
CR - Geiger, R. S. (2009). Does Habermas understand the internet? The algorithmic construction of the blogo/public sphere. Gnovis: A Journal of Communication, Culture, and Technology, 10(1), 1-29.
CR - Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. Boczkowski, & K. Foot (Eds.), Media technologies. MIT Press.
CR - Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6), 854-871. https://doi.org/10.1080/1369118X.2019.1573914
CR - Hovy, D., & Prabhumoye, S. (2021). Five sources of bias in natural language processing. Language and Linguistics Compass, 15(8), 1-19. https://doi.org/10.1111/lnc3.12432
CR - Kutchel, D. (2023, November 29). Google, “gatekeeper of the internet”, under scrutiny. Law Society Journal. https://lsj.com.au/articles/google-gatekeeper-of-the-internet-under-scrutiny/
CR - Lash, S. (2007). Power after hegemony: Cultural studies in mutation? Theory, Culture & Society, 24(3), 55-78. https://doi.org/10.1177/0263276407075956
CR - Latour, B. (1993). The pasteurization of France. Harvard University Press.
CR - Liu, Y., Yao, Y., Ton, J.-F., Zhang, X., Guo, R., Cheng, H., Klochkov, Y., Taufiq, M. F., & Li, H. (2023). Trustworthy LLMs: A survey and guideline for evaluating large language models’ alignment (arXiv:2308.05374). arXiv. http://arxiv.org/abs/2308.05374
CR - Milosavljević, M., & Vobič, I. (2019). Human still in the loop: Editors reconsider the ideals of professional journalism through automation. Digital Journalism, 7(8), 1098-1116. https://doi.org/10.1080/21670811.2019.1601576
CR - Musiani, F. (2013). Governance by algorithms. Internet Policy Review, 2(3), 1-8.
CR - OpenAI. (2024). ChatGPT. https://openai.com/chatgpt/
CR - Rapoport, D. C. (2002). The four waves of rebel terror and September 11. Anthropoetics, 8(1), 1-11.
CR - Rapoport, D. C. (2013). The four waves of modern terror: International dimensions and consequences. In J. M. Hanhimaki & B. Blumenau (Eds.), An international history of terrorism (pp. 282-310). Routledge.
CR - van Dijk, T. A. (1988). News as discourse. Lawrence Erlbaum Associates, Inc.
CR - Woolgar, S., & Neyland, D. (2013). Mundane governance: Ontology and accountability. OUP Oxford.
UR - https://doi.org/10.47951/mediad.1523167
L1 - https://dergipark.org.tr/en/download/article-file/4101450
ER -