Review Article

Dead Internet Hypothesis: AI, Censorship, and the Decline of Human-Centered Digital Discourse

Year 2025, Issue: Yeni Medya Çalışmaları (New Media Studies), 751-776, 28.09.2025
https://doi.org/10.53791/imgelem.1650961

Abstract

This study explores the “Dead Internet Hypothesis”, which argues that much of today’s online activity is no longer driven by real human interaction but is instead generated, curated, and amplified by artificial intelligence. The hypothesis holds that the internet has shifted from a democratic and participatory space to a corporatized system optimized for engagement, often at the expense of genuine communication. By analyzing AI-generated media, monopolized search engines, recommendation algorithms, and digital echo chambers, the research highlights how online discourse is increasingly shaped by algorithmic logic rather than human agency. Drawing on theoretical frameworks such as Habermas’ Public Sphere, McLuhan’s Medium Theory, Baudrillard’s Hyperreality, and van Dijck’s Algorithmic Culture, the paper critically examines the erosion of free speech, the illusion of pluralism, and the rise of simulated narratives. Case studies involving Google’s search algorithms, Facebook’s EdgeRank, TikTok’s recommendation engine, and AI bot activity on X demonstrate how major platforms prioritize algorithmic visibility over authentic voices. The study concludes by asking whether human agency can be restored in a digital landscape dominated by AI, and proposes possible interventions, including algorithmic transparency, platform decentralization, and AI literacy, as ways to resist the monopolization of communication. Ultimately, the research challenges the belief that the internet remains an open forum for public discourse, suggesting instead that it has become a controlled, artificial environment where AI governs what is seen, shared, and believed.
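
Since the case studies center on engagement-optimized ranking, a minimal sketch may help make that mechanism concrete. The snippet below follows the publicly reported EdgeRank formulation, a per-post sum of affinity × interaction weight × time decay; the data model, the numeric weights, and the half-life constant are illustrative assumptions for this sketch rather than the platform’s actual code.

```python
# Minimal, illustrative EdgeRank-style scorer. It follows the publicly
# reported formulation (sum of affinity x interaction weight x time decay
# per post); all names, weights, and the half-life are assumptions made
# for this sketch, not Facebook's actual implementation.
import time
from dataclasses import dataclass


@dataclass
class Edge:
    """One interaction tying a user to a post (like, comment, share...)."""
    affinity: float    # strength of the viewer-creator relationship, 0..1
    weight: float      # value of the interaction type (e.g., share > like)
    created_at: float  # Unix timestamp of the interaction


def edgerank_score(edges: list[Edge], now: float | None = None,
                   half_life_hours: float = 24.0) -> float:
    """Sum affinity * weight * decay over a post's edges.

    Each edge's contribution halves every `half_life_hours`, so fresh
    engagement dominates regardless of who produced the content.
    """
    now = time.time() if now is None else now
    score = 0.0
    for e in edges:
        age_hours = max(0.0, (now - e.created_at) / 3600.0)
        score += e.affinity * e.weight * 0.5 ** (age_hours / half_life_hours)
    return score


# A feed then simply sorts candidates by score, which is how sheer
# engagement volume can outrank a close but quiet connection.
now = time.time()
posts = {
    "friend_update": [Edge(0.9, 1.0, now - 48 * 3600)],    # close tie, older
    "viral_meme": [Edge(0.2, 3.0, now - 1 * 3600)] * 40,   # weak ties, fresh
}
print(sorted(posts, key=lambda p: edgerank_score(posts[p]), reverse=True))
# -> ['viral_meme', 'friend_update']
```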

References

  • Ananny, M. & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability, New Media & Society, 20(3), 973-989. DOI: https://doi.org/10.1177/1461444816676645
  • Bakshy, E., Messing, S. & Adamic, L. (2015). Exposure to ideologically diverse news and opinion on Facebook, Science, 348(6239), 1130-1132. DOI: https://doi.org/10.1126/science.aaa1160
  • Baudrillard, J. (1994). Simulacra and simulation, University of Michigan Press. (Original work published 1981)
  • Benkler, Y., Faris, R. & Roberts, H. (2018). Network propaganda: Manipulation, disinformation, and radicalization in American politics, Oxford University Press. DOI: https://doi.org/10.1093/oso/9780190923624.001.0001
  • Borsci, S., Lehtola, V., Nex, F. et al. (2022). Embedding artificial intelligence in society: Looking beyond the EU AI master plan using the culture cycle, AI & Society, 38, 1-20. DOI: https://doi.org/10.1007/s00146-021-01383-
  • DiResta, R., Cohn, J. & Gilbard, Z. (2022). The perception gap: How false beliefs about algorithmic systems shape political polarization, Journal of Online Trust and Safety, 1(3), 1-18.
  • Eslami, M., Rickman, A., Vaccaro, K. et al. (2015). “I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in news feeds, Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 153-162. DOI: https://doi.org/10.1145/2702123.2702556
  • European Commission. (2023). Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act), Official Journal of the European Union, L 277, 1-102.
  • Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media, Yale University Press. DOI: https://doi.org/10.12987/9780300235029
  • Guess, A. M., Nyhan, B. & Reifler, J. (2019). Exposure to untrustworthy websites in the 2016 US election, Nature Human Behaviour, 3(9), 920-928. DOI: https://doi.org/10.1038/s41562-019-0729-9
  • Habermas, J. (1989). The structural transformation of the public sphere: An inquiry into a category of bourgeois society, MIT Press.
  • Hines, A., Smith, J. & Liu, Y. (2022). Algorithmic personalization on TikTok: A study of user engagement and content virality, Social Media + Society, 8(2), 1-18. DOI: https://doi.org/10.1177/20563051221107655
  • Knox, J. (2015). Critical education and digital cultures, in M. A. Peters (Ed.), Encyclopedia of Educational Philosophy and Theory, Springer. DOI: https://doi.org/10.1007/978-981-287-532-7_124-1
  • McLuhan, M. (2003). Understanding media: The extensions of man (2nd ed.), MIT Press.
  • Muzumdar, P., Cheemalapati, S., RamiReddy, S. R. et al. (2025). The dead internet theory: A survey on artificial interactions and the future of social media, Asian Journal of Research in Computer Science, 18(1), 67-73. DOI: https://doi.org/10.9734/ajrcos/2025/v18i1549
  • Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism, NYU Press. DOI: https://doi.org/10.2307/j.ctt1pwt9w5
  • Pataranutaporn, P., Danry, V. & Leong, J. (2021). AI-generated characters for supporting personalized learning and well-being, Nature Machine Intelligence, 3(6), 465-473.
  • Pennycook, G., Epstein, Z., Mosleh, M. et al. (2021). Shifting attention to accuracy can reduce misinformation online, Nature, 592(7855), 590-595. DOI: https://doi.org/10.1038/s41586-021-03344-2
  • Perrotta, C. & Selwyn, N. (2022). Artificial intelligence and the affective labor of understanding: The intimate moderation of a language model, New Media & Society.
  • Raza, A. & Aslam, M. W. (2024). Algorithmic curation in Facebook: An investigation into the role of AI in forming political polarization and misinformation in Pakistan, Annals of Human and Social Sciences.
  • Ribeiro, M. H., Ottoni, R., West, R. et al. (2020). Auditing radicalization pathways on YouTube, Proceedings of the National Academy of Sciences, 117(32), 19103-19121. DOI: https://doi.org/10.1073/pnas.1910980117
  • Samuels, M. G. (2012). Review: The filter bubble: What the Internet is hiding from you by Eli Pariser, Issues in Information Systems, 8(2). DOI: https://doi.org/10.5070/D482011835
  • StatCounter. (2024). Search engine market share worldwide, StatCounter GlobalStats. https://gs.statcounter.com/search-engine-market-share
  • Suzor, N. P., West, S. M., Quodling, A. et al. (2019). What do we mean when we talk about transparency? Toward meaningful transparency in commercial content moderation, International Journal of Communication, 13, 1526-1545.
  • Thompson, N. (2018, May 23). Exclusive: Facebook opens up about false news, Wired. https://www.wired.com/story/exclusive-facebook-opens-up-about-false-news/
  • Tüfekçi, Z. (2015). Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency, Colorado Technology Law Journal, 13, 203-218.
  • Tüfekçi, Z. (2018, January 16). It’s the (democracy-poisoning) golden age of free speech, Wired. https://www.wired.com/story/free-speech-issue-tech-turmoil-new-censorship/ (Accessed: 20.02.2025).
  • Ungless, E. L., Markl, N. & Ross, B. (2025). Experiences of censorship on TikTok across marginalised identities, Proceedings of the International Conference on Web and Social Media (ICWSM 2025), AAAI Press. DOI: https://doi.org/10.48550/arXiv.2407.14164
  • Vaidhyanathan, S. (2018). Antisocial media: How Facebook disconnects us and undermines democracy, Oxford University Press. DOI: https://doi.org/10.1093/oso/9780190056544.001.0001
  • van Dijck, J. (2013). The culture of connectivity: A critical history of social media, Oxford University Press. DOI: https://doi.org/10.1093/acprof:oso/9780199970773.001.0001
  • Vosoughi, S., Roy, D. & Aral, S. (2018). The spread of true and false news online, Science, 359(6380), 1146-1151. DOI: https://doi.org/10.1126/science.aap9559
  • Yücedağ, A. & Özkul, D. (2023). Yapay Zekâ ve Habercilik Pratikleri: Anadolu Ajansı Örneği [Artificial Intelligence and Journalism Practices: The Case of Anadolu Agency], İmgelem, 7(2), 272-295. DOI: https://doi.org/10.53791/imgelem.1636282
  • Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power, PublicAffairs.


Details

Primary Language: English
Subjects: Sociology of Communication
Section: Articles
Authors

Çağdaş Öğüç 0000-0003-3832-1152

Publication Date: September 28, 2025
Submission Date: March 4, 2025
Acceptance Date: July 24, 2025
Published in Issue: Year 2025, Issue: Yeni Medya Çalışmaları (New Media Studies)

How to Cite

APA: Öğüç, Ç. (2025). Dead Internet Hypothesis: AI, Censorship, and the Decline of Human-Centered Digital Discourse. İmgelem (Yeni Medya Çalışmaları), 751-776. https://doi.org/10.53791/imgelem.1650961

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.