Research Article

Perception Management and Artificial Intelligence: A New Era in Media Manipulation

Year 2026, Issue 41, 117–136, 25.01.2026
https://doi.org/10.54600/igdirsosbilder.1637527

Abstract

This article examines the transformative effects of artificial intelligence (AI) on media and perception management, analyzing its benefits, ethical challenges, and societal impacts. Through its capacity for big data analysis and personalized message generation, AI offers broad possibilities for media manipulation. At the same time, algorithm-driven disinformation and echo chambers heighten ethical concerns and threaten democratic processes. The study critiques these dilemmas, highlighting AI's potential to deepen social inequalities and erode public trust. Drawing on qualitative analysis and case studies, it examines in detail both the opportunities AI offers in the media field and the risks it introduces. The findings demonstrate the necessity of transparent and accountable AI systems and underscore the need for governance frameworks and greater public awareness. These opportunities enable media organizations and governments to reach target audiences more effectively; on the other hand, problems such as algorithmic bias, privacy violations, and information pollution call the ethical use of AI into question. In this context, the study also assesses the potential effects on society, stressing that the principles of transparency and accountability are vital for the fair and responsible use of AI applications. In light of ethical principles, the article aims to prevent the potential misuse of AI and to safeguard democratic values.

References

  • Abrahamyan, S. A., & Banshchikova, M. A. (2020). Peculiarities of argumentative strategies of modern English political discourse. In O. Magirovskaya (Ed.), Functional approach to professional discourse exploration in linguistics (pp. 165–198). Russia: Springer Publications.
  • Acemoğlu, D., Makhdoumi, A., Malekian, A., & Ozdaglar, A. (2019). Too much data: Prices and inefficiencies in data markets (No. w26296). National Bureau of Economic Research. https://doi.org/10.3386/w26296
  • Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236. https://doi.org/10.1257/jep.31.2.211
  • Areeb, Q. M., Nadeem, M., Sohail, S. S., Imam, R., Doctor, F., Himeur, Y., ... & Amira, A. (2023). Filter bubbles in recommender systems: Fact or fallacy—A systematic review. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 13(6), e1512. https://doi.org/10.48550/arxiv.2307.01221
  • Bessi, A., & Ferrara, E. (2016). Social bots distort the 2016 US Presidential election online discussion. First Monday, 21(11).
  • Bienvenue, E. (2020). Computational propaganda: Political parties, politicians, and political manipulation on social media [Book review]. International Affairs, 96(2), 525–527. https://doi.org/10.1093/ia/iiaa018
  • Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018). "It's reducing a human being to a percentage": Perceptions of justice in algorithmic decisions. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–14). Association for Computing Machinery. https://doi.org/10.1145/3173574.3173951
  • Bongard-Blanchy, K., Rossi, A., Rivas, S., Doublet, S., Koenig, V., & Lenzini, G. (2021). “I am definitely manipulated, even when I am aware of it. It’s ridiculous!” Dark patterns from the end-user perspective. In Proceedings of the 2021 ACM Designing Interactive Systems Conference (pp. 763–776). Association for Computing Machinery. https://doi.org/10.1145/3461778.3462086
  • Brennen, J., Howard, P. N., & Nielsen, R. K. (2018). An industry-led debate: How UK media cover artificial intelligence (Report). Reuters Institute for the Study of Journalism. https://ora.ox.ac.uk/objects/uuid:02126b4c-f4f9-4582-83a0-f8a9d9a65079
  • Brito, K., Paula, N., Fernandes, M., & Meira, S. (2019). Social media and presidential campaigns: Preliminary results of the 2018 Brazilian presidential election. In Proceedings of the 20th Annual International Conference on Digital Government Research (pp. 332–341). Association for Computing Machinery.
  • Brown, A. J. (2020). "Should I stay, or should I leave?": Exploring (dis)continued Facebook use after the Cambridge Analytica scandal. Social Media + Society, 6(1). https://doi.org/10.1177/2056305120913884
  • Chan‐Olmsted, S. M. (2019). A review of artificial intelligence adoptions in the media industry. International Journal on Media Management, 21(3–4), 193–215. https://doi.org/10.1080/14241277.2019.1695619
  • Chen, J. (2022). Research on the echo chamber effect. In 2021 International Conference on Public Art and Human Development (ICPAHD 2021) (pp. 874–877). Atlantis Press. https://doi.org/10.2991/assehr.k.220110.165
  • Chesney, R., & Citron, D. (2019). Deepfakes and the new disinformation war: The coming age of post-truth geopolitics. Foreign Affairs, 98(1), 147–155.
  • Cinelli, M., De Francisci Morales, G., Galeazzi, A., Quattrociocchi, W., & Starnini, M. (2021). The echo chamber effect on social media. Proceedings of the National Academy of Sciences, 118(9), Article e2023301118. https://doi.org/10.1073/pnas.2023301118
  • Creemers, R. (2018). Disrupting the Chinese state: New actors and new factors. Asiascape: Digital Asia, 5(3), 169–197.
  • Çelik, N. (2024). Yeni medyada bilginin üretimi ve paylaşımı sorunsalı. Içinde F. Ayaz & B. Taşdelen (Eds.), Bir yaşam alanı olarak dijital medya: Kuramlar, uygulamalar, tartışmalar (s. 55-72). Eğitim Yayınevi.
  • Dal, E. P., & Erdoğan, E. (2021). Medya dezenformasyonu sözlüğü - Temel kavramlar. NATO.
  • Das, A., & Schroeder, R. (2020). Online disinformation in the run-up to the Indian 2019 election. Information, Communication & Society, 24(12), 1762–1778. https://doi.org/10.1080/1369118x.2020.1736123
  • De-Lima-Santos, M., Yeung, W. N., & Dodds, T. (2024). Guiding the way: A comprehensive examination of AI guidelines in global media. Springer Nature. https://doi.org/10.1007/s00146-024-01973-5
  • Devereaux, A., & Peng, L. (2020). Give us a little social credit: To design or to discover personal ratings in the era of big data. Journal of Institutional Economics, 16(3), 369–387.
  • Faishal, M., Mathew, S., Neikha, K., Pusa, K., & Zhimomi, T. (2023). The future of work: AI, automation, and the changing dynamics of developed economies. World Journal of Advanced Research and Reviews, 18(3), 620–629. https://doi.org/10.30574/wjarr.2023.18.3.1086
  • Fildes, R., Kolassa, S., & Ma, S. (2022). Post-script—Retail forecasting: Research and practice. International Journal of Forecasting, 38(4), 1319–1324. https://doi.org/10.1016/j.ijforecast.2021.09.012
  • Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28, 689–707.
  • Fürsich, E. (2010). Media and the representation of others. International Social Science Journal, 61(199), 113–130.
  • Gerke, S., Minssen, T., & Cohen, G. (2020, January 1). Ethical and legal challenges of artificial intelligence-driven healthcare. In Artificial intelligence in healthcare (pp. 295–336). Elsevier. https://doi.org/10.1016/b978-0-12-818438-7.00012-5
  • Guess, A. M., Lerner, M., Lyons, B., Montgomery, J., Nyhan, B., Reifler, J., & Sircar, N. (2020). A digital media literacy intervention increases discernment between mainstream and false news in the United States and India. Proceedings of the National Academy of Sciences, 117(27), 15536–15545. https://doi.org/10.1073/pnas.1920498117
  • Gui, X., & Xu, Y. (2019). Research on the current situation of the application of artificial intelligence in media field and countermeasures. 2019 International Conference on Mechanical, Control and Computer Engineering (ICMCCE). https://doi.org/10.1109/icmcce48743.2019.00104
  • Hansen, M. R., Roca-Sales, M., Keegan, J. M., & King, G. (2017). Artificial intelligence: Practice and implications for journalism. Tow Center for Digital Journalism. https://doi.org/10.7916/d8x92prd
  • Hermann, E. (2021). Artificial intelligence and mass personalization of communication content—An ethical and literacy perspective. New Media & Society, 24(5), 1258–1277. https://doi.org/10.1177/14614448211022702
  • Horvitz, E. (2022). On the horizon: Interactive and compositional deepfakes. Proceedings of the ACM Multimedia Conference, 1–10. https://doi.org/10.1145/3536221.3558175
  • Hu, M. (2020). Cambridge Analytica’s black box. Big Data & Society, 7(2), Article 205395172093809. https://doi.org/10.1177/2053951720938091
  • Jalli, N., Jalli, N., & Idris, I. (2019). Fake news and elections in two Southeast Asian nations: A comparative study of Malaysia general election 2018 and Indonesia presidential election 2019. Proceedings of the International Conference on Decision Economics and Security Applications (ICDESA-19). https://doi.org/10.2991/icdesa-19.2019.30
  • Jha, D., Rauniyar, A., Srivastava, A., Hagos, D. H., Tomar, N. K., Sharma, V., Keleş, E., Zhang, Z., Demir, U., Topcu, A. E., Yazidi, A., Håkegård, J. E., & Bağcı, U. (2023). Ensuring trustworthy medical artificial intelligence through ethical and philosophical principles. arXiv Preprint. https://doi.org/10.48550/arxiv.2304.11530
  • Jiang, B., Karami, M., Cheng, L., Black, T., & Liu, H. (2021). Mechanisms and attributes of echo chambers in social media. arXiv Preprint, arXiv:2106.05401. https://doi.org/10.48550/arxiv.2106.05401
  • Jung, H. M. (2009). Information manipulation through the media. Journal of Media Economics, 22(4), 188–210. https://doi.org/10.1080/08997760903375886
  • Koplin, J. (2023). Dual-use implications of AI text generation. Ethics and Information Technology, 25(2), 1–19. https://doi.org/10.1007/s10676-023-09703-z
  • Kshetri, N. (2014). Big data’s impact on privacy, security and consumer welfare. Telecommunications Policy, 38(11), 1134–1145. https://doi.org/10.1016/j.telpol.2014.10.002
  • Lam, T. (2021). The people’s algorithms: Social credits and the rise of China’s big (br)other. In Artificial intelligence and society (pp. 71–95). Springer Nature. https://doi.org/10.1007/978-3-030-78201-6_3
  • Li, X., Chen, L., & Wu, D. (2023). Adversary for social good: Leveraging adversarial attacks to protect personal attribute privacy. arXiv Preprint. Cornell University. https://doi.org/10.48550/arxiv.2306.02488
  • Lopez, M. G., Porlezza, C., Cooper, G., Makri, S., MacFarlane, A., & Missaoui, S. (2022). A question of design: Strategies for embedding AI-driven tools into journalistic work routines. Digital Journalism, 11(3), 484–503. https://doi.org/10.1080/21670811.2022.2043759
  • Mazurczyk, W., Lee, D., & Vlachos, A. (2023). Disinformation 2.0 in the age of AI: A cybersecurity perspective. arXiv Preprint. Cornell University. https://doi.org/10.48550/arxiv.2306.05569
  • Miura, S. (2019). Manipulated news model: Electoral competition and mass media. Games and Economic Behavior, 113, 306–338.
  • Moura, M., & Michelson, M. R. (2017). WhatsApp in Brazil: Mobilising voters through door-to-door and personal messages. Internet Policy Review, 6(4), 1–18.
  • Mulla, I. (2024). UK: Anti-Muslim mob attacks Southport Mosque after misinformation campaign. Middle East Eye. Retrieved August 6, 2024, from https://www.middleeasteye.net/news/uk-far-right-attacks-southport-mosque-after-misinformation-campaign
  • Mwangi, E. W., Gachahi, M. W., & Ndung’u, C. W. (2019). The role of mass media as a socialisation agent in shaping behaviour of primary school pupils in Thika Sub-County, Kenya. Pedagogical Research, 4(4), em0048. https://doi.org/10.29333/pr/5950
  • Nguyen, C. T. (2020). Echo chambers and epistemic bubbles. Episteme, 17(2), 141–161. https://doi.org/10.1017/epi.2018.32
  • Nguyen, D., & Hekman, E. (2022). The news framing of artificial intelligence: A critical exploration of how media discourses make sense of automation. AI & Society, 39(2), 437–451. https://doi.org/10.1007/s00146-022-01511-1
  • Noble, S. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press. https://doi.org/10.18574/nyu/9781479833641.001.0001
  • Pachegowda, C. (2024). The global impact of AI-artificial intelligence: Recent advances and future directions, a review. arXiv Preprint. Cornell University. https://doi.org/10.48550/arxiv.2401.12223
  • Palomares, I., Martínez‐Cámara, E., Montes, M. F., García-Moral, P., Chiachío, M., Chiachío, J., Alonso, S., Melero, F. J., Molina, D., Fernández, B. C., Santaella, C. M., Marchena, R., Vargas, J. P. D., & Herrera, F. (2021). A panoramic view and SWOT analysis of artificial intelligence for achieving the sustainable development goals by 2030: Progress and prospects. Applied Intelligence, 51(9), 6497–6527. https://doi.org/10.1007/s10489-021-02264-y
  • Paris, B., & Donovan, J. (2019). Deepfakes and cheap fakes. Data & Society.
  • Piaia, V., & Alves, M. (2020). Abrir la caja negra: Análisis exploratorio de la red de Bolsonaro en WhatsApp. Intercom: Revista Brasileira de Ciências da Comunicação, 43, 135–154.
  • Ratchford, B. T. (2019). The impact of digital innovations on marketing and consumers. In Review of Marketing Research (Vol. 16, pp. 35–61). Emerald Publishing Limited. https://doi.org/10.1108/s1548-643520190000016005
  • Rehman, I. U. (2019). Facebook-Cambridge Analytica data harvesting: What you need to know. Library Philosophy and Practice. Retrieved from https://digitalcommons.unl.edu/cgi/viewcontent.cgi?article=5833&context=libphilprac
  • Robert, L., Bansal, G., & Lütge, C. (2020). ICIS 2019 SIGHCI workshop panel report: Human–computer interaction challenges and opportunities for fair, trustworthy and ethical artificial intelligence. AIS Transactions on Human-Computer Interaction, 12(2), 96–108. https://doi.org/10.17705/1thci.00130
  • Roychowdhury, S., Li, W., Alareqi, E., Pandita, A., Liu, A., & Söderberg, J. (2020). Categorizing online shopping behavior from cosmetics to electronics: An analytical framework. arXiv Preprint. Cornell University. https://doi.org/10.48550/arxiv.2010.02503
  • Rubin, F. S., Almeida, Y. L. D., Alvim, A. C. F., Dias, V. F., & Santos, R. P. D. (2021). Analysis of the first-round of 2018 government election for the state of Rio de Janeiro based on Twitter. In Proceedings of the XVII Brazilian Symposium on Information Systems (pp. 1–8).
  • Sagvekar, V., & Sharma, P. (2021). Study on product opinion analysis for customer satisfaction on e-commerce websites. Advances in Parallel Computing, 36, 1–10. Elsevier BV. https://doi.org/10.3233/apc210206
  • Schneble, C. O., Elger, B. S., & Shaw, D. (2018). The Cambridge Analytica affair and internet‐mediated research. EMBO Reports, 19(8), Article e46579. https://doi.org/10.15252/embr.201846579
  • Shao, C., Ciampaglia, G. L., Varol, O., Flammini, A., & Menczer, F. (2017). The spread of misinformation by social bots. arXiv Preprint. Cornell University. https://arxiv.org/abs/1707.07592v3
  • Shao, C., Ciampaglia, G. L., Varol, O., Yang, K. C., Flammini, A., & Menczer, F. (2018). The spread of low-credibility content by social bots. Nature Communications, 9, Article 4787. https://doi.org/10.1038/s41467-018-06930-7
  • Shen, Y. (2020). Application of big data technology in e-commerce. Journal of Physics: Conference Series, 1682(1), Article 012075. https://doi.org/10.1088/1742-6596/1682/1/012075
  • Shoaib, M. R., Wang, Z., Ahvanooey, M. T., & Zhao, J. (2023). Deepfakes, misinformation, and disinformation in the era of frontier AI, generative AI, and large AI models. Proceedings of the 2023 International Conference on Computer Applications (ICCA 2023). https://doi.org/10.1109/icca59364.2023.10401723
  • Shrirame, V., Sabade, J., Soneta, H., & Vijayalakshmi, M. (2020). Consumer behavior analytics using machine learning algorithms. Proceedings of the 2020 International Conference on Electronics, Computing and Communication Technologies (CONECCT 2020). https://doi.org/10.1109/conecct50063.2020.9198562
  • Sorlin, S. (2017). The pragmatics of manipulation: Exploiting im/politeness theories. Journal of Pragmatics, 121, 132–146. https://doi.org/10.1016/j.pragma.2017.10.002
  • Štefko, R., Frankovský, M., Kovaľová, J., Birknerová, Z., & Zbihlejová, L. (2020). Assessment of sellers’ manipulative behaviour by customers and sellers in the context of generations X, Y and Z. In 11th International Scientific Conference “Business and Management 2020” (pp. 1–10). Vilnius, Lithuania. https://doi.org/10.3846/bm.2020.530
  • Susser, D., Roessler, B., & Nissenbaum, H. (2019). Online manipulation: Hidden influences in a digital world. Georgetown Law Technology Review, 4, 1–25. https://doi.org/10.2139/ssrn.3306006
  • Svetlova, E. (2022). AI ethics and systemic risks in finance. AI & Ethics, 2(4), 713–725. https://doi.org/10.1007/s43681-021-00129-1
  • Tkácová, H., Pavlíková, M., Stranovská, E., & Králik, R. (2023). Individual (non) resilience of university students to digital media manipulation after COVID-19 (case study of Slovak initiatives). International Journal of Environmental Research and Public Health, 20(2), Article 1605. https://doi.org/10.3390/ijerph20021605
  • Törnberg, P. (2018). Echo chambers and viral misinformation: Modeling fake news as complex contagion. PLoS One, 13(9), e0203958. https://doi.org/10.1371/journal.pone.0203958
  • Trattner, C., Jannach, D., Motta, E., Costera Meijer, I., Diakopoulos, N., Elahi, M., ... & Moe, H. (2022). Responsible media technology and AI: Challenges and research directions. AI and Ethics, 2(4), 585–594. https://doi.org/10.1007/s43681-021-00126-4
  • Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., & Floridi, L. (2021). The ethics of algorithms: Key problems and solutions. AI & Society, 37(1), 215–230. https://doi.org/10.1007/s00146-021-01154-8
  • Tucker, J. A., Guess, A., Barberá, P., Vaccari, C., Siegel, A., Sanovich, S., & Nyhan, B. (2018). Social media, political polarization, and political disinformation: A review of the scientific literature. William and Flora Hewlett Foundation.
  • Unver, H. A. (2018). Artificial intelligence, authoritarianism and the future of political systems. EDAM Research Reports. Retrieved from https://ssrn.com/abstract=3331635
  • Varghese, M., Raj, S., & Venkatesh, V. (2022). Influence of AI in human lives. arXiv Preprint. Cornell University. https://doi.org/10.48550/arxiv.2212.12305
  • Vicario, M. D., Vivaldo, G., Bessi, A., Zollo, F., Scala, A., Caldarelli, G., & Quattrociocchi, W. (2016). Echo chambers: Emotional contagion and group polarization on Facebook. Scientific Reports, 6(1), Article 37825. https://doi.org/10.1038/srep37825
  • Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151.
  • Wang, Y. (2021). When artificial intelligence meets educational leaders’ data-informed decision-making: A cautionary tale. Studies in Educational Evaluation, 69, Article 100872. https://doi.org/10.1016/j.stueduc.2020.100872
  • Wei, M., & Zhou, Z. (2022). AI ethics issues in real world: Evidence from AI incident database. arXiv Preprint. Cornell University. https://doi.org/10.48550/arxiv.2206.07635
  • White, A. (2005). Truth, honesty and spin. Democratisation, 12(5), 651–667.
  • Whitehead, J. (2024). Top prosecutor considers terrorism charges over riots, as PM calls emergency meeting. BBC News. Retrieved August 6, 2024, from https://www.bbc.com/news/live/cw5yyynpwnzt
  • Zhai, Y., Yan, J., Zhang, H., & Lu, W. (2020). Tracing the evolution of AI: Conceptualization of artificial intelligence in mass media discourse. Information Discovery and Delivery, 48(3), 137–149. https://doi.org/10.1108/IDD-01-2020-0007
  • Zhou, A. (2017). #Republic: Divided democracy in the age of social media. Journal of Communication, 67(6), E12–E14. https://doi.org/10.1111/jcom.12344
  • Zuboff, S. (2023). The age of surveillance capitalism. In Social theory re-wired (pp. 203–213). Routledge.

Perception Management and Artificial Intelligence: A New Era in Media Manipulation


Abstract

This article examines the transformative impact of artificial intelligence (AI) on media and perception management, analyzing its advantages, ethical dilemmas, and societal implications. AI enables significant opportunities for media manipulation through big data analysis and personalized messaging. However, its integration with disinformation and echo chambers heightens ethical concerns and threatens democratic processes. The study critiques these challenges, emphasizing how AI can exacerbate social inequalities and weaken public trust. Using qualitative analysis and case studies, it explores both the opportunities AI provides in media and the risks it entails. The findings underscore the urgent need for transparent and accountable AI systems supported by robust governance frameworks and enhanced public awareness. The article advocates adopting ethical AI principles that promote fairness, respect human rights, and prioritize public trust, and it calls for proactive measures to prevent the misuse of AI in shaping societal narratives, with the aim of protecting democratic values and fostering responsible applications of AI in media. Managing the ethical and societal consequences of AI requires enhanced oversight, transparency, and ethical frameworks aligned with democratic principles and the broader public interest.

  • Roychowdhury, S., Li, W., Alareqi, E., Pandita, A., Liu, A., & Söderberg, J. (2020). Categorizing online shopping behavior from cosmetics to electronics: An analytical framework. arXiv Preprint. Cornell University. https://doi.org/10.48550/arxiv.2010.02503
  • Rubin, F. S., Almeida, Y. L. D., Alvim, A. C. F., Dias, V. F., & Santos, R. P. D. (2021). Analysis of the first-round of 2018 government election for the state of Rio de Janeiro based on Twitter. In Proceedings of the XVII Brazilian Symposium on Information Systems (pp. 1–8).
  • Sagvekar, V., & Sharma, P. (2021). Study on product opinion analysis for customer satisfaction on e-commerce websites. Advances in Parallel Computing, 36, 1–10. Elsevier BV. https://doi.org/10.3233/apc210206
  • Schneble, C. O., Elger, B. S., & Shaw, D. (2018). The Cambridge Analytica affair and internet‐mediated research. EMBO Reports, 19(8), Article e46579. https://doi.org/10.15252/embr.201846579
  • Shao, C., Ciampaglia, G. L., Varol, O., Flammini, A., & Menczer, F. (2017). The spread of misinformation by social bots. arXiv Preprint. Cornell University. https://doi.org/10.48550/arXiv.1707.07592
  • Shao, C., Ciampaglia, G. L., Varol, O., Yang, K. C., Flammini, A., & Menczer, F. (2018). The spread of low-credibility content by social bots. Nature Communications, 9, Article 4787. https://doi.org/10.1038/s41467-018-06930-7
  • Shen, Y. (2020). Application of big data technology in e-commerce. Journal of Physics: Conference Series, 1682(1), Article 012075. https://doi.org/10.1088/1742-6596/1682/1/012075
  • Shoaib, M. R., Wang, Z., Ahvanooey, M. T., & Zhao, J. (2023). Deepfakes, misinformation, and disinformation in the era of frontier AI, generative AI, and large AI models. Proceedings of the 2023 International Conference on Computer Applications (ICCA 2023). https://doi.org/10.1109/icca59364.2023.10401723
  • Shrirame, V., Sabade, J., Soneta, H., & Vijayalakshmi, M. (2020). Consumer behavior analytics using machine learning algorithms. Proceedings of the 2020 International Conference on Electronics, Computing and Communication Technologies (CONECCT 2020). https://doi.org/10.1109/conecct50063.2020.9198562
  • Sorlin, S. (2017). The pragmatics of manipulation: Exploiting im/politeness theories. Journal of Pragmatics, 121, 132–146. https://doi.org/10.1016/j.pragma.2017.10.002
  • Štefko, R., Frankovský, M., Kovaľová, J., Birknerová, Z., & Zbihlejová, L. (2020). Assessment of sellers’ manipulative behaviour by customers and sellers in the context of generations X, Y and Z. In 11th International Scientific Conference “Business and Management 2020” (pp. 1–10). Vilnius, Lithuania. https://doi.org/10.3846/bm.2020.530
  • Susser, D., Roessler, B., & Nissenbaum, H. (2019). Online manipulation: Hidden influences in a digital world. Georgetown Law Technology Review, 4, 1–25. https://doi.org/10.2139/ssrn.3306006
  • Svetlova, E. (2022). AI ethics and systemic risks in finance. AI & Ethics, 2(4), 713–725. https://doi.org/10.1007/s43681-021-00129-1
  • Tkácová, H., Pavlíková, M., Stranovská, E., & Králik, R. (2023). Individual (non) resilience of university students to digital media manipulation after COVID-19 (case study of Slovak initiatives). International Journal of Environmental Research and Public Health, 20(2), Article 1605. https://doi.org/10.3390/ijerph20021605
  • Törnberg, P. (2018). Echo chambers and viral misinformation: Modeling fake news as complex contagion. PLoS One, 13(9), e0203958. https://doi.org/10.1371/journal.pone.0203958
  • Trattner, C., Jannach, D., Motta, E., Costera Meijer, I., Diakopoulos, N., Elahi, M., ... & Moe, H. (2022). Responsible media technology and AI: Challenges and research directions. AI and Ethics, 2(4), 585–594. https://doi.org/10.1007/s43681-021-00126-4
  • Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., & Floridi, L. (2021). The ethics of algorithms: Key problems and solutions. AI & Society, 37(1), 215–230. https://doi.org/10.1007/s00146-021-01154-8
  • Tucker, J. A., Guess, A., Barberá, P., Vaccari, C., Siegel, A., Sanovich, S., & Nyhan, B. (2018). Social media, political polarization, and political disinformation: A review of the scientific literature. William and Flora Hewlett Foundation.
  • Unver, H. A. (2018). Artificial intelligence, authoritarianism and the future of political systems. EDAM Research Reports. Retrieved from https://ssrn.com/abstract=3331635
  • Varghese, M., Raj, S., & Venkatesh, V. (2022). Influence of AI in human lives. arXiv Preprint. Cornell University. https://doi.org/10.48550/arxiv.2212.12305
  • Vicario, M. D., Vivaldo, G., Bessi, A., Zollo, F., Scala, A., Caldarelli, G., & Quattrociocchi, W. (2016). Echo chambers: Emotional contagion and group polarization on Facebook. Scientific Reports, 6(1), Article 37825. https://doi.org/10.1038/srep37825
  • Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151.
  • Wang, Y. (2021). When artificial intelligence meets educational leaders’ data-informed decision-making: A cautionary tale. Studies in Educational Evaluation, 69, Article 100872. https://doi.org/10.1016/j.stueduc.2020.100872
  • Wei, M., & Zhou, Z. (2022). AI ethics issues in real world: Evidence from AI incident database. arXiv Preprint. Cornell University. https://doi.org/10.48550/arxiv.2206.07635
  • White, A. (2005). Truth, honesty and spin. Democratization, 12(5), 651–667.
  • Whitehead, J. (2024). Top prosecutor considers terrorism charges over riots, as PM calls emergency meeting. BBC News. Retrieved August 6, 2024, from https://www.bbc.com/news/live/cw5yyynpwnzt
  • Zhai, Y., Yan, J., Zhang, H., & Lu, W. (2020). Tracing the evolution of AI: Conceptualization of artificial intelligence in mass media discourse. Information Discovery and Delivery, 48(3), 137–149. https://doi.org/10.1108/IDD-01-2020-0007
  • Zhou, A. (2017). #Republic: Divided democracy in the age of social media. Journal of Communication, 67(6), E12–E14. https://doi.org/10.1111/jcom.12344
  • Zuboff, S. (2023). The age of surveillance capitalism. In Social theory re-wired (pp. 203–213). Routledge.
There are 85 references in total.

Details

Primary Language: Turkish
Subjects: Communication Technology and Digital Media Studies
Section: Research Article
Authors

Nuriye Çelik (ORCID: 0000-0001-6368-1956)

Submission Date: February 11, 2025
Acceptance Date: December 3, 2025
Publication Date: January 25, 2026
Published in Issue: Year 2026, Issue 41

How to Cite

APA Çelik, N. (2026). Algı Yönetimi ve Yapay Zekâ: Medya Manipülasyonunda Yeni Dönem. Iğdır Üniversitesi Sosyal Bilimler Dergisi, 41, 117-136. https://doi.org/10.54600/igdirsosbilder.1637527