Review

Deepfake Çağında Güvenlik Sorunsalı (Security Issue in the Age of Deepfakes)

Year 2026, Issue: 31, 73-86, 12.03.2026
https://doi.org/10.32952/communicata.1886262
https://izlik.org/JA57AR46XZ

Abstract

Recent practical advances in artificial intelligence have also given rise to the deepfake phenomenon, which can be regarded as a form of fake news. Deepfakes are highly realistic synthetic digital artifacts, and in recent years numerous deepfake audio, text, and video items have appeared on social media. In particular, because little technical expertise or equipment is required to create deepfakes, such content can easily be produced and distributed online by almost anyone. This study examines deepfakes from multiple perspectives: media and society, media production, media representations, media audiences, gender, law and regulation, and politics. The materials were gathered through document analysis: the core literature on the deepfake phenomenon was reviewed, and technical reports, press releases, legislative proposals, and audiovisual material available on freely accessible internet platforms were analyzed. The different angles chosen for investigating the phenomenon were inspired by intersectionality, an approach that can be used to explore how categories are connected to one another and within themselves, and how they interact at different levels. The findings show that society is not prepared, at any level, to cope with the emergence of deepfakes. That no serious impact has been observed so far is because the technology is still at an early stage of development and countermeasures remain incomplete. Solving this problem will require combining technology, education, training, and governance.

References

  • Alharthi, R., Guthier, B., & Saddik, A. E. (2018). Recognizing human needs during critical events using machine learning powered psychology-based framework. IEEE Access, 6, 58737-58753. DOI: https://doi.org/10.1109/ACCESS.2018.2874032
  • Alpsoy, İ. (2025). Bilgi çarpıtma sürecinde yapay zekânın işlevi. Erciyes İletişim Dergisi, 12(2), 551-568. https://doi.org/10.17680/erciyesiletisim.1636521
  • Avatarify: Avatars for zoom, Skype and other video-conferencing apps, (2020). GitHub, San Francisco, CA, USA, [Online]. Available: https://github.com/alievk/avatarify
  • Bartneck, C. (2013). Robots in the theatre and the media, in Proc. Design Semantics Form Movement (DeSForM), 64-70.
  • Bontridder, N., & Poullet, Y. (2021). The role of artificial intelligence in disinformation. Data & Policy, 3, e32.
  • Bouziane, M., Perrin, H., Cluzeau, A., Mardas, J., & Sadeq, A. (2020). Team Buster.ai at CheckThat! 2020: Insights and recommendations to improve fact-checking. In CLEF (Working Notes) (p. 12).
  • Brown, J. A. (2017). #WheresRey: Feminism, protest, and merchandising sexism in Star Wars: The force awakens. Feminist Media Studies, 18(5), 335-348. DOI: https://doi.org/10.1080/14680777.2017.1313291
  • Busse, K., & Gray, J. (2011). Fan cultures and fan communities, in The Handbook of Media Audiences. Chichester, U.K.: Wiley-Blackwell, 425-443.
  • Çakar, Ü. (2023). Siyasal iletişim aracı olarak sosyal medya reklamlarının incelenmesi: ABD seçimleri örneği. Ö. F. Aslan, & S. S. Atılgan (Editör). İletişim Perspektifinde Sosyal ve Ekonomik Olaylar (247-275). Efe Akademi Yayınları
  • Chesney, R., & Citron, D. (2019). Deepfakes and the new disinformation war: The coming age of post-truth geopolitics. Foreign Affairs, 98(1), 147–153.
  • Cho, C. H., Martens, M. L., Kim, H., & Rodrigue, M. (2011). Astroturfing global warming: It isn’t always greener on the other side of the fence. Journal of Business Ethics, 104(4), 571-587.
  • Çifci, V., & Başarır, M. (2025). Doktor-tıbbi mümessil iletişiminde halkla ilişkiler ortam ve araçlarının rolü: Uygulamalı bir inceleme. Uluslararası Kültürel ve Sosyal Araştırmalar Dergisi, 11(1), 73-88.
  • Clark, R. (2016). Hope in a hashtag: The discursive activism of #WhyIStayed. Feminist Media Studies, 16(5), 788-804. DOI: https://doi.org/10.1080/14680777.2016.1138235
  • Clarke, Y. D. (2019). To combat the spread of disinformation through restrictions on deep-fake video alteration technology. [Online]. Available: https://www.congress.gov/116/bills/hr3230/BILLS-116hr3230ih.pdf
  • Çömlekçi, M. F. (2019). Sosyal medyada dezenformasyon ve haber doğrulama platformlarının pratikleri. Gümüşhane Üniversitesi İletişim Fakültesi Elektronik Dergisi, 7(3), 1549-1563.
  • Couldry, N., & Hepp, A. (2013). Conceptualizing mediatization: Contexts, traditions, arguments. Commun. Theory, 23(3), 191-202.
  • DeepFaceLab: The Leading Software for Creating Deepfakes, (2019). GitHub, San Francisco, CA, USA., Available: https://github.com/iperov/DeepFaceLab
  • Degerstedt, L., & Pelle, S. (2017). More media, more people - On social & multimodal media intelligence. Human IT, 13(3), 54-84.
  • Epstein, S. L. (2015). Wanted: collaborative intelligence. Artificial Intelligence, 221, 36-45. DOI: https://doi.org/10.1016/j.artint.2014.12.006
  • Freidson, E. (2001). Professionalism: The third logic. Chicago, IL, USA: University of Chicago Press.
  • Gölgeli, K. (2025). Yapay zekâ ile reklam tasarımı: Reklamcılara yönelik bir araştırma. İnsan ve Toplum Bilimleri Araştırmaları Dergisi, 14(1), 319-336.
  • Gümüş, A., & Sidal, Ö. (2023). Örgütsel değişim ve yönetimi çerçevesinde dijital dönüşüm ile yapay zekâ teknolojilerinin turizm işletmelerine olan etkileriyle ilgili literatür araştırması. Journal of Academic Tourism Studies, 4, 51-62.
  • Habermas, J., Lennox, S., & Lennox, F. (1974). The public sphere: An encyclopedia article (1964). New German Critique, 3, 49-55.
  • Hall, S. (1997). The spectacle of the other. In Representation: Cultural representations and signifying practices (pp. 223-290). London, U.K.: Sage.
  • Haridakis, P. (2013). Uses and gratifications: A social and psychological perspective of media use and effects. In The International Encyclopedia of Media Studies. Chichester, U.K.: Blackwell.
  • Hemer, O. (2012). Fiction and Truth in Transition: Writing the Present Past in South Africa and Argentina. (Vol. 34). LIT Verlag Münster.
  • Hepp, A., Hjarvard, S., & Lundby, K. (2015). Mediatization: Theorizing the interplay between media, culture and society. Media Culture Soc., 37(2), 314-324.
  • Holder, C., Khurana, V., Harrison, F., & Jacobs, L. (2016). Robotics and law: Key legal and regulatory implications of the robotics age (part I of II). Comput. Law Security Rev., 32(3), 383-402.
  • Hopp, T., & Gangadharbatla, H. (2016). Examination of the factors that influence the technological adoption intentions of tomorrow’s new media producers: A longitudinal exploration. Computers in Human Behavior, 55, 1117-1124. DOI: https://doi.org/10.1016/j.chb.2014.09.040
  • Horn, S., & Veermans, K. (2019). Critical thinking efficacy and transfer skills defend against ‘fake news’ at an international school in Finland. Journal of Research in International Education, 18(1), 23-41. DOI: https://doi.org/10.1177/1475240919830003
  • Hukuk ve Bilişim Derneği. (2022). Deepfake ve cezai boyutu.
  • Hyman, J. (2017). Addressing fake news: Open standards & easy identification. In 2017 IEEE 8th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON), 63-69.
  • Janaszkiewicz, P., Krysińska, J., Prys, M., Kieruzel, M., Lipczyński, T., & Różewski, P. (2018). Text summarization for storytelling: Formal document case. Procedia Computer Science, 126, 1154-1161. DOI: https://doi.org/10.1016/j.procs.2018.08.053
  • Jia, Y., Zhang, Y., Weiss, R. J., Wang, Q., Shen, J., Ren, F., Chen, Z., Nguyen, P., Pang, R., Moreno, I. L., & Wu, Y. (2018). Transfer learning from speaker verification to multispeaker text-to-speech synthesis. arXiv:1806.04558. https://doi.org/10.48550/arXiv.1806.04558
  • Kalmus, V., Pruulmann-Vengerfeldt, P., Siibak, A., & Runnel, P. (2009). Mapping the terrain of generation C: Places and practices of online content creation among Estonian teenagers. Journal of Computer-Mediated Communication, 14(4), 1257-1282. DOI: https://doi.org/10.1111/j.1083-6101.2009.01489.x
  • Kammer, A. (2013). The mediatization of journalism. MedieKultur: Journal of Media and Communication Research, 29(54), 141-158.
  • Karakoç, E., & Zeybek, B. (2022). Görmek inanmaya yeter mi? Görsel dezenformasyonun ayırt edici biçimi olarak siyasi deepfake içerikler. Öneri Dergisi, 17(57), 50-72.
  • Karnouskos, S. (2020). Self-driving car acceptance and the role of ethics. IEEE Transactions on Engineering Management, 67(2), 252-265. DOI: https://doi.org/10.1109/TEM.2018.2877307
  • Kaya, A., Seyrek, E., Sarıgül, A., Sata, M., Gómez-Salgado, J., & Yıldırım, M. (2025). Social media addiction, perceived social support, sleep disorder, and job performance in healthcare professionals: Testing a moderated mediation model. International Journal of Mental Health Promotion, 27 (8), 1149-1163. https://doi.org/10.32604/ijmhp.2025.067388.
  • Kietzmann, J., Lee, L. W., McCarthy, I. P., & Kietzmann, T. C. (2020). Deepfakes: Trick or treat? Business Horizons, 63(2), 135-146.
  • Kılınç Law. (2025). Deepfake teknolojisinin kişisel veriler üzerindeki etkileri ve KVKK’nın koruma mekanizmaları.
  • Kunelius, R., & Reunanen, E. (2016). Changing power of journalism: The two phases of mediatization. Communication Theory, 26(4), 369-388. DOI: https://doi.org/10.1111/comt.12098
  • Lotero-Echeverri, G., Romero-Rodríguez, L. M., & Pérez Rodríguez, M. A. (2018). Fact-checking vs. fake news: Periodismo de confirmación como recurso de la competencia mediática contra la desinformación. Index Comunicación, 8(2), 295-316.
  • MacDorman, K. F., & Ishiguro, H. (2006). The uncanny advantage of using androids in cognitive and social science research. Interaction Studies, 7(3), 297-337. DOI: https://doi.org/10.1075/is.7.3.03mac.
  • Manor, I. (2018). How Diplomats Can Combat Digital Propaganda. https://www.globalpolicyjournal.com/blog/27/06/2018/how-diplomats-can-combat-digital-propaganda
  • McGrew, S., Ortega, T., Breakstone, J., & Wineburg, S. (2017). The challenge that’s bigger than fake news: Civic reasoning in a social media environment. American Educator, 41(3), 4-9.
  • Miletskiy, V. P., Cherezov, D. N., & Strogetskaya, E. V. (2019). Transformations of professional political communications in the digital society (by the example of the fake news communication strategy). In Proc. IEEE 2019 Communication Strategies in Digital Society Workshop (ComSDS), 10-11 April 2019, 121-124. Available: https://ieeexplore.ieee.org/document/8709404 DOI: https://doi.org/10.1109/COMSDS.2019.8709404
  • Nash, K., Lea, J. M., Davies, T., & Yogeeswaran, K. (2018). The bionic blues: Robot rejection lowers self-esteem. Computers in Human Behavior, 78, 59-63.
  • Nuh, H., & Yıldırım, B. (2025). Medyanın görmediği hakikat: Mevsimlik tarım işçileri ile ilgili trafik kaza haberlerinin çerçeveleme kuramı bağlamında incelenmesi. Turkish Journal of Agriculture-Food Science and Technology, 13(6), 1652-1664.
  • Olsher, D. J. (2015). New artificial intelligence tools for deep conflict resolution and humanitarian response. Procedia Eng., 107, 282-292.
  • Özdemir, M., & Akdağ, M. (2024). Sokak siyaseti ve toplumsal hareketlilik: Mahsa Amini protestolarının medyadaki sunumu. İletişim ve Toplum Araştırmaları Dergisi, 4(1), 17-40.
  • Pantserev, K. A. (2020). The malicious use of AI-based deepfake technology as the new threat to psychological security and political stability. In Cyber defence in the age of AI, smart societies and augmented humanity (37-55). Cham: Springer International Publishing.
  • Pennycook, G., & Rand, D. G. (2019). Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. Journal of Personality, 88(2), 185-200.
  • Rumpala, Y. (2012). Artificial intelligences and political organization: An exploration based on the science fiction work of Iain M. Banks. Technol. Soc., 34(1), 23-32.
  • Selçuk, H. (2024). Kriz yönetiminde halkla ilişkilerin rolü ve önemi. Dicle Üniversitesi İktisadi ve İdari Bilimler Fakültesi Dergisi, 14(28), 994-1014.
  • Selçuk, H. (2025). Sosyal medyanın gücü ve halkla ilişkilerdeki yeri. Dijital Halkla İlişkiler ve Sosyal Medyanın Rolü (ed. Feryat Alkan), Eğitim Yayınevi.
  • Siarohin, A., Lathuilière, S., Tulyakov, S., Ricci, E., & Sebe, N. (2019). First order motion model for image animation, in Proc. 32nd Adv. Neural Inf. Process. Syst., 7137-7147.
  • Smidi, A., & Shahin, S. (2017). Social media and social mobilisation in the Middle East: A survey of research on the Arab Spring. India Quart. J. Int. Affairs, 73(2), 196-209.
  • Steiner, L. (2014). Feminist media theory. In The Handbook of Media and Mass Communication Theory. Hoboken, NJ, USA: Wiley, 359-379.
  • Tornero, J. M. P., Tayie, S. S., Tejedor, S., & Pulido, C. (2018). How to confront fake news through news literacy? State of the art. Doxa Comunicación, 211-235. [Online]. Available: https://tinyurl.com/ybgastv3
  • van Dijck, J. (2012). Facebook and the engineering of connectivity. Converg. Int. J. Res. New Media Technol., 19(2), 141-155.
  • Westerlund, M. (2019). The emergence of deepfake technology: A review. Technol. Innov. Manag. Rev., 9(11), 39-52.
  • Whitney, L. (2019). How to spot fake news online. PC Magazine, New York, NY, USA, 155-159.
  • You Won’t Believe What Obama Says in This Video. (2018). YouTube, San Bruno, CA, USA. [Online]. Available: https://www.youtube.com/watch?v=cQ54GDm1eL0


Details

Primary Language Turkish
Subjects New Media
Section Review
Authors

İkram Alpsoy 0000-0001-6798-5180

Submission Date February 10, 2026
Acceptance Date March 10, 2026
Publication Date March 12, 2026
DOI https://doi.org/10.32952/communicata.1886262
IZ https://izlik.org/JA57AR46XZ
Published Issue Year 2026, Issue: 31

Cite

APA Alpsoy, İ. (2026). Deepfake Çağında Güvenlik Sorunsalı. Communicata, 31, 73-86. https://doi.org/10.32952/communicata.1886262
AMA 1.Alpsoy İ. Deepfake Çağında Güvenlik Sorunsalı. Communicata. 2026;(31):73-86. doi:10.32952/communicata.1886262
Chicago Alpsoy, İkram. 2026. “Deepfake Çağında Güvenlik Sorunsalı”. Communicata, sy 31: 73-86. https://doi.org/10.32952/communicata.1886262.
EndNote Alpsoy İ (01 Mart 2026) Deepfake Çağında Güvenlik Sorunsalı. Communicata 31 73–86.
IEEE [1]İ. Alpsoy, “Deepfake Çağında Güvenlik Sorunsalı”, Communicata, sy 31, ss. 73–86, Mar. 2026, doi: 10.32952/communicata.1886262.
ISNAD Alpsoy, İkram. “Deepfake Çağında Güvenlik Sorunsalı”. Communicata. 31 (01 Mart 2026): 73-86. https://doi.org/10.32952/communicata.1886262.
JAMA 1.Alpsoy İ. Deepfake Çağında Güvenlik Sorunsalı. Communicata. 2026;:73–86.
MLA Alpsoy, İkram. “Deepfake Çağında Güvenlik Sorunsalı”. Communicata, sy 31, Mart 2026, ss. 73-86, doi:10.32952/communicata.1886262.
Vancouver 1.İkram Alpsoy. Deepfake Çağında Güvenlik Sorunsalı. Communicata. 01 Mart 2026;(31):73-86. doi:10.32952/communicata.1886262

Content of this journal is licensed under a Creative Commons Attribution NonCommercial 4.0 International License
