Research Article

AI-Driven Media Manipulation: Public Awareness, Trust, and the Role of Detection Frameworks in Addressing Deepfake Technologies

Year 2025, Volume: 2 Issue: 3, 98 - 133, 29.05.2025

Abstract

AI-powered deepfake technology has transformed digital media by enabling the creation of highly realistic yet fabricated audiovisual content. As deepfakes become more sophisticated, concerns about public trust in media grow, raising ethical, political, and security challenges in an era of widespread misinformation. This study examines the impact of AI deception technologies (voice cloning, deepfake videos, and face swapping) on trust in digital media among affluent European older adults (aged 40+). It assesses their awareness of manipulated content, their experiences with identity theft, and the resulting erosion of trust. A mixed-methods approach combined quantitative surveys with AI tool demonstrations. Fifty-one participants from various European countries completed three questionnaires, distributed via Facebook, Gmail, and Instagram, measuring awareness, trust, and experiences of AI-driven deception. Findings showed high exposure to deepfakes (76.5%), frequent voice-cloning scams (66.7%), and widespread distrust of digital media (86.3%). A chi-square test (χ² = 25.548, p < 0.001) confirmed strong associations between AI awareness, trust, and identity theft, and reliability analysis (Cronbach's alpha = 0.783) indicated good internal consistency. Analysis with BioID detection software yielded clear-cut results: the two analyzed deepfake videos scored 0.00053 and 0.06223, confirming their synthetic nature, while genuine selfies achieved a liveness score of 0.92363, validating their authenticity. Limitations include the small sample and its geographic constraints. The study underscores the need for digital literacy programs, AI detection tools such as BioID, and stronger regulation to mitigate the risks of AI-driven manipulation.
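The abstract reports three quantitative steps: a chi-square test of association, a Cronbach's alpha reliability estimate, and thresholding of BioID liveness scores. The sketch below illustrates that machinery with hypothetical stand-in data; the contingency table, Likert responses, and the 0.5 liveness cutoff are assumptions for illustration, not the study's dataset or BioID's actual decision rule.

```python
# Illustrative sketch of the statistics named in the abstract, on made-up data.
import math

def chi_square_2x2(table, yates=True):
    """Pearson chi-square for a 2x2 contingency table (with optional Yates
    continuity correction). p comes from the 1-dof chi-square survival
    function, P(X > x) = erfc(sqrt(x / 2))."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row = (a + b, c + d)
    col = (a + c, b + d)
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            exp = row[i] * col[j] / n           # expected count under independence
            diff = abs(obs - exp) - (0.5 if yates else 0.0)
            chi2 += diff * diff / exp
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

def cronbach_alpha(items):
    """Cronbach's alpha for rows of respondent scores (one column per item)."""
    def var(xs):  # sample variance, ddof=1
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    k = len(items[0])
    item_vars = sum(var([row[j] for row in items]) for j in range(k))
    total_var = var([sum(row) for row in items])
    return k / (k - 1) * (1 - item_vars / total_var)

def classify(liveness, threshold=0.5):
    """Assumed example cutoff: low liveness scores flag synthetic media."""
    return "genuine" if liveness >= threshold else "synthetic"

# Hypothetical awareness-by-identity-theft contingency table (n = 51).
chi2, p = chi_square_2x2([[30, 9], [4, 8]])

# Hypothetical Likert responses (5 respondents x 4 items).
scores = [[4, 5, 4, 4], [2, 2, 3, 2], [5, 5, 5, 4], [3, 3, 2, 3], [4, 4, 4, 5]]
alpha = cronbach_alpha(scores)

# The liveness scores reported in the abstract, thresholded at the assumed cutoff.
for s in (0.00053, 0.06223, 0.92363):
    print(s, "->", classify(s))
```

Under this rule the two reported deepfake scores fall well below any plausible cutoff while the genuine-selfie score sits far above it, which is what makes the reported BioID separation "clear-cut."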

References

  • Ajzen, I. (2011). The theory of planned behaviour: Reactions and reflections. Psychology & Health, 26(9), 1113–1127.
  • Al-Khazraji, S. H., Saleh, H. H., Khalid, A. I., & Mishkhal, I. A. (2023). Impact of deepfake technology on social media: Detection, misinformation and societal implications. The Eurasia Proceedings of Science Technology Engineering and Mathematics, 23, 429–441.
  • Amerini, I., Barni, M., Battiato, S., Bestagini, P., Boato, G., Bonaventura, T. S., ... & Vitulano, D. (2024). Deepfake media forensics: State of the art and challenges ahead. arXiv Preprint. https://arxiv.org/abs/2408.00388
  • Carlson, M. (2020). Fake news as an informational moral panic: The symbolic deviancy of social media during the 2016 US presidential election. Information, Communication & Society, 23(3), 374–388. https://doi.org/10.1080/1369118X.2018.1505934
  • Chapagain, D., Kshetri, N., & Aryal, B. (2024). Deepfake disasters: A comprehensive review of technology, ethical concerns, countermeasures, and societal implications. In 2024 International Conference on Emerging Trends in Networks and Computer Communications (ETNCC) (pp. 1–9). IEEE.
  • Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press. https://doi.org/10.1515/9781503609754
  • Esezoobo, S. O., & Braimoh, J. J. (2023). Integrating legal, ethical, and technological strategies to mitigate AI deepfake risks through strategic communication. International Journal of Scientific Research and Management (IJSRM), 11(8), 914–928.
  • Fabuyi, J. A., Olaniyi, O. O., Olateju, O. O., Aideyan, N. T., Selesi-Aina, O., & Olaniyi, F. G. (2024). Deepfake regulations and their impact on content creation in the entertainment industry. Archives of Current Research International, 24(12), 52–74.
  • Fallis, D. (2021). The epistemic threat of deepfakes. Philosophy & Technology, 34(4), 623–643. https://doi.org/10.1007/s13347-020-00419-2
  • Field, A. (2024). Discovering statistics using IBM SPSS statistics (5th ed.). SAGE Publications.
  • Market Research Future. (2023). AI-generated media market research report.
  • George, A. S. (2023). Deepfakes: The evolution of hyper realistic media manipulation. Partners Universal Innovative Research Publication, 1(2), 58–74.
  • Gillespie, T. (2022). Governance of and by platforms. In J. Burgess, A. Marwick, & T. Poell (Eds.), The SAGE handbook of social media (pp. 254–278). SAGE Publications. https://doi.org/10.4135/9781473984066.n15
  • Karnouskos, S. (2020). Artificial intelligence in digital media: The era of deepfakes. IEEE Transactions on Technology and Society, 1(3), 138–147.
  • Kietzmann, J., Lee, L. W., McCarthy, I. P., & Kietzmann, T. C. (2020). Deepfakes: Trick or treat? Business Horizons, 63(2), 135–146. https://doi.org/10.1016/j.bushor.2019.11.006
  • Livingstone, S., & Lunt, P. (2023). Trust calibration in older adults: Media literacy interventions for the digital age. European Journal of Communication, 38(2), 145–162. https://doi.org/10.1177/02673231221147315
  • Malik, K. M., & Baig, R. (2023). Deepfake voice detection: Techniques, challenges, and future directions. IEEE Access, 11, 12504–12524. https://doi.org/10.1109/ACCESS.2023.3241711
  • Mirsky, Y., & Lee, W. (2021). The creation and detection of deepfakes: A survey. ACM Computing Surveys (CSUR), 54(1), 1–41.
  • Nasar, B. F., Sajini, T., & Lason, E. R. (2020). Deepfake detection in media files—audios, images and videos. In 2020 IEEE Recent Advances in Intelligent Computational Systems (RAICS) (pp. 74–79). IEEE.
  • Ng, Y. L. (2024). A longitudinal model of continued acceptance of conversational artificial intelligence. Information Technology & People.
  • Nowroozi, E., Seyedshoari, S., Mohammadi, M., & Jolfaei, A. (2022). Impact of media forensics and deepfake in society. In Breakthroughs in Digital Biometrics and Forensics (pp. 387–410). Springer.
  • Oza, P., Patel, N., & Patel, A. (2024). Deepfake technology: Overview and emerging trends in social media.
  • Papacharissi, Z. (2021). Affective publics: Sentiment, technology, and politics. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199999736.001.0001
  • Play.ht. (2024). VoiceCloning. https://play.ht/studio/files/59d00317-79f6-4092-801b-49d3429ec1da
  • Rask.AI. (2024). VideoDubbing. https://app.rask.ai/project/4d33a4f8-7e8b-4b54-99ebc23a11b2b182
  • Shah, D. V., Hiaeshutter-Rice, D., Lukito, J., & Wells, C. (2023). Deepfakes and democratic participation: How synthetic media awareness affects political engagement across demographic groups. Political Communication, 40(1), 98–119. https://doi.org/10.1080/10584609.2022.2144679
  • Shakil, M., & Mekuria, F. (2024). Balancing the risks and rewards of deepfake and synthetic media technology: A regulatory framework for emerging economies. In 2024 International Conference on Information and Communication Technology for Development for Africa (ICT4DA) (pp. 114–119). IEEE.
  • Sharma, M., & Kaur, M. (2022). A review of deepfake technology: An emerging AI threat. In Soft Computing for Security Applications: Proceedings of ICSCS 2021 (pp. 605–619).
  • BioID. (2024). Deepfake technologies detection. https://www.bioid.com/playground/
  • Awareness and Exposure Survey. (2024). https://docs.google.com/forms/d/e/1FAIpQLSdGQgrBqq97XPFVr0r6-NLpEK_d6XMXqT9eWZheS1qzFSQ6pg/viewform
  • SwapFace. (2024). SwapFace technologies. https://www.swapface.org
  • SyncLabs. (2024). Lip synchronization. https://app.synclabs.so/share/lip-sync/514ec9c3-f0ab-463b-8f9c-b3a7372bf731
  • Temir, E. (2020). Deepfake: New era in the age of disinformation and end of reliable journalism. Selçuk İletişim, 13(2), 1009–1024.
  • Identity Theft Experience Survey. (2024). https://docs.google.com/forms/d/e/1FAIpQLSeHfV4eCQaTyi-f9S6X0UFzL5tEjCPhX4uRrmhykBKKeLgwkQ/viewform
  • Thies, J., Zollhöfer, M., & Nießner, M. (2019). Deferred neural rendering: Image synthesis using neural textures. ACM Transactions on Graphics, 38(4), 1–12. https://doi.org/10.1145/3306346.3323035
  • Trust Assessment Survey. (2024). https://docs.google.com/forms/d/e/1FAIpQLSeQGrXEGKqR1G8KtzfBHBScB7fXfoSaqK4zax0G8DMMjHeKqw/viewform
  • Tuysuz, M. K., & Kılıç, A. (2023). Analyzing the legal and ethical considerations of deepfake technology. Interdisciplinary Studies in Society, Law, and Politics, 2(2), 4–10.
  • Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6(1), 2056305120903408.
  • Vizoso, Á., Vaz-Álvarez, M., & López-García, X. (2021). Fighting deepfakes: Media and internet giants’ converging and diverging strategies against hi-tech misinformation. Media and Communication, 9(1), 291–300.
  • Westerlund, M. (2019). The emergence of deepfake technology: A review. Technology Innovation Management Review, 9(11), 40–53. https://doi.org/10.22215/timreview/1282
  • Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., & Choi, Y. (2019). Defending against neural fake news. In Advances in Neural Information Processing Systems, 32.

Yapay Zeka Destekli Medya Manipülasyonu: Kamuoyu Farkındalığı, Güven ve Deepfake Teknolojilerini Ele Almada Algılama Çerçevelerinin Rolü


Details

Primary Language English
Subjects Communication Theories, Communication Technology and Digital Media Studies, Internet, Mass Media, Media Technologies, International and Development Communication, New Media
Journal Section Research Article
Authors

Kareem Hussein 0000-0003-1706-7802

Bahire Özad 0000-0003-3615-5090

Publication Date May 29, 2025
Submission Date March 13, 2025
Acceptance Date May 27, 2025
Published in Issue Year 2025 Volume: 2 Issue: 3

Cite

APA Hussein, K., & Özad, B. (2025). AI-Driven Media Manipulation: Public Awareness, Trust, and the Role of Detection Frameworks in Addressing Deepfake Technologies. İnterdisipliner Medya Ve İletişim Çalışmaları, 2(3), 98-133.