
The Next-Generation Threat: Deepfake (Derin Kurgu)

Year 2021, Volume 6, Issue 13, 904-917, 30.09.2021
https://doi.org/10.37679/trta.1002526

Abstract

One of the most important phenomena throughout human history has been the ability to access information. This phenomenon became even more dominant with the transition from the industrial society to the information society. The invention and spread of the internet, together with the ability smartphones give us to stay connected regardless of time and place, have made access to information easier still. Over time, however, this ease of access has turned into a major challenge, leaving humanity struggling with information rather than easily obtaining higher-quality information (Kominiarczuk & Ledzińska, 2014; Özdemir, 2016)...

References

  • Ajder, H., Patrini, G., Cavalli, F., & Cullen, L. (2019). The State of Deepfakes: Landscape, Threats and Impact. DeepTrace.
  • Bawden, D., Holtham, C., & Courtney, N. (1999, October). Perspectives on information overload. In Aslib Proceedings, MCB UP Ltd, 51(8), 249-255.
  • BBC Bitesize (2019). Deepfakes: What Are They and Why Would I Make One? [Online]. Available: https://www.bbc.co.uk/bitesize/articles/zfkwcq
  • Bergamaschi, S., Guerra, F., & Leiba, B. (2010). Guest editors' introduction: Information overload. IEEE Internet Computing, 14(6), 10-13.
  • Bregler, C., Covell, M., & Slaney, M. (1997). Video Rewrite: Driving Visual Speech with Audio. Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, 24, s. 353-360. doi:10.1145/258734.258880
  • Breland, J. (2019, March 15). The Bizarre and Terrifying Case of the “Deepfake” Video that Helped Bring an African Nation to the Brink. Retrieved from Mother Jones: https://www.motherjones.com/politics/2019/03/deepfake-gabon-ali-bongo/
  • CBS. (2019, May 26). Doctored Nancy Pelosi video highlights threat of "deepfake" tech. Retrieved from CBS News: https://www.cbsnews.com/news/doctored-nancy-pelosi-video-highlights-threat-of-deepfake-tech-2019-05-25/
  • Christopher, N. (18 February 2020). "We've Just Seen the First Use of Deepfakes in an Indian Election Campaign". Vice. Archived from the original on 19 February 2020.
  • Cole, S. (2019, June 11). This Deepfake of Mark Zuckerberg Tests Facebook’s Fake Video Policies.
  • Cole, S. (19 June 2018). Gfycat's AI Solution for Fighting Deepfakes Isn't Working. Vice. Archived from the original on 8 November 2019.
  • Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine learning, 20(3), 273-297.
  • Deluca, J., Christodoulou, C., Diamond, B. J., Rosenstein, E. D., Kramer, N., & Natelson, B. H. (2004). Working memory deficits in chronic fatigue syndrome: differentiating between speed and accuracy of information processing. Journal of the International Neuropsychological Society, 10(1), 101-109.
  • Deng, L., & Yu, D. (2014). Deep Learning: Methods and Applications. Foundations and Trends® in Signal Processing, 7(3-4), 197-387. http://dx.doi.org/10.1561/2000000039
  • Dickson, E. J. (7 October 2019). "Deepfake Porn Is Still a Threat, Particularly for K-Pop Stars". Rolling Stone. Archived from the original on 30 October 2019.
  • Economist (28 April 2020). Amabie: the mythical creature making a coronavirus comeback. The Economist. ISSN 0013-0613. Archived from the original on 20 May 2021.
  • Eelmaa, S. (25 March 2021). Sexualization of Children in Deepfakes and Hentai: Examining Reddit User Views. SocArxiv. doi:10.31235/osf.io/6wuhj. Archived from the original on 1 April 2021.
  • Edmunds, A., & Morris, A. (2000). The problem of information overload in business organisations: a review of the literature. International Journal of Information Management, 20(1), 17-28.
  • Eppler, M. J., & Mengis, J. (2004). The concept of information overload: A review of literature from organization science, accounting, marketing, MIS, and related disciplines. The Information Society, 20(5), 325-344.
  • Face, C. S. (2019, September 19). Better Call Trump: Money Laundering 101. Retrieved from YouTube: https://www.youtube.com/watch?v=Ho9h0ouemWQ&t=41s
  • Farfade, S. S., Saberian, M. J., & Li, L. J. (2015, June). Multi-view face detection using deep convolutional neural networks. In Proceedings of the 5th ACM on International Conference on Multimedia Retrieval ACM, 643-650.
  • Farhoomand, A. F., & Drury, D. H. (2002). Overload. Communications of the ACM, 45(10), 127.
  • Fukushima, K. (1980). Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological cybernetics, 36(4), 193-202.
  • Generated Media. (2018). 100,000 Faces Generated by AI. Retrieved from Generated Photos: https://generated.photos/
  • Goggin, Benjamin. "From porn to 'Game of Thrones': How deepfakes and realistic-looking fake videos hit it big". Business Insider. Archived from the original on 8 November 2019.
  • Himma, K. E. (2007). The concept of information overload: A preliminary step in understanding the nature of a harmful information-related condition. Ethics and Information Technology, 9(4), 259-272.
  • Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504-507.
  • Hofesmann, E. (2020, November 19). The State of Deepfakes in 2020. Retrieved from Skynet Today: Putting AI News in Perspective: https://www.skynettoday.com/overviews/state-of-deepfakes-2020
  • Jacoby, J. 1977. Information load and decision quality: Some contested issues. Journal of Marketing Research, 14, 569–573.
  • Jacoby, J., Speller, D. E., and Berning, C. K. 1974. Brand choice behavior as a function of information load: Replication and extension. Journal of Consumer Research 1, 33–43.
  • Jacoby, J. 1984. Perspectives on information overload. Journal of Consumer Research 10, 432-436.
  • Karras, T., Laine, S., & Aila, T. (2019). A Style-Based Generator Architecture for Generative Adversarial Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  • Sayler, K. M., & Harris, L. A. (2021). Deep Fakes and National Security. Washington, DC: Congressional Research Service.
  • Kominiarczuk, N., & Ledzińska, M. (2014). Turn down the noise: Information overload, conscientiousness and their connection to individual well-being. Personality and Individual Differences, 60, 76.
  • Korshunov, P., & Marcel, S. (2018, December 20). DeepFakes: a New Threat to Face Recognition? Assessment and Detection. (C. University, Ed.) Retrieved from https://arxiv.org/abs/1812.08685
  • Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105).
  • Lee, Dave (3 February 2018). "'Fake porn' has serious consequences". Archived from the original on 1 December 2019.
  • Malhotra, N. K. (1982). Information load and consumer decision making. Journal of Consumer Research, 8, 419-430.
  • McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics, 5(4), 115-133.
  • Meyer, J. A. (1998). Information overload in marketing management. Marketing Intelligence & Planning, 16(3), 200-209.
  • Mikolov, T., Karafiát, M., Burget, L., Černocký, J., Khudanpur, S. (2010). Recurrent neural network based language model. In INTERSPEECH-2010, 1045-1048.
  • O'Sullivan, D. (2019, June 29). Congress to investigate deepfakes as doctored Pelosi video causes stir. (CNN, Interviewer)
  • Qayyum, A., Qadir, J., Janjua, M. U., & Vira, F. S. (2019). Using Blockchain to Rein in the New Post-Truth World and Check the Spread of Fake News. IT Professional, 21(4), 16-24. doi:10.1109/MITP.2019.2910503
  • Özdemir, Ş. (2016). Individual contributions to infollution (information pollution): trust and share. International Journal on New Trends in Education and Their Implications, 7(3), 23-33.
  • Ranjan, R., Sankaranarayanan, S., Bansal, A., Bodla, N., Chen, J. C., Patel, V. M. & Chellappa, R. (2018). Deep learning for understanding faces: Machines may be just as good, or better, than humans. IEEE Signal Processing Magazine, 35(1), 66-83.
  • Roettgers, J. (21 February 2018). "Porn Producers Offer to Help Hollywood Take Down Deepfake Videos". Variety. Archived from the original on 10 June 2019.
  • Romano, A. (18 April 2018). "Jordan Peele's simulated Obama PSA is a double-edged warning against fake news". Vox. Archived from the original on 11 June 2019.
  • Patel, M., Gupta, A., Tanwar, S., & Obaidat, M. (2020). Trans-DF: A Transfer Learning-based end-to-end Deepfake Detector. doi:10.1109/ICCCA49541.2020.9250803
  • Salakhutdinov, R., & Hinton, G. (2009, April). Deep boltzmann machines. In Artificial Intelligence and Statistics, 448-455.
  • Sparrow, P. (1999). Strategy and cognition: Understanding the role of management knowledge structures, organizational memory and information overload. Creativity and Innovation Management, 8(2), 140-148.
  • Somers, M. (2020, July 21). Deepfakes, explained. (M. M. School, Ed.) Boston, USA.
  • Stupp, C. (2019, August 30). Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case. USA: Wall Street Journal. Retrieved August 11, 2021, from https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402
  • Swenson, Kyle (11 January 2019). "A Seattle TV station aired doctored footage of Trump's Oval Office speech. The employee has been fired". The Washington Post. Archived from the original on 15 April 2019.
  • Taigman, Y., Yang, M., Ranzato, M. A., & Wolf, L. (2014). Deepface: Closing the gap to human-level performance in face verification. In Proceedings of the IEEE Conference On Computer Vision and Pattern Recognition (pp. 1701-1708)
  • Taslitz, A. E. (2013). Information overload, multi-tasking, and the socially networked jury: Why prosecutors should approach the media gingerly. The Journal of the Legal Profession, 37, 89-138.
  • "#TellTheTruthBelgium". Extinction Rebellion Belgium. Archived from the original on 25 April 2020.
  • Vincent, J. (2021, April 30). ‘Deepfake’ that supposedly fooled European politicians was just a look-alike, say pranksters. Retrieved from The Verge: https://www.theverge.com/2021/4/30/22407264/deepfake-european-polticians-leonid-volkov-vovan-lexus
  • Vooruit. (2018, May 19). Vooruit. Retrieved from Facebook: https://www.facebook.com/vooruit.nu/videos/10155618434657151/?t=33
  • Wang, M., & Deng, W. (2018). Deep face recognition: A survey. ArXiv, 1804.06655.
  • Washington Post. (2020, February 13). The suspicious video that helped spark an attempted coup in Gabon. Retrieved from YouTube: https://www.youtube.com/watch?v=F5vzKs4z1dc
  • Weng, J., Ahuja, N., & Huang, T. S. (1992, June). Cresceptron: a self-organizing neural network which grows adaptively. In [Proceedings 1992] IJCNN International Joint Conference on Neural Networks (Vol. 1, pp. 576-581). IEEE.
  • Wurman, R. S. (1989). Information Anxiety: What to do When Information Doesn't Tell You What You Need To Know. New York.
  • YouTube. (2018, January 17). Adolf Hitler "Downfall Movie" to Mauricio Macri (Faceswap Deeplearning). Retrieved from YouTube: https://www.youtube.com/watch?v=M8t6hGRtDac&t=1s
  • YouTube. (2018, January 28). Merkel Trump Deepfake. Retrieved from YouTube: https://www.youtube.com/watch?v=5hZOcmqWKzY&t=2s

Details

Primary Language: Turkish
Subjects: Communication and Media Studies
Section: Analysis / Evaluation
Authors

Şebnem Özdemir (ORCID: 0000-0001-6668-6285)

Publication Date: September 30, 2021
Submission Date: September 5, 2021
Acceptance Date: September 7, 2021
Published in Issue: Year 2021, Volume 6, Issue 13

Cite

APA Özdemir, Ş. (2021). Yeni Nesil Tehdit: Derin Kurgu (DeepFake). TRT Akademi, 6(13), 904-917. https://doi.org/10.37679/trta.1002526