Research Article

Year 2026, Volume: 16, Issue: 1, 437-450, 26.03.2026
https://doi.org/10.30783/nevsosbilen.1744207
https://izlik.org/JA53RY53HS


The new headache of the Nation-State: Artificial intelligence (Ulus-Devletin yeni baş ağrısı: Yapay zekâ)


Abstract

Artificial intelligence is not a neutral technology. It can be used to produce deepfake videos capable of manipulating cultural identities, and through the misuse of such videos, national identity within the nation-state can be damaged; identity politics makes this possible. Artificial intelligence is therefore a significant security issue for culturally diverse nation-states. Reviewing the literature on this subject, this study argues that the securitization of artificial intelligence is necessary and legitimate as a nation-building policy suited to the information age and its technological developments, and proposes it as an indispensable public policy of the postmodern era, since securitizing artificial intelligence is considered a sounder approach than securitizing the different social groups within nation-states. The study is limited to arguing for and proposing the necessity of securitizing artificial intelligence; how this can be achieved is beyond its scope. It is expected that the study can contribute, from a societal security perspective, to the risks already foreseen in connection with artificial intelligence and inspire future research on the subject.

References

  • Anderson, B. (2006). Imagined communities: Reflections on the origin and spread of nationalism. Verso.
  • Angwin, J., Larson, J., Mattu, S. & Kirchner, L. (2016). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
  • Ateş, D. (2008). Etnisiteden ulusa, ulustan etnisiteye (?): kültürel, siyasî ve iktisadi çerçeveler. Doğu Batı, 11(44), 115-130.
  • Bontridder, N. & Poullet Y. (2021). The role of artificial intelligence in disinformation. Data & Policy, 3 (e32). https://doi.org/10.1017/dap.2021.20
  • Bostrom, N. (2014). Superintelligence: paths, dangers, strategies. Oxford University Press.
  • Bradshaw, S., Bailey, H. & Howard, P. N. (2021). Industrialized disinformation: 2020 global inventory of organised social media manipulation. Programme on Democracy & Technology. https://demtech.oii.ox.ac.uk/wp-content/uploads/sites/12/2021/02/CyberTroop-Report20-Draft9.pdf
  • Brubaker, R. & Laitin, D. D. (2008). Etnik ve milliyetçi şiddet. (K. Güler, Çev.). Doğu Batı, 11(44), 211-238.
  • Burton, J. (2023). Algorithmic extremism? The securitization of artificial intelligence (AI) and its impact on radicalism, polarization and political violence. Technology in Society, 75, 1-10. https://doi.org/10.1016/j.techsoc.2023.102262
  • Caliskan-Islam, A., Bryson, J. J. & Narayanan, A. (2016). Semantics derived automatically from language corpora necessarily contain human biases. Princeton Computer Science. https://www.cs.princeton.edu/~arvindn/publications/language-bias.pdf
  • Carr, E. H. (2007). Milliyetçilik ve sonrası (O. Akınhay, Çev.). İletişim Yayınları.
  • Castells, M. (2008). Enformasyon çağı: Ekonomi toplum ve kültür Cilt: 2: Kimliğin gücü (E. Kılıç, Çev.). Bilgi Üniversitesi Yayınları.
  • Chakhoyan, A. (2018). Deep fakes could threaten democracy. What are they and what can be done? World Economic Forum. https://www.weforum.org/agenda/2018/11/deep-fakes-may-destroy-democracy-can-they-be-stopped/
  • Chesney, B. & Citron, D. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107(6), 1753-1819. https://doi.org/10.15779/Z38RV0D15J
  • Clarke, M. & Aqeedi, R. A. (2021). What terrorism will look like in the near future. New Lines Institute. https://newlinesinstitute.org/nonstate-actors/what-terrorism-will-look-like-in-the-near-future/
  • Dieu, O. & Montasari, R. (2022). How states’ recourse to artificial intelligence for national security purposes threatens our most fundamental rights. In R. Montasari (Ed.), Artificial intelligence and national security (pp. 19-45). Springer.
  • Doytcheva, M. (2009). Çokkültürlülük (T. A. Onmuş, Çev.). İletişim Yayınları.
  • EPRS (2019). Polarisation and the use of technology in political campaigns and communication. European Parliament. https://www.europarl.europa.eu/RegData/etudes/STUD/2019/634414/EPRS_STU(2019)634414_EN.pdf
  • Ferry, J. M. (2010). Avrupa meselesi ve ulus-sonrası entegrasyon. A. Dieckhoff ve C. Jaffrelot (Ed.), Milliyetçiliği yeniden düşünmek kuramlar ve uygulamalar (D. Çetinkasap, Çev.) içinde (ss. 291-313). İletişim Yayınları.
  • Fetzer, J. H. (2004). Information: Does it have to be true? Minds and Machines, 14, 223-229.
  • Guibernau, M. (2004). Nation formation and national identity. Belgisch Tijdschrift Voor Nieuwste Geschiedenis/Revue Belge de Histoire Contemporaine, 34(4), 657-682. https://www.journalbelgianhistory.be/nl/journal/belgisch-tijdschrift-voor-nieuwste-geschiedenis-2004-4
  • Gülalp, H. (2007). Giriş: Milliyete karşı vatandaşlık. H. Gülalp (Ed.), Vatandaşlık ve etnik çatışma: Ulus-devletin sorgulanması (E. Kılıç, Çev.) içinde (ss. 11-34). Metis Yayınları.
  • Habermas, J. (2012). Öteki olmak ötekiyle yaşamak: Siyaset kuramı yazıları (İ. Aka, Çev.). Yapı Kredi Yayınları.
  • Hall, S. & Held, D. (1995). Yurttaşlar ve yurttaşlık. S. Hall ve M. Jacques (Ed.), Yeni zamanlar: 1990’larda politikanın değişen çehresi (A. Yılmaz, Çev.) içinde (ss. 180-203). Ayrıntı Yayınları.
  • Hao, K. (2022). Artificial intelligence is creating a new colonial world order. MIT Technology Review. https://www.technologyreview.com/2022/04/19/1049592/artificial-intelligence-colonialism/
  • Hicks, M. T., Humphries, J. & Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26(38), 1-10. https://doi.org/10.1007/s10676-024-09775-5
  • Hunt, J. S. (2021). Countering cyber-enabled disinformation: Implications for national security. Australian Journal of Defence and Strategic Studies, 3(1), 83-88. https://doi.org/10.51174/AJDSS.0301/MLTD3707
  • Jackson, R. (2007). Regime security. In A. Collins (Ed.), Contemporary security studies (pp. 146-164). Oxford University Press.
  • Karpat, K. (2009). Osmanlı’dan günümüze kimlik ve ideoloji. Timaş Yayınları.
  • Kymlicka, W. (2006). Çağdaş siyaset felsefesine giriş (E. Kılıç, Çev.). Bilgi Üniversitesi Yayınları.
  • Maras, M. H. & Alexandrou A. (2018). Determining authenticity of video evidence in the age of artificial intelligence and in the wake of deepfake videos. The International Journal of Evidence & Proof, 23(3), 255-262. https://doi.org/10.1177/1365712718807226
  • MIT Media Lab (2024). Project: Gender shades. https://www.media.mit.edu/projects/gender-shades/overview/
  • Mittelstadt, B. D., Allo P., Taddeo M., Wachter S. & Floridi L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1-21. https://doi.org/10.1177/2053951716679679
  • Montoro-Montarroso, A., Cantón-Correa J., Rosso P., Chulvi B., Panizo-Lledot A., Huertas-Tato J., Calvo-Figueras B., Rementeria M. J. & Gómez-Romero J. (2023). Fighting disinformation with artificial intelligence: Fundamentals, advances and challenges. Profesional de la Informacion, 32(3), e320322. https://doi.org/10.3145/epi.2023.may.22
  • Murphy, G. & Flynn E. (2022). Deepfake false memories. Memory, 30(4), 480-492. https://doi.org/10.1080/09658211.2021.1919715
  • Musk, E. (2014). https://twitter.com/elonmusk/status/495759307346952192?lang=en
  • Nicoletti, L. & Bass, D. (2023). Humans are biased. Generative ai is even worse: stable diffusion’s text-to-image model amplifies stereotypes about race and gender, here’s why that matters. Bloomberg. https://www.bloomberg.com/graphics/2023-generative-ai-bias/
  • Parkin, S. (2019). The rise of the deepfake and the threat to democracy. The Guardian. https://www.theguardian.com/technology/ng-interactive/2019/jun/22/the-rise-of-the-deepfake-and-the-threat-to-democracy
  • Pasic, A. (1998). Culture, identity, and security: An overview. Rockefeller Brothers Fund, Inc.
  • Renan, E. (2018). What is a nation? And other political writings (M.F.N. Giglioli, Translated and edited). Columbia University Press.
  • Roe, P. (2007). Societal security. In A. Collins (Ed.), Contemporary security studies (pp. 165-181). Oxford University Press.
  • Roe, P. (2005). Ethnic violence and the societal security dilemma. Routledge.
  • Santamaria, Y. (1998). Ulus-devlet: Bir modelin tarihi. J. Leca (Ed.), Uluslar ve Milliyetçilikler (S. İdemen, Çev.) içinde (ss. 20-30). Metis Yayınları.
  • Siegel, D. & Doty, M. B. (2023). Weapons of mass disruption: artificial intelligence and the production of extremist propaganda. Global Network on Extremism and Technology. https://gnet-research.org/2023/02/17/weapons-of-mass-disruption-artificial-intelligence-and-the-production-of-extremist-propaganda/
  • Silva, M. (2016). Securitization as a nation-building instrument. Politikon: The IAPSS Journal of Political Science, 29, 201-214. https://doi.org/10.22151/politikon.29.12
  • Taylor, B. C. (2020). Defending the state from digital deceit: The reflexive securitization of deepfake. Critical Studies in Media Communication, 38(1), 1-17. https://doi.org/10.1080/15295036.2020.1833058
  • Tok, N. (2006). Çokkültürlülüğe bir yanıt olarak tekkültürcülük ve çokkültürcülük. Demokrasi Platformu, 2(5), 19-33.
  • Wæver, O. (1993). Societal security: The concept. In O. Wæver, B. Buzan, M. Kelstrup & P. Lemaitre (Eds.), Identity, migration and the new security agenda in Europe (pp. 17-40). Pinter Publishers Ltd.
  • Wallerstein, I. (2007). Halklığın inşası: Irkçılık, milliyetçilik ve etniklik. E. Balibar ve I. Wallerstein (Ed.), Irk ulus sınıf: Belirsiz kimlikler (N. Ökten, Çev.) içinde (ss. 89-106). Metis Yayınları.
  • Webster, G., Creemers, R., Triolo, P. & Kania, E. (2017). China’s plan to ‘lead’ in ai: Purpose, prospects, and problems. New America. https://www.newamerica.org/cybersecurity-initiative/blog/chinas-plan-lead-ai-purpose-prospects-and-problems/
  • Whyte, C. (2020). Deepfake news: AI-enabled disinformation as a multi-level public policy challenge. Journal of Cyber Policy, 5(2), 1-19. https://doi.org/10.1080/23738871.2020.1797135
  • Yu, P., Xia, Z., Fei, J. & Lu, Y. (2021). A survey on deepfake video detection. IET Biometrics, 10(6), 607-624. https://doi.org/10.1049/bme2.12031
  • Zeng, J. (2021). Securitization of artificial intelligence in China. The Chinese Journal of International Politics, 14(3), 417-445. https://doi.org/10.1093/cjip/poab005


There are 52 references in total.

Details

Primary Language: Turkish
Subjects: Comparative Political Movements
Section: Research Article
Authors

Hüseyin Aras 0000-0001-8117-6574

Submission Date: 16 July 2025
Acceptance Date: 17 February 2026
Publication Date: 26 March 2026
DOI: https://doi.org/10.30783/nevsosbilen.1744207
IZ: https://izlik.org/JA53RY53HS
Published Issue: Year 2026, Volume: 16, Issue: 1

Cite

APA Aras, H. (2026). Ulus-Devletin yeni baş ağrısı: Yapay zekâ. Nevşehir Hacı Bektaş Veli Üniversitesi SBE Dergisi, 16(1), 437-450. https://doi.org/10.30783/nevsosbilen.1744207