Theoretical Article

Early Detection of Lone-Wolf Radicalization: The Role of Conversational Artificial Intelligence

Year 2026, Volume: 17, Issue: 1, 1–25, 01.03.2026
https://izlik.org/JA76LN39TN

Abstract

This article examines the potential of conversational artificial intelligence as an early-warning tool for detecting lone-wolf radicalization. Drawing on psychological approaches to radicalization, particularly Moghaddam’s “staircase to terrorism” and Horgan’s model of terrorist engagement, the study analyzes how dialogue-based AI systems might identify behavioral and linguistic cues of extremist trajectories. It also evaluates the ethical and governance implications of integrating AI into a counter-radicalization framework, with reference to the EU AI Act and UNESCO’s Recommendation on the Ethics of Artificial Intelligence. The paper argues that the responsible deployment of AI can complement traditional prevention mechanisms by enhancing situational awareness and early-intervention capacities. Ultimately, the study helps bridge the gap between digital ethics and security studies by offering an agenda for ethically aligned, preventive AI governance.

References

  • AlgorithmWatch. (2021). UNESCO’s AI ethics recommendation: Promise and pitfalls. https://algorithmwatch.org
  • Borum, R. (2011). Radicalization into violent extremism I: A review of social science theories. Journal of Strategic Security, 4(4), 7–36. https://doi.org/10.5038/1944-0472.4.4.1
  • Cacioppo, J. T., & Patrick, W. (2008). Loneliness: Human nature and the need for social connection. W. W. Norton.
  • Cambria, E., Poria, S., Hazarika, D., & Kwok, K. (2017). SenticNet 5: Discovering conceptual primitives for sentiment analysis by means of context embeddings. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10655
  • Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180080. https://doi.org/10.1098/rsta.2018.0080
  • Conway, M. (2017). Determining the role of the Internet in violent extremism and terrorism: Six suggestions for progressing research. Studies in Conflict & Terrorism, 40(1), 77–98. https://doi.org/10.1080/1057610X.2016.1157408
  • European Commission. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 on artificial intelligence (AI Act). Official Journal of the European Union.
  • Floridi, L. (2013). The ethics of information. Oxford University Press.
  • Floridi, L. (2016). The logic of information: A theory of philosophy as conceptual design. Oxford University Press.
  • Floridi, L. (2019). Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy & Technology, 32(2), 185–193. https://doi.org/10.1007/s13347-019-00354-x
  • Franklin, M., Moreira Tomei, R., & Gorman, T. (2023). Critical reflections on the EU AI Act: Between innovation and regulation. Computer Law & Security Review, 49, 105773. https://doi.org/10.1016/j.clsr.2023.105773
  • George, A. L., & Bennett, A. (2005). Case studies and theory development in the social sciences. MIT Press.
  • Horgan, J. (2005). The psychology of terrorism. Routledge.
  • Horgan, J. (2014). The psychology of terrorism (2nd ed.). Routledge.
  • Judiciary of England and Wales. (2023). R v Jaswant Singh Chail: Sentencing remarks and court documents. The National Archives (UK).
  • Kruglanski, A. W., Gelfand, M. J., Bélanger, J. J., Sheveland, A., Hetiarachchi, M., & Gunaratna, R. (2014). The significance quest theory: A motivational account of radicalization and terrorism. Political Psychology, 35(S1), 69–93. https://doi.org/10.1111/pops.12163
  • Lygre, R., Eid, J., Larsson, G., & Ranstorp, M. (2011). Terrorist groups and lone actors: A comparison of psychological profiles. Studies in Conflict & Terrorism, 34(6), 495–515. https://doi.org/10.1080/1057610X.2011.571193
  • Mathur, A., Broekaert, E., & Clarke, M. (2024). Artificial intelligence and radicalization: Risks, ethics, and governance. International Centre for Counter-Terrorism. https://icct.nl/publication/ai-and-radicalization
  • Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1–21. https://doi.org/10.1177/2053951716679679
  • Moghaddam, F. M. (2005). The staircase to terrorism: A psychological exploration. American Psychologist, 60(2), 161–169. https://doi.org/10.1037/0003-066X.60.2.161
  • Novelli, C., Hacker, P., Morley, J., Trondal, J., & Floridi, L. (2024). Institutionalizing trustworthy AI in Europe: The architecture of the EU AI Act. AI & Society, 39(3), 1123–1141. https://doi.org/10.1007/s00146-024-01867-0
  • Picard, R. W. (1997). Affective computing. MIT Press.
  • Reuters. (2023, February 3). Man who plotted to kill Queen encouraged by AI chatbot, UK court hears. https://www.reuters.com
  • Schmidt, A., & Wiegand, M. (2017). A survey on hate speech detection using natural language processing. Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media (SocialNLP 2017), 1–10. https://doi.org/10.18653/v1/W17-1101
  • Spaaij, R. (2012). Understanding lone wolf terrorism: Global patterns, motivations and prevention. Springer.
  • The Ethics of AI or Techno-Solutionism. (2025). Critical perspectives on global AI governance. Springer Nature.
  • Tucker, J. A., Guess, A., Barberá, P., Vaccari, C., Siegel, A., Sanovich, S., Stukal, D., & Nyhan, B. (2018). Social media, political polarization, and political disinformation: A review of the scientific literature. Political Science Quarterly, 133(3), 555–593. https://doi.org/10.1002/polq.12791
  • Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. Basic Books.
  • UNESCO. (2021). Recommendation on the ethics of artificial intelligence. https://unesdoc.unesco.org
  • van Wynsberghe, A., Cath, C., Jobin, A., & Floridi, L. (2025). From principles to practice: Challenges for global AI governance. AI and Ethics, 5(1), 23–41. https://doi.org/10.1007/s43681-024-00351-7
  • Weaver, M. (2023, February 3). Queen assassination plot: AI chatbot urged man to carry out attack, court hears. The Guardian. https://www.theguardian.com
  • Weizenbaum, J. (1966). ELIZA—A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45. https://doi.org/10.1145/365153.365168
  • Yin, R. K. (2018). Case study research and applications: Design and methods (6th ed.). SAGE Publications.



Details

Primary Language: English
Subjects: Speech Generation; Artificial Life and Complex Adaptive Systems
Section: Theoretical Article
Authors

Birce Beşgül 0000-0002-6324-2141

Submission Date: October 26, 2025
Acceptance Date: February 12, 2026
Publication Date: March 1, 2026
IZ: https://izlik.org/JA76LN39TN
Published in Issue: Year 2026, Volume: 17, Issue: 1

Cite

APA: Beşgül, B. (2026). Early detection of lone-wolf radicalization: The role of conversational artificial intelligence. AJIT-e: Academic Journal of Information Technology, 17(1), 1–25. https://izlik.org/JA76LN39TN