Research Article

YAPAY ZEKÂ VE ONTOLOJİK GÜVENSİZLİK: BİREYSEL VE TOPLUMSAL KAYGI DİNAMİKLERİ ÜZERİNE BİR DEĞERLENDİRME

Year 2023, Volume: XIV Issue: I, 22 - 51, 28.09.2023

Abstract

Hayatımızın her anında toplumun tüm bireyleriyle etkileşimini daha da arttıran ve her geçen gün gündelik yaşamımızı pratikleştirmeye imkân sağlayan yapay zekâ teknolojilerinin birçok farklı kulvarda yaygınlaşması, getirdiği faydaların yanında artan sorunları da mümkün kılmaktadır. Çalışmamız yapay zekânın bir tanımını kapsamaktan ziyade onunla beraber gelen yeniliklerin birey ve toplum yapısında oluşturduğu değişim ve tehditleri ontolojik güvenlik çalışmalarının birey ve toplumu tetikleyen kaygı nosyonu ile bağdaştırmaktadır. Yapay zekânın birey ve toplumla olan entegrasyonunda bireysel ve toplumsal yaşamı tehdit eden sorunların kombinasyonunu ontolojik güvenlik kavramı çerçevesinde ele alan bu çalışmada kaygı ve güven problemlerini tetikleyici unsurlar ve sorunlar tartışılmaktadır.

References

  • Ağyar, Z. (2015). “Yapay sinir ağlarının kullanım alanları ve bir uygulama”. Mühendis ve Makine, 56(662), 22-23.
  • Akkaya, B., Özkan, A., & Özkan, H. (2021). “Yapay zekâ kaygı (YZK) ölçeği: Türkçeye uyarlama, geçerlik ve güvenirlik çalışması”. Alanya Akademik Bakış, 5(2), 1125-1146.
  • Amini, A., Soleimany, A. P., Schwarting, W., Bhatia, S. N., & Rus, D. (2019). “Uncovering and mitigating algorithmic bias through learned latent structure”. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 289-295).
  • Arfi, B. (2020). “Security qua existential surviving (while becoming otherwise) through performative leaps of faith”. International Theory, 12(2), 291-305.
  • Arrigo, B. A. (Ed.). (2018). The SAGE encyclopedia of surveillance, security, and privacy. Thousand Oaks, CA: SAGE Publications.
  • Berk, M. E. (2020). “Dijital çağın yeni tehlikesi deepfake”. OPUS International Journal of Society Researches, 16(28), 1508-1523.
  • Binark, M., Demir, E. M., Sezgin, S., & Özsu, G. (2023). “Türkiye müzik endüstrisinde platformlar ve algoritmik kürasyonun yeni kültürel aracılık rolü-Quo vadis?”. Kültür ve İletişim, 26(51), 108-141.
  • Boyd, R., & Holton, R. J. (2018). “Technology, innovation, employment and power: Does robotics and artificial intelligence really mean social transformation?”. Journal of Sociology, 54(3), 331-345.
  • Castelluccia, C., & Le Métayer, D. (2019). “Understanding algorithmic decision-making: Opportunities and challenges”. European Parliamentary Research Service.
  • Castelvecchi, D. (2016). “Can we open the black box of AI?”. Nature News, 538(7623), 20.
  • Chace, J., & Carr, C. (1988). America invulnerable: The quest for absolute security from 1812 to Star Wars. Summit Books.
  • Chen, M. (2021). “Trust and trust-engineering in artificial intelligence research: Theory and praxis”. Philosophy & Technology, 34(4), 1429-1447.
  • Chui, M., Harryson, M., Manyika, J., Roberts, R., Chung, R., van Heteren, A., & Nel, P. (2018). “Notes from the AI frontier: Applying AI for social good”. McKinsey Global Institute.
  • Couldry, N., & Mejias, U. A. (2019). “Data colonialism: Rethinking big data’s relation to the contemporary subject”. Television & New Media, 20(4), 336-349.
  • Crevier, D. (1993). AI: The tumultuous history of the search for artificial intelligence. Basic Books.
  • Croft, S. (2012). Securitizing Islam: Identity and the search for security. Cambridge University Press.
  • Davey, T. (2017). “Artificial intelligence and the future of work: An interview with Moshe Vardi”. Future of Life.
  • Dilmaghani, S., Brust, M. R., Danoy, G., Cassagnes, N., Pecero, J., & Bouvry, P. (2019). “Privacy and security of big data in AI systems: A research and standards perspective”. In IEEE International Conference on Big Data (pp. 5737-5743). IEEE.
  • Dilworth, R. L. (1988). “Artificial intelligence: The time is now”. Public Productivity Review, 123-130.
  • Efe, A. (2021). “Yapay zekâ risklerinin etik yönünden değerlendirilmesi”. Bilgi ve İletişim Teknolojileri Dergisi, 3(1), 1-24.
  • Ejdus, F. (2020). Crisis and ontological insecurity: Serbia’s anxiety over Kosovo’s secession. Palgrave Macmillan.
  • Erdoğdu, S. (2021). “Yapay zekâ rekabeti bağlamında Çin’in ontolojik güvenlik algısı”. Uluslararası Hukuk ve Sosyal Bilim Araştırmaları Dergisi, 3(2), 1-10.
  • European Commission. (2020). White paper on artificial intelligence: A European approach to excellence and trust. Brussels.
  • Fido, D., Rao, J., & Harper, A. (2022). “Celebrity status, sex, and variation in psychopathy predicts judgements of and proclivity to generate and distribute deepfake pornography”. Computers in Human Behavior.
  • Freud, S. (1917). Introductory lectures on psycho-analysis. In J. Strachey (Ed.), The standard edition of the complete psychological works of Sigmund Freud (Vol. 16, pp. 241-463). Hogarth Press.
  • Frey, C. B., & Osborne, M. A. (2017). “The future of employment: How susceptible are jobs to computerisation?”. Technological Forecasting and Social Change, 114, 254-280.
  • Geiger, R. S., Yu, K., Yang, Y., Dai, M., Qiu, J., Tang, R., & Huang, J. (2020). “Garbage in, garbage out? Do machine learning application papers in social computing report where human-labeled training data comes from?”. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 325-336).
  • Grant, G. (1986). Technology and justice. House of Anansi Press.
  • Giddens, A. (1991). Modernity and self-identity: Self and society in the late modern age. Stanford University Press.
  • Gustafsson, K., & Krickel-Choi, N. C. (2020). “Returning to the roots of ontological security: Insights from the existentialist anxiety literature”. European Journal of International Relations, 26(3), 875-895.
  • Haenlein, M., & Kaplan, A. (2019). “A brief history of artificial intelligence: On the past, present, and future of artificial intelligence”. California Management Review, 61(4), 5-14.
  • Han, B. C. (2019). Psikopolitika (H. Barışcan, Çev.). Metis Yayınları.
  • Hunt, E. (2016). “Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter”. The Guardian, 24 March.
  • IRGC. (2018). The governance of decision-making algorithms.
  • Jacob, V. S., Moore, J. C., & Whinston, A. B. (1988). “Artificial intelligence and the management science practitioner: Rational choice and artificial intelligence”. Interfaces, 18(4), 24-35.
  • Kabalcı, E. (2014). Yapay sinir ağları. Ders Notları.
  • Katzenbach, C., & Ulbricht, L. (2019). “Algorithmic governance”. Internet Policy Review, 8(4), 1-18.
  • Kayaalp, K., & Süzen, A. A. (2018). Derin öğrenme ve Türkiye’deki uygulamaları. İKSAD Yayınevi.
  • Kierkegaard, S. (2013). Kaygı kavramı (T. Armaner, Çev.). Türkiye İş Bankası Kültür Yayınevi.
  • Kinnvall, C. (2004). “Globalization and religious nationalism: Self, identity, and the search for ontological security”. Political Psychology, 25(5), 741-767.
  • Kitchin, R. (2013). “Big data and human geography: Opportunities, challenges and risks”. Dialogues in Human Geography, 3(3), 262-267.
  • Korumaz, E. K. (2019). “The crisis of the individual in the modern society: An analysis of the works of Marx, Durkheim and Weber”. Sosyoloji Dergisi, (40), 265-280.
  • Kula, S., & Çakar, B. (2015). “Maslow ihtiyaçlar hiyerarşisi bağlamında toplumda bireylerin güvenlik algısı ve yaşam doyumu arasındaki ilişki”. Bartın Üniversitesi İİBF Dergisi, 6(12), 191-210.
  • Kurzweil, R. (1985). “What is artificial intelligence anyway? As the techniques of computing grow more sophisticated, machines are beginning to appear intelligent—but can they actually think?”. American Scientist, 73(3), 258-264.
  • Laing, R. D. (1960). The divided self: An existential study in sanity and madness. Penguin Books.
  • Larson, E. J. (2022). Yapay zekâ miti (K. Y. Us, Çev.). Fol Yayınları.
  • Lawhead, W. F. (2003). The philosophical journey: an interactive approach. McGraw Hill.
  • Leavy, S. (2018). “Gender bias in artificial intelligence: The need for diversity and gender theory in machine learning”. In Proceedings of the 1st International Workshop on Gender Equality in Software Engineering (pp. 14-16).
  • Lebow, R. N. (2016). National identities and international relations. Cambridge University Press.
  • Macnish, K., Ryan, M., & Gregory, A. (2019). SHERPA deliverable D1.1: Case studies. De Montfort University.
  • Mardin, Ş. (1973). “Center-periphery relations: A key to Turkish politics?”. Daedalus, 169-190.
  • Marquart, S. (2017). “Aligning super intelligence with human interests”. Future of Life.
  • Marr, B. (2018). “What is deep learning AI? A simple guide with 8 practical examples”. Forbes.
  • May, R. (1977). The meaning of anxiety (Rev. ed.). W.W. Norton.
  • McDermid, J. A., Jia, Y., Porter, Z., & Habli, I. (2021). “Artificial intelligence explainability: The technical and ethical dimensions”. Philosophical Transactions of the Royal Society A, 379(2207), 20200363.
  • Meyer, D. (2017). “Vladimir Putin says whoever leads in artificial intelligence will rule the world”. Fortune, 4.
  • Mitzen, J. (2006). “Ontological security in world politics: State identity and the security dilemma”. European Journal of International Relations, 12(3), 341-370.
  • Mullahy, P. (1946). “A theory of interpersonal relations and the evolution of personality”. Psychiatry, 8(2), 177-205.
  • O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Penguin.
  • Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). “Dissecting racial bias in an algorithm used to manage the health of populations”. Science, 366(6464), 447-453.
  • OECD. (2019). Artificial intelligence in society. OECD Publishing.
  • Okolie, C. (2023). “Artificial intelligence-altered videos (deepfakes), image-based sexual abuse, and data privacy concerns”. Journal of International Women’s Studies, 25(2), 11.
  • Okuyan, E. (2003). İnşaat proje yönetiminde istihdam edilecek teknik personel performansı değerlendirmesinde bir uzman sistem modeli geliştirmesi (Yayımlanmamış doktora tezi). İstanbul Üniversitesi.
  • Paganoni, M. C. (2019). Framing big data: A linguistic and discursive approach. Palgrave Macmillan.
  • Palmiotto, F. (2023). “Detecting deep fake evidence with artificial intelligence: A critical look from a criminal law perspective”. Available at SSRN 4384122.
  • Paris, B., & Donovan, J. (2019). “Deepfakes and cheap fakes: The manipulation of audio and visual evidence”. Data & Society.
  • Partal, T., Kahya, E., & Cığızoğlu, K. (2011). “Yağış verilerinin yapay sinir ağları ve dalgacık dönüşümü yöntemleri ile tahmini”. İTÜDERGİSİ/d, 7(3).
  • Ringrose, J., Milne, B., Mishna, F., Regehr, K., & Slane, A. (2022). “Young people’s experiences of image-based sexual harassment and abuse in England and Canada: Toward a feminist framing of technologically facilitated sexual violence”. Women’s Studies International Forum, 93.
  • Rumelili, B. (2020). “Integrating anxiety into international relations theory: Hobbes, existentialism, and ontological security”. International Theory, 12(2), 257-272.
  • Rumelili, B. (Ed.). (2014). Conflict resolution and ontological security: Peace anxieties. Routledge.
  • Rumelili, B., & Adısönmez, U. C. (2020). “Uluslararası ilişkilerde kimlik-güvenlik ilişkisine dair yeni bir paradigma: Ontolojik güvenlik teorisi”. Uluslararası İlişkiler Dergisi, 17(66), 23-39.
  • Russell, S. J., & Norvig, P. (2003). Artificial intelligence: A modern approach. Pearson Education.
  • Ryan, M. (2020). “In AI we trust: Ethics, artificial intelligence, and reliability”. Science and Engineering Ethics, 26(5), 2749-2767.
  • Sarıbay, A. Y. (2014). Toplumun mantığı: Bir mantıksal anlatı olarak sosyoloji. Sentez Yayıncılık.
  • Sarpatwar, K., Vaculin, R., Min, H., Su, G., Heath, T., Ganapavarapu, G., & Dillenberger, D. (2019). “Towards enabling trusted artificial intelligence via blockchain”. In Policy-based autonomic data governance (s. 137-153). Springer.
  • Sell, P. (1985). Expert systems: A practical introduction. Macmillan Publishers.
  • Siau, K., & Wang, W. (2020). “Artificial intelligence (AI) ethics: Ethics of AI and ethical AI”. Journal of Database Management, 31(2), 74-87.
  • Smith, C. (2006). The history of artificial intelligence. University of Washington.
  • Stahl, B. C. (2021). “Ethical issues of AI”. In Artificial intelligence for a better future: An ecosystem perspective on the ethics of AI and emerging digital technologies. Springer.
  • Stahl, B. C., & Wright, D. (2018). “Ethics and privacy in AI and big data: Implementing responsible research and innovation”. IEEE Security & Privacy, 16(3), 26-33.
  • Stilgoe, J. (2018). “Machine learning, social learning and the governance of self-driving cars”. Social Studies of Science, 48(1), 25-56.
  • Strubell, E., Ganesh, A., & McCallum, A. (2019). “Energy and policy considerations for deep learning in NLP”. arXiv. https://arxiv.org/abs/1906.02243
  • Taş, D., & Turangil, F. (2020). “Sağlık çalışanlarının bilgisayar teknolojisine karşı tutumları ile teknoloji öz-yeterliliği düzeylerinin işgücü devrine etkisi: Gaziantep Üniversitesi Tıp Fakültesi Hastanesi örneği”. Anadolu Üniversitesi İktisadi ve İdari Bilimler Fakültesi Dergisi, 21(2), 1-17.
  • Tegmark, M. (2014). “Transcending complacency on super-intelligent machines”. HuffPost.
  • Tektaş, M., Akbaş, A., & Topuz, M. (2002). “Yapay zekâ tekniklerinin trafik kontrolünde kullanılması üzerine bir inceleme”. Uluslararası Trafik ve Yol Güvenliği Kongresi. Gazi Üniversitesi.
  • The World Economic Forum. (2012). Big data, big impact: New possibilities for international development.
  • The World Economic Forum. (2018). How to prevent discriminatory outcomes in machine learning.
  • Topol, E. J. (2019). “High-performance medicine: The convergence of human and artificial intelligence”. Nature Medicine, 25(1), 44-56.
  • Wang, Y. Y., & Wang, Y. S. (2019). “Development and validation of an artificial intelligence anxiety scale: An initial application in predicting motivated learning behavior”. Interactive Learning Environments, 1-16.
  • Yalçınkaya, A. (2019). “Yapay zekâ ve sosyal bilimler”. XI. Uluslararası Uludağ Uluslararası İlişkiler Kongresi. Bursa.
  • Yıldız, S. K. (2023). “Dijital enformasyon çağında sentetik kitle manipülasyonu: Deepfake (derin sahtelik) ürünleri”. Yeni medya araştırmaları: Dil, imaj, fenomenler, teknoloji, dezenformasyon (p. 163).

ARTIFICIAL INTELLIGENCE AND ONTOLOGICAL INSECURITY: AN ASSESSMENT ON THE DYNAMICS OF INDIVIDUAL AND SOCIAL ANXIETY


Abstract

The proliferation of artificial intelligence technologies across many different fields, technologies that deepen their interaction with every member of society and make everyday life more practical with each passing day, brings growing problems alongside its benefits. Rather than offering a definition of artificial intelligence, our study links the changes and threats that these innovations create in the structure of the individual and society to the notion of anxiety in ontological security studies, a notion that acts on both the individual and society. Examining, within the framework of the ontological security concept, the combination of problems that threaten individual and social life as artificial intelligence is integrated with the individual and society, the study discusses the factors and problems that trigger anxiety and trust.

There are 91 citations in total.

Details

Primary Language Turkish
Subjects Information Modelling, Management and Ontologies
Journal Section Research
Authors

Merve Abanoz 0000-0002-8630-6806

Eray Acar

Publication Date September 28, 2023
Published in Issue Year 2023 Volume: XIV Issue: I

Cite

APA Abanoz, M., & Acar, E. (2023). YAPAY ZEKÂ VE ONTOLOJİK GÜVENSİZLİK: BİREYSEL VE TOPLUMSAL KAYGI DİNAMİKLERİ ÜZERİNE BİR DEĞERLENDİRME. LAÜ Sosyal Bilimler Dergisi, XIV(I), 22-51.

The Lefke Avrupa Üniversitesi (LAÜ) Journal of Social Sciences is a peer-reviewed journal published twice a year, in June and December. The journal’s scope covers all disciplines and branches of the social sciences. The LAÜ Journal of Social Sciences accepts articles in Turkish and English only. http://euljss.eul.edu.tr/euljss/