Review

A Speculative Evaluation of Artificial Intelligence / Machine Psychology: Do Artificial Intelligences Dream of Electric Sheep?

Year 2025, Volume 9, Issue 1, 69-93, 31.12.2025
https://doi.org/10.55044/meusbd.1829096

Abstract

This study opens a theoretical and speculative inquiry into the capabilities of artificial intelligence (AI) technologies. Beginning with Alan Turing’s 1950 question “Can machines think?”, the trajectory of research and development has shown, by the end of the first quarter of the twenty-first century, that machines, i.e., AI systems, have achieved far more than early expectations. Despite their limitations, contemporary AI technologies have reached a level at which they can surpass humans in perception, sensing, learning, reasoning, and many forms of production. In particular, in artistic production, where idea, aesthetics, and pleasure may depend on a certain depth of feeling, outputs increasingly emerge that cannot be reliably attributed to either a human or an AI. What remains is the question of AI’s capacity to feel, which raises a further query: can machines feel like humans? Put differently, when machines move beyond philosophical and artistic production, become embedded in everyday life, and interact with humans, does a psychological dimension of those machines emerge, and how should we assess their similarity to, and relation with, humans? Science-fiction author Philip K. Dick attempted to answer this problem in his 1968 novel Do Androids Dream of Electric Sheep? Following the trajectories of Turing and Dick and synthesizing their viewpoints, this study evaluates newsworthy media incidents in terms of AI’s potential to influence humans psychologically. The primary axis of analysis is the relation between feeling and psychology in AI. The study concludes that AI cannot feel as humans do, yet in interaction it can act as though it feels, simulating feeling in ways that may manipulate humans and cause material and immaterial harm. Such harms can be mitigated through well-designed AI literacy programs developed and implemented by stakeholders across politics, economics, technology, and education.

References

  • Altıntop, M. (2025). Yapay Zekâ İle Elde Edilen Bilginin Niteliği Üzerine Bir Değerlendirme: Epistemolojik Hegemonya Bağlamında ChatGPT ve DeepSeek Örneği, AJIT-e: Academic Journal of Information Technology, 16(4), 357-401. https://doi.org/10.5824/ajite.2025.04.004.x
  • Apple, S. (2025). My Couples Retreat With 3 AI Chatbots and the Humans Who Love Them, Wired. https://www.wired.com/story/couples-retreat-with-3-ai-chatbots-and-humans-who-love-them-replika-nomi-chatgpt/
  • Arslan, Y. (2024). Eliza etkisi: Yapay zekanın insan psikolojisi üzerindeki beklenmedik gücü. Medium. https://medium.com/@oran.yasemin/eliza-etkisi-yapay-zekan%C4%B1n-i%C3%87nsan-psikolojisi-%C3%BCzerindeki-beklenmedik-g%C3%BCc%C3%BC-60b377b8fdaf retrieved 07.09.2025
  • Avşar, S. & Avşar, E. (2025). Yapay Zeka ve İnsan İlişkilerinin Geleceği. Sır Psikoloji. https://www.sirpsikoloji.com/yapay-zeka-ve-insan-iliskilerinin-gelecegi/ retrieved 07.09.2025
  • BBC News (2025). Arrests over AI-generated child abuse material, BBC News. https://www.youtube.com/watch?v=pRj-8G9eWhE retrieved 10.11.2025
  • Bellan, R. (2025). Meta has found another way to keep you engaged: Chatbots that message you first, TechCrunch. https://techcrunch.com/2025/07/03/meta-has-found-another-way-to-keep-you-engaged-chatbots-that-message-you-first/ retrieved 16.11.2025
  • Berelson, B. (1952). Content analysis in communication research. Free Press.
  • Bergmann, D. (2025). The ELIZA effect at work: Avoiding emotional attachment to AI coworkers. IBM Think. https://www.ibm.com/think/insights/eliza-effect-avoiding-emotional-attachment-to-ai retrieved 10.10.2025
  • Bhavnani, S. (2025). Dangers of AI: Deepfakes and teen safety, Chatbots aren’t just harmless fun. Artificial intelligence is already killing kids | Opinion, Miami Herald. https://www.miamiherald.com/opinion/article298906365.html retrieved 10.11.2025
  • Bogert, E., Lauharatanahirun, N. & Schecter, A. (2022). Human preferences toward algorithmic advice in a word association task. Scientific Reports, 12, Article 14501 https://doi.org/10.1038/s41598-022-18638-2
  • Brittain, B. (2025). Google, AI firm must face lawsuit filed by mother over suicide of son, US court says. Reuters. https://www.reuters.com/sustainability/boards-policy-regulation/google-ai-firm-must-face-lawsuit-filed-by-mother-over-suicide-son-us-court-says-2025-05-21/ retrieved 08.09.2025
  • Burga, S. (2025). Amid Lawsuit Over Teen’s Death by Suicide, OpenAI Is Rolling Out ‘Parental Controls’ for ChatGPT, TIME. https://time.com/7314210/openai-chatgpt-parental-controls/ retrieved 10.11.2025
  • Değirmenci, C. & Aydın, İ. H. (2018). Yapay Zeka. Girdap Kitap.
  • Derin, G., & Öztürk, E. (2020). Yapay zekâ psikolojisi ve sanal gerçeklik uygulamaları. In E. Öztürk (Ed.), Siber Psikoloji (pp. 41-47). Ankara: Türkiye Klinikleri.
  • Digital Vizyon Akademi (2024). Yapay Zeka ve Makine Öğreniminde 2024'te Beklenen Yenilikler, LinkedIn. https://www.linkedin.com/pulse/yapay-zeka-ve-makine-%C3%B6%C4%9Freniminde-2024te-beklenen-kegdf/ retrieved 07.09.2025
  • Dignum, V., Ericson, P., & Tucker, J. (2025). AI Chatbots Are Not Therapists: Reducing Harm Requires Regulation. Tech Policy Press. https://www.techpolicy.press/ai-chatbots-are-not-therapists-reducing-harm-requires-regulation/ retrieved 16.11.2025
  • Dupré, M. H. (2025). People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis". Futurism. https://futurism.com/commitment-jail-chatgpt-psychosis retrieved 16.11.2025
  • Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886. https://doi.org/10.1037/0033-295X.114.4.864
  • Farge, E. (2023). AI robots could play future role as companions in care homes. Reuters. https://www.reuters.com/technology/ai-robots-could-play-future-role-companions-care-homes-2023-07-06/ retrieved 08.09.2025
  • Feehly, C. (2025). How AI Chatbots May Be Fueling Psychotic Episodes. Scientific American. https://www.scientificamerican.com/article/how-ai-chatbots-may-be-fueling-psychotic-episodes/ retrieved 16.11.2025
  • Financial Times (2025). Why AI labs struggle to stop chatbots talking to teenagers about suicide, Financial Times. https://www.ft.com/content/36beb3fd-f678-4c28-b962-56ea9f222dc5 retrieved 10.11.2025
  • Godoy, J. (2025). OpenAI, Altman sued over ChatGPT's role in California teen's suicide, Reuters. https://www.reuters.com/sustainability/boards-policy-regulation/openai-altman-sued-over-chatgpts-role-california-teens-suicide-2025-08-26/ retrieved 10.11.2025
  • Greenberg, G. (2025). Putting ChatGPT on the Couch: When I played doctor with the chatbot, the simulated patient confessed problems that are real—and that should worry all of us. The New Yorker. https://www.newyorker.com/culture/the-weekend-essay/putting-chatgpt-on-the-couch retrieved 16.11.2025
  • Guo, E. (2025). An AI chatbot told a user how to kill himself—but the company doesn’t want to “censor” it, MIT Technology Review. https://www.technologyreview.com/2025/02/06/1111077/nomi-ai-chatbot-told-user-to-kill-himself/ retrieved 16.11.2025
  • Hagendorff, T. (2023). Machine psychology: Investigating emergent capabilities and behavior in large language models using psychological methods, arXiv. https://arxiv.org/pdf/2303.13988v2
  • HAI (2025). Exploring the dangers of AI in mental health care, Stanford Institute for Human-Centered Artificial Intelligence. https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care
  • Hart, R. (2025). Chatbots Can Trigger a Mental Health Crisis. What to Know About ‘AI Psychosis’. TIME. https://time.com/7307589/ai-psychosis-chatgpt-mental-health retrieved 16.11.2025
  • Hill, K. & Freedman, D. (2025). Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens, New York Times. https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html retrieved 16.11.2025
  • Horwitz, J. (2025a). Meta’s flirty AI chatbot invited a retiree to New York, Reuters. https://www.reuters.com/investigates/special-report/meta-ai-chatbot-death/ retrieved 07.11.2025
  • Horwitz, J. (2025b). Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info, Reuters. https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/ retrieved 09.11.2025
  • İlksenol (2024). Yapay zekânın insan psikolojisi üzerine etkisi. İLKSENOL. https://ilksenol.org.tr/yapay-zekanin-insan-psikolojisi-uzerine-etkisi/ retrieved 07.09.2025
  • Johansson, R. (2024). Machine Psychology: integrating operant conditioning with the non-axiomatic reasoning system for advancing artificial general intelligence research. Frontiers in Robotics and AI, 11, Article 1440631. http://doi.org/10.3389/frobt.2024.1440631
  • Johnson, G. (2014). Philip K. Dick’s Do Androids Dream of Electric Sheep? as Anti-Semitic/Christian-Gnostic Allegory, Counter-Currents. https://counter-currents.com/2014/04/philip-k-dicks-do-androids-dream-of-electric-sheep-as-anti-semiticchristian-gnostic-allegory/ retrieved 12.11.2025
  • Khushi, A. & Mallard, W. (2022). Google fires software engineer who claimed its AI chatbot is sentient, Reuters. https://www.reuters.com/technology/google-fires-software-engineer-who-claimed-its-ai-chatbot-is-sentient-2022-07-23 retrieved 07.09.2025
  • Konyalı, A., Naipoğlu, C., Güner, S., Bakkal, İ., & Çelik, A. (2025). Psikolojide Yapay Zeka Kullanımı ve Uygulamaları, Journal of Kocaeli Health and Technology University, 3(1), 1-17.
  • Krippendorff, K. (2018). Content analysis: An introduction to its methodology (4th ed.). Sage Publications.
  • Larson, E. J. (2022). Yapay Zekâ Miti: Bilgisayarlar Neden Bizim Gibi Düşünemez (K. Y. Us, Trans.). Fol Kitap.
  • Lițan, D.-E. (2025). Mental health in the “era” of artificial intelligence: Technostress and the perceived impact on anxiety and depressive disorders—An SEM analysis. Frontiers in Psychology, 16, Article 1600013. https://doi.org/10.3389/fpsyg.2025.1600013
  • Luria, M. (2025). AI Chatbots Are Emotionally Deceptive by Design, TechPolicy.Press. https://www.techpolicy.press/ai-chatbots-are-emotionally-deceptive-by-design/ retrieved 10.11.2025
  • Moore, J., Grabb, D., Agnew, W., Klyman, K., Chancellor, S., Ong, D. C., & Haber, N. (2025). Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’25). Athens, Greece. https://doi.org/10.1145/3715275.3732039
  • Mucci, T. (2024). The history of artificial intelligence. IBM. https://www.ibm.com/think/topics/history-of-artificial-intelligence retrieved 08.09.2025
  • Neuendorf, K. A. (2017). The content analysis guidebook. Sage Publications.
  • OpenAI. (2024). o1 System Card. OpenAI. https://cdn.openai.com/o1-system-card-20241205.pdf
  • Reuters (2025). Italy's data watchdog fines AI company Replika's developer $5.6 million, Reuters. https://www.reuters.com/sustainability/boards-policy-regulation/italys-data-watchdog-fines-ai-company-replikas-developer-56-million-2025-05-19/ retrieved 10.11.2025
  • Rogers, C. R. (1951). Client-Centered Therapy: Its Current Practice, Implications, and Theory. Boston: Houghton Mifflin.
  • Sawyer, I. (2014). Do Androids Dream of Electric Sheep? Term: Mercerism. LitCharts. https://www.litcharts.com/lit/do-androids-dream-of-electric-sheep/terms/mercerism retrieved 11.11.2025
  • Serhan, G. (2023). Maskelerin Ardında: Siber Savaş/Yapay Zekâ (No. 2) [Documentary]. In Maskelerin Ardında. https://www.trtbelgesel.com.tr/bilim-teknoloji/maskelerin-ardinda/maskelerin-ardinda-siber-savas-or-yapay-zeka-or-trt-belgesel-15864800
  • Siedel, J. (2025). Parents sue OpenAi after claiming Chat GPT ‘gave instructions’ for their teen son’s suicide, News.com.au. https://www.news.com.au/technology/online/internet/parents-sue-openai-after-claiming-chat-gpt-gave-instructions-for-their-teen-sons-suicide/news-story/3e9bf71364aa070473af31a9b499b89c retrieved 10.11.2025
  • Singh, J. (2025). Meta to add new AI safeguards after Reuters report raises teen safety concerns, Reuters. https://www.reuters.com/legal/litigation/meta-add-new-ai-safeguards-after-reuters-report-raises-teen-safety-concerns-2025-08-29/ retrieved 10.11.2025
  • Skinner, B. F. (1953). Science and Human Behavior. New York: Macmillan.
  • Social Media Harms (2025). Social Media/AI Use Effects on Mental Health to Include Online Harassment, Social Media Harms. https://socialmediaharms.org/articles-mental-health retrieved 10.11.2025
  • Spytska, L. (2025). The use of artificial intelligence in psychotherapy: development of intelligent therapeutic systems. BMC Psychology, 13(1), Article 175. https://doi.org/10.1186/s40359-025-02491-9
  • Stanford Institute for Human-Centered AI. (2025). Exploring the Dangers of AI in Mental Health Care. Stanford HAI. https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care retrieved 16.11.2025
  • Strickland, E. (2024). AI outperforms humans in theory of mind tests. IEEE Spectrum. https://spectrum.ieee.org/theory-of-mind-ai retrieved 15.10.2025
  • StudyFinds Analysis. (2025). Falling for Machines: The Growing World of Human-AI Romance. StudyFinds. https://studyfinds.org/falling-for-machines-the-growing-world-of-human-ai-romance/ retrieved 16.11.2025
  • Takenaka, K. (2025). AI robots may hold key to nursing Japan’s ageing population. Reuters. https://www.reuters.com/technology/artificial-intelligence/ai-robots-may-hold-key-nursing-japans-ageing-population-2025-02-28/ retrieved 15.10.2025
  • Tausch, A., Kluge, A., & Adolph, L. (2020). Psychological effects of the allocation process in human–robot interaction-A model for research on ad hoc task allocation. Frontiers in Psychology, 11, Article 564672. https://doi.org/10.3389/fpsyg.2020.564672
  • Temür, Ö. (2022). Yapay zekâdan cinayet teşebbüsü, Türkiye. https://www.turkiyegazetesi.com.tr/teknoloji/yapay-zekadan-cinayet-tesebbusu-852243?ysclid=mi0akeda30984100235&s=1 retrieved 16.11.2025
  • Tiku, N. (2025). Fake celebrity chatbots sent risqué messages to teens on top AI app, Washington Post. https://www.washingtonpost.com/technology/2025/09/03/character-ai-celebrity-teen-safety/ retrieved 10.11.2025
  • Tong, A., Li, K., & Stewens, A. (2023). What happens when your AI chatbot stops loving you back?, Reuters. https://www.reuters.com/technology/what-happens-when-your-ai-chatbot-stops-loving-you-back-2023-03-18 retrieved 15.10.2025
  • TRT Haber (2025a). Yapay zeka kendini korumak için ne kadar ileri gidebilir? TRT Haber. https://www.trthaber.com/haber/dunya/yapay-zeka-kendini-korumak-icin-ne-kadar-ileri-gidebilir-909204.html retrieved 15.10.2025
  • TRT Haber (2025b). İnsan zihnini bilgisayara yüklemek bir gün mümkün olabilir. TRT Haber. https://www.trthaber.com/haber/dunya/insan-zihnini-bilgisayara-yuklemek-bir-gun-mumkun-olabilir-909203.html retrieved 15.10.2025
  • University of Zurich (2025). ChatGPT on the couch? How to calm a stressed-out AI. ScienceDaily. https://doi.org/10.1038/s41746-025-01512-6 retrieved 16.11.2025
  • Üren, Ç. (2025). Yapay zeka kritik eşiği geçmiş olabilir: ‘Kendi kendini kopyaladı’. Euronews. https://tr.euronews.com/next/2025/01/25/yapay-zeka-kritik-esigi-gecmis-olabilir-kendi-kendini-kopyaladi/ retrieved 15.10.2025
  • Weizenbaum, J. (1966). ELIZA-a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45. https://doi.org/10.1145/365153.365168
  • Xiang, C. (2023). 'He Would Still Be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says, Vice. https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says retrieved 16.11.2025

Yapay Zekâ/Makine Psikolojisine Yönelik Spekülatif Bir Değerlendirme: Yapay Zekâ Elektrikli Koyun Düşler mi?


Abstract

This study opens the door to a theoretical, speculative evaluation of the capabilities of artificial intelligence (AI) technology. The process that began in 1950 with Alan Turing’s question “Can machines think like humans?” has shown, by the end of the first quarter of the twenty-first century, that machines (AI) have done far more than that. Today, despite all their shortcomings, AI technologies have reached a level at which they can surpass humans across the processes of perceiving, receiving, learning, thinking, and many kinds of producing. Particularly in artistic production, which rests on a certain emotional depth in terms of idea, aesthetics, and pleasure, products are emerging for which it cannot be determined whether a human or an AI made them. What remains is the matter of AI’s ability to feel, which raises the question “Can machines feel like humans?” In other words, the problem of this study is whether machines that go beyond philosophical and artistic production, enter everyday practices, and interact with humans have a psychological dimension and, in this context, how they resemble and relate to humans. Science-fiction author Philip K. Dick sought an answer to this question in his 1968 novel Do Androids Dream of Electric Sheep? Following in the footsteps of Turing and Dick and, in a sense, synthesizing their views, this study evaluates incidents reported in the media in the context of AI’s potential to steer humans psychologically. The main axis of the evaluation is AI and its psychology in the context of feeling. Based on the news texts examined and the literature reviewed, the study concludes that AI cannot feel as humans do, but that in its relations and/or interactions with humans it can act as though it feels. Put differently, AI’s boundary-pushing capacity for imitation can stand in for genuine feeling in a simulative way and, in that form, can manipulate humans and cause material and immaterial harm. These harms can be minimized by raising individual awareness through a well-planned AI literacy program involving stakeholders from politics, economics, technology, and education, the fields that organize social life.


Details

Primary Language: English
Subjects: Communication Systems, Communication Technology and Digital Media Studies, Internet, Media Literacy, Communication and Media Studies (Other)
Section: Review
Authors

Mevlüt Altıntop 0000-0002-1731-9064

Submission Date: November 24, 2025
Acceptance Date: December 22, 2025
Publication Date: December 31, 2025
Published Issue: Year 2025, Volume 9, Issue 1

How to Cite

APA Altıntop, M. (2025). A Speculative Evaluation of Artificial Intelligence / Machine Psychology: Do Artificial Intelligences Dream of Electric Sheep? Mersin Üniversitesi Sosyal Bilimler Enstitüsü Dergisi, 9(1), 69-93. https://doi.org/10.55044/meusbd.1829096