<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.4 20241031//EN"
        "https://jats.nlm.nih.gov/publishing/1.4/JATS-journalpublishing1-4.dtd">
<article  article-type="reviewer-report"        dtd-version="1.4">
            <front>

                <journal-meta>
                                                                <journal-id>sbed</journal-id>
            <journal-title-group>
                                                                                    <journal-title>Mersin Üniversitesi Sosyal Bilimler Enstitüsü Dergisi</journal-title>
            </journal-title-group>
                            <issn pub-type="ppub">2602-4608</issn>
                                        <issn pub-type="epub">2602-4608</issn>
                                                                                            <publisher>
                    <publisher-name>Mersin University</publisher-name>
                </publisher>
                    </journal-meta>
                <article-meta>
                                        <article-id pub-id-type="doi">10.55044/meusbd.1829096</article-id>
                                                                <article-categories>
                                            <subj-group  xml:lang="en">
                                                            <subject>Communication Systems</subject>
                                                            <subject>Communication Technology and Digital Media Studies</subject>
                                                            <subject>Internet</subject>
                                                            <subject>Media Literacy</subject>
                                                            <subject>Communication and Media Studies (Other)</subject>
                                                    </subj-group>
                                            <subj-group  xml:lang="tr">
                                                            <subject>İletişim Sistemleri</subject>
                                                            <subject>İletişim Teknolojisi ve Dijital Medya Çalışmaları</subject>
                                                            <subject>İnternet</subject>
                                                            <subject>Medya Okuryazarlığı</subject>
                                                            <subject>İletişim ve Medya Çalışmaları (Diğer)</subject>
                                                    </subj-group>
                                    </article-categories>
                                                                                                                                                        <title-group>
                                                                                                                        <article-title>A Speculative Evaluation of Artificial Intelligence / Machine Psychology: Do Artificial Intelligences Dream of Electric Sheep?</article-title>
                                                                                                                                                                                                <trans-title-group xml:lang="tr">
                                    <trans-title>Yapay Zekâ/Makine Psikolojisine Yönelik Spekülatif Bir Değerlendirme: Yapay Zekâ Elektrikli Koyun Düşler mi?</trans-title>
                                </trans-title-group>
                                                                                                    </title-group>
            
                                                    <contrib-group content-type="authors">
                                                                        <contrib contrib-type="author">
<contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-1731-9064</contrib-id>
                                                                <name>
                                    <surname>Altıntop</surname>
                                    <given-names>Mevlüt</given-names>
                                </name>
                                                                    <aff>ERCİYES ÜNİVERSİTESİ, SOSYAL BİLİMLER ENSTİTÜSÜ, GAZETECİLİK (DR)</aff>
                                                            </contrib>
                                                                                </contrib-group>
                        
                                        <pub-date pub-type="pub" iso-8601-date="2025-12-31">
                    <day>31</day>
                    <month>12</month>
                    <year>2025</year>
                </pub-date>
                                        <volume>9</volume>
                                        <issue>1</issue>
                                        <fpage>69</fpage>
                                        <lpage>93</lpage>
                        
                        <history>
                                    <date date-type="received" iso-8601-date="2025-11-24">
                        <day>24</day>
                        <month>11</month>
                        <year>2025</year>
                    </date>
                                                    <date date-type="accepted" iso-8601-date="2025-12-22">
                        <day>22</day>
                        <month>12</month>
                        <year>2025</year>
                    </date>
                            </history>
                                        <permissions>
                    <copyright-statement>Copyright © 2016</copyright-statement>
                    <copyright-year>2016</copyright-year>
                    <copyright-holder></copyright-holder>
                </permissions>
            
                                                                                                <abstract><p>This study opens a theoretical and speculative inquiry into the capabilities of artificial intelligence (AI) technologies. The trajectory of research and development that began with Alan Turing’s 1950 question “Can machines think?” has shown, by the end of the first quarter of the twenty-first century, that machines (i.e., AI systems) have achieved far more than early expectations. Despite their limitations, contemporary AI technologies have reached a level at which they can surpass humans in perception, sensing, learning, reasoning, and many forms of production. In artistic production in particular, where idea, aesthetics, and pleasure may depend on a certain depth of feeling, outputs are increasingly produced that cannot be reliably attributed to either humans or AI. What remains is the question of AI’s capacity to feel, which raises the query: can machines feel like humans? Put differently, when machines move beyond philosophical and artistic production, become embedded in everyday life, and interact with humans, does a psychological dimension of those machines emerge, and how should we assess their similarity to, and relation with, humans? Science-fiction author Philip K. Dick attempted to answer this problem in his 1968 novel Do Androids Dream of Electric Sheep? Following the trajectories of Turing and Dick and synthesizing their viewpoints, this study evaluates newsworthy media incidents in terms of AI’s potential to influence humans psychologically. The primary axis of analysis is the relation between feeling and psychology in AI. The study concludes that AI cannot feel as humans do, yet in its interactions it can act as though it feels, simulating feeling in ways that may manipulate humans and cause material and immaterial harm. Such harms can be mitigated through well-designed AI literacy programs developed and implemented by stakeholders across politics, economics, technology, and education.</p></abstract>
                                                                                                                                    <trans-abstract xml:lang="tr">
                            <p>Bu çalışma, yapay zekâ (YZ) teknolojisinin yetenekleri üzerine teorik düzeyde spekülatif bir değerlendirmeye kapı aralamaktadır. 1950 yılında Alan Turing’in “makineler insan gibi düşünebilir mi” sorusuyla başlayan süreç, yirmi birinci yüzyılın ilk çeyreğinin sonuna gelindiğinde, makinelerin (yapay zekânın, YZ’nin) beklenenin fazlasını yaptığını göstermiştir. Bugün YZ teknolojileri, tüm eksiklerine rağmen, algılama, alımlama, öğrenme, düşünme ve birçok türde üretme süreçlerinin tümünde insanı aşabilen seviyeye ulaşmıştır. Özellikle fikir, estetik ve haz açısından belirli bir duygusal derinliğe dayanan sanatsal üretimde, insan ile YZ’den hangisinin yaptığı ayırt edilemeyecek ürünler ortaya çıkmaktadır. Geriye “YZ’nin hissedebilme yeteneği” meselesi kalmıştır ve bu durum, “makineler insan gibi hissedebilir mi” sorusunu gündeme getirmektedir. Bir başka ifadeyle, felsefi ve sanatsal üretimin ötesine geçerek gündelik hayat pratiklerine dâhil olan ve insanla etkileşim içine giren makinelerin psikolojik yönünün olup olmadığı ve bu bağlamda insana benzerliği ile insanla ilişkisi araştırmanın sorunsalıdır. Bilim-kurgu yazarı Philip K. Dick, 1968 yılında yayımlanan “Androidler Elektrikli Koyun Düşler mi?” adlı kurmaca eserinde bu meseleye yanıt bulmaya çalışmıştır. Turing ve Dick’in izinden giderek, bir anlamda onların görüşlerini sentezleyen bu çalışmada, YZ’nin insanı psikolojik açıdan yönlendirmesi bağlamında medyaya düşen haberler değerlendirilmiştir. Değerlendirmelerin ana ekseni, hissetme bağlamında YZ ve psikolojisidir. Çalışmada incelenen haber metinleri ve yapılan literatür değerlendirmelerine göre, YZ’nin insan gibi hissedemediği fakat insanla ilişki ve/veya etkileşimlerinde hissediyormuşçasına hareket edebildiği gibi sonuçlara ulaşılmıştır. Başka bir ifadeyle, YZ’nin sınırları zorlayan taklit özelliğinin, simülatif biçimde hissetme duygusunun yerine konumlanabildiği ve bu hâliyle insanı manipüle ederek maddi-manevi zarar verebildiği ortaya çıkmıştır. Toplumsal hayatı düzenleyen siyaset, ekonomi, teknoloji ve eğitim alanlarındaki paydaşların katılımıyla iyi planlanmış bir YZ okuryazarlığı programıyla bireyler bilinçlendirilerek söz konusu zararlar minimize edilebilir.</p></trans-abstract>
                                                            
            
                                                            <kwd-group>
                                                    <kwd>Artificial Intelligence Psychology</kwd>
                                                    <kwd>Machine Psychology</kwd>
                                                    <kwd>AI and Communication</kwd>
                                                    <kwd>AI and Media</kwd>
                                                    <kwd>Communication Psychology</kwd>
                                            </kwd-group>
                                                        
                                                                            <kwd-group xml:lang="tr">
                                                    <kwd>Yapay Zekâ Psikolojisi</kwd>
                                                    <kwd>Makine Psikolojisi</kwd>
                                                    <kwd>Yapay Zekâ ve İletişim</kwd>
                                                    <kwd>Yapay Zekâ ve Medya</kwd>
                                                    <kwd>İletişim Psikolojisi</kwd>
                                            </kwd-group>
                                                                                                            </article-meta>
    </front>
    <back>
                            <ref-list>
                                    <ref id="ref1">
                        <label>1</label>
                        <mixed-citation publication-type="journal">Altıntop, M. (2025). Yapay Zekâ İle Elde Edilen Bilginin Niteliği Üzerine Bir Değerlendirme: Epistemolojik Hegemonya Bağlamında ChatGPT ve DeepSeek Örneği, AJIT-e: Academic Journal of Information Technology, 16(4), 357-401. https://doi.org/10.5824/ajite.2025.04.004.x</mixed-citation>
                    </ref>
                                    <ref id="ref2">
                        <label>2</label>
                        <mixed-citation publication-type="journal">Apple, S. (2025). My Couples Retreat With 3 AI Chatbots and the Humans Who Love Them, Wired. https://www.wired.com/story/couples-retreat-with-3-ai-chatbots-and-humans-who-love-them-replika-nomi-chatgpt/</mixed-citation>
                    </ref>
                                    <ref id="ref3">
                        <label>3</label>
                        <mixed-citation publication-type="journal">Arslan, Y. (2024). Eliza etkisi: Yapay zekanın insan psikolojisi üzerindeki beklenmedik gücü. Medium. https://medium.com/@oran.yasemin/eliza-etkisi-yapay-zekan%C4%B1n-i%C3%87nsan-psikolojisi-%C3%BCzerindeki-beklenmedik-g%C3%BCc%C3%BC-60b377b8fdaf retrieved 07.09.2025.</mixed-citation>
                    </ref>
                                    <ref id="ref4">
                        <label>4</label>
                        <mixed-citation publication-type="journal">Avşar, S. &amp; Avşar, E. (2025). Yapay Zeka ve İnsan İlişkilerinin Geleceği. Sır Psikoloji. https://www.sirpsikoloji.com/yapay-zeka-ve-insan-iliskilerinin-gelecegi/ retrieved 07.09.2025.</mixed-citation>
                    </ref>
                                    <ref id="ref5">
                        <label>5</label>
                        <mixed-citation publication-type="journal">BBC News (2025). Arrests over AI-generated child abuse material, BBC News. https://www.youtube.com/watch?v=pRj-8G9eWhE retrieved 10.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref6">
                        <label>6</label>
                        <mixed-citation publication-type="journal">Bellan, R. (2025). Meta has found another way to keep you engaged: Chatbots that message you first, TechCrunch. https://techcrunch.com/2025/07/03/meta-has-found-another-way-to-keep-you-engaged-chatbots-that-message-you-first/ retrieved 16.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref7">
                        <label>7</label>
                        <mixed-citation publication-type="journal">Berelson, B. (1952). Content analysis in communication research. Free Press.</mixed-citation>
                    </ref>
                                    <ref id="ref8">
                        <label>8</label>
                        <mixed-citation publication-type="journal">Bergmann, D. (2025). The ELIZA effect at work: Avoiding emotional attachment to AI coworkers. IBM Think. https://www.ibm.com/think/insights/eliza-effect-avoiding-emotional-attachment-to-ai retrieved 10.10.2025</mixed-citation>
                    </ref>
                                    <ref id="ref9">
                        <label>9</label>
                        <mixed-citation publication-type="journal">Bhavnani, S. (2025). Dangers of AI: Deepfakes and teen safety, Chatbots aren’t just harmless fun. Artificial intelligence is already killing kids | Opinion, Miami Herald. https://www.miamiherald.com/opinion/article298906365.html retrieved 10.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref10">
                        <label>10</label>
                        <mixed-citation publication-type="journal">Bogert, E., Lauharatanahirun, N. &amp; Schecter, A. (2022). Human preferences toward algorithmic advice in a word association task. Scientific Reports, 12, Article 14501 https://doi.org/10.1038/s41598-022-18638-2</mixed-citation>
                    </ref>
                                    <ref id="ref11">
                        <label>11</label>
                        <mixed-citation publication-type="journal">Brittain, B. (2025). Google, AI firm must face lawsuit filed by mother over suicide of son, US court says. Reuters. https://www.reuters.com/sustainability/boards-policy-regulation/google-ai-firm-must-face-lawsuit-filed-by-mother-over-suicide-son-us-court-says-2025-05-21/ retrieved 08.09.2025</mixed-citation>
                    </ref>
                                    <ref id="ref12">
                        <label>12</label>
                        <mixed-citation publication-type="journal">Burga, S. (2025). Amid Lawsuit Over Teen’s Death by Suicide, OpenAI Is Rolling Out ‘Parental Controls’ for ChatGPT, TIME. https://time.com/7314210/openai-chatgpt-parental-controls/ retrieved 10.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref13">
                        <label>13</label>
                        <mixed-citation publication-type="journal">Değirmenci, C. &amp; Aydın, İ. H. (2018). Yapay Zeka. Girdap Kitap.</mixed-citation>
                    </ref>
                                    <ref id="ref14">
                        <label>14</label>
                        <mixed-citation publication-type="journal">Derin, G., &amp; Öztürk, E. (2020). Yapay zekâ psikolojisi ve sanal gerçeklik uygulamaları. içinde E. Öztürk (Ed.), Siber Psikoloji, s. 41-47, Ankara: Türkiye Klinikleri.</mixed-citation>
                    </ref>
                                    <ref id="ref15">
                        <label>15</label>
                        <mixed-citation publication-type="journal">Digital Vizyon Akademi (2024). Yapay Zeka ve Makine Öğreniminde 2024&#039;te Beklenen Yenilikler, Linkedin. https://www.linkedin.com/pulse/yapay-zeka-ve-makine-%C3%B6%C4%9Freniminde-2024te-beklenen-kegdf/ retrieved 07.09.2025.</mixed-citation>
                    </ref>
                                    <ref id="ref16">
                        <label>16</label>
                        <mixed-citation publication-type="journal">Dignum, V., Ericson, P., &amp; Tucker, J. (2025). AI Chatbots Are Not Therapists: Reducing Harm Requires Regulation. Tech Policy Press. https://www.techpolicy.press/ai-chatbots-are-not-therapists-reducing-harm-requires-regulation/ retrieved 16.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref17">
                        <label>17</label>
                        <mixed-citation publication-type="journal">Dupré, M. H. (2025). People Are Being Involuntarily Committed, Jailed After Spiraling Into &quot;ChatGPT Psychosis&quot;. Futurism. https://futurism.com/commitment-jail-chatgpt-psychosis retrieved 16.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref18">
                        <label>18</label>
                        <mixed-citation publication-type="journal">Epley, N., Waytz, A., &amp; Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886. https://doi.org/10.1037/0033-295X.114.4.864</mixed-citation>
                    </ref>
                                    <ref id="ref19">
                        <label>19</label>
                        <mixed-citation publication-type="journal">Farge, E. (2023). AI robots could play future role as companions in care homes. Reuters. https://www.reuters.com/technology/ai-robots-could-play-future-role-companions-care-homes-2023-07-06/ retrieved 08.09.2025</mixed-citation>
                    </ref>
                                    <ref id="ref20">
                        <label>20</label>
                        <mixed-citation publication-type="journal">Feehly, C. (2025). How AI Chatbots May Be Fueling Psychotic Episodes. Scientific American.  https://www.scientificamerican.com/article/how-ai-chatbots-may-be-fueling-psychotic-episodes/ retrieved 16.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref21">
                        <label>21</label>
                        <mixed-citation publication-type="journal">Financial Times (2025). Why AI labs struggle to stop chatbots talking to teenagers about suicide, Financial Times. https://www.ft.com/content/36beb3fd-f678-4c28-b962-56ea9f222dc5 retrieved 10.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref22">
                        <label>22</label>
                        <mixed-citation publication-type="journal">Godoy, J. (2025). OpenAI, Altman sued over ChatGPT&#039;s role in California teen&#039;s suicide, Reuters. https://www.reuters.com/sustainability/boards-policy-regulation/openai-altman-sued-over-chatgpts-role-california-teens-suicide-2025-08-26/ retrieved 10.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref23">
                        <label>23</label>
                        <mixed-citation publication-type="journal">Greenberg, G. (2025). Putting ChatGPT on the Couch: When I played doctor with the chatbot, the simulated patient confessed problems that are real—and that should worry all of us. The New Yorker. https://www.newyorker.com/culture/the-weekend-essay/putting-chatgpt-on-the-couch retrieved 16.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref24">
                        <label>24</label>
                        <mixed-citation publication-type="journal">Guo, E. (2025). An AI chatbot told a user how to kill himself—but the company doesn’t want to “censor” it, MIT Technology Review. https://www.technologyreview.com/2025/02/06/1111077/nomi-ai-chatbot-told-user-to-kill-himself/ retrieved 16.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref25">
                        <label>25</label>
                        <mixed-citation publication-type="journal">Hagendorff, T. (2023). Machine psychology: Investigating emergent capabilities and behavior in large language models using psychological methods, arXiv. https://arxiv.org/pdf/2303.13988v2</mixed-citation>
                    </ref>
                                    <ref id="ref26">
                        <label>26</label>
                        <mixed-citation publication-type="journal">HAI (2025). Exploring the dangers of AI in mental health care, Stanford Institute for Human-Centered Artificial Intelligence. https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care</mixed-citation>
                    </ref>
                                    <ref id="ref27">
                        <label>27</label>
                        <mixed-citation publication-type="journal">Hart, R. (2025). Chatbots Can Trigger a Mental Health Crisis. What to Know About ‘AI Psychosis’. TIME. https://time.com/7307589/ai-psychosis-chatgpt-mental-health retrieved 16.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref28">
                        <label>28</label>
                        <mixed-citation publication-type="journal">Hill, K. &amp; Freedman, D. (2025). Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens, New York Times. https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html retrieved 16.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref29">
                        <label>29</label>
                        <mixed-citation publication-type="journal">Horwitz, J. (2025a). Meta’s flirty AI chatbot invited a retiree to New York, Reuters. https://www.reuters.com/investigates/special-report/meta-ai-chatbot-death/ retrieved 07.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref30">
                        <label>30</label>
                        <mixed-citation publication-type="journal">Horwitz, J. (2025b). Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info, Reuters. https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/ retrieved 09.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref31">
                        <label>31</label>
                        <mixed-citation publication-type="journal">İlksenol (2024). Yapay zekânın insan psikolojisi üzerine etkisi. İLKSENOL. https://ilksenol.org.tr/yapay-zekanin-insan-psikolojisi-uzerine-etkisi/ retrieved 07.09.2025.</mixed-citation>
                    </ref>
                                    <ref id="ref32">
                        <label>32</label>
                        <mixed-citation publication-type="journal">Johansson, R. (2024). Machine Psychology: integrating operant conditioning with the non-axiomatic reasoning system for advancing artificial general intelligence research. Frontiers in Robotics and AI, 11, Article 1440631. https://doi.org/10.3389/frobt.2024.1440631</mixed-citation>
                    </ref>
                                    <ref id="ref33">
                        <label>33</label>
                        <mixed-citation publication-type="journal">Johnson, G. (2014). Philip K. Dick’s Do Androids Dream of Electric Sheep? as Anti-Semitic/Christian-Gnostic Allegory, Counter-Currents. https://counter-currents.com/2014/04/philip-k-dicks-do-androids-dream-of-electric-sheep-as-anti-semiticchristian-gnostic-allegory/ retrieved 12.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref34">
                        <label>34</label>
                        <mixed-citation publication-type="journal">Khushi, A. &amp; Mallard, W. (2022). Google fires software engineer who claimed its AI chatbot is sentient, Reuters. https://www.reuters.com/technology/google-fires-software-engineer-who-claimed-its-ai-chatbot-is-sentient-2022-07-23 retrieved 07.09.2025</mixed-citation>
                    </ref>
                                    <ref id="ref35">
                        <label>35</label>
                        <mixed-citation publication-type="journal">Konyalı, A., Naipoğlu, C., Güner, S., Bakkal, İ. &amp; Çelik, A. (2025). Psikolojide Yapay Zeka Kullanımı ve Uygulamaları, Journal of Kocaeli Health and Technology University, 3(1), 1-17.</mixed-citation>
                    </ref>
                                    <ref id="ref36">
                        <label>36</label>
                        <mixed-citation publication-type="journal">Krippendorff, K. (2018). Content analysis: An introduction to its methodology (4th ed.). Sage Publications.</mixed-citation>
                    </ref>
                                    <ref id="ref37">
                        <label>37</label>
                        <mixed-citation publication-type="journal">Larson, E. J. (2022). Yapay Zekâ Miti Bilgisayarlar Neden Bizim Gibi Düşünemez (K. Y. Us, Çev.). Fol Kitap.</mixed-citation>
                    </ref>
                                    <ref id="ref38">
                        <label>38</label>
                        <mixed-citation publication-type="journal">Lițan, D.-E. (2025). Mental health in the “era” of artificial intelligence: Technostress and the perceived impact on anxiety and depressive disorders—An SEM analysis. Frontiers in Psychology, 16, Article 1600013. https://doi.org/10.3389/fpsyg.2025.1600013</mixed-citation>
                    </ref>
                                    <ref id="ref39">
                        <label>39</label>
                        <mixed-citation publication-type="journal">Luria, M. (2025). AI Chatbots Are Emotionally Deceptive by Design, TechPolicy.Press. https://www.techpolicy.press/ai-chatbots-are-emotionally-deceptive-by-design/ retrieved 10.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref40">
                        <label>40</label>
                        <mixed-citation publication-type="journal">Moore, J., Grabb, D., Agnew, W., Klyman, K., Chancellor, S., Ong, D. C., &amp; Haber, N. (2025). Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’25). Athens, Greece. https://doi.org/10.1145/3715275.3732039</mixed-citation>
                    </ref>
                                    <ref id="ref41">
                        <label>41</label>
                        <mixed-citation publication-type="journal">Mucci, T. (2024). The history of artificial intelligence. IBM. https://www.ibm.com/think/topics/history-of-artificial-intelligence retrieved 08.09.2025</mixed-citation>
                    </ref>
                                    <ref id="ref42">
                        <label>42</label>
                        <mixed-citation publication-type="journal">Neuendorf, K. A. (2017). The content analysis guidebook. Sage Publications.</mixed-citation>
                    </ref>
                                    <ref id="ref43">
                        <label>43</label>
                        <mixed-citation publication-type="journal">OpenAI. (2024). o1 System Card. OpenAI. https://cdn.openai.com/o1-system-card-20241205.pdf</mixed-citation>
                    </ref>
                                    <ref id="ref44">
                        <label>44</label>
                        <mixed-citation publication-type="journal">Reuters (2025). Italy&#039;s data watchdog fines AI company Replika&#039;s developer $5.6 million, Reuters. https://www.reuters.com/sustainability/boards-policy-regulation/italys-data-watchdog-fines-ai-company-replikas-developer-56-million-2025-05-19/ retrieved 10.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref45">
                        <label>45</label>
                        <mixed-citation publication-type="journal">Rogers, C. R. (1951). Client-Centered Therapy: Its Current Practice, Implications, and Theory. Boston: Houghton Mifflin.</mixed-citation>
                    </ref>
                                    <ref id="ref46">
                        <label>46</label>
                        <mixed-citation publication-type="journal">Sawyer, I. (2014). Do Androids Dream of Electric Sheep? Term: Mercerism. LitCharts. https://www.litcharts.com/lit/do-androids-dream-of-electric-sheep/terms/mercerism retrieved 11.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref47">
                        <label>47</label>
                        <mixed-citation publication-type="journal">Serhan, G. (2023). Maskelerin Ardında: Siber Savaş/Yapay Zekâ (No. 2) [Belgesel]. İçinde Maskelerin Ardında. https://www.trtbelgesel.com.tr/bilim-teknoloji/maskelerin-ardinda/maskelerin-ardinda-siber-savas-or-yapay-zeka-or-trt-belgesel-15864800</mixed-citation>
                    </ref>
                                    <ref id="ref48">
                        <label>48</label>
                        <mixed-citation publication-type="journal">Siedel, J. (2025). Parents sue OpenAi after claiming Chat GPT ‘gave instructions’ for their teen son’s suicide, News.com.au. https://www.news.com.au/technology/online/internet/parents-sue-openai-after-claiming-chat-gpt-gave-instructions-for-their-teen-sons-suicide/news-story/3e9bf71364aa070473af31a9b499b89c retrieved 10.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref49">
                        <label>49</label>
                        <mixed-citation publication-type="journal">Singh, J. (2025). Meta to add new AI safeguards after Reuters report raises teen safety concerns, Reuters. https://www.reuters.com/legal/litigation/meta-add-new-ai-safeguards-after-reuters-report-raises-teen-safety-concerns-2025-08-29/ retrieved 10.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref50">
                        <label>50</label>
                        <mixed-citation publication-type="journal">Skinner, B. F. (1953). Science and Human Behavior. New York: Macmillan.</mixed-citation>
                    </ref>
                                    <ref id="ref51">
                        <label>51</label>
                        <mixed-citation publication-type="journal">Social Media Harms (2025). Social Media/AI Use Effects on Mental Health to Include Online Harassment, Social Media Harms. https://socialmediaharms.org/articles-mental-health retrieved 10.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref52">
                        <label>52</label>
                        <mixed-citation publication-type="journal">Spytska, L. (2025). The use of artificial intelligence in psychotherapy: development of intelligent therapeutic systems. BMC Psychology, 13(1), Article 175. https://doi.org/10.1186/s40359-025-02491-9</mixed-citation>
                    </ref>
                                    <ref id="ref53">
                        <label>53</label>
                        <mixed-citation publication-type="journal">Stanford Institute for Human-Centered AI. (2025). Exploring the Dangers of AI in Mental Health Care. Stanford HAI. https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care retrieved 16.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref54">
                        <label>54</label>
                        <mixed-citation publication-type="journal">Strickland, E. (2024). AI outperforms humans in theory of mind tests. IEEE Spectrum. https://spectrum.ieee.org/theory-of-mind-ai retrieved 15.10.2025</mixed-citation>
                    </ref>
                                    <ref id="ref55">
                        <label>55</label>
                        <mixed-citation publication-type="journal">StudyFinds Analysis. (2025). Falling for Machines: The Growing World of Human-AI Romance. StudyFinds. https://studyfinds.org/falling-for-machines-the-growing-world-of-human-ai-romance/ retrieved 16.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref56">
                        <label>56</label>
                        <mixed-citation publication-type="journal">Takenaka, K. (2025). AI robots may hold key to nursing Japan’s ageing population. Reuters. https://www.reuters.com/technology/artificial-intelligence/ai-robots-may-hold-key-nursing-japans-ageing-population-2025-02-28/ retrieved 15.10.2025</mixed-citation>
                    </ref>
                                    <ref id="ref57">
                        <label>57</label>
                        <mixed-citation publication-type="journal">Tausch, A., Kluge, A., &amp; Adolph, L. (2020). Psychological effects of the allocation process in human–robot interaction-A model for research on ad hoc task allocation. Frontiers in Psychology, 11, Article 564672. https://doi.org/10.3389/fpsyg.2020.564672</mixed-citation>
                    </ref>
                                    <ref id="ref58">
                        <label>58</label>
                        <mixed-citation publication-type="journal">Temür, Ö. (2022). Yapay zekâdan cinayet teşebbüsü, Türkiye. https://www.turkiyegazetesi.com.tr/teknoloji/yapay-zekadan-cinayet-tesebbusu-852243?ysclid=mi0akeda30984100235&amp;s=1 retrieved 16.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref59">
                        <label>59</label>
                        <mixed-citation publication-type="journal">Tiku, N. (2025). Fake celebrity chatbots sent risqué messages to teens on top AI app, Washington Post. https://www.washingtonpost.com/technology/2025/09/03/character-ai-celebrity-teen-safety/ retrieved 10.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref60">
                        <label>60</label>
                        <mixed-citation publication-type="journal">Tong, A., Li, K., &amp; Stewens, A. (2023). What happens when your AI chatbot stops loving you back?, Reuters. https://www.reuters.com/technology/what-happens-when-your-ai-chatbot-stops-loving-you-back-2023-03-18 retrieved 15.10.2025</mixed-citation>
                    </ref>
                                    <ref id="ref61">
                        <label>61</label>
                        <mixed-citation publication-type="journal">TRT Haber. (2025). Yapay zeka kendini korumak için ne kadar ileri gidebilir? TRT Haber. https://www.trthaber.com/haber/dunya/yapay-zeka-kendini-korumak-icin-ne-kadar-ileri-gidebilir-909204.html retrieved 15.10.2025</mixed-citation>
                    </ref>
                                    <ref id="ref62">
                        <label>62</label>
                        <mixed-citation publication-type="journal">TRT Haber. (2025). İnsan zihnini bilgisayara yüklemek bir gün mümkün olabilir. TRT Haber. https://www.trthaber.com/haber/dunya/insan-zihnini-bilgisayara-yuklemek-bir-gun-mumkun-olabilir-909203.html retrieved 15.10.2025</mixed-citation>
                    </ref>
                                    <ref id="ref63">
                        <label>63</label>
                        <mixed-citation publication-type="journal">University of Zurich (2025). ChatGPT on the couch? How to calm a stressed-out AI. ScienceDaily. https://doi.org/10.1038/s41746-025-01512-6 retrieved 16.11.2025</mixed-citation>
                    </ref>
                                    <ref id="ref64">
                        <label>64</label>
                        <mixed-citation publication-type="journal">Üren, Ç. (2025). Yapay zeka kritik eşiği geçmiş olabilir: ‘Kendi kendini kopyaladı’. Euronews. https://tr.euronews.com/next/2025/01/25/yapay-zeka-kritik-esigi-gecmis-olabilir-kendi-kendini-kopyaladi/ retrieved 15.10.2025</mixed-citation>
                    </ref>
                                    <ref id="ref65">
                        <label>65</label>
                        <mixed-citation publication-type="journal">Weizenbaum, J. (1966). ELIZA-a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45. https://doi.org/10.1145/365153.365168 retrieved 15.10.2025</mixed-citation>
                    </ref>
                                    <ref id="ref66">
                        <label>66</label>
                        <mixed-citation publication-type="journal">Xiang, C. (2023). &#039;He Would Still Be Here&#039;: Man Dies by Suicide After Talking with AI Chatbot, Widow Says, Vice. https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says retrieved 16.11.2025</mixed-citation>
                    </ref>
                            </ref-list>
                    </back>
    </article>
