Research Article

Year 2026, Volume: 11, Issue: 1, 223–238, 24.03.2026
https://doi.org/10.30622/tarr.1863625
https://izlik.org/JA59TG52MT


Büyük Dil Modelleri Neden Söyler ama Kastedemez


Abstract

This study argues that, despite their growing generative fluency and multimodal capabilities, Large Language Models (LLMs) are structurally incapable of genuine acts of linguistic implication. Methodologically, the study rests on a theoretical framework of analysis that synthesizes analytic philosophy of language with the literature on AI ethics. The research method consists of a critical inquiry conducted by applying John Austin's and John Searle's Speech Act Theory, Paul Grice's pragmatic principles, David Kaplan's analysis of indexicality, and Ludwig Wittgenstein's concept of "forms of life" to current LLM architectures. The argument is structured in five main stages. First, it is shown that the models lack the capacity to violate conversational norms deliberately, and therefore fail to meet the condition of "intentional agency" required to mean more than "what is said." Second, it is analyzed how indexical expressions such as "I," "here," and "now" in AI outputs fall into "referential hollowing," since they are not anchored in an embodied perspective. Third, the study emphasizes the critical role of paralinguistic cues and prosody in determining illocutionary force, which symbolic abstraction alone cannot supply. Fourth, drawing on Wittgenstein's philosophy, it is argued that implication is not merely a language game but requires participation in a "form of life" in which risks and responsibilities are shared, a domain to which LLMs have no access. Finally, it is concluded that even when multimodal systems capable of processing sound and images simulate states such as sarcasm or empathy, they lack the social grounding, ontological status, and intentional states that would validate these acts.
The study concludes that treating these systems as pragmatic agents is a "functionalist temptation" that risks hollowing out the normative concepts underlying human communication.


Why Large Language Models Can Say but Cannot Imply


Abstract

This study argues that, despite their increasing fluency and multimodal capabilities, large language models (LLMs) are structurally incapable of genuine linguistic implication. In terms of methodology, the paper employs a theoretical analysis that synthesizes analytic philosophy of language with recent critiques in AI ethics, applying Speech Act Theory, Gricean pragmatic maxims, Kaplan’s analysis of indexicality, and Wittgenstein’s concept of "forms of life" to the architecture of contemporary LLMs to test the conditions of possibility for indirect communication. The argument proceeds through five distinct analytical stages. First, drawing on speech act theory, it demonstrates that models lack the "intentional agency" required to strategically violate conversational norms, which is a prerequisite for Gricean implicature. Second, it identifies a problem of "referential hollowing," arguing that indexical terms like "I" and "here" in AI outputs fail to anchor to a determinate, embodied perspective, leaving them pragmatically inert. Third, the study investigates the critical function of paralinguistic cues and prosody in determining illocutionary force, noting that symbolic abstractions miss the expressive control essential for implication. Fourth, invoking Wittgenstein’s analysis, the paper argues that genuine implication presupposes participation in a shared "form of life" involving risks and commitments, a domain inaccessible to algorithmic systems. Finally, the study addresses multimodal systems and role-playing objections, concluding that simulating the acoustic artifacts of sarcasm or adopting a persona creates only a sophisticated simulation devoid of the social standing and intentional states necessary to validate such acts. The study concludes that treating these systems as pragmatic agents is a "functionalist temptation" that risks hollowing out the normative concepts essential to human discourse.

References

  • Altunya Sayan, H. (2025). Dijital çağda muhakeme ve hikmetten yoksun mantıksal düşünme üzerine bir soruşturma. In C. Baba (Ed.), Dijital Çağda Mantık ve Düşüncenin Dönüşümü: Yeni Paradigmalar. Palet Yayınları.
  • Attah, N. O. (2025). Do language models lack communicative intentions? Synthese, 205(5), 187. https://doi.org/10.1007/s11229-025-05022-6
  • Austin, J. L. (1962). How to do things with words (M. Sbisà & J. O. Urmson, Eds.). Clarendon Press.
  • Başarslan, B., & Büyükyılmaz, Y. (2025). Is it possible to engage in dialogue with generative AI technologies? Beytulhikme An International Journal of Philosophy, 15(4), 1473–1496. https://doi.org/10.29228/beytulhikme.86607
  • Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922
  • Benveniste, E. (1971). Problems in general linguistics (M. E. Meek, Trans.). University of Miami Press.
  • Büyükada, S. (2025). Mantık, akıl yürütme ve yapay zeka. In C. Baba (Ed.), Dijital Çağda Mantık ve Düşüncenin Dönüşümü: Yeni Paradigmalar. Palet Yayınları.
  • Caucci, G. M., & Kreuz, R. J. (2012). Social and paralinguistic cues to sarcasm. Humor, 25(1), 1–22. https://doi.org/10.1515/humor-2012-0001
  • Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and Semantics, Volume 3: Speech Acts (pp. 41–58). Academic Press.
  • Grindrod, J. (2024). Large language models and linguistic intentionality. Synthese, 204(2), 71. https://doi.org/10.1007/s11229-024-04723-8
  • Gubelmann, R. (2024). Large language models, agency, and why speech acts are beyond them (for now) – a Kantian-cum-pragmatist case. Philosophy & Technology, 37(1), 32. https://doi.org/10.1007/s13347-024-00696-1
  • Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1), 335–346. https://doi.org/10.1016/0167-2789(90)90087-6
  • Harnad, S. (2025). Language writ large: LLMs, ChatGPT, meaning, and understanding. Frontiers in Artificial Intelligence, 7. https://doi.org/10.3389/frai.2024.1490698
  • Kaplan, D. (1989a). Afterthoughts. In J. Almog, J. Perry, & H. Wettstein (Eds.), Themes From Kaplan (pp. 565–614). Oxford University Press.
  • Kaplan, D. (1989b). Demonstratives. In J. Almog, J. Perry, & H. K. Wettstein (Eds.), Themes from Kaplan (pp. 483–563). Oxford University Press.
  • Kömürcü, K. (2025). Dijital sofizm ve mantık. In C. Baba (Ed.), Dijital Çağda Mantık ve Düşüncenin Dönüşümü: Yeni Paradigmalar. Palet Yayınları.
  • Li, Z., Zhang, Y., Gao, X., Nayak, S., & Coler, M. (2025). Making machines sound sarcastic: LLM-enhanced and retrieval-guided sarcastic speech synthesis (arXiv:2510.07096; Version 1). arXiv. https://doi.org/10.48550/arXiv.2510.07096
  • Mandelkern, M., & Linzen, T. (2024). Do language models’ words refer? Computational Linguistics, 50(3), 1191–1200. https://doi.org/10.1162/coli_a_00522
  • Mollema, T. (2024). Social AI and the temptation of equating Wittgenstein’s language-user with Calvino’s literature machine. International Review of Literary Studies, 6(1), 39–55. https://doi.org/10.53057/irls/2024.6.1.4
  • Oğuz, M., Bakman, Y. F., & Yaldiz, D. N. (2025). Un-considering contextual information: Assessing LLMs’ understanding of indexical elements. In W. Che, J. Nabende, E. Shutova, & M. T. Pilehvar (Eds.), Findings of the Association for Computational Linguistics: ACL 2025 (pp. 23410–23427). Association for Computational Linguistics. https://doi.org/10.18653/v1/2025.findings-acl.1203
  • Özdil, M. S. (2025). Non-linguistic elements and issues of reference in logic (Y. E. Akbay, Ed.). Palet Publications.
  • Perry, J. (1979). The problem of the essential indexical. Noûs, 13(1), 3–21. https://doi.org/10.2307/2214792
  • Qian, K., Fan, X., Ni, J., Shechtman, S., Hasegawa-Johnson, M. A., Gan, C., & Zhang, Y. (2025). ProsodyLM: Uncovering the emerging prosody processing capabilities in speech language models. Second Conference on Language Modeling. https://openreview.net/forum?id=uBg8PClMUu#discussion
  • Rosen, Z. P., & Dale, R. (2024). LLMs don’t “do things with words” but their lack of illocution can inform the study of human discourse. Proceedings of the Annual Meeting of the Cognitive Science Society, 46(0). https://escholarship.org/uc/item/25k7z0mz
  • Searle, J. R. (1969). Speech acts: An essay in the philosophy of language. Cambridge University Press.
  • Sharon, T. (2025). Technosolutionism and the empathetic medical chatbot. AI & Society. https://doi.org/10.1007/s00146-025-02441-4
  • Startari, A. V. (2025). Indexical collapse: Reference disappears, authority remains in predictive systems. Elsevier BV. https://doi.org/10.5281/zenodo.17228239
  • Wittgenstein, L. (1986). Philosophical investigations (G. E. M. Anscombe, Trans.). Basil Blackwell.

Details

Primary Language English
Subjects Philosophy of Computing, Philosophy of Language, Logic
Section Research Article
Authors

Mahmut Sami Özdil 0000-0003-3167-9696

Submission Date 14 January 2026
Acceptance Date 24 March 2026
Publication Date 24 March 2026
DOI https://doi.org/10.30622/tarr.1863625
IZ https://izlik.org/JA59TG52MT
Published in Issue Year 2026, Volume: 11, Issue: 1

How to Cite

APA Özdil, M. S. (2026). Why Large Language Models Can Say but Cannot Imply. Turkish Academic Research Review, 11(1), 223-238. https://doi.org/10.30622/tarr.1863625

Turkish Academic Research Review is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).