Research Article

Why Large Language Models Can Say but Cannot Imply

Volume: 11 Issue: 1, 24 March 2026


Abstract

This study argues that, despite their increasing fluency and multimodal capabilities, large language models (LLMs) are structurally incapable of genuine linguistic implication. Methodologically, the paper offers a theoretical analysis that synthesizes analytic philosophy of language with recent critiques in AI ethics, applying Speech Act Theory, Gricean pragmatic maxims, Kaplan’s analysis of indexicality, and Wittgenstein’s concept of "forms of life" to the architecture of contemporary LLMs in order to test the conditions of possibility for indirect communication. The argument proceeds through five analytical stages. First, drawing on speech act theory, it demonstrates that models lack the "intentional agency" required to strategically violate conversational norms, a prerequisite for Gricean implicature. Second, it identifies a problem of "referential hollowing," arguing that indexical terms such as "I" and "here" in AI outputs fail to anchor to a determinate, embodied perspective, leaving them pragmatically inert. Third, the study examines the critical function of paralinguistic cues and prosody in determining illocutionary force, noting that symbolic abstraction lacks the expressive control essential to implication. Fourth, invoking Wittgenstein’s analysis, the paper argues that genuine implication presupposes participation in a shared "form of life" involving risks and commitments, a domain inaccessible to algorithmic systems. Finally, the study addresses multimodal systems and role-playing objections, concluding that simulating the acoustic artifacts of sarcasm or adopting a persona yields only a sophisticated simulation devoid of the social standing and intentional states necessary to validate such acts. The study concludes that treating these systems as pragmatic agents is a "functionalist temptation" that risks hollowing out the normative concepts essential to human discourse.

Keywords

References

  1. Altunya Sayan, H. (2025). Dijital çağda muhakeme ve hikmetten yoksun mantıksal düşünme üzerine bir soruşturma. In C. Baba (Ed.), Dijital Çağda Mantık ve Düşüncenin Dönüşümü: Yeni Paradigmalar. Palet Yayınları.
  2. Attah, N. O. (2025). Do language models lack communicative intentions? Synthese, 205(5), 187. https://doi.org/10.1007/s11229-025-05022-6
  3. Austin, J. L. (1962). How to do things with words (M. Sbisà & J. O. Urmson, Eds.). Clarendon Press.
  4. Başarslan, B., & Büyükyılmaz, Y. (2025). Is it possible to engage in dialogue with generative AI technologies? Beytulhikme An International Journal of Philosophy, 15(4), 1473–1496. https://doi.org/10.29228/beytulhikme.86607
  5. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922
  6. Benveniste, E. (1971). Problems in general linguistics (M. E. Meek, Trans.). University of Miami Press.
  7. Büyükada, S. (2025). Mantık, akıl yürütme ve yapay zeka. In C. Baba (Ed.), Dijital Çağda Mantık ve Düşüncenin Dönüşümü: Yeni Paradigmalar. Palet Yayınları.
  8. Caucci, G. M., & Kreuz, R. J. (2012). Social and paralinguistic cues to sarcasm. Humor, 25(1), 1–22. https://doi.org/10.1515/humor-2012-0001

Details

Primary Language

English

Subjects

Philosophy of Informatics, Philosophy of Language, Logic

Section

Research Article

Publication Date

24 March 2026

Submission Date

14 January 2026

Acceptance Date

24 March 2026

Published in Issue

Year 2026 Volume: 11 Issue: 1

Cite

APA
Özdil, M. S. (2026). Why Large Language Models Can Say but Cannot Imply. Turkish Academic Research Review, 11(1), 223-238. https://doi.org/10.30622/tarr.1863625
AMA
1. Özdil MS. Why Large Language Models Can Say but Cannot Imply. tarr. 2026;11(1):223-238. doi:10.30622/tarr.1863625
Chicago
Özdil, Mahmut Sami. 2026. “Why Large Language Models Can Say but Cannot Imply”. Turkish Academic Research Review 11 (1): 223-38. https://doi.org/10.30622/tarr.1863625.
EndNote
Özdil MS (01 March 2026) Why Large Language Models Can Say but Cannot Imply. Turkish Academic Research Review 11 1 223–238.
IEEE
[1] M. S. Özdil, “Why Large Language Models Can Say but Cannot Imply”, tarr, vol. 11, no. 1, pp. 223–238, Mar. 2026, doi: 10.30622/tarr.1863625.
ISNAD
Özdil, Mahmut Sami. “Why Large Language Models Can Say but Cannot Imply”. Turkish Academic Research Review 11/1 (01 March 2026): 223-238. https://doi.org/10.30622/tarr.1863625.
JAMA
1. Özdil MS. Why Large Language Models Can Say but Cannot Imply. tarr. 2026;11:223–238.
MLA
Özdil, Mahmut Sami. “Why Large Language Models Can Say but Cannot Imply”. Turkish Academic Research Review, vol. 11, no. 1, Mar. 2026, pp. 223-38, doi:10.30622/tarr.1863625.
Vancouver
1. Mahmut Sami Özdil. Why Large Language Models Can Say but Cannot Imply. tarr. 01 March 2026;11(1):223-38. doi:10.30622/tarr.1863625

Turkish Academic Research Review is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).