Research Article

Why Large Language Models Can Say but Cannot Imply

Volume 11, Number 1, March 24, 2026


Abstract

This study argues that, despite their increasing fluency and multimodal capabilities, large language models (LLMs) are structurally incapable of genuine linguistic implication. Methodologically, the paper develops a theoretical analysis that synthesizes analytic philosophy of language with recent critiques in AI ethics, applying Speech Act Theory, Gricean pragmatic maxims, Kaplan’s analysis of indexicality, and Wittgenstein’s concept of "forms of life" to the architecture of contemporary LLMs in order to test the conditions of possibility for indirect communication. The argument proceeds through five distinct analytical stages. First, drawing on speech act theory, it demonstrates that models lack the "intentional agency" required to strategically violate conversational norms, a prerequisite for Gricean implicature. Second, it identifies a problem of "referential hollowing," arguing that indexical terms such as "I" and "here" in AI outputs fail to anchor to a determinate, embodied perspective, leaving them pragmatically inert. Third, the study investigates the critical function of paralinguistic cues and prosody in determining illocutionary force, noting that text-based symbolic abstractions lack the expressive control essential for implication. Fourth, invoking Wittgenstein’s analysis, the paper argues that genuine implication presupposes participation in a shared "form of life" involving risks and commitments, a domain inaccessible to algorithmic systems. Finally, the study addresses objections based on multimodal systems and role-playing, concluding that simulating the acoustic artifacts of sarcasm or adopting a persona creates only a sophisticated simulation devoid of the social standing and intentional states necessary to validate such acts. The study concludes that treating these systems as pragmatic agents is a "functionalist temptation" that risks hollowing out the normative concepts essential to human discourse.


References

  1. Altunya Sayan, H. (2025). Dijital çağda muhakeme ve hikmetten yoksun mantıksal düşünme üzerine bir soruşturma [An inquiry into logical thinking devoid of reasoning and wisdom in the digital age]. In C. Baba (Ed.), Dijital Çağda Mantık ve Düşüncenin Dönüşümü: Yeni Paradigmalar [The transformation of logic and thought in the digital age: New paradigms]. Palet Yayınları.
  2. Attah, N. O. (2025). Do language models lack communicative intentions? Synthese, 205(5), 187. https://doi.org/10.1007/s11229-025-05022-6
  3. Austin, J. L. (1962). How to do things with words (M. Sbisà & J. O. Urmson, Eds.). Clarendon Press.
  4. Başarslan, B., & Büyükyılmaz, Y. (2025). Is it possible to engage in dialogue with generative AI technologies? Beytulhikme An International Journal of Philosophy, 15(4), 1473–1496. https://doi.org/10.29228/beytulhikme.86607
  5. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922
  6. Benveniste, E. (1971). Problems in general linguistics (M. E. Meek, Trans.). University of Miami Press.
  7. Büyükada, S. (2025). Mantık, akıl yürütme ve yapay zeka [Logic, reasoning, and artificial intelligence]. In C. Baba (Ed.), Dijital Çağda Mantık ve Düşüncenin Dönüşümü: Yeni Paradigmalar [The transformation of logic and thought in the digital age: New paradigms]. Palet Yayınları.
  8. Caucci, G. M., & Kreuz, R. J. (2012). Social and paralinguistic cues to sarcasm. Humor, 25(1), 1–22. https://doi.org/10.1515/humor-2012-0001

Details

Primary Language

English

Subjects

Philosophy of Informatics, Philosophy of Language, Logic

Journal Section

Research Article

Publication Date

March 24, 2026

Submission Date

January 14, 2026

Acceptance Date

March 24, 2026

Published in Issue

Year: 2026, Volume: 11, Number: 1

APA
Özdil, M. S. (2026). Why Large Language Models Can Say but Cannot Imply. Turkish Academic Research Review, 11(1), 223–238. https://doi.org/10.30622/tarr.1863625