This paper demonstrates that translations produced by neural networks, including translations by large language models (LLMs) such as ChatGPT and DeepSeek, are ideological in many of the same ways as those produced by human translators. Like human translators, these models are bound up with real-world interests and restrictions, and with a role they are expected to play in society. This embeddedness in the social world gives LLMs their own distinct ‘positionality,’ an ideological ‘place’ from which they enunciate. I argue for the existence of two distinct sources of ideology in the translations of LLMs. The first is the ‘mass ideology’ of the training data, which contains innumerable biases that are widespread among real human language users, in this case translators. The second is the ‘elite ideology’ of the models’ owners and developers, as well as of the political and social forces that impose limitations on what is permissible. This ‘elite ideology’ is imposed on the LLM by developers after its initial training, in order to constrain what kind of material the LLM can produce or reproduce. As this paper makes clear, both forms of ideological influence shape the translations produced by models like ChatGPT and DeepSeek. The result is a clear subjective positionality that can be defined and described, and that varies across time and across different political jurisdictions.
Keywords: translation and ideology, positionality, AI translation, neural machine translation, large language model (LLM)
| Field | Value |
|---|---|
| Primary Language | English |
| Subjects | Translation and Interpretation Studies |
| Journal Section | Research Article |
| Authors | |
| Submission Date | April 21, 2025 |
| Acceptance Date | June 14, 2025 |
| Publication Date | June 30, 2025 |
| Published in Issue | Year 2025 Volume: 8 Issue: 1 |