In both the translation literature and the language industry, advances in artificial intelligence (AI)-driven translation have generated rapidly growing interest in prompt design for large language models (LLMs). Nevertheless, much remains to be explored regarding the impact of assigning specific roles, or ‘personas’, within these prompts, particularly in the context of medical and pharmaceutical texts. This paper responds to that gap by assessing the effect of employing role-based (persona) prompts when working with LLMs. The experiment evaluates the translation quality of outputs generated by role-prompted LLMs for two source texts containing pharmaceutical content. These outputs were systematically compared with those generated by zero-shot-prompted models and with the output of a conventional neural machine translation (NMT) system, Google Translate. To evaluate the machine translation (MT) outputs, the study uses quantitative metrics such as BLEU (Bilingual Evaluation Understudy) and COMET (Crosslingual Optimized Metric for Evaluation of Translation). In parallel, a qualitative approach was adopted through a Translation Quality Evaluation (TQE) based on a customized error-typology tool inspired by the Multidimensional Quality Metrics (MQM) framework. The experiment therefore combines quantitative and qualitative methods, since a hybrid approach is essential for a sound quality assessment. Finally, the study offers a comprehensive discussion of widely debated issues such as domain-specific assessment and the potential for human–machine collaboration.
Keywords: role-based prompting, large language models, translation quality evaluation, medical texts, human–machine interaction
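For readers unfamiliar with the workflow described in the abstract, the prompting contrast at the centre of the experiment (a plain zero-shot instruction versus the same instruction framed by a translator persona) and the automatic scoring step can be sketched roughly as follows. The persona wording, model identifier, target language, and metric setup in this sketch are illustrative assumptions; they do not reproduce the prompts or configuration used in the study.

```python
# Illustrative sketch only: persona wording, model name, target language, and
# metric setup are assumptions, not the study's actual prompts or configuration.
import sacrebleu
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TARGET_LANG = "Turkish"  # placeholder target language for illustration

ZERO_SHOT_PROMPT = f"Translate the following pharmaceutical text into {TARGET_LANG}."
PERSONA_PROMPT = (
    "You are a professional medical translator with extensive experience in "
    "pharmaceutical documentation. "  # hypothetical persona wording
    f"Translate the following pharmaceutical text into {TARGET_LANG}."
)

def translate(source_text: str, system_prompt: str) -> str:
    """Request one translation from a chat model under a given prompt condition."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model identifier
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": source_text},
        ],
    )
    return response.choices[0].message.content

def corpus_bleu_score(hypotheses: list[str], references: list[str]) -> float:
    """Corpus-level BLEU via sacrebleu; COMET scoring would follow the same pattern
    with a neural metric checkpoint instead of n-gram overlap."""
    return sacrebleu.corpus_bleu(hypotheses, [references]).score
```

In such a setup, each source segment would be translated once per prompt condition (and once by the NMT baseline), and the resulting hypothesis lists would be scored against the same reference translations, so that any score differences can be attributed to the prompting strategy rather than to the evaluation procedure.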
| Primary Language | English |
|---|---|
| Subjects | Translation and Interpretation Studies |
| Journal Section | Research Article |
| Authors | |
| Submission Date | September 21, 2025 |
| Acceptance Date | December 11, 2025 |
| Publication Date | December 31, 2025 |
| Published in Issue | Year 2025, Volume 8, Issue 2 |