This study investigates how large language models (LLMs) can serve as dynamic agents in game-based interactions by comparing two prototypes of a color-guessing game. One model (Cohere Command) operates on a zero-shot prompt-based mechanism, while the other (FLAN-T5) is fine-tuned on a semantically structured dataset. A total of 20 participants were divided into two experimental groups to evaluate the models' ability to generate semantically coherent yes/no questions, maintain conversational flow, and make accurate predictions. Quantitative data, including session durations, number of interactions, and AI outputs, were analyzed, along with a post-game user experience survey grounded in Flow Theory. Results show that while both systems achieved task completion, the fine-tuned FLAN-T5 model significantly outperformed the zero-shot Cohere Command model in terms of semantic clarity, user engagement, and perceived fluency. The findings highlight the potential of LLM-based dynamic difficulty adjustment (DDA) systems in creating meaningful, adaptive player experiences and underscore the importance of semantic alignment and interaction transparency in game-based AI design.
| Primary Language | English |
|---|---|
| Subjects | Information Systems User Experience Design and Development, Graphics, Augmented Reality and Games (Other), Human-Computer Interaction, Machine Learning (Other), Natural Language Processing, Design (Other) |
| Journal Section | Research Article |
| Submission Date | April 5, 2025 |
| Acceptance Date | May 22, 2025 |
| Publication Date | June 30, 2025 |
| Published in Issue | Year 2025 Volume: 9 Issue: 1 |