This study investigates how large language models (LLMs) can serve as dynamic agents in game-based interactions by comparing two prototypes of a color-guessing game. One model (Cohere Command) operates on a zero-shot prompt-based mechanism, while the other (FLAN-T5) is fine-tuned on a semantically structured dataset. A total of 20 participants were divided into two experimental groups to evaluate the models' ability to generate semantically coherent yes/no questions, maintain conversational flow, and produce accurate predictions. Quantitative data, including session durations, number of interactions, and AI outputs, were analyzed alongside a post-game user experience survey grounded in Flow Theory. Results show that while both systems achieved task completion, the fine-tuned FLAN-T5 model significantly outperformed its zero-shot counterpart in semantic clarity, user engagement, and perceived fluency. The findings highlight the potential of LLM-based DDA systems to create meaningful, adaptive player experiences and underscore the importance of semantic alignment and interaction transparency in game-based AI design.
Artificial Intelligence, Machine Learning, Digital Game Design, Dynamic Difficulty Adjustment
| Primary Language | English |
|---|---|
| Subjects | Information Systems User Experience Design and Development; Graphics, Augmented Reality and Games (Other); Human-Computer Interaction; Machine Learning (Other); Natural Language Processing; Design (Other) |
| Section | Research Article |
| Authors | |
| Submission Date | April 5, 2025 |
| Acceptance Date | May 22, 2025 |
| Publication Date | June 30, 2025 |
| Published Issue | Year 2025, Volume: 9, Issue: 1 |