Research Article

Human-like Competitive Video Game AI Through Reinforcement Learning

Year 2025, Volume: 5 Issue: 1, 96 - 105, 31.12.2025
https://doi.org/10.57020/ject.1757814

Abstract

With the rise of competitive and multiplayer video games, the demand for non-player characters that can provide meaningful training and practice experiences has increased. Developers increasingly require AI-controlled opponents that players can face to learn a game, to practice, or simply to play on their own. Such bots are commonly built with manually programmed state machines, an approach that is not only labor-intensive but also tends to produce predictable, rigid behavior, which weakens the perception of human-like interaction. In this study, an AI agent was trained using reinforcement learning to play a two-player competitive fighting game, and its behavior was evaluated through gameplay sessions against 17 human participants with varying levels of gaming experience. The results suggest that, within the scope of the studied environment, training AI agents that elicit a perception of human-like gameplay is feasible, and that such agents can be integrated into games through portable technologies.
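
For readers unfamiliar with this kind of setup, the sketch below illustrates, in broad strokes, how a two-player fighting game can be framed as a reinforcement learning environment and trained with PPO. It is a minimal illustration only: the paper's pipeline builds on Godot RL Agents and Sample Factory (see the Thanks and References), whereas this sketch assumes a Gymnasium-style wrapper and Stable-Baselines3, and the FightingGameEnv class, observation layout, action set, and reward-shaping comments are all hypothetical.

```python
# Illustrative only: a hypothetical Gymnasium-style wrapper around a Footsies-like
# fighting game, trained with Stable-Baselines3 PPO. The paper's actual pipeline
# (Godot RL Agents + Sample Factory) differs; the names below are invented for this sketch.
import gymnasium as gym
import numpy as np
from gymnasium import spaces
from stable_baselines3 import PPO


class FightingGameEnv(gym.Env):
    """Hypothetical two-player fighting game seen from one agent's perspective."""

    def __init__(self):
        super().__init__()
        # Example observation: positions, velocities, and move/guard state of both fighters.
        self.observation_space = spaces.Box(low=-1.0, high=1.0, shape=(16,), dtype=np.float32)
        # Example discrete actions: idle, move left, move right, attack, special.
        self.action_space = spaces.Discrete(5)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        obs = np.zeros(16, dtype=np.float32)  # would come from the game simulation
        return obs, {}

    def step(self, action):
        # A real wrapper would advance the game one tick against an opponent
        # (scripted bot, or a past policy snapshot for self-play) and shape the reward,
        # e.g. +1 for winning a round, -1 for losing, small penalties for spamming moves.
        obs = np.zeros(16, dtype=np.float32)
        reward, terminated, truncated = 0.0, False, False
        return obs, reward, terminated, truncated, {}


if __name__ == "__main__":
    env = FightingGameEnv()
    model = PPO("MlpPolicy", env, verbose=1)
    model.learn(total_timesteps=100_000)  # far fewer steps than a realistic training run
    model.save("fighting_agent")
```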

Ethical Statement

In this article, the principles of scientific research and publication ethics were followed. Although formal ethics committee approval was not obtained for this study, all participants were fully informed about the purpose, scope, and voluntary nature of the research before participation. In accordance with the Personal Data Protection Law (KVKK) and general research ethics principles, informed consent was obtained from all participants. All personal data were anonymized, and no personally identifiable information was collected or stored. The survey data were used solely for academic analysis within the scope of this study. This study did not involve any experiments on animals or human subjects that would require medical or invasive procedures.

Thanks

We thank Ivan Dodic, one of the maintainers of the Godot RL Agents framework, for his help with integrating Sample Factory with Godot.
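
The abstract's mention of integrating the trained agents through "portable technologies" plausibly refers to exporting the learned policy to a portable inference format such as ONNX, which the Godot RL Agents toolchain supports for in-engine evaluation. The sketch below is an assumption-laden illustration of that idea using torch.onnx.export on a toy policy network; the architecture, tensor names, and file name are invented for the example and are not taken from the paper.

```python
# Illustrative only: exporting a small PyTorch policy network to ONNX so it can be
# evaluated inside a game engine at runtime. The network shape and file name are
# assumptions for this sketch, not the authors' actual model.
import torch
import torch.nn as nn


class PolicyNet(nn.Module):
    """Toy stand-in for a trained policy: observation vector in, action logits out."""

    def __init__(self, obs_dim=16, n_actions=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)


policy = PolicyNet()  # in practice, trained weights would be loaded here
dummy_obs = torch.zeros(1, 16)  # example input used to trace the network
torch.onnx.export(
    policy,
    dummy_obs,
    "fighting_agent.onnx",
    input_names=["obs"],
    output_names=["action_logits"],
)
```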

References

  • Gitlin, J. M. (2020, September 14). War Stories: How Forza learned to love neural nets to train AI drivers. Ars Technica. https://arstechnica.com/gaming/2020/09/war-stories-how-forza-learned-to-love-neural-nets-to-train-ai-drivers/
  • Herbrich, R., Hatton, M., & Tipping, M. E. (2008). Mixture model for motion lines in a virtual reality environment (United States Patent No. US7358973B2). https://patents.google.com/patent/US7358973B2/en
  • Thompson, T. (n.d.). The Killer Groove: The Shadow AI of Killer Instinct. Retrieved July 21, 2025, from https://www.gamedeveloper.com/programming/the-killer-groove-the-shadow-ai-of-killer-instinct
  • Berner, C., Brockman, G., Chan, B., Cheung, V., Dębiak, P., Dennison, C., Farhi, D., Fischer, Q., Hashme, S., Hesse, C., Józefowicz, R., Gray, S., Olsson, C., Pachocki, J., Petrov, M., Pinto, H. P. d. O., Raiman, J., Salimans, T., Schlatter, J., . . . Zhang, S. (2019). Dota 2 with Large Scale Deep Reinforcement Learning (No. arXiv:1912.06680). arXiv. https://doi.org/10.48550/arXiv.1912.06680
  • Vinyals, O., Babuschkin, I., Czarnecki, W. M., Mathieu, M., Dudzik, A., Chung, J., Choi, D. H., Powell, R., Ewalds, T., Georgiev, P., Oh, J., Horgan, D., Kroiss, M., Danihelka, I., Huang, A., Sifre, L., Cai, T., Agapiou, J. P., Jaderberg, M., . . . Silver, D. (2019). Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature, 575(7782), 350-354. https://doi.org/10.1038/s41586-019-1724-z
  • Soni, B., & Hingston, P. (2008). Bots trained to play like a human are more fun. 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), 363-369. https://doi.org/10.1109/IJCNN.2008.4633818
  • Renman, C. (2017). Creating Human-like AI Movement in Games Using Imitation Learning. https://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210887
  • Ponce, H., & Padilla, R. (2014). A Hierarchical Reinforcement Learning Based Artificial Intelligence for Non-Player Characters in Video Games. In A. Gelbukh, F. C. Espinoza, & S. N. Galicia-Haro (Eds.), Nature-Inspired Computation and Machine Learning (pp. 172-183). Springer International Publishing. https://doi.org/10.1007/978-3-319-13650-9_16
  • Fujii, N., Sato, Y., Wakama, H., Kazai, K., & Katayose, H. (2013). Evaluating Human-like Behaviors of Video-Game Agents Autonomously Acquired with Biological Constraints. In D. Reidsma, H. Katayose, & A. Nijholt (Eds.), Advances in Computer Entertainment (pp. 61-76). Springer International Publishing. https://doi.org/10.1007/978-3-319-03161-3_5
  • Miyashita, S., Lian, X., Zeng, X., Matsubara, T., & Uehara, K. (2017). Developing game AI agent behaving like human by mixing reinforcement learning and supervised learning. 2017 18th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), 489-494. https://doi.org/10.1109/SNPD.2017.8022767
  • Ogawa, T., Hsueh, C.-H., & Ikeda, K. (2024). More Human-Like Gameplay by Blending Policies From Supervised and Reinforcement Learning. IEEE Transactions on Games, 16(4), 831-843. https://doi.org/10.1109/TG.2024.3424668
  • Hingston, P. (2009). A Turing Test for Computer Game Bots. IEEE Transactions on Computational Intelligence and AI in Games, 1(3), 169-186. https://doi.org/10.1109/TCIAIG.2009.2032534
  • Llargues Asensio, J. M., Peralta, J., Arrabales, R., Bedia, M. G., Cortez, P., & Peña, A. L. (2014). Artificial Intelligence approaches for the generation and assessment of believable human-like behaviour in virtual characters. Expert Systems with Applications, 41(16), 7281-7290. https://doi.org/10.1016/j.eswa.2014.05.004
  • Arrabales, R., Ledezma Espino, A. I., & Sanchis de Miguel, M. A. (n.d.). CERA-CRANIUM: A test bed for machine consciousness research. Retrieved July 27, 2025, from https://e-archivo.uc3m.es/entities/publication/0fa292af-12d1-4116-8552-8ae6803ba75a
  • Laird, J. E., Newell, A., & Rosenbloom, P. S. (1987). SOAR: An architecture for general intelligence. Artificial Intelligence, 33(1), 1-64. https://doi.org/10.1016/0004-3702(87)90050-6
  • Lu, F., Yamamoto, K., Nomura, L. H., Mizuno, S., Lee, Y., & Thawonmas, R. (2013). Fighting game artificial intelligence competition platform. 2013 IEEE 2nd Global Conference on Consumer Electronics (GCCE), 320-323. https://doi.org/10.1109/GCCE.2013.6664844
  • hifight. (2024). Hifight/Footsies [C#]. https://github.com/hifight/Footsies (Original work published 2018)
  • Brogolem35. (2025). Brogolem35/botsies [GDScript]. https://github.com/Brogolem35/botsies (Original work published 2025)
  • Beeching, E., Debangoye, J., Simonin, O., & Wolf, C. (2021). Godot Reinforcement Learning Agents (No. arXiv:2112.03636). arXiv. https://doi.org/10.48550/arXiv.2112.03636
  • Petrenko, A., Huang, Z., Kumar, T., Sukhatme, G., & Koltun, V. (2020). Sample Factory: Egocentric 3D Control from Pixels at 100000 FPS with Asynchronous Reinforcement Learning (No. arXiv:2006.11751). arXiv. https://doi.org/10.48550/arXiv.2006.11751
  • Foerster, F. R., Chidharom, M., Bonnefond, A., & Giersch, A. (2022). Neurocognitive analyses reveal that video game players exhibit enhanced implicit temporal processing. Communications Biology, 5(1), 1-10. https://doi.org/10.1038/s42003-022-04033-0
  • Momennejad, I. (2022). A rubric for human-like agents and NeuroAI. Philosophical Transactions of the Royal Society B, 378(1869), 20210446. https://doi.org/10.1098/rstb.2021.0446
  • Kim, T. H., & Im, H. (2025). Mental and physical humanlikeness in artificial intelligence influencers: Effects on humanness, eeriness, and consumer responses. Journal of Consumer Behaviour, 0(0), 1-16. https://doi.org/10.1002/cb.70060
  • Steam Hardware & Software Survey. (n.d.). Retrieved July 28, 2025, from https://store.steampowered.com/hwsurvey

Details

Primary Language English
Subjects Decision Support and Group Support Systems, Computer Gaming and Animation
Journal Section Research Article
Authors

Can Çelenay 0009-0005-3160-1882

Yunus Doğan 0000-0002-0353-5014

Submission Date August 4, 2025
Acceptance Date December 18, 2025
Publication Date December 31, 2025
Published in Issue Year 2025 Volume: 5 Issue: 1

Cite

APA Çelenay, C., & Doğan, Y. (2025). Human-like Competitive Video Game AI Through Reinforcement Learning. Journal of Emerging Computer Technologies, 5(1), 96-105. https://doi.org/10.57020/ject.1757814
Journal of Emerging Computer Technologies is indexed and abstracted by Harvard Hollis, Scilit, ROAD, Google Scholar, and OpenAIRE.

Publisher
Izmir Academy Association
