Research Article

Evaluation of the Deep Q-Learning Models for Mobile Robot Path Planning Problem

Volume: 12 Number: 3 September 30, 2024

Abstract

Search algorithms such as A* or Dijkstra are commonly used to solve the path planning problem for mobile robots. However, these approaches require a map, and their performance degrades in dynamic environments. These drawbacks have led researchers to work on dynamic path planning algorithms. Deep reinforcement learning methods have been studied extensively for this purpose, and their use is expanding day by day. However, these studies mostly focus on the training performance of the models rather than on inference. In this study, we propose an approach to compare the performance of trained models in terms of path length, path curvature, and journey time. We implemented the approach in the Python programming language in two steps: inference and evaluation. The inference step gathers information on path planning performance; the evaluation step computes the metrics from this information. Our approach can be tailored to many studies to examine the performance of trained models.
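The abstract names three evaluation metrics but the paper's own pipeline is not reproduced on this page. As an illustration only, the metrics could be computed from a recorded trajectory roughly as follows; the function `evaluate_path` and the `(x, y, t)` waypoint format are assumptions for this sketch, not the authors' code:

```python
import math

def evaluate_path(waypoints):
    """Illustrative metric computation (not the paper's implementation).

    waypoints: list of (x, y, t) tuples recorded during inference.
    Returns (path_length, total_curvature, journey_time), where total
    curvature is the sum of absolute heading changes in radians.
    """
    length = 0.0
    curvature = 0.0
    prev_heading = None
    for (x0, y0, _), (x1, y1, _) in zip(waypoints, waypoints[1:]):
        dx, dy = x1 - x0, y1 - y0
        length += math.hypot(dx, dy)          # Euclidean segment length
        heading = math.atan2(dy, dx)          # direction of travel
        if prev_heading is not None:
            # wrap the heading difference into (-pi, pi] before summing
            turn = (heading - prev_heading + math.pi) % (2 * math.pi) - math.pi
            curvature += abs(turn)
        prev_heading = heading
    journey_time = waypoints[-1][2] - waypoints[0][2]
    return length, curvature, journey_time
```

For example, a path that goes one unit east and then one unit north in two seconds yields a length of 2.0 and a total curvature of pi/2.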

Keywords


Details

Primary Language

English

Subjects

Information Systems (Other), Assistive Robots and Technology

Journal Section

Research Article

Early Pub Date

September 26, 2024

Publication Date

September 30, 2024

Submission Date

March 20, 2024

Acceptance Date

August 16, 2024

Published in Issue

Year 2024 Volume: 12 Number: 3

APA
Gök, M. (2024). Evaluation of the Deep Q-Learning Models for Mobile Robot Path Planning Problem. Gazi Üniversitesi Fen Bilimleri Dergisi Part C: Tasarım Ve Teknoloji, 12(3), 620-627. https://doi.org/10.29109/gujsc.1455778


    e-ISSN:2147-9526