Learning to navigate in unfamiliar environments is a critical skill for AI-powered mobile robots. Traditional methods for robot navigation typically involve three key steps: localization, mapping, and route planning. In unknown environments, however, these methods can break down, since route planning presupposes an obstacle map. Moreover, classical approaches may become trapped in local minima as the environment grows more complex, which degrades the system's success rate. It is therefore crucial to address collision avoidance in autonomous navigation, in both static and dynamic environments, so that the robot reaches the target safely without any collisions. In recent years, heuristic approaches have gained importance because of their resemblance to human behavioral learning. In this paper, we examine the advantages and disadvantages of using the Deep Deterministic Policy Gradient (DDPG) reinforcement learning method to guide a robot from its starting position to a target position without colliding with static or dynamic obstacles. Using a reinforcement learning method, the robot can learn from its experiences, make informed decisions, and adapt to changes in the environment. We explore the efficacy of this method and compare it with traditional approaches to assess its potential for real-world applications. Ultimately, this paper aims to develop a robust and efficient navigation system for mobile robots that can successfully navigate unknown and dynamic environments. For this purpose, the system was tested for 100 episodes, and the results showed a success rate of over 80%.
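The DDPG ingredients the abstract refers to (learning from stored experience, a deterministic policy with exploration noise, and slowly-updated target networks) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the state and action dimensions, action bounds, and the linear stand-ins for the actor network are all assumptions made here for self-containment.

```python
import random
from collections import deque

import numpy as np

# Hypothetical dimensions: state = [laser readings, goal distance/angle],
# action = [linear velocity, angular velocity], normalized to [-1, 1].
STATE_DIM, ACTION_DIM = 6, 2
ACTION_BOUND = 1.0

rng = np.random.default_rng(0)

class ReplayBuffer:
    """Stores (s, a, r, s', done) transitions so the robot can learn
    off-policy from past experience."""
    def __init__(self, capacity=10_000):
        self.buf = deque(maxlen=capacity)

    def add(self, transition):
        self.buf.append(transition)

    def sample(self, batch_size):
        return random.sample(self.buf, batch_size)

    def __len__(self):
        return len(self.buf)

# A single linear layer stands in for the actor network mu(s) -> a.
W_actor = rng.normal(scale=0.1, size=(ACTION_DIM, STATE_DIM))
W_actor_target = W_actor.copy()

def act(state, noise_scale=0.1):
    """Deterministic action squashed to the bound, plus Gaussian
    exploration noise, then clipped back to the valid range."""
    a = np.tanh(W_actor @ state) * ACTION_BOUND
    a += noise_scale * rng.normal(size=ACTION_DIM)
    return np.clip(a, -ACTION_BOUND, ACTION_BOUND)

def soft_update(target, source, tau=0.005):
    """Polyak averaging of target-network weights, as used in DDPG
    to stabilize the bootstrapped critic targets."""
    return tau * source + (1.0 - tau) * target
```

In a full agent, a critic network Q(s, a) would also be trained on sampled minibatches, and `soft_update` would be applied to both target networks after each gradient step.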
We would like to thank Fırat University for their contribution to the studies in this research.
| Primary Language | English |
|---|---|
| Subjects | Materials Engineering (Other) |
| Journal Section | Research Articles |
| Authors | |
| Publication Date | June 14, 2023 |
| Published in Issue | Year 2023 |
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC).