Learning to navigate in unfamiliar environments is a critical skill for AI-powered mobile robots. Traditional methods for robot navigation typically involve three key steps: positioning, mapping, and route planning. In unknown environments, however, these methods fall short because route planning requires a prior obstacle map. Moreover, classical approaches may become trapped in a local optimum as the environment grows more complex, which degrades the system's success. It is therefore crucial to address collision avoidance in autonomous navigation, in both static and dynamic environments, so that the robot reaches its target safely without any collisions. In recent years, heuristic approaches have gained importance due to their similarity to human behavioral learning. In this paper, we examine the advantages and disadvantages of using deep deterministic policy gradient (DDPG) reinforcement learning to guide a robot from its starting position to a target position without colliding with static or dynamic obstacles. With a reinforcement learning method, the robot can learn from its experiences, make informed decisions, and adapt to changes in the environment. We explore the efficacy of this method and compare it with traditional approaches to assess its potential for real-world applications. Ultimately, this paper aims to develop a robust and efficient navigation system for mobile robots that can successfully navigate in unknown and dynamic environments. For this purpose, the system was tested for 100 episodes, and the results showed a success rate of over 80%.
Mobile robot, machine learning, reinforcement learning, obstacle avoidance
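The abstract describes training a DDPG agent to steer the robot toward a goal while avoiding obstacles. The sketch below outlines the core actor-critic update of DDPG in PyTorch; it is a minimal illustration only, and the state layout (laser-scan rays plus relative goal), action definition (linear and angular velocity), network sizes, and hyperparameters are assumptions for this example, not the configuration reported in the paper.

```python
# Minimal DDPG actor-critic sketch (PyTorch). Illustrative only: the state layout,
# action bounds, network sizes, and hyperparameters are assumptions, not the
# paper's exact setup.
import torch
import torch.nn as nn

STATE_DIM = 24 + 2   # assumed: 24 laser-scan rays + distance/angle to the target
ACTION_DIM = 2       # assumed: linear velocity, angular velocity
GAMMA, TAU = 0.99, 0.005

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, ACTION_DIM), nn.Tanh(),  # actions normalized to [-1, 1]
        )

    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

actor, critic = Actor(), Critic()
target_actor, target_critic = Actor(), Critic()
target_actor.load_state_dict(actor.state_dict())
target_critic.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(batch):
    """One DDPG step on a replay-buffer batch of (s, a, r, s2, done) tensors,
    each shaped (batch_size, dim) with r and done shaped (batch_size, 1)."""
    s, a, r, s2, done = batch

    # Bootstrapped target Q-value from the slowly-moving target networks
    with torch.no_grad():
        q_target = r + GAMMA * (1 - done) * target_critic(s2, target_actor(s2))
    critic_loss = nn.functional.mse_loss(critic(s, a), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Deterministic policy gradient: push actions toward higher Q-values
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Polyak-average the target networks toward the online networks
    for tgt, src in ((target_actor, actor), (target_critic, critic)):
        for tp, p in zip(tgt.parameters(), src.parameters()):
            tp.data.mul_(1 - TAU).add_(TAU * p.data)
```

In a complete navigation system, the actor's normalized outputs would be rescaled to the robot's velocity limits, exploration noise would be added during data collection, and the batches would come from a replay buffer filled over simulation episodes.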
We would like to thank Fırat University for its contribution to this research.
Primary Language | English |
---|---|
Subjects | Materials Engineering (Other) |
Section | Research Articles |
Authors | Taner Yılmaz, Omur Aydogmus |
Publication Date | June 14, 2023 |
Acceptance Date | May 15, 2023 |
Published in Issue | Year 2023, Volume: 2, Issue: 2 |
Bibtex | @article{fujece1316662, journal = {Firat University Journal of Experimental and Computational Engineering}, eissn = {2822-2881}, address = {Fırat Üniversitesi Mühendislik Fakültesi Deneysel ve Hesaplamalı Mühendislik Dergisi Yayın Koordinatörlüğü 23119 Elazığ/TÜRKİYE}, publisher = {Fırat Üniversitesi}, year = {2023}, volume = {2}, number = {2}, pages = {87-96}, doi = {10.5505/fujece.2023.85047}, title = {Deep deterministic policy gradient reinforcement learning for collision-free navigation of mobile robots in unknown environments}, key = {cite}, author = {Yılmaz, Taner and Aydogmus, Omur} } |
APA | Yılmaz, T., & Aydogmus, O. (2023). Deep deterministic policy gradient reinforcement learning for collision-free navigation of mobile robots in unknown environments. Firat University Journal of Experimental and Computational Engineering, 2(2), 87-96. DOI: 10.5505/fujece.2023.85047 |
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC).