Research Article

Deep deterministic policy gradient reinforcement learning for collision-free navigation of mobile robots in unknown environments

Year 2023, Volume: 2 Issue: 2, 87 - 96, 14.06.2023
https://doi.org/10.5505/fujece.2023.85047

Abstract

Learning to navigate in unfamiliar environments is a critical skill for AI-powered mobile robots. Traditional navigation methods typically involve three key steps: localization, mapping, and path planning. In unknown environments, however, these methods break down because path planning requires a prior obstacle map. Moreover, classical approaches may become trapped in a local minimum as the environment grows more complex, which degrades the system's performance. It is therefore crucial to address collision avoidance in autonomous navigation, in both static and dynamic environments, so that the robot reaches the target safely without any collisions. In recent years, heuristic approaches have gained importance owing to their similarity to human behavioral learning. In this paper, we examine the advantages and disadvantages of using the Deep Deterministic Policy Gradient (DDPG) reinforcement learning method to guide a robot from its starting position to a target position without colliding with static or dynamic obstacles. With a reinforcement learning method, the robot can learn from its experiences, make informed decisions, and adapt to changes in the environment. We explore the efficacy of this method and compare it with traditional approaches to assess its potential for real-world applications. Ultimately, this paper aims to develop a robust and efficient navigation system for mobile robots that can successfully navigate unknown and dynamic environments. To this end, the system was tested for 100 episodes, and the results showed a success rate of over 80%.
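The core of the DDPG method the abstract describes can be illustrated with a minimal, self-contained sketch of one update step. Nothing below comes from the paper itself: the scalar linear actor and critic, the hand-made transition, and all hyperparameter values are illustrative stand-ins, chosen only to show the structure of a DDPG update (TD target from target networks, critic regression step, deterministic policy gradient step, soft target update).

```python
import math

# One DDPG update on a single hand-made transition (s, a, r, s2).
# Scalar linear stand-ins for the networks; all numbers are illustrative.
gamma, tau = 0.95, 0.01               # discount factor, soft-update rate
lr_actor, lr_critic = 0.05, 0.1

# Actor: a = tanh(wa*s + ba); Critic: Q(s, a) = wc[0]*s + wc[1]*a + wc[2]
wa, ba = 0.1, 0.0
wc = [0.0, 0.5, 0.1]
wa_t, ba_t, wc_t = wa, ba, list(wc)   # target networks start as copies

def actor(w, b, s):
    return math.tanh(w * s + b)

def critic(w, s, a):
    return w[0] * s + w[1] * a + w[2]

s, a, r, s2 = 0.5, 0.2, -0.3, 0.4     # one replayed transition

# 1) TD target computed from the *target* actor and critic
#    (DDPG's stabilizing trick against a moving regression target)
a2 = actor(wa_t, ba_t, s2)
y = r + gamma * critic(wc_t, s2, a2)

# 2) Critic step: gradient descent on 0.5*(y - Q(s, a))^2
td = y - critic(wc, s, a)
feats = [s, a, 1.0]
wc = [w + lr_critic * td * f for w, f in zip(wc, feats)]

# 3) Actor step: deterministic policy gradient, dQ/da * da/dparams
a_pi = actor(wa, ba, s)
dq_da = wc[1]                         # for this linear critic, dQ/da = wc[1]
wa += lr_actor * dq_da * (1 - a_pi**2) * s
ba += lr_actor * dq_da * (1 - a_pi**2)

# 4) Polyak (soft) update of the target networks
wa_t = tau * wa + (1 - tau) * wa_t
ba_t = tau * ba + (1 - tau) * ba_t
wc_t = [tau * w + (1 - tau) * wt for w, wt in zip(wc, wc_t)]

print(round(td, 4), round(wa, 4))
```

In the full algorithm these four steps run on minibatches sampled from a replay buffer, with neural networks in place of the scalar parameters, and exploration noise is added to the actor's action when the robot interacts with the environment.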

Thanks

We would like to thank Fırat University for its support of this research.

References

  • [1] Tufenkci S, Alagoz BB, Kavuran G, Yeroglu C, Herencsar N, Mahata S. “A theoretical demonstration for reinforcement learning of PI control dynamics for optimal speed control of DC motors by using twin delay deep deterministic policy gradient algorithm”. Expert Syst. Appl., 213, 119192, 2023.
  • [2] Sampedro C, Rodriguez-Ramos A, Bavle H, Carrio A, de la Puente P, Campoy P. “A fully-autonomous aerial robot for search and rescue applications in indoor environments using learning-based techniques”. J. Intell. Robot. Syst. Theory Appl., 95(2), 601–627, 2019.
  • [3] Alavizadeh H, Alavizadeh H, Jang-Jaccard J. “Deep Q-learning based reinforcement learning approach for network intrusion detection”. Computers, 11(3), 1–19, 2022.
  • [4] Tan Z, Lyu Y, Lu H, Pan Q. “Motion primitives-based and two-phase motion planning for fixed-wing UAV”. 2022.
  • [5] Krishnan S, Boroujerdian B, Fu W, Faust A, Reddi VJ. "Air Learning: a deep reinforcement learning gym for autonomous aerial robot visual navigation". Machine Learning, 110(9), 2021.
  • [6] Stevsic S, Nageli T, Alonso-Mora J, Hilliges O. “Sample efficient learning of path following and obstacle avoidance behavior for quadrotors”. IEEE Robot. Autom. Lett., 3(4), 3852–3859, 2018.
  • [7] Lockwood O, Si M. “A review of uncertainty for deep reinforcement learning”. Proc. AAAI Conf. Artif. Intell. Interact. Digit. Entertain, 18(1), 155–162, 2022.
  • [8] Clifton J, Laber E. “Q-learning: theory and applications”. Annual Review of Statistics and Its Application, 279–303, 2020.
  • [9] Brockman G. et al., “OpenAI Gym”. 1–4, 2016, [Online]. Available: http://arxiv.org/abs/1606.01540.
  • [10] Saglam B, Cicek DC, Mutlu FB, Kozat SS. “Off-Policy correction for actor-critic algorithms in deep reinforcement learning”. 2022.
  • [11] Tsai J, Chang CC, Ou YC, Sieh BH, Ooi YM. “Autonomous driving control based on the perception of a lidar sensor and odometer”. Appl. Sci., 12(15), 2022.
  • [12] Luong NC. et al., “Applications of deep reinforcement learning in communications and networking: a survey”. IEEE Commun. Surv. Tutorials, 21(4), 3133–3174, 2019.
  • [13] Xiang J, Li Q, Dong X, Ren Z. “Continuous control with deep reinforcement learning for mobile robot navigation”. Proc. 2019 Chinese Autom. Congr. (CAC), 1501–1506, 2019.
  • [14] Tsingenopoulos I, Preuveneers D, Joosen W. “AutoAttacker: a reinforcement learning approach for black-box adversarial attacks”. Proc. 4th IEEE Eur. Symp. Secur. Priv. Workshops (EuroS&PW), 229–237, 2019.
  • [15] Andrychowicz M. et al., “Hindsight experience replay”. Adv. Neural Inf. Process. Syst., 5049–5059, 2017.
  • [16] Lindner T, Milecki A, Wyrwał D. “Positioning of the robotic arm using different reinforcement learning algorithms”. Int. J. Control. Autom. Syst., 19(4), 1661–1676, 2021.
  • [17] Lucchi M, Zindler F, Muhlbacher-Karrer S, Pichler H. “Robo-gym - an open source toolkit for distributed deep reinforcement learning on real and simulated robots”. IEEE Int. Conf. Intell. Robot. Syst., 5364–5371, 2020.
  • [18] James S, Freese M, Davison AJ. “PyRep: bringing V-REP to deep robot learning”. 1–4, 2019.
  • [19] James S, Davison AJ, Johns E. “Transferring end-to-end visuomotor control from simulation to real world for a multi-stage task”. CoRL, 1–10, 2017.
There are 19 citations in total.

Details

Primary Language English
Subjects Materials Engineering (Other)
Journal Section Research Articles
Authors

Taner Yılmaz 0000-0002-1721-9071

Omur Aydogmus 0000-0001-8142-1146

Publication Date June 14, 2023
Published in Issue Year 2023 Volume: 2 Issue: 2

Cite

APA Yılmaz, T., & Aydogmus, O. (2023). Deep deterministic policy gradient reinforcement learning for collision-free navigation of mobile robots in unknown environments. Firat University Journal of Experimental and Computational Engineering, 2(2), 87-96. https://doi.org/10.5505/fujece.2023.85047
AMA Yılmaz T, Aydogmus O. Deep deterministic policy gradient reinforcement learning for collision-free navigation of mobile robots in unknown environments. FUJECE. June 2023;2(2):87-96. doi:10.5505/fujece.2023.85047
Chicago Yılmaz, Taner, and Omur Aydogmus. “Deep Deterministic Policy Gradient Reinforcement Learning for Collision-Free Navigation of Mobile Robots in Unknown Environments”. Firat University Journal of Experimental and Computational Engineering 2, no. 2 (June 2023): 87-96. https://doi.org/10.5505/fujece.2023.85047.
EndNote Yılmaz T, Aydogmus O (June 1, 2023) Deep deterministic policy gradient reinforcement learning for collision-free navigation of mobile robots in unknown environments. Firat University Journal of Experimental and Computational Engineering 2 2 87–96.
IEEE T. Yılmaz and O. Aydogmus, “Deep deterministic policy gradient reinforcement learning for collision-free navigation of mobile robots in unknown environments”, FUJECE, vol. 2, no. 2, pp. 87–96, 2023, doi: 10.5505/fujece.2023.85047.
ISNAD Yılmaz, Taner - Aydogmus, Omur. “Deep Deterministic Policy Gradient Reinforcement Learning for Collision-Free Navigation of Mobile Robots in Unknown Environments”. Firat University Journal of Experimental and Computational Engineering 2/2 (June 2023), 87-96. https://doi.org/10.5505/fujece.2023.85047.
JAMA Yılmaz T, Aydogmus O. Deep deterministic policy gradient reinforcement learning for collision-free navigation of mobile robots in unknown environments. FUJECE. 2023;2:87–96.
MLA Yılmaz, Taner and Omur Aydogmus. “Deep Deterministic Policy Gradient Reinforcement Learning for Collision-Free Navigation of Mobile Robots in Unknown Environments”. Firat University Journal of Experimental and Computational Engineering, vol. 2, no. 2, 2023, pp. 87-96, doi:10.5505/fujece.2023.85047.
Vancouver Yılmaz T, Aydogmus O. Deep deterministic policy gradient reinforcement learning for collision-free navigation of mobile robots in unknown environments. FUJECE. 2023;2(2):87-96.