Evaluation of the Deep Q-Learning Models for Mobile Robot Path Planning Problem
Details
Primary Language
English
Subjects
Information Systems (Other), Assistive Robots and Technology
Section
Research Article
Authors
Mehmet Gök
0000-0003-1656-5770
Türkiye
Early View Date
September 26, 2024
Publication Date
September 30, 2024
Submission Date
March 20, 2024
Acceptance Date
August 16, 2024
Published Issue
Year 2024, Volume: 12, Issue: 3
