Comparison of push recovery control methods for the humanoid robot ROBOTIS-OP2
2024, Volume 39, Issue 4, 2551–2566, 20.05.2024
Emrah Aslan, Muhammet Ali Arserim, Ayşegül Uçar
Abstract
The main objective of this study is to develop push-recovery controllers for bipedal humanoid robots. Bipedal humanoid robots face a balance problem when subjected to external pushes. This article proposes control methods to address these balance problems in humanoid robots. Our aim is to enable bipedal robots that behave like humans to return to a balanced posture after an external push. When humans face balance disturbances caused by external pushes, they react quite successfully; in bipedal humanoid robots, this ability is limited, mainly because of the robots' complex structure and limited capacity. Push-recovery strategies have been formulated from the reactions humans exhibit when their balance is disturbed in the real world: the ankle, hip, and stepping strategies. This study uses the ankle strategy, and different control methods were tested with it. Three control methods were applied: the classical PID controller, the prediction-based Model Predictive Control (MPC), and Deep Q Network (DQN), a deep reinforcement learning algorithm. The applications were carried out on the ROBOTIS-OP2 humanoid robot, and the simulation tests were performed in 3D in the Webots simulator. The humanoid robot was tested with each method and the results were compared. Among these methods, Deep Q Network (DQN), the deep reinforcement learning algorithm, was observed to give the best results.
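The classical PID variant of the ankle strategy described above can be sketched on a linearized inverted-pendulum model of the robot: the controller measures the torso tilt caused by a push and applies a corrective ankle torque. The gains and pendulum parameters below are illustrative assumptions for a ROBOTIS-OP2-scale robot, not the values used in the study.

```python
# Minimal sketch of the ankle strategy with a PID controller.
# The robot is modeled as a linearized inverted pendulum pivoting at the
# ankle; the PID output is the ankle torque that drives the tilt to zero.
# All gains and physical parameters are illustrative assumptions.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def simulate_push(push_angle=0.1, steps=2000, dt=0.005):
    """Tilt the pendulum by `push_angle` rad (the external push) and let
    the PID ankle torque bring it back upright. Returns the final tilt."""
    g, length, mass = 9.81, 0.45, 2.9      # assumed ROBOTIS-OP2-scale values
    inertia = mass * length ** 2
    theta, omega = push_angle, 0.0          # tilt angle and angular velocity
    pid = PID(kp=60.0, ki=5.0, kd=8.0, dt=dt)
    for _ in range(steps):
        torque = pid.update(0.0 - theta)    # error = desired tilt (0) - tilt
        # linearized inverted-pendulum dynamics: I*theta'' = m*g*l*theta + tau
        alpha = (mass * g * length * theta + torque) / inertia
        omega += alpha * dt
        theta += omega * dt
    return theta
```

For the kp = 60 gain to stabilize this model, it must exceed the gravity term m·g·l ≈ 12.8 N·m/rad; below that value the upright posture remains unstable no matter how the other gains are chosen. The MPC and DQN controllers compared in the paper replace this fixed-gain law with a predictive optimization and a learned policy, respectively.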