Research Article


Pedestrian and Mobile Robot Detection with 2D LIDAR

Year 2021, Issue: 23, 583 - 588, 30.04.2021

Abstract

The first problem to be overcome in robotics is the positioning of the robot and the objects around it. Detecting and positioning the moving objects around the robot is important for preventing accidents. Deep learning and 3D LIDAR technology are often used together, especially for pedestrian detection. Although these approaches achieve high performance, they are not yet widely used due to their high cost. In this paper, a robot and human detection system is proposed for use with lower-cost 2D LIDARs. The system detects robot and human beam patterns by scanning the 2D LIDAR beam readings with a sliding window; each scanned segment is labeled as containing a robot or a human. A new end-to-end deep neural network architecture is proposed in this study for pedestrian and mobile robot recognition based on 2D LIDAR data collected in a simulation environment. The system was observed to detect robot and human models in a static environment with 91.6% accuracy.
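The sliding-window step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the window size, stride, and beam count (684 beams, matching the cited Hokuyo URG-04LX) are assumed values, and the classifier call is shown only as a hypothetical placeholder comment.

```python
import numpy as np

def sliding_windows(scan, window_size=32, stride=4):
    """Extract overlapping windows from a 1D LIDAR range scan.

    scan: 1D array of range readings, one per beam angle.
    Returns an array of shape (n_windows, window_size) and the start
    index of each window, so a detection can be mapped back to the
    beam angles it came from.
    """
    starts = list(range(0, len(scan) - window_size + 1, stride))
    windows = np.stack([scan[s:s + window_size] for s in starts])
    return windows, starts

# Toy scan: 684 beams of mostly free space at 4 m,
# with one nearer object (e.g. a pair of legs or a robot body).
scan = np.full(684, 4.0)
scan[100:120] = 1.2

windows, starts = sliding_windows(scan)
# Each window would then be passed to the trained network, e.g.:
# labels = model.predict(windows)   # hypothetical classifier call
print(windows.shape)
```

Overlapping windows (stride smaller than the window size) ensure an object straddling a window boundary still falls entirely inside at least one window.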

References

  • Arras, K. O., Mozos, O. M., & Burgard, W. (2007). Using boosted features for the detection of people in 2d range data. Proceedings 2007 IEEE International Conference on Robotics and Automation, 3402–3407.
  • Börcs, A., Nagy, B., & Benedek, C. (2017). Instant object detection in lidar point clouds. IEEE Geoscience and Remote Sensing Letters, 14(7), 992–996.
  • Borenstein, J., Everett, H. R., Feng, L., & Wehe, D. (1997). Mobile robot positioning: Sensors and techniques. Journal of Robotic Systems, 14(4), 231–249.
  • Chen, X., Ma, H., Wan, J., Li, B., & Xia, T. (2017). Multi-view 3d object detection network for autonomous driving. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1907–1915.
  • Chollet, F. (2015). Keras. https://keras.io/
  • Kidono, K., Miyasaka, T., Watanabe, A., Naito, T., & Miura, J. (2011). Pedestrian recognition using high-definition LIDAR. 2011 IEEE Intelligent Vehicles Symposium (IV), 405–410.
  • Lang, A. H., Vora, S., Caesar, H., Zhou, L., Yang, J., & Beijbom, O. (2019). Pointpillars: Fast encoders for object detection from point clouds. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 12697–12705.
  • LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.
  • Li, D., Li, L., Li, Y., Yang, F., & Zuo, X. (2017). A multi-type features method for leg detection in 2-D laser range data. IEEE Sensors Journal, 18(4), 1675–1684.
  • Lin, B.-Z., & Lin, C.-C. (2016). Pedestrian detection by fusing 3D points and color images. 2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS), 1–5.
  • Oliveira, L., Nunes, U., Peixoto, P., Silva, M., & Moita, F. (2010). Semantic fusion of laser and vision in pedestrian detection. Pattern Recognition, 43(10), 3648–3659.
  • Premebida, C., Monteiro, G., Nunes, U., & Peixoto, P. (2007). A lidar and vision-based approach for pedestrian and vehicle detection and tracking. 2007 IEEE Intelligent Transportation Systems Conference, 1044–1049.
  • Qi, C. R., Liu, W., Wu, C., Su, H., & Guibas, L. J. (2018). Frustum pointnets for 3d object detection from rgb-d data. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 918–927.
  • Rohmer, E., Singh, S. P., & Freese, M. (2013). CoppeliaSim (formerly V-REP): A versatile and scalable robot simulation framework. Proceedings of the International Conference on Intelligent Robots and Systems (IROS).
  • Scanning Rangefinder Distance Data Output/URG-04LX Product Details | HOKUYO AUTOMATIC CO., LTD. (n.d.). Retrieved January 11, 2021, from https://www.hokuyo-aut.jp/search/single.php?serial=165
  • Seçkin, A. Ç. (2020). A Natural Navigation Method for Following Path Memories from 2D Maps. Arabian Journal for Science and Engineering, 1–16.
  • Shao, X., Zhao, H., Nakamura, K., Katabira, K., Shibasaki, R., & Nakagawa, Y. (2007). Detection and tracking of multiple pedestrians by using laser range scanners. 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2174–2179.
  • Siciliano, B., & Khatib, O. (2016). Springer Handbook of Robotics. Springer.
  • Spinello, L., Arras, K., Triebel, R., & Siegwart, R. (2010). A layered approach to people detection in 3d range data. Proceedings of the AAAI Conference on Artificial Intelligence, 24(1).
  • Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1), 1929–1958.
  • Wang, H., Wang, B., Liu, B., Meng, X., & Yang, G. (2017). Pedestrian recognition and tracking using 3D LiDAR for autonomous vehicle. Robotics and Autonomous Systems, 88, 71–78.

Details

Primary Language English
Subjects Engineering
Journal Section Articles
Authors

Ahmet Çağdaş Seçkin 0000-0002-9849-3338

Publication Date April 30, 2021
Published in Issue Year 2021 Issue: 23

Cite

APA Seçkin, A. Ç. (2021). Pedestrian and Mobile Robot Detection with 2D LIDAR. Avrupa Bilim ve Teknoloji Dergisi, (23), 583–588.