Research Article

A Comparative Study of Point-Based Deep Learning Techniques for Semantic Classification in Search and Rescue Arenas

Year 2021, Volume: 33 - ASYU 2020 Special Issue, 57-66, 30.12.2021
https://doi.org/10.7240/jeps.897306

Abstract

Post-disaster indoor environments, such as those left behind by floods, fires, and toxic material spills, can pose serious risks to search and rescue teams. For example, the building's structural integrity may be compromised, and substances harmful to humans and animals may be present. Robots can be deployed to shield search and rescue teams from these risks. However, robots need advanced techniques to produce high-level information from raw sensor data in such harsh environments. This study aims to explore the strengths and weaknesses of point-based deep learning architectures for the semantic classification of ramps in the search and rescue test arenas proposed by the National Institute of Standards and Technology (NIST). Walls and the ground are also taken into account, since they provide crucial information for robots. We opted to use point cloud data, which is robust against the poor illumination conditions frequently encountered in post-disaster environments. We used the ESOGU RAMPS dataset, which contains point cloud data captured from a simulated environment similar to the NIST search and rescue arenas. We selected the PointNet, PointNet++, Dynamic Graph Convolutional Neural Network (DGCNN), PointCNN, Point2Sequence, PointConv, and ShellNet point-based deep learning architectures and analyzed their performance for the semantic classification of ramps, walls, and the ground. The test results show that semantic classification accuracy exceeds 90% for all architectures.
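The point-based architectures compared in the study all build on the core idea introduced by PointNet: apply the same small network to every point independently, then aggregate with a symmetric function (max-pooling) so the result does not depend on the order of the points in the cloud. The following is a minimal illustrative NumPy sketch of that shared-MLP + max-pooling idea — not the authors' implementation, and the weights here are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_mlp(points, w, b):
    """Apply the same (w, b) transform to every point, with ReLU activation."""
    return np.maximum(points @ w + b, 0.0)

def global_feature(points, w, b):
    """Order-invariant cloud descriptor: per-point features, then max-pooling."""
    per_point = shared_mlp(points, w, b)   # shape (N, F): one feature row per point
    return per_point.max(axis=0)           # shape (F,): symmetric over point order

cloud = rng.normal(size=(128, 3))          # toy cloud of 128 xyz points
w = rng.normal(size=(3, 16))               # placeholder weights (random, untrained)
b = rng.normal(size=16)

feat = global_feature(cloud, w, b)         # 16-dimensional global descriptor

# Max-pooling makes the descriptor identical under any permutation of the points.
shuffled = cloud[rng.permutation(len(cloud))]
feat_shuffled = global_feature(shuffled, w, b)
assert np.allclose(feat, feat_shuffled)
```

The later architectures in the comparison (PointNet++, DGCNN, PointCNN, PointConv, ShellNet, Point2Sequence) refine this scheme by building local neighborhood features before aggregation, but the permutation-invariance requirement sketched above is common to all of them.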

Thanks

This study was submitted for consideration in the ASYU 2020 Innovations and Applications in Intelligent Systems special issue.

References

  • Kitano, H., & Tadokoro, S. (2001). RoboCup Rescue: A Grand Challenge for Multiagent and Intelligent Systems. AI Magazine, 22(1), 39-52.
  • Jacoff, A., Messina, E., Weiss, B. A., Tadokoro, S., & Nakagawa, Y. (2003, October). Test arenas and performance metrics for urban search and rescue robots. In Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (pp. 3396-3403).
  • Amigoni, F., Visser, A., & Tsushima, M. (2012, June). RoboCup 2012 rescue simulation league winners. In Robot Soccer World Cup (pp. 20-35). Springer, Berlin, Heidelberg.
  • Sheh, R., Schwertfeger, S., & Visser, A. (2016). 16 years of RoboCup Rescue. KI - Künstliche Intelligenz, 30(3), 267-277.
  • Robot Operating System (ROS), Open Source Robotics Foundation (OSRF), https://www.ros.org/, (March 2021).
  • Grisetti, G., Stachniss, C., & Burgard, W. (2007). Improved techniques for grid mapping with Rao-Blackwellized particle filters. IEEE Transactions on Robotics, 23(1), 34-46.
  • Kohlbrecher, S., Von Stryk, O., Meyer, J., & Klingauf, U. (2011, November). A flexible and scalable SLAM system with full 3D motion estimation. In 2011 IEEE International Symposium on Safety, Security, and Rescue Robotics (pp. 155-160).
  • Hornung, A., Wurm, K. M., Bennewitz, M., Stachniss, C., & Burgard, W. (2013). OctoMap: An efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots, 34(3), 189-206.
  • Labbé, M., & Michaud, F. (2011, September). Memory management for real-time appearance-based loop closure detection. In 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 1271-1276).
  • Example reference test arena, Rescue Robot League, https://en.wikipedia.org/wiki/Rescue_Robot_League, (March 2021).
  • Nguyen, A., & Le, B. (2013, November). 3D point cloud segmentation: A survey. In 2013 6th IEEE Conference on Robotics, Automation and Mechatronics (RAM) (pp. 225-230).
  • Grilli, E., Menna, F., & Remondino, F. (2017). A review of point clouds segmentation and classification algorithms. The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, XLII-2/W3, 339-344.
  • Xie, Y., Tian, J., & Zhu, X. X. (2020). Linking points with labels in 3D: A review of point cloud semantic segmentation. IEEE Geoscience and Remote Sensing Magazine, 8(4), 38-59.
  • Kaleci, B., & Turgut, K. (2019, October). Plane segmentation of point cloud data using split and merge based method. In 2019 3rd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT) (pp. 1-7).
  • Rabbani, T., Van Den Heuvel, F., & Vosselmann, G. (2006). Segmentation of point clouds using smoothness constraint. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 36(5), 248-253.
  • Fischler, M. A., & Bolles, R. C. (1981). Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6), 381-395.
  • Ballard, D. H. (1981). Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognition, 13(2), 111-122.
  • Melzer, T. (2007). Non-parametric segmentation of ALS point clouds using mean shift. Journal of Applied Geodesy, 1(3), 159-170.
  • Golovinskiy, A., & Funkhouser, T. (2009, September). Min-cut based segmentation of point clouds. In 2009 IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops) (pp. 39-46).
  • Rusu, R. B., & Cousins, S. (2011, May). 3D is here: Point Cloud Library (PCL). In 2011 IEEE International Conference on Robotics and Automation (pp. 1-4).
  • Eruyar, E. E., Yılmaz, M., Yılmaz, B., Akbulut, O., Turgut, K., & Kaleci, B. A comparative study for indoor planar surface segmentation via 3D laser point cloud data. Black Sea Journal of Engineering and Science, 3(4), 128-137.
  • Deng, W., Huang, K., Chen, X., Zhou, Z., Shi, C., Guo, R., & Zhang, H. (2020). Semantic RGB-D SLAM for rescue robot navigation. IEEE Access, 8, 221320-221329.
  • Zhang, J., Zhao, X., Chen, Z., & Lu, Z. (2019). A review of deep learning-based semantic segmentation for point cloud. IEEE Access, 7, 179118-179133.
  • Guo, Y., Wang, H., Hu, Q., Liu, H., Liu, L., & Bennamoun, M. (2020). Deep learning for 3D point clouds: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence.
  • Qi, C. R., Su, H., Mo, K., & Guibas, L. J. (2017). PointNet: Deep learning on point sets for 3D classification and segmentation. In CVPR.
  • Qi, C. R., Yi, L., Su, H., & Guibas, L. J. (2017). PointNet++: Deep hierarchical feature learning on point sets in a metric space. In NeurIPS.
  • Zhang, Z., Hua, B. S., & Yeung, S. K. (2019). ShellNet: Efficient point cloud convolutional neural networks using concentric shells statistics. In ICCV.
  • Zhao, H., Jiang, L., Fu, C.-W., & Jia, J. (2019). PointWeb: Enhancing local neighborhood features for point cloud processing. In CVPR.
  • Thomas, H., Qi, C. R., Deschaud, J.-E., Marcotegui, B., Goulette, F., & Guibas, L. J. (2019). KPConv: Flexible and deformable convolution for point clouds. In ICCV.
  • Wu, W., Qi, Z., & Fuxin, L. (2019). PointConv: Deep convolutional networks on 3D point clouds. In CVPR.
  • Li, Y., Bu, R., Sun, M., Wu, W., Di, X., & Chen, B. (2018). PointCNN: Convolution on X-transformed points. In NeurIPS.
  • Landrieu, L., & Simonovsky, M. (2018). Large-scale point cloud semantic segmentation with superpoint graphs. In CVPR.
  • Wang, Y., Sun, Y., Liu, Z., Sarma, S. E., Bronstein, M. M., & Solomon, J. M. (2019). Dynamic graph CNN for learning on point clouds. ACM Transactions on Graphics.
  • Ye, X., Li, J., Huang, H., Du, L., & Zhang, X. (2018). 3D recurrent neural networks with context fusion for point cloud semantic segmentation. In ECCV.
  • Liu, X., Han, Z., Liu, Y. S., & Zwicker, M. (2019). Point2Sequence: Learning the shape representation of 3D point clouds with an attention-based sequence to sequence network. In AAAI.
  • Huang, Q., Wang, W., & Neumann, U. (2018). Recurrent slice networks for 3D segmentation of point clouds. In CVPR.
  • Turgut, K., & Kaleci, B. (2019). A PointNet application for semantic classification of ramps in search and rescue arenas. International Journal of Intelligent Systems and Applications in Engineering, 7(3), 159-165.
  • Turgut, K., & Kaleci, B. (2020, October). Comparison of deep learning techniques for semantic classification of ramps in search and rescue arenas. In 2020 Innovations in Intelligent Systems and Applications Conference (ASYU) (pp. 1-6).
  • The ESOGU RAMPS dataset, ESOGU, https://ai-robotlab.ogu.edu.tr/Sayfa/Index/11, (March 2021).
  • Gazebo, Open Source Robotics Foundation (OSRF), https://gazebosim.org/, (March 2021).
  • TensorFlow library, https://www.tensorflow.org/, (March 2021).


There are 41 citations in total.

Details

Primary Language English
Subjects Engineering
Journal Section Research Articles
Authors

Kaya Turgut 0000-0003-3345-9339

Burak Kaleci 0000-0002-2001-3381

Publication Date December 30, 2021
Published in Issue Year 2021 Volume: 33 - ASYU 2020 Special Issue

Cite

APA Turgut, K., & Kaleci, B. (2021). A Comparative Study of Point-Based Deep Learning Techniques for Semantic Classification in Search and Rescue Arenas. International Journal of Advances in Engineering and Pure Sciences, 33, 57-66. https://doi.org/10.7240/jeps.897306
AMA Turgut K, Kaleci B. A Comparative Study of Point-Based Deep Learning Techniques for Semantic Classification in Search and Rescue Arenas. JEPS. December 2021;33:57-66. doi:10.7240/jeps.897306
Chicago Turgut, Kaya, and Burak Kaleci. “A Comparative Study of Point-Based Deep Learning Techniques for Semantic Classification in Search and Rescue Arenas”. International Journal of Advances in Engineering and Pure Sciences 33, December (December 2021): 57-66. https://doi.org/10.7240/jeps.897306.
EndNote Turgut K, Kaleci B (December 1, 2021) A Comparative Study of Point-Based Deep Learning Techniques for Semantic Classification in Search and Rescue Arenas. International Journal of Advances in Engineering and Pure Sciences 33 57–66.
IEEE K. Turgut and B. Kaleci, “A Comparative Study of Point-Based Deep Learning Techniques for Semantic Classification in Search and Rescue Arenas”, JEPS, vol. 33, pp. 57–66, 2021, doi: 10.7240/jeps.897306.
ISNAD Turgut, Kaya - Kaleci, Burak. “A Comparative Study of Point-Based Deep Learning Techniques for Semantic Classification in Search and Rescue Arenas”. International Journal of Advances in Engineering and Pure Sciences 33 (December 2021), 57-66. https://doi.org/10.7240/jeps.897306.
JAMA Turgut K, Kaleci B. A Comparative Study of Point-Based Deep Learning Techniques for Semantic Classification in Search and Rescue Arenas. JEPS. 2021;33:57–66.
MLA Turgut, Kaya and Burak Kaleci. “A Comparative Study of Point-Based Deep Learning Techniques for Semantic Classification in Search and Rescue Arenas”. International Journal of Advances in Engineering and Pure Sciences, vol. 33, 2021, pp. 57-66, doi:10.7240/jeps.897306.
Vancouver Turgut K, Kaleci B. A Comparative Study of Point-Based Deep Learning Techniques for Semantic Classification in Search and Rescue Arenas. JEPS. 2021;33:57-66.