Research Article


COMPARISON OF DEEP LEARNING TECHNIQUES FOR DETECTION OF DOORS IN INDOOR ENVIRONMENTS

Year 2021, Volume 29, Issue 3, 396-412, 31.12.2021
https://doi.org/10.31796/ogummf.889095

Abstract

In indoor environments, detecting doors (open, semi-opened, and closed) is a crucial task in a variety of fields such as robotics, computer vision, and architecture. Studies that address the door detection problem can be divided into three major categories: 1) closed doors via visual data, 2) open doors via range data, and 3) open, semi-opened, and closed doors via point cloud data. Although methods have been proposed that successfully detect doors via visual and range data under specific circumstances, in this study we exploited point cloud data because of its ability to describe the 3D characteristics of scenes. The contribution of this study is two-fold. First, in contrast to previous studies that generally define a set of rules depending on the type of door and the characteristics of the data, we intended to discover the potential of point-based deep learning architectures such as PointNet, PointNet++, Dynamic Graph Convolutional Neural Network (DGCNN), PointCNN, and Point2Sequence. Second, we constructed the OGUROB DOORS dataset, which contains point clouds captured in the GAZEBO simulation environment from different robot positions and orientations. We used precision, recall, and F1-score metrics to analyze the strengths and weaknesses of these architectures. In addition, some visual results are given to illustrate the characteristics of these architectures. The test results showed that all architectures can classify open, semi-opened, and closed doors with over 98% accuracy.
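The evaluation described above relies on per-class precision, recall, and F1-score over the three door states. As a minimal illustrative sketch (not the authors' code; function and variable names are hypothetical), these metrics can be computed from true and predicted class labels as follows:

```python
def per_class_metrics(y_true, y_pred, labels):
    """Compute (precision, recall, F1) for each class label.

    Precision = TP / (TP + FP), Recall = TP / (TP + FN),
    F1 = harmonic mean of precision and recall.
    """
    metrics = {}
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        metrics[c] = (precision, recall, f1)
    return metrics

# Toy example with the three door states used in the paper
labels = ["open", "semi-open", "closed"]
y_true = ["open", "open", "closed", "semi-open", "closed"]
y_pred = ["open", "closed", "closed", "semi-open", "closed"]
print(per_class_metrics(y_true, y_pred, labels))
```

In practice such metrics are typically computed with a library routine (e.g. scikit-learn's `precision_recall_fscore_support`); the explicit loop above just makes the TP/FP/FN bookkeeping visible.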

References

  • Andreopoulos, A. & Tsotsos, J. K. (2008). Active vision for door localization and door opening using playbot: A computer controlled wheelchair for people with mobility impairments. In 2008 Canadian Conference on Computer and Robot Vision, 3-10, Windsor, Canada.
  • Arduengo, M., Torras, C. & Sentis, L. (2019). Robust and Adaptive Door Operation with a Mobile Robot, arXiv, 1902.09051.
  • Bayram, K., Kolaylı, B., Solak, A., Tatar, B., Turgut, K. ve Kaleci, B. (2019). 3B Nokta Bulutu Verisi ile Bölge Büyütme Tabanlı Kapı Bulma Uygulaması. Türkiye Robotbilim Konferansı, 139-145, İstanbul, Turkiye.
  • Beraldo G., Termine E., Menegatti E. (2019). Shared-Autonomy Navigation for Mobile Robots Driven by a Door Detection Module. In: Alviano M., Greco G., Scarcello F. (eds) AI*IA 2019 – Advances in Artificial Intelligence. AI*IA 2019. Lecture Notes in Computer Science, vol 11946. Springer, Cham. doi: https://doi.org/10.1007/978-3-030-35166-3_36
  • Bersan, D., Martins, R., Campos M. & Nascimento, E. R. (2018). Semantic Map Augmentation for Robot Navigation: A Learning Approach Based on Visual and Depth Data, 2018 Latin American Robotic Symposium, 2018 Brazilian Symposium on Robotics (SBR) and 2018 Workshop on Robotics in Education (WRE), João Pessoa, Brazil, 45-50. doi: 10.1109/LARS/SBR/WRE.2018.00018
  • Borgsen, S. M. Z., Schöpfer, M., Ziegler, L. & Wachsmuth, S. (2014). Automated Door Detection with a 3D-Sensor. Canadian Conference on Computer and Robot Vision, 276-282, Montreal, QC, Canada.
  • Budroni, A. & Böhm, J. (2010). Automatic 3D modelling of indoor manhattan-world scenes from laser data. Proceedings of the International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 38(Part 5), 115-120.
  • Burhanpurkar, M., Labbe, M., Guan, C., Michaud, F. & Kelly, J. (2017). Cheap or Robust? The practical realization of self-driving wheelchair technology. IEEE International Conference on Rehabilitation Robotics (ICORR), 1079–1086, London, UK.
  • Chao, P., Kao, C. Y., Ruan, Y. S., Huang, C. H. & Lin, Y. L. (2019). Hardnet: A low memory traffic network, ArXiv, vol. abs/1909.00948.
  • Chen, W., Qu, T., Zhou, Y., Weng, K., Wang, G. & Fu, G. (2014). Door recognition and deep learning algorithm for visual based robot navigation. In 2014 IEEE International Conference on Robotics and Biomimetics (ROBIO 2014), 1793-1798, Bali, Indonesia.
  • Cui, Y., Li, Q., Yang, B., Xiao, W., Chen, C. & Dong, Z. (2019). Automatic 3-D reconstruction of indoor environment with mobile laser scanning point clouds. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(8), 3117-3130, doi: 10.1109/JSTARS.2019.2918937
  • Derry, M. & Argall, B. (2013). Automated doorway detection for assistive shared-control wheelchairs. IEEE International Conference on Robotics and Automation, 1254-1259, Karlsruhe, Germany.
  • Díaz-Vilariño, L., Verbree, E., Zlatanova, S. & Diakité, A. (2017). Indoor modelling from SLAM-based laser scanner: Door detection to envelope reconstruction. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci., 42, 345–352. doi: 10.5194/isprs-archives-XLII-2-W7-345-2017
  • Ehlers, S. F. G., Stuede, M., Nuelle, K. & Ortmaier, T. (2020). Map Management Approach for SLAM in Large-Scale Indoor and Outdoor Areas. 2020 IEEE International Conference on Robotics and Automation (ICRA), 9652-9658, Paris, France.
  • ElKaissi, M., Elgamel, M., Bayoumi, M. & Zavidovique, B. (2006). SEDLRF: A new door detection system for topological maps. In 2006 international workshop on computer architecture for machine perception and sensing, 75-80, Montreal, QC, Canada.
  • Flikweert, P., Peters, R., Díaz-Vilariño, L., Voûte, R. & Staats, B. (2019). Automatic extraction of a navigation graph intended for IndoorGML from an indoor point cloud. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 4(2/W5), 271-278. doi: https://doi.org/10.5194/isprs-annals-IV-2-W5-271-2019
  • Gazebo (2021). Gazebo robot simulator. Open Source Robotics Foundation (OSRF). Web address: http://gazebosim.org/.
  • Gillham, M., Howells, G., Spurgeon, S., Kelly, S. & Pepper, M. (2013). Real-time doorway detection and alignment determination for improved trajectory generation in assistive mobile robotic wheelchairs. In 2013 fourth international conference on emerging security technologies, 62-65, Cambridge, UK.
  • Guo, Y., Wang, H., Hu, Q., Liu, H. & Bennamoun, M. (2019). Deep Learning for 3D Point Clouds: A Survey, arXiv, http://arxiv.org/abs/1912.12033.
  • Jung, J., Stachniss, C., Ju, S. & Heo, J. (2018). Automated 3D volumetric reconstruction of multiple-room building interiors for as-built BIM. Advanced Engineering Informatics, 38, 811-825. doi: https://doi.org/10.1016/j.aei.2018.10.007
  • Hensler, J., Blaich, M. & Bittel, O. (2010). Real-Time Door Detection Based on AdaBoost Learning Algorithm. In: Gottscheber A., Obdržálek D., Schmidt C. (eds). Research and Education in Robotics. Communications in Computer and Information Science, 82, 61-73. doi: https://doi.org/10.1007/978-3-642-16370-8_6
  • Kakillioglu, B., Ozcan, K. & Velipasalar, S. (2016). Doorway detection for autonomous indoor navigation of unmanned vehicles. 2016 IEEE International Conference on Image Processing (ICIP), 3837-3841, Phoenix, AZ, USA.
  • Kaleci, B., Şenler, Ç. M., Dutagaci, H. & Parlaktuna, O. (2015). Rule-Based Door Detection Using Laser Range Data in Indoor Environments. IEEE 27th International Conference on Tools with Artificial Intelligence (ICTAI), 510-517, Vietri sul Mare, Italy.
  • Kaleci, B. & Turgut, K. (2020). NOKTA BULUTU VERİSİ İLE KURAL TABANLI AÇIK KAPI BULMA YÖNTEMİ. Eskişehir Osmangazi Üniversitesi Mühendislik ve Mimarlık Fakültesi Dergisi, 28(2), 164-173. doi: 10.31796/ogummf.723781
  • Khoshelham, K., Vilariño, L. D., Peter, M., Kang, Z. & Acharya, D. (2017). The ISPRS benchmark on indoor modelling. In The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLII-2/W7, 367-372. doi: https://doi.org/10.5194/isprs-archives-XLII-2-W7-367-2017.
  • Kim, S., Cheong, H., Kim, D. H. & Park, S. K. (2011). Context-based object recognition for door detection. In 2011 15th International Conference on Advanced Robotics (ICAR), 155-160, Tallinn, Estonia.
  • Koo, B., Jung, R., Yu, Y. (2021). Automatic classification of wall and door BIM element subtypes using 3D geometric deep neural networks, Advanced Engineering Informatics, 47, ISSN 1474-0346. doi: https://doi.org/10.1016/j.aei.2020.101200.
  • Li, Y., Bu, R., Sun, M., Wu, W., Di, X., & Chen, B. (2018). PointCNN: Convolution on x-transformed points, in Advances in Neural Information Processing Systems, 31.
  • Liu, X., Han, Z., Liu, Y., & Zwicker, M. (2019). Point2Sequence: Learning the Shape Representation of 3D Point Clouds with an Attention-based Sequence to Sequence Network, Thirty-Third AAAI Conference on Artificial Intelligence.
  • Llopart, A., Ravn, O. & Andersen, N. A. (2017). Door and cabinet recognition using Convolutional Neural Nets and real-time method fusion for handle detection and grasping, 2017 3rd International Conference on Control, Automation and Robotics (ICCAR), Nagoya,144-149. doi: 10.1109/ICCAR.2017.7942676
  • Meeussen, W., Wise, M., Glaser, S. & Chitta., S. (2010). Autonomous door opening and plugging in with a personal robot. IEEE International Conference on Robotics and Automation, 729-736, Anchorage, AK, USA.
  • Michailidis, G. T. & Pajarola, R. (2017). Bayesian graph-cut optimization for wall surfaces reconstruction in indoor environments. The Visual Computer, 33(10), 1347-1355. doi: https://doi.org/10.1007/s00371-016-1230-3
  • Murillo, A. C., Košecká, J., Guerrero, J. J. & Sagüés, C. (2008). Visual door detection integrating appearance and shape cues. Robotics and Autonomous Systems, 56(6), 512-521. doi: https://doi.org/10.1016/j.robot.2008.03.003
  • Nagahama, K., Takeshita, K., Yaguchi, H., Yamazaki, K., Yamamoto, T. & Inaba, M. (2018). Estimating door shape and manipulation model for daily assistive robots based on the integration of visual and touch information. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 7660-7666, Madrid, Spain.
  • Nieuwenhuisen, M., Stückler, J. & Behnke, S. (2010). Improving indoor navigation of autonomous robots by an explicit representation of doors. In 2010 IEEE International Conference on Robotics and Automation, 4895-4901, Anchorage, AK, USA.
  • Nikoohemat, S., Peter, M., Elberink, S. O. & Vosselman, G. (2017). Exploiting indoor mobile laser scanner trajectories for semantic interpretation of point clouds. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 355-362, Wuhan, China.
  • OGUROB DOOR Dataset (2021). Web address: http://www.ai-robotlab.ogu.edu.tr/OGUROB_KAPI.html
  • Othman, K. M. & Rad, A. B. (2020). A Doorway Detection and Direction (3Ds) System for Social Robots via a Monocular Camera, Sensors, 20(9), 2477. doi: https://doi.org/10.3390/s20092477
  • Panzarella, T., Schwesinger, D. & Spletzer, J. (2016) CoPilot: Autonomous Doorway Detection and Traversal for Electric Powered Wheelchairs. In: Wettergreen D., Barfoot T. (eds) Field and Service Robotics. Springer Tracts in Advanced Robotics, 113, 233-248. doi: https://doi.org/10.1007/978-3-319-27702-8_16
  • Pioneer P3-AT (2021). Web address: http://www.ist.tugraz.at/_attach/Publish/Kmr06/pioneer-robot.pdf.
  • Ramôa, J. G., Alexandre, L. A. & Mogo, S. (2020). Real-Time 3D Door Detection and Classification on a Low-Power Device, 2020 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), 96-101, Ponta Delgada, Portugal.
  • Qi, C. R., Su, H., Mo, K. & Guibas, L. J. (2016). PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation, arXiv preprint arXiv:1612.00593.
  • Qi, C. R., Yi, L., Su, H., & Guibas, L. J. (2017), PointNet++: Deep hierarchical feature learning on point sets in a metric space, in NeurIPS., arXiv preprint arXiv:1706.02413.
  • Quijano, A. & Prieto, F. (2016). 3d semantic modeling of indoor environments based on point clouds and contextual relationships. Ingeniería, 21(3), 305-323. doi:http://dx.doi.org/10.14483/udistrital.jour.reving.2016.3.a04
  • Quintana, B., Prieto, S. A., Adán, A. & Bosché, F. (2018). Door detection in 3D coloured point clouds of indoor environments. Automation in Construction, 85, 146-166. doi: 10.1016/j.autcon.2017.10.016
  • Redmon, J., Divvala, S., Girshick, R. & Farhadi, A. (2016). You only look once: Unified, real-time object detection, IEEE CVPR.
  • Robot Operating System (ROS) (2021). Open Source Robotics Foundation (OSRF). Web address: http://ros.org/.
  • Rusu, R. B., Meeussen, W., Chitta, S. & Beetz, M. (2009). Laser-based perception for door and handle identification. International Conference on Advanced Robotics, 1-8, Munich, Germany.
  • Rusu, R. B. & Cousins, S. (2011). 3D is here: Point Cloud Library (PCL). IEEE International Conference on Robotics and Automation, 1-4, Shanghai, China.
  • Sekkal, R., Pasteau, F., Babel, M., Brun, B. & Leplumey, I. (2013). Simple monocular door detection and tracking. In 2013 IEEE International Conference on Image Processing, 3929-3933, Melbourne, VIC, Australia.
  • Staats, B. R., Diakité, A. A., Voûte, R. L. & Zlatanova, S. (2019). Detection of doors in a voxel model, derived from a point cloud and its scanner trajectory, to improve the segmentation of the walkable space. International Journal of Urban Sciences, 23(3), 369-390. doi: https://doi.org/10.1080/12265934.2018.1553685
  • Souto, L. A. V., Castro, A., Gonçalves, L. M. G. & Nascimento, T. P. (2017). Stairs and Doors Recognition as Natural Landmarks Based on Clouds of 3D Edge-Points from RGB-D Sensors for Mobile Robot Localization. Sensors, 17(8). doi: 10.3390/s17081824
  • Su, H., Maji, S., Kalogerakis, E. & Learned-Miller, E. (2015). Multi-view Convolutional Neural Networks for 3D Shape Recognition, Proceedings of ICCV.
  • Tensorflow (2021), Web address: https://www.tensorflow.org/.
  • Wang, R., Xie, L. & Chen, D. (2017). Modeling indoor spaces using decomposition and reconstruction of structural elements. Photogrammetric Engineering & Remote Sensing, 83(12), 827-841. doi: https://doi.org/10.14358/PERS.83.12.827
  • Wang, Y., Sun, Y., Liu, Z., Sarma, S. E., Bronstein, M. M., & Solomon, J. M. (2019). Dynamic graph CNN for learning on point clouds, ACM Transactions on Graphics (TOG).
  • Wu, H., Zhang, J., Huang, K., Liang, K. & Yu, Y. (2019). FastFCN: Rethinking dilated convolution in the backbone for semantic segmentation, arXiv preprint arXiv:1903.11816.
  • Yang, X. & Tian, Y. (2010). Robust door detection in unfamiliar environments by combining edge and corner features. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 57-64, San Francisco, CA, USA.
  • Ye, C. & Qian, X. (2018). 3-D Object Recognition of a Robotic Navigation Aid for the Visually Impaired. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 26(2), 441-450, doi: 10.1109/TNSRE.2017.2748419
  • Zheng, Y., Peter, M., Zhong, R., Oude Elberink, S. & Zhou, Q. (2018). Space subdivision in indoor mobile laser scanning point clouds based on scanline analysis. Sensors, 18(6), 1838. doi: https://doi.org/10.3390/s18061838
There are 60 citations in total.

Details

Primary Language English
Subjects Computer Software, Electrical Engineering
Journal Section Research Articles
Authors

Burak Kaleci 0000-0002-2001-3381

Kaya Turgut 0000-0003-3345-9339

Publication Date December 31, 2021
Acceptance Date September 27, 2021
Published in Issue Year 2021

Cite

APA Kaleci, B., & Turgut, K. (2021). COMPARISON OF DEEP LEARNING TECHNIQUES FOR DETECTION OF DOORS IN INDOOR ENVIRONMENTS. Eskişehir Osmangazi Üniversitesi Mühendislik Ve Mimarlık Fakültesi Dergisi, 29(3), 396-412. https://doi.org/10.31796/ogummf.889095
AMA Kaleci B, Turgut K. COMPARISON OF DEEP LEARNING TECHNIQUES FOR DETECTION OF DOORS IN INDOOR ENVIRONMENTS. ESOGÜ Müh Mim Fak Derg. December 2021;29(3):396-412. doi:10.31796/ogummf.889095
Chicago Kaleci, Burak, and Kaya Turgut. “COMPARISON OF DEEP LEARNING TECHNIQUES FOR DETECTION OF DOORS IN INDOOR ENVIRONMENTS”. Eskişehir Osmangazi Üniversitesi Mühendislik Ve Mimarlık Fakültesi Dergisi 29, no. 3 (December 2021): 396-412. https://doi.org/10.31796/ogummf.889095.
EndNote Kaleci B, Turgut K (December 1, 2021) COMPARISON OF DEEP LEARNING TECHNIQUES FOR DETECTION OF DOORS IN INDOOR ENVIRONMENTS. Eskişehir Osmangazi Üniversitesi Mühendislik ve Mimarlık Fakültesi Dergisi 29 3 396–412.
IEEE B. Kaleci and K. Turgut, “COMPARISON OF DEEP LEARNING TECHNIQUES FOR DETECTION OF DOORS IN INDOOR ENVIRONMENTS”, ESOGÜ Müh Mim Fak Derg, vol. 29, no. 3, pp. 396–412, 2021, doi: 10.31796/ogummf.889095.
ISNAD Kaleci, Burak - Turgut, Kaya. “COMPARISON OF DEEP LEARNING TECHNIQUES FOR DETECTION OF DOORS IN INDOOR ENVIRONMENTS”. Eskişehir Osmangazi Üniversitesi Mühendislik ve Mimarlık Fakültesi Dergisi 29/3 (December 2021), 396-412. https://doi.org/10.31796/ogummf.889095.
JAMA Kaleci B, Turgut K. COMPARISON OF DEEP LEARNING TECHNIQUES FOR DETECTION OF DOORS IN INDOOR ENVIRONMENTS. ESOGÜ Müh Mim Fak Derg. 2021;29:396–412.
MLA Kaleci, Burak and Kaya Turgut. “COMPARISON OF DEEP LEARNING TECHNIQUES FOR DETECTION OF DOORS IN INDOOR ENVIRONMENTS”. Eskişehir Osmangazi Üniversitesi Mühendislik Ve Mimarlık Fakültesi Dergisi, vol. 29, no. 3, 2021, pp. 396-12, doi:10.31796/ogummf.889095.
Vancouver Kaleci B, Turgut K. COMPARISON OF DEEP LEARNING TECHNIQUES FOR DETECTION OF DOORS IN INDOOR ENVIRONMENTS. ESOGÜ Müh Mim Fak Derg. 2021;29(3):396-412.
