Review

Derin Öğrenme Yöntemleri ile 3B Nokta Bulutlarının Semantik Segmentasyonuna Genel bir Bakış

Year 2023, Volume: 11, Issue: 1, pp. 342–357, 31.01.2023
https://doi.org/10.29130/dubited.1004211

Abstract

Semantic segmentation is a data processing method that assigns each labeled pixel to a meaningful class in order to give meaning to surrounding objects. The development of deep learning (DL)-based methods has increased interest in point cloud (PC) segmentation methods. 3D point cloud semantic segmentation divides points that share the same features within the same region into homogeneous regions, across 3D datasets obtained with different scanning tools. Using semantic segmentation to understand 3D objects through 3D point clouds has been an important starting point, and the use of deep learning methods in particular has made this area a focal point. The unique problems encountered by new deep learning-based methods, approaches, and models when processing large unstructured 3D point clouds show that this field is open to further development. To understand the success of these new methods, their performance on the benchmark datasets ShapeNet, S3DIS, ScanNet, and SemanticKITTI is evaluated. Notable research contributing to the field of 3D point cloud segmentation is reviewed, and the advantages, disadvantages, and contributions of the proposed methods are presented. The architectures of all presented methods and their results on widely used datasets are discussed, and insights to guide future research are offered.


Review on Semantic Segmentation of 3D Point Clouds with Deep Learning Methods


Abstract

Semantic segmentation is a data processing method that assigns each labeled pixel to a meaningful class in order to give meaning to surrounding objects. The development of deep learning-based methods has increased interest in point cloud segmentation. 3D point cloud semantic segmentation divides points that share the same features within the same region into homogeneous regions, across 3D datasets obtained with different scanning tools. Using semantic segmentation to understand 3D objects through 3D point clouds has been an important starting point, and the use of deep learning methods in particular has made this area a focal point. The unique problems encountered by new deep learning-based methods and approaches when processing large unstructured 3D point clouds show that this field is open to further development. To understand the achievements of these new methods, their performance on the 3D benchmark datasets ShapeNet, S3DIS, ScanNet, and SemanticKITTI is evaluated. Important research contributing to 3D point cloud segmentation is analyzed, and the advantages, disadvantages, and contributions of the proposed methods are presented. The architectures of all presented methods and their results on widely used datasets are discussed, and information that could guide future research is offered.
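The core task the abstract describes, assigning every 3D point a semantic class label, can be sketched in a few lines. The snippet below is a toy illustration only (not a method from the reviewed paper): it labels each point with the class of its nearest hypothetical class prototype, whereas the surveyed deep learning methods learn per-point features instead of using fixed prototypes.

```python
import numpy as np

def segment_points(points: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """Assign each (x, y, z) point the label of its nearest class prototype.

    points:     (N, 3) array of 3D coordinates.
    prototypes: (C, 3) array, one representative point per semantic class
                (a stand-in for learned class features).
    Returns an (N,) integer array of class labels in [0, C).
    """
    # Pairwise squared Euclidean distances via broadcasting: shape (N, C).
    d2 = ((points[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    # Each point takes the label of its closest prototype.
    return d2.argmin(axis=1)

# Two hypothetical classes: "ground" near z = 0 and "roof" near z = 3.
prototypes = np.array([[0.0, 0.0, 0.0],
                       [0.0, 0.0, 3.0]])
points = np.array([[0.1, 0.2, 0.05],
                   [1.0, -1.0, 2.9],
                   [0.3, 0.4, 0.1]])
labels = segment_points(points, prototypes)
print(labels)  # [0 1 0]
```

The output is one label per input point, which is exactly the shape of result a semantic segmentation network such as PointNet produces; the deep methods reviewed in the article replace the fixed prototypes with features learned from benchmark datasets.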


Details

Primary Language: Turkish
Subjects: Engineering
Section: Articles
Authors

Muhammed Ahmet Demirtaş 0000-0003-4092-7284

Publication Date: January 31, 2023
Published Issue: Year 2023, Volume: 11, Issue: 1

How to Cite

APA Demirtaş, M. A. (2023). Derin Öğrenme Yöntemleri ile 3B Nokta Bulutlarının Semantik Segmentasyonuna Genel bir Bakış. Düzce Üniversitesi Bilim Ve Teknoloji Dergisi, 11(1), 342-357. https://doi.org/10.29130/dubited.1004211
AMA Demirtaş MA. Derin Öğrenme Yöntemleri ile 3B Nokta Bulutlarının Semantik Segmentasyonuna Genel bir Bakış. DÜBİTED. Ocak 2023;11(1):342-357. doi:10.29130/dubited.1004211
Chicago Demirtaş, Muhammed Ahmet. “Derin Öğrenme Yöntemleri Ile 3B Nokta Bulutlarının Semantik Segmentasyonuna Genel Bir Bakış”. Düzce Üniversitesi Bilim Ve Teknoloji Dergisi 11, sy. 1 (Ocak 2023): 342-57. https://doi.org/10.29130/dubited.1004211.
EndNote Demirtaş MA (01 Ocak 2023) Derin Öğrenme Yöntemleri ile 3B Nokta Bulutlarının Semantik Segmentasyonuna Genel bir Bakış. Düzce Üniversitesi Bilim ve Teknoloji Dergisi 11 1 342–357.
IEEE M. A. Demirtaş, “Derin Öğrenme Yöntemleri ile 3B Nokta Bulutlarının Semantik Segmentasyonuna Genel bir Bakış”, DÜBİTED, c. 11, sy. 1, ss. 342–357, 2023, doi: 10.29130/dubited.1004211.
ISNAD Demirtaş, Muhammed Ahmet. “Derin Öğrenme Yöntemleri Ile 3B Nokta Bulutlarının Semantik Segmentasyonuna Genel Bir Bakış”. Düzce Üniversitesi Bilim ve Teknoloji Dergisi 11/1 (Ocak 2023), 342-357. https://doi.org/10.29130/dubited.1004211.
JAMA Demirtaş MA. Derin Öğrenme Yöntemleri ile 3B Nokta Bulutlarının Semantik Segmentasyonuna Genel bir Bakış. DÜBİTED. 2023;11:342–357.
MLA Demirtaş, Muhammed Ahmet. “Derin Öğrenme Yöntemleri Ile 3B Nokta Bulutlarının Semantik Segmentasyonuna Genel Bir Bakış”. Düzce Üniversitesi Bilim Ve Teknoloji Dergisi, c. 11, sy. 1, 2023, ss. 342-57, doi:10.29130/dubited.1004211.
Vancouver Demirtaş MA. Derin Öğrenme Yöntemleri ile 3B Nokta Bulutlarının Semantik Segmentasyonuna Genel bir Bakış. DÜBİTED. 2023;11(1):342-57.