Research Article

Human Instance Segmentation Based on Omega-Shape Using Deep Learning

Year 2025, Volume: 6, Issue: 1, 31–41
https://doi.org/10.53525/jster.1469697

Abstract

Human detection and segmentation in unconstrained environments is an important and difficult task with many applications, including person tracking, pedestrian detection, and head counting. Detecting a human in a single-object scene is relatively easy, but the problem becomes much harder in crowded and cluttered environments. Most existing algorithms detect and segment the whole person in a scene; however, their performance degrades under occlusion and background clutter. To improve performance in such environments, an existing technique detects a distinctive part of the human body, the head-shoulder "omega shape", instead of the full body. However, the detected bounding box still contains background pixels, which limits the performance of downstream applications such as tracking. The objective of this research is therefore to accurately segment the omega shape itself, so that downstream applications receive a human appearance model free of background clutter. We trained and evaluated Mask R-CNN and a YOLO+U-Net pipeline and observed a trade-off between accuracy and computational cost: the testing accuracy of Mask R-CNN and YOLO+U-Net is 92.6% and 88.4%, while the inference speed is 6 fps and 29 fps, respectively.
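
To make the two-stage YOLO+U-Net variant concrete, the minimal sketch below shows how detected omega (head-shoulder) boxes could be cropped and passed to a U-Net for pixel-level segmentation. This is an illustrative sketch only, not the authors' released code: detect_omega and unet are hypothetical stand-ins for the trained detector and segmentation network, and the 128x128 crop size and 0.5 mask threshold are assumed values.

    # Sketch of the two-stage pipeline: a YOLO-style detector proposes
    # omega (head-shoulder) boxes; a U-Net segments each cropped box.
    import torch
    import torch.nn.functional as F

    def segment_omega(image, detect_omega, unet, mask_threshold=0.5):
        """image: float tensor (3, H, W) in [0, 1]; returns one boolean
        full-image mask per detected omega region."""
        masks = []
        # Stage 1 (hypothetical detector): returns integer pixel boxes
        # as (x1, y1, x2, y2) tuples, one per head-shoulder region.
        for x1, y1, x2, y2 in detect_omega(image):
            crop = image[:, y1:y2, x1:x2]
            # Stage 2: the U-Net expects a fixed input size; resize the
            # crop, predict per-pixel foreground logits, resize back.
            inp = F.interpolate(crop.unsqueeze(0), size=(128, 128),
                                mode="bilinear", align_corners=False)
            prob = torch.sigmoid(unet(inp))       # assumed (1, 1, 128, 128)
            prob = F.interpolate(prob, size=(y2 - y1, x2 - x1),
                                 mode="bilinear", align_corners=False)
            # Paste the thresholded omega mask into full-image coordinates,
            # so downstream trackers see no background pixels.
            full = torch.zeros(image.shape[1:], dtype=torch.bool)
            full[y1:y2, x1:x2] = prob[0, 0] > mask_threshold
            masks.append(full)
        return masks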

References

  • [1]. M. Harville, “Stereo person tracking with adaptive plan-view templates of height and occupancy statistics,” Image and Vision Computing, vol. 22, no. 2, pp. 127–142, 2004.
  • [2]. A. Senior et al., “Tracking people with probabilistic appearance models,” in ECCV workshop on Performance Evaluation of Tracking and Surveillance Systems. Citeseer, 2002, pp. 48–55.
  • [3]. A. Elgammal and L. S. Davis, “Probabilistic framework for segmenting people under occlusion,” in Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001, vol. 2. IEEE, 2001, pp. 145–152.
  • [4]. W. Hu, M. Hu, X. Zhou, T. Tan, J. Lou, and S. Maybank, “Principal axis-based correspondence between multiple cameras for people tracking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 4, pp. 663–671, 2006.
  • [5]. J. Rittscher, P. H. Tu, and N. Krahnstoever, “Simultaneous estimation of segmentation and shape,” in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), vol. 2. IEEE, 2005, pp. 486–493.
  • [6]. C.-J. Pai, H.-R. Tyan, Y.-M. Liang, H.-Y. M. Liao, and S.-W. Chen, “Pedestrian detection and tracking at crossroads,” Pattern Recognition, vol. 37, no. 5, pp. 1025–1034, 2004.
  • [7]. B. Heisele and C. Woehler, “Motion-based recognition of pedestrians,” in Proceedings. Fourteenth International Conference on Pattern Recognition (Cat. No. 98EX170), vol. 2. IEEE, 1998, pp. 1325–1330.
  • [8]. R. Cutler and L. S. Davis, “Robust real-time periodic motion detection, analysis, and applications,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 781–796, 2000.
  • [9]. S. A. Niyogi, E. H. Adelson et al., “Analyzing and recognizing walking figures in XYT,” in CVPR, vol. 94, 1994, pp. 469–474.
  • [10]. ——, “Analyzing and recognizing walking figures in XYT,” in CVPR, vol. 94, 1994, pp. 469–474.
  • [11]. N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR’05), vol. 1. IEEE, 2005, pp. 886–893.
  • [12]. P. Dollár, R. Appel, S. Belongie, and P. Perona, “Fast feature pyramids for object detection,” IEEE transactions on pattern analysis and machine intelligence, vol. 36, no. 8, pp. 1532–1545, 2014.
  • [13]. X. Wang, T. X. Han, and S. Yan, “An HOG-LBP human detector with partial occlusion handling,” in 2009 IEEE 12th international conference on computer vision. IEEE, 2009, pp. 32–39.
  • [14]. S. Paisitkriangkrai, C. Shen, and A. van den Hengel, “Pedestrian detection with spatially pooled features and structured ensemble learning,” IEEE transactions on pattern analysis and machine intelligence, vol. 38, no. 6, pp. 1243–1257, 2015.
  • [15]. T. Watanabe and S. Ito, “Two co-occurrence histogram features using gradient orientations and local binary patterns for pedestrian detection,” in 2013 2nd IAPR Asian Conference on Pattern Recognition. IEEE, 2013, pp. 415–419.
  • [16]. S. Maji, A. C. Berg, and J. Malik, “Classification using intersection kernel support vector machines is efficient,” in 2008 IEEE conference on computer vision and pattern recognition. IEEE, 2008, pp. 1–8.
  • [17]. R. Benenson, M. Mathias, T. Tuytelaars, and L. Van Gool, “Seeking the strongest rigid detector,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 3666–3673.
  • [18]. S. Zhang, R. Benenson, B. Schiele et al., “Filtered channel features for pedestrian detection.” in CVPR, vol. 1, no. 2, 2015, p. 4.
  • [19]. M. Li, Z. Zhang, K. Huang, and T. Tan, “Rapid and robust human detection and tracking based on omega-shape features,” in 2009 16th IEEE International Conference on Image Processing (ICIP). IEEE, 2009, pp. 2545–2548.
  • [20]. J. Hosang, M. Omran, R. Benenson, and B. Schiele, “Taking a deeper look at pedestrians,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 4073–4082.
  • [21]. S. Zhang, R. Benenson, M. Omran, J. Hosang, and B. Schiele, “How far are we from solving pedestrian detection?” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 1259–1267.
  • [22]. Y. Tian, P. Luo, X. Wang, and X. Tang, “Deep learning strong parts for pedestrian detection,” in Proceedings of the IEEE international conference on computer vision, 2015, pp. 1904–1912.
  • [23]. ——, “Pedestrian detection aided by deep learning semantic tasks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 5079–5087.
  • [24]. D. Ribeiro, J. C. Nascimento, A. Bernardino, and G. Carneiro, “Improving the performance of pedestrian detectors using convolutional learning,” Pattern Recognition, vol. 61, pp. 641–649, 2017.
  • [25]. R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2014, pp. 580–587.
  • [26]. P. Dollár, Z. Tu, P. Perona, and S. Belongie, “Integral channel features,” in BMVC, 2009.
  • [27]. P. Dollár, R. Appel, S. Belongie, and P. Perona, “Fast feature pyramids for object detection,” IEEE transactions on pattern analysis and machine intelligence, vol. 36, no. 8, pp. 1532–1545, 2014.
  • [28]. W. Nam, P. Dollár, and J. H. Han, “Local decorrelation for improved pedestrian detection,” Advances in neural information processing systems, vol. 27, pp. 424–432, 2014.
  • [29]. R. Benenson, M. Mathias, T. Tuytelaars, and L. Van Gool, “Seeking the strongest rigid detector,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 3666–3673.
  • [30]. P. Sermanet, K. Kavukcuoglu, S. Chintala, and Y. LeCun, “Pedestrian detection with unsupervised multi-stage feature learning,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2013, pp. 3626–3633.
  • [31]. A. Angelova, A. Krizhevsky, V. Vanhoucke, A. Ogale, and D. Ferguson, “Real-time pedestrian detection with deep network cascades,” in BMVC, 2015.
  • [32]. X. Du, M. El-Khamy, J. Lee, and L. Davis, “Fused dnn: A deep neural network fusion approach to fast and robust pedestrian detection,” in 2017 IEEE winter conference on applications of computer vision (WACV). IEEE, 2017, pp. 953–961.
  • [33]. S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” IEEE transactions on pattern analysis and machine intelligence, vol. 39, no. 6, pp. 1137–1149, 2016.
  • [34]. L. Zhang, L. Lin, X. Liang, and K. He, “Is faster r-cnn doing well for pedestrian detection?” in European conference on computer vision. Springer, 2016, pp. 443–457.
  • [35]. Q. Hu, P. Wang, C. Shen, A. van den Hengel, and F. Porikli, “Pushing the limits of deep cnns for pedestrian detection,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 28, no. 6, pp. 1358–1368, 2017.
  • [36]. Z. Cai, M. J. Saberian, and N. Vasconcelos, “Learning complexity-aware cascades for pedestrian detection,” IEEE transactions on pattern analysis and machine intelligence, 2019.
  • [37]. J. Li, X. Liang, S. Shen, T. Xu, J. Feng, and S. Yan, “Scale-aware fast r-cnn for pedestrian detection,” IEEE transactions on Multimedia, vol. 20, no. 4, pp. 985–996, 2017.
  • [38]. Z. Cai, Q. Fan, R. S. Feris, and N. Vasconcelos, “A unified multi-scale deep convolutional neural network for fast object detection,” in European conference on computer vision. Springer, 2016, pp. 354–370.
  • [39]. S. Zhang, R. Benenson, and B. Schiele, “Citypersons: A diverse dataset for pedestrian detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 3213–3221.
  • [40]. Y. Xu, X. Zhou, P. Liu, and H. Xu, “Rapid pedestrian detection based on deep omega-shape features with partial occlusion handling,” Neural Processing Letters, vol. 49, no. 3, pp. 923–937, 2019.
  • [41]. K. Chen, X. Song, X. Zhai, B. Zhang, B. Hou, and Y. Wang, “An integrated deep learning framework for occluded pedestrian tracking,” IEEE Access, vol. 7, pp. 26060–26072, 2019.
  • [42]. PASCAL VOC 2010 (PASCAL-Part annotations). [Online]. Available: http://roozbehm.info/pascal-parts/pascal-parts.html
  • [43]. “PASCAL VOC 2010.” [Online]. Available: http://host.robots.ox.ac.uk/pascal/VOC/voc2010/index.html
  • [44]. J. Redmon and A. Farhadi, “Yolov3: An incremental improvement,” arXiv preprint arXiv:1804.02767, 2018.
  • [45]. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention. Springer, 2015, pp. 234–241.
  • [46]. V. Hnatushenko and V. Zhernovyi, “Method of improving instance segmentation for very high resolution remote sensing imagery using deep learning,” in International Conference on Data Stream Mining and Processing. Cham: Springer International Publishing, 2020, pp. 323–333.
  • [47]. T. T. Anh, K. Nguyen-Tuan, T. M. Quan, and W. K. Jeong, “Reinforced coloring for end-to-end instance segmentation,” arXiv preprint arXiv:2005.07058, 2020.
  • [48]. P. Hurtik, V. Molek, J. Hula, M. Vajgl, P. Vlasanek, and T. Nejezchleba, “Poly-YOLO: higher speed, more precise detection and instance segmentation for YOLOv3,” Neural Computing and Applications, vol. 34, no. 10, pp. 8275–8290, 2022.
  • [49]. J. Wu, Y. Jiang, S. Bai, W. Zhang, and X. Bai, “SeqFormer: Sequential transformer for video instance segmentation,” in European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022, pp. 553–569.
  • [50]. J. Pei, T. Cheng, D.-P. Fan, H. Tang, C. Chen, and L. Van Gool, “OSFormer: One-stage camouflaged instance segmentation with transformers,” in European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022, pp. 19–37.


Details

Primary Language: English
Subjects: Computer Software
Section: Research Articles
Authors

Huma Sheraz 0009-0002-8562-7947

Zuhaib Ahmed Khan 0009-0007-4849-1303

Muhammad Awais 0009-0006-3101-524X

Publication Date
Submission Date: April 17, 2024
Acceptance Date: February 7, 2025
Published Issue: Year 2025, Volume: 6, Issue: 1

Cite

APA Sheraz, H., Ahmed Khan, Z., & Awais, M. (n.d.). Human Instance Segmentation Based on Omega-Shape Using Deep Learning. Journal of Science, Technology and Engineering Research, 6(1), 31-41. https://doi.org/10.53525/jster.1469697
AMA Sheraz H, Ahmed Khan Z, Awais M. Human Instance Segmentation Based on Omega-Shape Using Deep Learning. Journal of Science, Technology and Engineering Research. 6(1):31-41. doi:10.53525/jster.1469697
Chicago Sheraz, Huma, Zuhaib Ahmed Khan, and Muhammad Awais. “Human Instance Segmentation Based on Omega-Shape Using Deep Learning”. Journal of Science, Technology and Engineering Research 6, no. 1 (n.d.): 31-41. https://doi.org/10.53525/jster.1469697.
EndNote Sheraz H, Ahmed Khan Z, Awais M Human Instance Segmentation Based on Omega-Shape Using Deep Learning. Journal of Science, Technology and Engineering Research 6 1 31–41.
IEEE H. Sheraz, Z. Ahmed Khan, and M. Awais, “Human Instance Segmentation Based on Omega-Shape Using Deep Learning”, Journal of Science, Technology and Engineering Research, vol. 6, no. 1, pp. 31–41, doi: 10.53525/jster.1469697.
ISNAD Sheraz, Huma et al. “Human Instance Segmentation Based on Omega-Shape Using Deep Learning”. Journal of Science, Technology and Engineering Research 6/1 (n.d.), 31-41. https://doi.org/10.53525/jster.1469697.
JAMA Sheraz H, Ahmed Khan Z, Awais M. Human Instance Segmentation Based on Omega-Shape Using Deep Learning. Journal of Science, Technology and Engineering Research. 6:31–41.
MLA Sheraz, Huma et al. “Human Instance Segmentation Based on Omega-Shape Using Deep Learning”. Journal of Science, Technology and Engineering Research, vol. 6, no. 1, pp. 31-41, doi:10.53525/jster.1469697.
Vancouver Sheraz H, Ahmed Khan Z, Awais M. Human Instance Segmentation Based on Omega-Shape Using Deep Learning. Journal of Science, Technology and Engineering Research. 6(1):31-4.
Works published in the journal are licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0).
