Research Article

Accuracy Detection in Some Sports Training Using Computer Vision and Deep Learning Techniques

Year 2023, Volume: 13, Issue: 2, pp. 133-158, 31.12.2023
https://doi.org/10.17678/beuscitech.1330481

Abstract

In this study, the performance of the MediaPipe Pose estimation model in estimating body position during different sports activities was investigated in terms of biomechanical parameters. The model was also evaluated by applying different machine learning algorithms (regression, classification, etc.) to the real-time data obtained from the camera and comparing their results. The results showed that the MediaPipe Pose estimation model is a suitable and effective tool for sports biomechanics: it estimated body position with high accuracy across different sports activities, and its performance was further improved by the machine learning algorithms. This study is pioneering research on the applicability of computer-vision-supported deep learning techniques to sports training and pose estimation, and the model has been developed into an application that can be used to improve athletes' performance.
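
To make the kind of pipeline described above concrete, the following minimal sketch (Python, assuming the mediapipe, opencv-python, and numpy packages) extracts pose landmarks from a camera stream with MediaPipe Pose and computes one illustrative biomechanical parameter, a knee joint angle. This is an illustration, not the authors' implementation: the camera index, confidence thresholds, and choice of landmark triplet are assumptions made for this example.

# Minimal sketch (not the authors' implementation): extract MediaPipe Pose landmarks
# from a camera stream and compute one biomechanical parameter, the right knee angle.
import cv2
import mediapipe as mp
import numpy as np

mp_pose = mp.solutions.pose

def joint_angle(a, b, c):
    # Angle at point b (degrees) between segments b->a and b->c.
    a, b, c = np.array(a), np.array(b), np.array(c)
    ba, bc = a - b, c - b
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

cap = cv2.VideoCapture(0)  # assumed webcam index; a video file path also works
with mp_pose.Pose(min_detection_confidence=0.5, min_tracking_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV delivers BGR frames.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            lm = results.pose_landmarks.landmark
            hip = (lm[mp_pose.PoseLandmark.RIGHT_HIP.value].x, lm[mp_pose.PoseLandmark.RIGHT_HIP.value].y)
            knee = (lm[mp_pose.PoseLandmark.RIGHT_KNEE.value].x, lm[mp_pose.PoseLandmark.RIGHT_KNEE.value].y)
            ankle = (lm[mp_pose.PoseLandmark.RIGHT_ANKLE.value].x, lm[mp_pose.PoseLandmark.RIGHT_ANKLE.value].y)
            print(f"right knee angle: {joint_angle(hip, knee, ankle):.1f} deg")
cap.release()

Per-frame angle features of this kind could then be labeled and passed to classifiers such as logistic regression, random forest, or gradient boosting for the algorithm comparison mentioned in the abstract; the study's actual feature set and labels are not specified here, so that step is omitted.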

References

  • [1] Ö. Çevik, N. K. Oğuzhanoglu, and Z. Gülbas, "Spor Yaralanmaları: Önleme ve Tedavi," Türk Fiziksel Tıp ve Rehabilitasyon Dergisi, vol. 64, no. 3, pp. 266-273, 2018.
  • [2] I. Shrier, "Strategic Assessment of Risk and Risk Tolerance (StARRT) framework for return-to-play decision-making," British journal of sports medicine, vol. 49(19), pp. 1311-1315, 2015.
  • [3] R. Bahr and I. Holme, "Risk factors for sports injuries-a methodological approach," British Journal of Sports Medicine, vol. 37, no. 5, pp. 384-392, 2003.
  • [4] Y. Çakır, "Biyomekaniksel Analizler," İstanbul Medipol Üniversitesi, Sağlık Bilimleri Enstitüsü, Fizyoterapi ve Rehabilitasyon Anabilim Dalı, 2019.
  • [5] İ. Sarı, "Biyomekanik ve Fiziksel Performansın Biyomekanik Analizi," Hacettepe Spor Bilimleri Dergisi, vol. 25(4), pp. 153-167, 2014.
  • [6] S. Mülayim, "Spor Biyomekaniği," Ankara Üniversitesi Beden Eğitimi ve Spor Yüksekokulu Dergisi, vol. 19(3), pp. 183-192, 2019.
  • [7] Y. Güneri, "Biyomekanik, sağlık bilimleri ve spor bilimleri açısından önemi," Türkiye Klinikleri Journal of Sports Sciences, vol. 11(1), pp. 16-22, 2019.
  • [8] D. A. Winter, "Biomechanics and motor control of human movement," John Wiley & Sons, 2019.
  • [9] V. M. Zatsiorsky and V. N. Seluyanov, "The mass and inertia characteristics of the main segments of the human body," Biomechanics VIII-B, pp. 115-122, 1983.
  • [10] F. Muradlı, "Derin Öğrenme Kullanılarak Görüntülerden İnsan Duruş Tespiti," Sakarya Üniversitesi Fen Bilimleri Enstitüsü Yüksek Lisans Tezi, 2021.
  • [11] A. Özdemir and A. Özdemir, "MediaPipe Pose ile Evde Egzersiz Yaparken Duruş Tespiti ve Rehberlik Etme," International Journal of Informatics Technologies, vol. 14, no. 2, pp. 123-132, 2021.
  • [12] M. Dersuneli, T. Gündüz, and Y. Kutlu, "Bul-Tak Oyuncağı Şekillerinin Klasik Görüntü İşleme ve Derin Öğrenme Yöntemleri ile Tespiti," Bitlis Eren Üniversitesi Fen Bilimleri Dergisi, vol. 10, no. 4, pp. 1290-1303, 2021.
  • [13] L. Deng and D. Yu, "Deep learning: methods and applications," Foundations and trends® in signal processing, vol. 7, no. 3–4, pp. 197-387, 2014.
  • [14] A. Yaman, S. Abdulkadir, B. Ümit, and E. Sami, "Deep learning-based face liveness detection in videos," in Proceedings of the IEEE International Artificial Intelligence and Data Processing Symposium (IDAP), Malatya, Turkey, 2017, pp. 16-17.
  • [15] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," nature, vol. 521, no. 7553, pp. 436-444, 2015.
  • [16] C.-H. Chen and D. Ramanan, "3D human pose estimation = 2D pose estimation + matching," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 7035-7043.
  • [17] C. Zheng et al., "Deep Learning-Based Human Pose Estimation: A Survey," ACM Computing Surveys, 2019.
  • [18] Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh, "Realtime multi-person 2d pose estimation using part affinity fields," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 7291-7299.
  • [19] M. H. BC, R. Prathibha, and S. Kumari, "Yoga AI trainer using TensorFlow.js PoseNet."
  • [20] R. A. Güler, N. Neverova, and I. Kokkinos, "DensePose: Dense human pose estimation in the wild," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 18-23.
  • [21] H.-S. Fang et al., "Alphapose: Whole-body regional multi-person pose estimation and tracking in real-time," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
  • [22] K. Sun, B. Xiao, D. Liu, and J. Wang, "Deep high-resolution representation learning for human pose estimation," in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 5693-5703.
  • [23] S. Garg, A. Saxena, and R. Gupta, "Yoga pose classification: a CNN and MediaPipe inspired deep learning approach for real-world application," Journal of Ambient Intelligence and Humanized Computing, pp. 1-12, 2022.
  • [24] H. H. Pham, H. Salmane, L. Khoudour, A. Crouzil, S. A. Velastin, and P. Zegers, "A unified deep framework for joint 3d pose estimation and action recognition from a single rgb camera," Sensors, vol. 20, no. 7, p. 1825, 2020.
  • [25] A. Howard et al., "Searching for mobilenetv3," in Proceedings of the IEEE/CVF international conference on computer vision, 2019, pp. 1314-1324.
  • [26] Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh, "Realtime multi-person 2d pose estimation using part affinity fields," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 1302-1310.
  • [27] K. Su, D. Yu, Z. Xu, X. Geng, and C. Wang, "Multi-person pose estimation with enhanced channel-wise and spatial information," in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 5674-5682.
  • [28] M. Orescanin, L. N. Smith, S. Sahu, P. Goyal, and S. R. Chhetri, "Editorial: Deep learning with limited labeled data for vision, audio, and text," (in English), Frontiers in Artificial Intelligence, Editorial vol. 6, 2023-June-13 2023, doi: 10.3389/frai.2023.1213419.
  • [29] American College of Sports Medicine, ACSM's Guidelines for Exercise Testing and Prescription. Lippincott Williams & Wilkins, 2013.
  • [30] R. F. Escamilla, G. S. Fleisig, T. M. Lowry, S. W. Barrentine, and J. R. Andrews, "A three-dimensional biomechanical analysis of the squat during varying stance widths," Medicine and science in sports and exercise, vol. 33, no. 6, pp. 984-998, 2001.
  • [31] R. F. Escamilla, A. C. Francisco, A. V. Kayes, K. P. Speer, and C. T. Moorman 3rd, "An electromyographic analysis of sumo and conventional style deadlifts," Medicine and science in sports and exercise, vol. 34, no. 4, pp. 682-688, 2002.
  • [32] Z. Cömert and A. Kocamaz, "A study of artificial neural network training algorithms for classification of cardiotocography signals," Bitlis Eren University Journal of Science and Technology, vol. 7, no. 2, pp. 93-103, 2017.
  • [33] M. Pilgrim, "Serializing Python Objects," in Dive Into Python 3: Springer, 2009, pp. 205-223.
  • [34] N. Çetin, Biyomekanik, vol. 1. Ankara: Setma Baskı, 1997, pp. 4-41.
  • [35] C. Açıkada and H. Demirel, Biyomekanik ve Hareket Bilgisi. Eskişehir: AÜAÖF, 1993, p. 15.
  • [36] G. Yavuzer, "The use of computerized gait analysis in the assessment of neuromusculoskeletal disorders," Journal of Physical Medicine and Rehabilitation Sciences, vol. 10, no. 2, pp. 43-45, 2007.
  • [37] D. A. Winter, Biomechanics and Motor Control of Human Movement, 2nd ed. Canada: John Wiley & Sons, 1990.
  • [38] W. Braune and O. Fischer, The Human Gait (trans. P. Maquet and R. Furlong). Heidelberg, Germany: Springer-Verlag, 1987.
  • [39] Y. I. Abdel-Aziz and H. M. Karara, "Direct linear transformation from comparator coordinates into object space coordinates in close-range photogrammetry," in Proceedings of the ASP/UI Symposium on Close-Range Photogrammetry, American Society of Photogrammetry, Falls Church, VA, 1971, pp. 1-18.
  • [40] E. Civek, "Comparison of kinematic results between METU-KISS & Ankara University-VICON gait analysis systems," M.Sc. thesis, Department of Mechanical Engineering, Middle East Technical University, 2006.
  • [41] M. Orescanin, L. N. Smith, S. Sahu, P. Goyal, and S. R. Chhetri, "Editorial: Deep learning with limited labeled data for vision, audio, and text," (in English), Frontiers in Artificial Intelligence, Editorial vol. 6, 2023-June-13 2023, doi: 10.3389/frai.2023.1213419.
  • [42] Ö. Çokluk, "Lojistik regresyon analizi: Kavram ve uygulama," Kuram ve Uygulamada Eğitim Bilimleri, vol. 10, no. 3, pp. 1357-1407, 2010.
  • [43] C. F. İşçen et al., "Su Kalitesi Değişimine Etki Eden Değişkenlerin Lojistik Regresyon, Lojistik-Ridge ve Lojistik-Lasso Yöntemleri ile Tespiti," Biyoloji Bilimleri Araştırma Dergisi, vol. 14, no. 1, pp. 1-12.
  • [44] F. Erdem et al., "Rastgele orman yöntemi kullanılarak kıyı çizgisi çıkarımı İstanbul örneği," Geomatik, vol. 3, no. 2, pp. 100-107, 2018.
  • [45] N. Chakrabarty et al., "Flight arrival delay prediction using gradient boosting classifier," in Emerging Technologies in Data Mining and Information Security: Proceedings of IEMIS 2018, vol. 2. Singapore: Springer, 2019.
  • [46] J.-L. Chung, L.-Y. Ong, and M.-C. Leow, "Comparative Analysis of Skeleton-Based Human Pose Estimation," Future Internet, vol. 14, no. 12, p. 380, 2022. [Online]. Available: https://www.mdpi.com/1999-5903/14/12/380.
  • [47] C. Lugaresi et al., "Mediapipe: A framework for building perception pipelines," arXiv preprint arXiv:1906.08172, 2019.
  • [48] G. Papandreou et al., "Towards accurate multi-person pose estimation in the wild," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 4903-4911.
  • [49] B. Jo and S. Kim, "Comparative analysis of OpenPose, PoseNet, and MoveNet models for pose estimation in mobile devices," Traitement du Signal, vol. 39, no. 1, p. 119, 2022.
  • [50] F. Duman, T. D. İpek, and M. Saraçlar, "Unsupervised Discovery of Fingerspelled Letters in Sign Language Videos," in 2021 29th Signal Processing and Communications Applications Conference (SIU), 2021: IEEE, pp. 1-4.
  • [51] M. Mundt, Z. Born, M. Goldacre, and J. Alderson, "Estimating Ground Reaction Forces from Two-Dimensional Pose Data: A Biomechanics-Based Comparison of AlphaPose, BlazePose, and OpenPose," Sensors, vol. 23, no. 1, p. 78, 2022.
  • [52] L. Song, G. Yu, J. Yuan, and Z. Liu, "Human pose estimation and its application to action recognition: A survey," Journal of Visual Communication and Image Representation, vol. 76, p. 103055, 2021.
  • [53] L. Pishchulin et al., "Deepcut: Joint subset partition and labeling for multi person pose estimation," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 4929-4937.
  • [54] T.-Y. Lin et al., "Microsoft coco: Common objects in context," in Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, 2014: Springer, pp. 740-755.
  • [55] D. C. Luvizon, D. Picard, and H. Tabia, "2d/3d pose estimation and action recognition using multitask deep learning," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 5137-5146.

Details

Primary Language: English
Subjects: Image Processing
Section: Research Article
Authors

Nurettin Acı 0009-0002-2534-2288

Muhammed Fatih Kuluöztürk 0000-0001-8581-2179

Publication Date: December 31, 2023
Submission Date: July 20, 2023
Published in Issue: Year 2023, Volume: 13, Issue: 2

Cite

IEEE: N. Acı and M. F. Kuluöztürk, "Accuracy Detection in Some Sports Training Using Computer Vision and Deep Learning Techniques," Bitlis Eren University Journal of Science and Technology, vol. 13, no. 2, pp. 133–158, 2023, doi: 10.17678/beuscitech.1330481.