Conference Paper

Object Detection for Safe Working Environments using YOLOv4 Deep Learning Model

Year 2021, Issue 26 - EJOSAT Special Issue 2021 (HORA), pp. 343-351, 31.07.2021
https://doi.org/10.31590/ejosat.951733

Abstract

The health and safety of employees in workplaces has remained important ever since the concept of production emerged. Recent advances in computer vision and deep learning have made these techniques widely applicable as a secondary tool for ensuring occupational safety from surveillance videos in work environments, minimizing human-induced errors. In this study, a method based on the YOLOv4 deep learning model is proposed to monitor the use of personal protective equipment (PPE) in videos and to detect unsafe actions in the working environments of industrial production facilities. First, a dataset is created from videos collected in different working environments. Then, on this video dataset, the YOLOv4 framework is used to detect personal protective equipment such as helmets, vests, masks, gloves, and eyeglasses worn by workers in industrial factories, and to determine whether the appropriate equipment is used correctly. In the experimental studies conducted within the scope of this work, training the YOLOv4 network achieves a mean average precision (mAP) of 91.18%. In addition, values of 0.89, 0.91, 0.90, 70.35%, and 1.1147 are obtained for the other evaluation metrics: precision, recall, F1-score, intersection over union (IoU), and average loss, respectively. As a result, the proposed approach successfully enables real-time inspection of the videos collected from cameras installed in factories, interpretation of the scene, and monitoring of safe working environments.
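The evaluation metrics quoted above (precision, recall, F1-score, IoU) are standard object-detection measures. The following is an illustrative sketch of how they are typically computed, not the authors' evaluation code:

```python
def iou(box_a, box_b):
    """Intersection over Union for two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)


def detection_scores(tp, fp, fn):
    """Precision, recall and F1-score from detection counts.

    tp: correct detections, fp: spurious detections, fn: missed objects.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

A detection is usually counted as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5); mAP is then the average precision over recall levels, averaged across classes.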

Supporting Institution

Administration of Scientific Research Projects of Bilecik Şeyh Edebali University

Project Number

2019-02.BŞEÜ.01-03

Acknowledgments

We would like to thank “Kafaoğlu Metal Plastik Makine San. ve Tic. A.Ş.”, “Tek Metal ve Plastik Endüstriyel Mamulleri San. Tic. Ltd. Şti.” and the “Vocational School of Bilecik Şeyh Edebali University” for allowing us to use the image/video data used in this study. This study was also financially supported by the Administration of Scientific Research Projects of Bilecik Şeyh Edebali University under project number 2019-02.BŞEÜ.01-03.

References

  • Aybek, A., Güvercin, Ö., & Hurşitoğlu, Ç. (2003). Teknik personelin iş kazalarının nedenleri ve önlenmesine yönelik görüşlerinin belirlenmesi üzerine bir araştırma. KSÜ Fen ve Mühendislik Dergisi, 6(2), 91-100.
  • Balakreshnan, B., Richards, G., Nanda, G., Mao, H., Athinarayanan, R., & Zaccaria, J. (2020). PPE compliance detection using artificial intelligence in learning factories. Procedia Manufacturing, 45, 277-282.
  • Barro-Torres, S., Fernández-Caramés, T. M., Pérez-Iglesias, H. J., & Escudero, C. J. (2012). Real-time personal protective equipment monitoring system. Computer Communications, 36(1), 42-50.
  • Bochkovskiy, A., Wang, C.-Y., & Liao, H.-Y. M. (2020). YOLOv4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934.
  • Ceylan, H., & Ceylan, H. (2012). Analysis of occupational accidents according to the sectors in Turkey. Gazi University Journal of Science, 25(4), 909-918.
  • Chen, S., & Demachi, K. (2021). Towards on-site hazards identification of improper use of personal protective equipment using deep learning-based geometric relationships and hierarchical scene graph. Automation in Construction, 125, 103619.
  • Ding, L., Fang, W., Luo, H., Love, P. E., Zhong, B., & Ouyang, X. (2018). A deep hybrid learning model to detect unsafe behavior: Integrating convolution neural networks and long short-term memory. Automation in Construction, 86, 118-124.
  • Guo, B. H., Zou, Y., Fang, Y., Goh, Y. M., & Zou, P. X. (2021). Computer vision technologies for safety science and management in construction: A critical review and future research directions. Safety Science, 135, 105130.
  • Guo, F., Qian, Y., & Shi, Y. (2021). Real-time railroad track components inspection based on the improved YOLOv4 framework. Automation in Construction, 125, 103596.
  • Heinrich, H. W., & Granniss, E. (1959). Industrial accident prevention. McGraw-Hill Book Company.
  • Kelm, A., Laußat, L., Meins-Becker, A., Platz, D., Khazaee, M. J., Costin, A. M., Helmus, M., & Teizer, J. (2013). Mobile passive Radio Frequency Identification (RFID) portal for automated and rapid control of Personal Protective Equipment (PPE) on construction sites. Automation in Construction, 36, 38-52.
  • Lee, H.-S., Lee, K.-P., Park, M., Baek, Y., & Lee, S. (2012). RFID-based real-time locating system for construction safety management. Journal of Computing in Civil Engineering, 26(3), 366-377.
  • Long, X., Deng, K., Wang, G., Zhang, Y., Dang, Q., Gao, Y., Shen, H., Ren, J., Han, S., & Ding, E. (2020). PP-YOLO: An effective and efficient implementation of object detector. arXiv preprint arXiv:2007.12099.
  • Nath, N. D., Behzadan, A. H., & Paal, S. G. (2020). Deep learning for site safety: Real-time detection of personal protective equipment. Automation in Construction, 112, 103085.
  • Nill, R. J. (2019). How to select and use personal protective equipment. Handbook of Occupational Safety and Health, 469-494.
  • Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 779-788).
  • Ruser, J., & Butler, R. (2010). The economics of occupational safety and health. Now Publishers Inc.
  • Wu, D., Lv, S., Jiang, M., & Song, H. (2020). Using channel pruning-based YOLO v4 deep learning algorithm for the real-time and accurate detection of apple flowers in natural environments. Computers and Electronics in Agriculture, 178, 105742.
  • Wu, H., & Zhao, J. (2018). An intelligent vision-based approach for helmet identification for work safety. Computers in Industry, 100, 267-277.
  • Wu, J., Cai, N., Chen, W., Wang, H., & Wang, G. (2019). Automatic detection of hardhats worn by construction personnel: A deep learning approach and benchmark dataset. Automation in Construction, 106, 102894.
  • Yu, W.-D., Liao, H.-C., Hsiao, W.-T., Chang, H.-K., Tsai, C.-K., & Lin, C.-C. (2020). Automatic safety monitoring of construction hazard working zone: A semantic segmentation based deep learning approach. In Proceedings of the 2020 7th International Conference on Automation and Logistics (ICAL) (pp. 54-59).


Details

Primary Language: English
Subjects: Engineering
Section: Articles
Authors

Oğuzhan Önal 0000-0002-4336-5064

Emre Dandıl 0000-0001-6559-1399

Project Number: 2019-02.BŞEÜ.01-03
Publication Date: July 31, 2021
Published Issue: Year 2021, Issue 26 - EJOSAT Special Issue 2021 (HORA)

How to Cite

APA: Önal, O., & Dandıl, E. (2021). Object Detection for Safe Working Environments using YOLOv4 Deep Learning Model. Avrupa Bilim Ve Teknoloji Dergisi, (26), 343-351. https://doi.org/10.31590/ejosat.951733