Research Article

Development of a Human-Robot Interaction System for Industrial Applications

Year 2023, Volume: 11, Issue: 4, 306 - 315, 22.12.2023
https://doi.org/10.17694/bajece.1326072

Abstract

The use of robots is increasing day by day. This study aimed to develop manufacturing-assistant robot software for small production plants that do not perform mass production. Its main purpose is to remove the difficulty of recruiting an expert robot operator and to make robots easier to use for non-experts. The developed software consists of three modules: a convolutional neural network (CNN) module, a process selection and trajectory generation module, and a trajectory regulation module. Before these modules are executed, the operator records a video in which the desired process is indicated by hand gestures and its trajectory is traced with the index finger. The recorded video is then split into frames. The CNN module classifies these frames and, from the same images, estimates the positions of the landmarks (the joint and fingernail of the index finger). Eight different pre-trained CNN structures were tested in the CNN module, and the best-performing one, the Xception structure (test loss = 0.0051), was used. The desired process was determined and its trajectory was generated from the CNN output data. The trajectory regulation module detected the connection between the generated trajectory and the object and removed unnecessary trajectory segments. The regulated trajectory and the desired tasks, such as welding or sealing, were simulated on an industrial robot in a simulation environment. As a result, the developed software allows non-expert operators to program an industrial robot for companies whose production lines are not standardized.
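
The CNN module described above performs two tasks at once: it classifies each video frame according to the requested process and it regresses the positions of the index-finger landmarks (joint and fingernail). As a minimal sketch only, not the authors' implementation, the Python code below shows how such a two-headed model could be assembled around a pre-trained Xception backbone with Keras; the input size, number of gesture classes, landmark encoding, and loss weights are assumptions.

# Minimal sketch (assumed setup, not the authors' code): a pre-trained Xception
# backbone with two heads, one classifying the hand gesture that selects the
# process and one regressing the index-finger landmark coordinates.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import Xception

NUM_GESTURES = 3    # assumed number of gesture classes (e.g. weld, seal, none)
NUM_LANDMARKS = 2   # index-finger joint and fingernail, each encoded as (x, y)

# ImageNet-pre-trained backbone without its original classification head.
backbone = Xception(weights="imagenet", include_top=False, input_shape=(299, 299, 3))
backbone.trainable = False  # freeze for the first training stage

inputs = layers.Input(shape=(299, 299, 3))
features = layers.GlobalAveragePooling2D()(backbone(inputs, training=False))

# Head 1: which process the operator is requesting.
gesture = layers.Dense(NUM_GESTURES, activation="softmax", name="gesture")(features)
# Head 2: normalized (x, y) coordinates of the landmarks.
landmarks = layers.Dense(NUM_LANDMARKS * 2, activation="sigmoid", name="landmarks")(features)

model = Model(inputs, [gesture, landmarks])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss={"gesture": "categorical_crossentropy", "landmarks": "mse"},
    loss_weights={"gesture": 1.0, "landmarks": 1.0},  # assumed equal weighting
)

In practice, part of the backbone would typically be unfrozen for fine-tuning once the new heads have converged; that schedule is omitted here because the abstract does not describe the training details.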

Details

Primary Language: English
Subjects: Software Engineering (Other)
Section: Research Article
Authors

Mustafa Can Bıngol 0000-0001-5448-8281

Ömür Aydoğmuş 0000-0001-8142-1146

Early View Date: January 10, 2024
Publication Date: December 22, 2023
Published in Issue: Year 2023, Volume: 11, Issue: 4

Cite

APA Bıngol, M. C., & Aydoğmuş, Ö. (2023). Development of a Human-Robot Interaction System for Industrial Applications. Balkan Journal of Electrical and Computer Engineering, 11(4), 306-315. https://doi.org/10.17694/bajece.1326072

All articles published by BAJECE are licensed under the Creative Commons Attribution 4.0 International License. This permits anyone to copy, redistribute, remix, transmit, and adapt the work, provided the original work and source are appropriately cited.