Research Article

Deep learning-based isolated sign language recognition: a novel approach to tackling communication barriers for individuals with hearing impairments

Year 2023, Issue: 055, 50 - 59, 31.12.2023
https://doi.org/10.59313/jsr-a.1367212

Abstract

Sign language is a primary and widely used means of communication for individuals with hearing impairments, yet current sign language recognition techniques still require substantial improvement. In this research, we present a novel deep learning architecture that advances sign language recognition by recognizing isolated signs. The study uses the Isolated Sign Language Recognition (ISLR) dataset, collected from 21 hard-of-hearing participants. The dataset comprises 250 isolated signs, with the x, y, and z coordinates of 543 face, pose, and hand landmarks extracted per frame using the MediaPipe Holistic solution. With approximately 100,000 videos, it offers a valuable opportunity for applying deep learning methods to sign language recognition. We report comparative experimental results obtained by varying the batch size, kernel size, frame size, and number of convolutional layers, and achieve an accuracy of 83.32% on the test set.
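
As a rough illustration only, and not the authors' reported architecture, the sketch below shows how fixed-length sequences of MediaPipe Holistic landmarks (543 points with x, y, z per frame) could feed a small 1-D temporal convolutional classifier over 250 sign classes in TensorFlow/Keras. The frame count of 64, the filter counts, the dropout rate, and the build_model helper are all assumptions made for this example.

# Illustrative only: a small 1-D temporal CNN over per-frame MediaPipe Holistic
# landmark vectors. FRAMES, the filter counts, dropout rate, and build_model()
# are assumptions for this sketch, not the architecture reported in the paper.
import tensorflow as tf

FRAMES, LANDMARKS, COORDS, CLASSES = 64, 543, 3, 250  # 250 isolated signs

def build_model(kernel_size: int = 3) -> tf.keras.Model:
    inputs = tf.keras.Input(shape=(FRAMES, LANDMARKS, COORDS))
    # Flatten each frame's 543 x/y/z landmarks into a 1629-dim feature vector.
    x = tf.keras.layers.Reshape((FRAMES, LANDMARKS * COORDS))(inputs)
    # Stacked temporal convolutions; the paper varies kernel size, frame size,
    # batch size, and the number of convolutional layers.
    for filters in (128, 256, 256):
        x = tf.keras.layers.Conv1D(filters, kernel_size,
                                   padding="same", activation="relu")(x)
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.MaxPooling1D(pool_size=2)(x)
    x = tf.keras.layers.GlobalAveragePooling1D()(x)
    x = tf.keras.layers.Dropout(0.3)(x)
    outputs = tf.keras.layers.Dense(CLASSES, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_model(kernel_size=3).summary()

Training such a model would then amount to calling model.fit on padded landmark sequences of shape (N, 64, 543, 3) with integer sign labels in [0, 250).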

References

  • [1] A. Mittal, P. Kumar, P. P. Roy, R. Balasubramanian and B. B. Chaudhuri, “A modified LSTM model for continuous sign language recognition using Leap Motion,” IEEE Sensors Journal, vol. 19, no. 16, pp. 7056-7063, Apr. 2019.
  • [2] S. Aly and W. Aly, “DeepArSLR: A novel signer-independent deep learning framework for isolated Arabic sign language gestures recognition,” IEEE Access, vol. 8, pp. 83199-83212, Apr. 2020.
  • [3] O. M. Sincan and H. Y. Keles, “AUTSL: A large scale multi-modal Turkish sign language dataset and baseline methods,” IEEE Access, vol. 8, pp. 181340-181355, Aug. 2020.
  • [4] R. Rastgoo, K. Kiani and S. Escalera, “Sign language recognition: A deep survey,” Expert Systems with Applications, vol. 164, Art. no. 113794, Feb. 2021.
  • [5] N. Aloysius and M. Geetha, “Understanding vision-based continuous sign language recognition,” Multimedia Tools and Applications, vol. 79, no. 31-32, pp. 22177-22209, May 2020.
  • [6] A. Wadhawan and P. Kumar, “Sign language recognition systems: A decade systematic literature review,” Archives of Computational Methods in Engineering, vol. 28, pp. 785-813, Dec. 2021.
  • [7] M. De Coster, M. Van Herreweghe and J. Dambre, “Sign language recognition with transformer networks,” in Proc. 12th International Conference on Language Resources and Evaluation, May 2020, pp. 6018-6024.
  • [8] R. Rastgoo, K. Kiani and S. Escalera, “Video-based isolated hand sign language recognition using a deep cascaded model,” Multimedia Tools and Applications, vol. 79, pp. 22965-22987, Jun. 2020.
  • [9] R. Rastgoo, K. Kiani and S. Escalera, “Hand pose aware multimodal isolated sign language recognition,” Multimedia Tools and Applications, vol. 80, pp. 127-163, Sep. 2021.
  • [10] S. Sharma, R. Gupta and A. Kumar, “Continuous sign language recognition using isolated signs data and deep transfer learning,” Journal of Ambient Intelligence and Humanized Computing, vol. 14, pp. 1-12, Aug. 2021.
  • [11] H. Hu, W. Zhou and H. Li, “Hand-model-aware sign language recognition,” in Proc. AAAI Conference on Artificial Intelligence, May 2021, vol. 35, no. 2, pp. 1558-1566.
  • [12] Z. Zhou, K. S. Kui, V. W. Tam and E. Y. Lam, “Applying (3+2+1)D residual neural network with frame selection for Hong Kong sign language recognition,” in Proc. 2020 25th International Conference on Pattern Recognition (ICPR), Jan. 2021, pp. 4296-4302.
  • [13] S. Yang, S. Jung, H. Kang and C. Kim, “The Korean sign language dataset for action recognition,” in Proc. International Conference on Multimedia Modelling, Dec. 2019, pp. 532-542.
  • [14] Q. Zhang, D. Wang, R. Zhao and Y. Yu, “MyoSign: Enabling end-to-end sign language recognition with wearables,” in Proc. 24th International Conference on Intelligent User Interfaces, Mar. 2019, pp. 650-660.
  • [15] B. Saunders, N. C. Camgoz and R. Bowden, “Continuous 3D multi-channel sign language production via progressive transformers and mixture density networks,” International Journal of Computer Vision, vol. 129, no. 7, pp. 2113-2135, Mar. 2021.
  • [16] J. Fink, B. Frénay, L. Meurant and A. Cleve, “LSFB-CONT and LSFB-ISOL: Two new datasets for vision-based sign language recognition,” in Proc. 2021 International Joint Conference on Neural Networks (IJCNN), Jul. 2021, pp. 1-8.
  • [17] S. Das, M. S. Imtiaz, N. H. Neom, N. Siddique and H. Wang, “A hybrid approach for Bangla sign language recognition using deep transfer learning model with random forest classifier,” Expert Systems with Applications, vol. 213, Art. no. 118914, Mar. 2023.
  • [18] E. Rajalakshmi, R. Elakkiya, A. L. Prikhodko, M. G. Grif, M. A. Bakaev, J. R. Saini, ... and V. Subramaniyaswamy, “Static and dynamic isolated Indian and Russian sign language recognition with spatial and temporal feature detection using hybrid neural network,” ACM Transactions on Asian and Low-Resource Language Information Processing, vol. 22, no. 1, pp. 1-23, Nov. 2022.
  • [19] S. Fakhfakh and Y. B. Jemaa, “Deep learning shape trajectories for isolated word sign language recognition,” International Arab Journal of Information Technology, vol. 19, no. 4, pp. 660-666, Jul. 2022.
  • [20] Y. Fang, Z. Xiao, S. Cai and L. Ni, “Adversarial multi-task deep learning for signer-independent feature representation,” Applied Intelligence, vol. 53, no. 4, pp. 4380-4392, Jun. 2023.
  • [21] H. Luqman, “An efficient two-stream network for isolated sign language recognition using accumulative video motion,” IEEE Access, vol. 10, pp. 93785-93798, Sep. 2022.
  • [22] N. Sarhan and S. Frintrop, “Sign, attend and tell: Spatial attention for sign language recognition,” in Proc. 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), Dec. 2021, pp. 1-8.
  • [23] N. Takayama, G. Benitez-Garcia and H. Takahashi, “Masked batch normalization to improve tracking-based sign language recognition using graph convolutional networks,” in Proc. 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), Dec. 2021, pp. 1-5.
  • [24] J. Wang, J. Chen and Y. Cai, “A framework for multimodal sign language recognition under small sample based on key-frame sampling,” in Proc. Fifth International Workshop on Pattern Recognition, vol. 11526, Jun. 2020, pp. 46-52.
  • [25] A. Boukdir, M. Benaddy, A. Ellahyani, O. E. Meslouhi and M. Kardouchi, “Isolated video-based Arabic sign language recognition using convolutional and recursive neural networks,” Arabian Journal for Science and Engineering, pp. 1-13, Sep. 2021.
  • [26] T. Pariwat and P. Seresangtakul, “Multi-stroke Thai finger-spelling sign language recognition system with deep learning,” Symmetry, vol. 13, no. 2, Art. no. 262, Feb. 2021.
  • [27] E. Rajalakshmi, R. Elakkiya, V. Subramaniyaswamy, L. P. Alexey, G. Mikhail, M. Bakaev, ... and A. Abraham, “Multi-semantic discriminative feature learning for sign gesture recognition using hybrid deep neural architecture,” IEEE Access, vol. 11, pp. 2226-2238, Jan. 2023.
  • [28] Deaf Professional Arts Network and the Georgia Institute of Technology, Kaggle ASL dataset, https://www.kaggle.com/competitions/asl-signs/overview (accessed June 12, 2023).
  • [29] MediaPipe Solutions, MediaPipe hand landmarker, https://developers.google.com/mediapipe/solutions/vision/hand_landmarker (accessed June 12, 2023).
  • [30] N. N. Arslan, D. Ozdemir and H. Temurtas, “ECG heartbeats classification with dilated convolutional autoencoder,” Signal, Image and Video Processing, pp. 1-10, Sep. 2023.
  • [31] R. Llugsi, S. El Yacoubi, A. Fontaine and P. Lupera, “Comparison between Adam, AdaMax and Adam W optimizers to implement a weather forecast based on neural networks for the Andean city of Quito,” in Proc. 2021 IEEE Fifth Ecuador Technical Chapters Meeting (ETCM), Oct. 2021, pp. 1-6.
  • [32] T. Andrei-Alexandru and D. E. Henrietta, “Low-cost defect detection using a deep convolutional neural network,” in Proc. 2020 IEEE International Conference on Automation, Quality and Testing, Robotics (AQTR), May 2020, pp. 1-5.

Details

Primary Language: English
Subjects: Deep Learning
Journal Section: Research Articles
Authors

Naciye Nur Arslan 0000-0002-3208-7986

Emrullah Şahin 0000-0002-3390-6285

Muammer Akçay 0000-0003-0244-1275

Publication Date: December 31, 2023
Submission Date: September 27, 2023
Published in Issue: Year 2023, Issue: 055

Cite

IEEE N. N. Arslan, E. Şahin, and M. Akçay, “Deep learning-based isolated sign language recognition: a novel approach to tackling communication barriers for individuals with hearing impairments”, JSR-A, no. 055, pp. 50–59, December 2023, doi: 10.59313/jsr-a.1367212.