Research Article

Sign2Text: Turkish Sign Language recognition using Convolutional Neural Networks

Year 2020, Issue 19, 923-934, 31.08.2020
https://doi.org/10.31590/ejosat.747231

Abstract

Sign language is a visual language that the hearing impaired create with hand gestures and facial expressions to communicate among themselves. Although the hearing impaired can communicate easily with each other through sign language, they have great difficulty expressing themselves and understanding others in public institutions such as hospitals. The literacy rate among the hearing impaired is low, and even those who are literate struggle to understand what they read, because Turkish Sign Language has a different grammar and their vocabulary is narrow. According to World Health Organization reports, there were 34 million hearing-impaired people in Europe in 2018, and this number is expected to reach 46 million by 2050. In this study, Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) deep learning techniques were used to detect the gestures that hearing-impaired individuals perform in front of a camera and convert them into text, without using any sensors. First, video pre-processing steps were applied to the data obtained through the camera: detecting the head region and preparing the data for training, detecting and tracking hand movements, and cropping. The prepared videos were then split into frames so that they could be used to train the CNN model. In sign language gestures, it is primarily the hand and finger movements that are predicted; since the training model is fed only with hand movements, the head region, which also contains skin color, was detected so that it could be excluded. The CNN + LSTM models, trained on the sign language gestures for 10 digits and 29 letters performed in front of the camera, achieved a 97% prediction accuracy. These results show that deep learning methods can be used to detect the camera-facing gestures of hearing-impaired individuals and convert them into text.
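To make the pre-processing pipeline concrete, the following is a minimal sketch in Python with OpenCV of the steps the abstract describes: splitting a video into frames, detecting the skin-colored head region so it can be excluded, and keeping only the hand regions. The file name, frame step, Haar-cascade detector, and HSV skin thresholds are illustrative assumptions, not values from the paper.

```python
# Sketch of the video pre-processing described in the abstract:
# extract frames, mask out the detected head, keep hand (skin) pixels.
import cv2

def extract_frames(video_path, step=5):
    """Yield every `step`-th frame of the video as a BGR image."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            yield frame
        idx += 1
    cap.release()

def mask_head(frame, face_cascade):
    """Black out the detected face/head so only hand skin remains."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        frame[y:y + h, x:x + w] = 0
    return frame

def skin_mask(frame):
    """Rough skin-color segmentation in HSV space (assumed thresholds)."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

for frame in extract_frames("sign_video.mp4"):   # hypothetical input file
    frame = mask_head(frame, face_cascade)
    hands = cv2.bitwise_and(frame, frame, mask=skin_mask(frame))
    # `hands` now contains (approximately) only the hand regions,
    # ready to be cropped, resized, and fed to the CNN.
```

Masking the face before segmenting by skin color is what lets a simple color threshold isolate the hands, which is the motivation the abstract gives for detecting the head region in the first place.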
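The CNN + LSTM classifier itself could be structured as in the sketch below: a small CNN extracts per-frame features, a TimeDistributed wrapper applies it across the clip, and an LSTM models the motion over time before a 39-way softmax (10 digits + 29 letters). All layer sizes, the clip length, and the frame resolution are assumptions, since the abstract does not give the architecture details.

```python
# Minimal CNN + LSTM sign classifier sketch (Keras), under assumed shapes.
from tensorflow.keras import layers, models

NUM_CLASSES = 39                  # 10 digits + 29 letters
FRAMES, H, W, C = 20, 64, 64, 1   # assumed clip length and frame size

# Per-frame CNN feature extractor, applied to every frame below.
cnn = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(H, W, C)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
])

model = models.Sequential([
    layers.TimeDistributed(cnn, input_shape=(FRAMES, H, W, C)),
    layers.LSTM(128),             # temporal modelling of the gesture
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```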

References

  • Haualand H., 2007, The two week village: The significance of sacred occasions for the deaf community, In Benedicte Ingstad & Susan R. Whyte (eds.), Disability in Local and Global Worlds, 33-55, Berkeley: University of California Press.
  • Joachims T., 1999, June, Transductive inference for text classification using support vector machines, In ICML (Vol. 99, pp. 200-209).
  • Murray J. J., 2008, Coequality and transnational studies: understanding deaf lives, In H.-D. L. Bauman (ed.), Open Your Eyes: Deaf Studies Talking, 100-110, London: University of Minnesota Press.
  • Gordon R. G., Jr. ed., 2005, Ethnologue: Languages of the World, Fifteenth edition, Dallas TX: SIL International.
  • Marshall I., Sáfár É., 2003, A prototype text to British Sign Language (BSL) translation system, Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics - Volume 2, Sapporo, Japan: Association for Computational Linguistics, pp. 113-116.
  • Bungeroth J., Ney H., Statistical sign language translation, In Proc. of the Workshop on Representation and Processing of Sign Languages (LREC2004), pages 105–108, Lisbon, Portugal, 2004.
  • Almohimeed A., Wald M., Damper R. I., 2011, Arabic Text to Arabic Sign Language Translation System for the Deaf and Hearing-Impaired Community, In EMNLP 2011: The Second Workshop on Speech and Language Processing for Assistive Technologies (SLPAT), Edinburgh, Scotland, UK, pp. 101-109.
  • Takahashi T., Kishino F., 1992, A hand gesture recognition method and its application, Systems and Computers in Japan, 23 (3), 38-48.
  • Wang H., Leu M. C., Oz C., 2006, American Sign Language recognition using multidimensional Hidden Markov Models, Journal of Information Science and Engineering, 22 (5), 1109-1123.
  • Shanableh T., Assaleh K., 2011, User-independent recognition of Arabic sign language for facilitating communication with the deaf community, Digital Signal Processing: A Review Journal, 21 (4), 535-542.
  • Starner T., Weaver J., Pentland A., “Real-time American Sign Language Recognition using Desk and Wearable Computer based Video”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, pp. 1371–1375, 1998.
  • Grobel K., Assan M., “Isolated Sign Language Recognition using Hidden Markov Models”, IEEE International Conference on Computational Cybernetics and Simulation, Systems, Man, and Cybernetics, pp. 162–167, 1997.
  • Chai X., Li G., Chen X., Zhou M., Wu G., Li H., “VisualComm: A Tool to Support Communication Between Deaf and Hearing Persons with the Kinect”, 15th International ACM SIGACCESS Conference on Computers and Accessibility, p. 76, 2013.
  • Haberdar H., 2005, Saklı Markov Modelleri Kullanılarak Görüntüden Gerçek Zamanlı Türk İşaret Dili Tanıma Sistemi (Real-Time Turkish Sign Language Recognition System from Video Using Hidden Markov Models), M.Sc. Thesis, Computer Engineering, Yıldız Teknik Üniversitesi.
  • El-Makky N., et al., 2015, Sentiment analysis of colloquial Arabic tweets.
  • Işikdoğan F., Albayrak S., 2011, June, Automatic recognition of Turkish fingerspelling, In Innovations in Intelligent Systems and Applications (INISTA), 2011 International Symposium on (pp. 264-267), IEEE.
  • Ketenci S., Kayikçioğlu T., Gangal A., 2015, May, Recognition of sign language numbers via electromyography signals, In Signal Processing and Communications Applications Conference (SIU), 2015 23rd (pp. 2593-2596), IEEE.
  • Celik O., 2019, An artificial intelligence based remote communication system for hearing impaired, Ph.D. thesis, Eskisehir Osmangazi University.
  • Hochreiter S., Schmidhuber J., 1997, Long short-term memory. Neural computation, 9(8), 1735-1780.


Details

Primary Language: Turkish
Subjects: Engineering
Section: Articles
Authors

Özer Çelik (ORCID: 0000-0002-4409-3101)

Alper Odabas (ORCID: 0000-0002-4361-3056)

Publication Date: August 31, 2020
Published in Issue: Year 2020, Issue 19

Cite

APA Çelik, Ö., & Odabas, A. (2020). Sign2Text: Konvolüsyonel Sinir Ağları Kullanarak Türk İşaret Dili Tanıma. Avrupa Bilim Ve Teknoloji Dergisi(19), 923-934. https://doi.org/10.31590/ejosat.747231