
HOSPISIGN: AN INTERACTIVE SIGN LANGUAGE PLATFORM FOR HEARING IMPAIRED

Year 2015, Volume: 11 Issue: 3, 75 - 92, 16.02.2016

Abstract

Sign language is the natural medium of communication for the Deaf community. In this study, we developed HospiSign, an interactive communication interface for hospitals that uses computer vision based sign language recognition methods. The objective of this paper is to review sign language based Human-Computer Interaction applications and to introduce HospiSign in this context. HospiSign is designed to meet deaf people at the information desk of a hospital and to assist them during their visit. The interface guides deaf visitors to answer certain questions and express the intention of their visit in sign language, without the need for a translator. The system consists of a computer, a touch display to visualize the interface, and a Microsoft Kinect v2 sensor to capture the users' sign responses. HospiSign recognizes isolated signs in a structured activity diagram using Dynamic Time Warping based classifiers. To evaluate the developed interface, we performed usability tests and found that the system was able to assist its users in real time with high accuracy.
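
The abstract names Dynamic Time Warping (DTW) based classifiers over Kinect v2 input but gives no implementation details. The following Python snippet is a minimal illustrative sketch only, not the authors' implementation: it shows how a DTW-based nearest-neighbour classifier for isolated signs could look, assuming each sign sample is a sequence of per-frame skeleton feature vectors. The function names and the feature representation are assumptions.

    import numpy as np

    def dtw_distance(seq_a, seq_b):
        # DTW distance between two sign samples, each an array of shape
        # (num_frames, feature_dim) of skeleton features (assumed encoding).
        n, m = len(seq_a), len(seq_b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                # frame-to-frame distance between the two feature vectors
                d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
                # extend the cheapest of the three allowed alignment moves
                cost[i, j] = d + min(cost[i - 1, j],
                                     cost[i, j - 1],
                                     cost[i - 1, j - 1])
        return cost[n, m]

    def classify_sign(query, templates):
        # Nearest-neighbour classification: `templates` (hypothetical) maps
        # a sign label to a list of recorded example sequences for that sign.
        best_label, best_dist = None, np.inf
        for label, examples in templates.items():
            for example in examples:
                d = dtw_distance(query, example)
                if d < best_dist:
                    best_label, best_dist = label, d
        return best_label

Because the interface follows a structured activity diagram, only the few signs valid as answers to the current question would need to be in `templates` at each step, which keeps such an exhaustive comparison feasible in real time.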

References

  • H. Cooper, B. Holt, and R. Bowden, "Sign Language Recognition," in Visual Analysis of Humans, Springer, 2011, pp. 539–562.
  • L. R. Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition," Proceedings of the IEEE, vol. 77, no. 2, pp. 257–286, 1989.
  • D. J. Berndt and J. Clifford, "Using Dynamic Time Warping to Find Patterns in Time Series," in KDD Workshop, vol. 10, 1994, pp. 359–370.
  • T. Starner and A. Pentland, "Real-Time American Sign Language Recognition from Video Using Hidden Markov Models," in Motion-Based Recognition, Springer, 1997, pp. 227–243.
  • K. Grobel and M. Assan, "Isolated Sign Language Recognition Using Hidden Markov Models," in IEEE Systems, Man, and Cybernetics International Conference on Computational Cybernetics and Simulation, vol. 1, 1997, pp. 162–167.
  • C. Vogler and D. Metaxas, "Parallel Hidden Markov Models for American Sign Language Recognition," in Proceedings of the IEEE Seventh International Conference on Computer Vision, vol. 1, 1999, pp. 116–122.
  • H.-K. Lee and J. H. Kim, "An HMM-Based Threshold Model Approach for Gesture Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 21, no. 10, pp. 961–973, 1999.
  • X. Chai, G. Li, X. Chen, M. Zhou, G. Wu, and H. Li, "VisualComm: A Tool to Support Communication between Deaf and Hearing Persons with the Kinect," in Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility, ACM, 2013, p. 76.
  • S. Theodorakis, V. Pitsikalis, and P. Maragos, "Dynamic-Static Unsupervised Sequentiality, Statistical Subunits and Lexicon for Sign Language Recognition," Image and Vision Computing, vol. 32, no. 8, pp. 533–549, 2014.
  • V. Pitsikalis, S. Theodorakis, C. Vogler, and P. Maragos, "Advances in Phonetics-Based Sub-Unit Modeling for Transcription Alignment and Sign Language Recognition," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2011.
  • Z. Zhang, "Microsoft Kinect Sensor and Its Effect," IEEE MultiMedia, vol. 19, no. 2, pp. 4–10, 2012.
  • B. S. Parton, "Sign Language Recognition and Translation: A Multidisciplined Approach from the Field of Artificial Intelligence," Journal of Deaf Studies and Deaf Education, vol. 11, no. 1, pp. 94–101, 2006.
  • J. Shotton, T. Sharp, A. Kipman, A. Fitzgibbon, M. Finocchio, A. Blake, M. Cook, and R. Moore, "Real-Time Human Pose Recognition in Parts from Single Depth Images," Communications of the ACM, vol. 56, no. 1, pp. 116–124, 2013.
  • S. Cox, "Speech and Language Processing for a Constrained Speech Translation System," in INTERSPEECH, 2002.
  • S. Cox, M. Lincoln, J. Tryggvason, M. Nakisa, M. Wells, M. Tutt, and S. Abbott, "Tessa, a System to Aid Communication with Deaf People," in Proceedings of the Fifth International ACM Conference on Assistive Technologies, 2002, pp. 205–212.
  • O. Aran, I. Ari, L. Akarun, B. Sankur, A. Benoit, A. Caplier, P. Campr, and A. H. Carrillo, "SignTutor: An Interactive System for Sign Language Tutoring," IEEE MultiMedia, no. 1, pp. 81–93, 2009.
  • Z. Zafrulla, H. Brashear, P. Yin, P. Presti, T. Starner, and H. Hamilton, "American Sign Language Phrase Verification in an Educational Game for Deaf Children," in 20th IEEE International Conference on Pattern Recognition (ICPR), 2010, pp. 3846–3849.
  • K. A. Weaver and T. Starner, "We Need to Communicate!: Helping Hearing Parents of Deaf Children Learn American Sign Language," in Proceedings of the 13th International ACM SIGACCESS Conference on Computers and Accessibility, ACM, 2011, pp. 91–98.
  • M. Hrúz, P. Campr, E. Dikici, A. A. Kındıroğlu, Z. Krňoul, A. Ronzhin, H. Sak, D. Schorno, H. Yalcin, L. Akarun, O. Aran, A. Karpov, M. Saraçlar, and M. Železný, "Automatic Fingersign-to-Speech Translation System," Journal on Multimodal User Interfaces, vol. 4, no. 2, pp. 61–79, 2011.
  • Z. Zafrulla, H. Brashear, T. Starner, H. Hamilton, and P. Presti, "American Sign Language Recognition with the Kinect," in Proceedings of the 13th International Conference on Multimodal Interfaces, ACM, 2011, pp. 279–286.
  • E. Efthimiou, S.-E. Fotinea, T. Hanke, J. Glauert, R. Bowden, A. Braffort, C. Collet, P. Maragos, and F. Lefebvre-Albaret, "The Dicta-Sign Wiki: Enabling Web Communication for the Deaf," in Computers Helping People with Special Needs, Springer, 2012.
  • A. Karpov, Z. Krňoul, M. Železný, and A. Ronzhin, "Multimodal Synthesizer for Russian and Czech Sign Languages and Audio-Visual Speech," in Universal Access in Human-Computer Interaction: Design Methods, Tools, and Interaction Techniques for eInclusion, Springer, 2013, pp. 520–529.
  • X. Chai, G. Li, Y. Lin, Z. Xu, Y. Tang, X. Chen, and M. Zhou, "Sign Language Recognition and Translation with Kinect," in IEEE Conference on Automatic Face and Gesture Recognition, 2013.
  • J. Gameiro, T. Cardoso, and Y. Rybarczyk, "Kinect-Sign: Teaching Sign Language to Listeners through a Game," in Innovative and Creative Developments in Multimodal Interaction Systems, Springer, 2014, pp. 141–159.
  • Z. Zafrulla, H. Brashear, H. Hamilton, and T. Starner, "Towards an American Sign Language Verifier for Educational Game for Deaf Children," in Proceedings of the International Conference on Pattern Recognition (ICPR), 2010.
  • V. Lopez-Ludena, C. Gonzalez-Morcillo, J. Lopez, R. Barra-Chicote, R. Cordoba, and R. San-Segundo, "Translating Bus Information into Sign Language for Deaf People," Engineering Applications of Artificial Intelligence, vol. 32, pp. 258–269, 2014.
  • T. Kadir, R. Bowden, E. J. Ong, and A. Zisserman, "Minimal Training, Large Lexicon, Unconstrained Sign Language Recognition," in British Machine Vision Conference (BMVC), 2004.
There are 27 citations in total.

Details

Primary Language English
Journal Section Articles
Authors

Muhammed Süzgün

Hilal Özdemir

Necati Camgöz

Ahmet Kındıroğlu

Doğaç Başaran

Cengiz Togay

Lale Akarun

Publication Date February 16, 2016
Published in Issue Year 2015 Volume: 11 Issue: 3

Cite

APA Süzgün, M., Özdemir, H., Camgöz, N., Kındıroğlu, A., Başaran, D., Togay, C., & Akarun, L. (2016). HOSPISIGN: AN INTERACTIVE SIGN LANGUAGE PLATFORM FOR HEARING IMPAIRED. Journal of Naval Sciences and Engineering, 11(3), 75-92.