Research Article

Real-time Parental Voice Recognition System For Persons Having Impaired Hearing

Year 2018, Volume 2, Issue 1, 40-46, 25.03.2018
https://doi.org/10.30516/bilgesci.350016

Abstract

People with impaired hearing cannot hear sounds while asleep or alone at home, which limits their comfort and safety. In this study, a parental voice recognition system is proposed for these people: the user is informed by vibration about which parent is speaking, so that a person with impaired hearing perceives in real time who is calling or speaking to him or her. The wearable device we developed recognizes parental voices in real time and notifies the user, whether he or she is asleep or simply at home, and it was designed to be easy to use in a home environment. The device is worn on the user's back, and only a ring-sized vibration motor is attached to the user's finger. It consists of a Raspberry Pi, a USB sound card, a microphone, a power supply, and a vibration motor. First, sound is captured by the microphone and sampled at 44,100 samples per second; by the Nyquist theorem, this rate preserves frequencies up to 22,050 Hz, covering the range of human hearing. Statistical (Z-score) normalization was used in the preprocessing phase, Mel Frequency Cepstral Coefficients (MFCC) in the feature extraction stage, and k-nearest neighbors (kNN) in the classification phase. Normalizing the data ensures that each parameter in the training input set contributes equally to the model's prediction. MFCC is one of the feature extraction methods most frequently used in voice recognition applications; it represents the short-time power spectrum of the audio signal and models how the human ear perceives sound. kNN is an instance-based learning algorithm whose aim is to classify a new sample using the existing training data. The sound received via the microphone passes through the preprocessing, feature extraction, and classification stages, and the person with impaired hearing is informed by real-time vibration about whose voice it is. The system was tested on 2 deaf and 3 normal-hearing persons; the ears of the normal-hearing participants were covered with earphones emitting loud noise. In real-time tests, participants identified their mother's voice with 76% accuracy and their father's voice with 81% accuracy. The success rate decreases with environmental noise, especially while the TV is on. In tests performed while the participants were asleep, the mother's voice was recognized with 78% accuracy and the father's voice with 83% accuracy. The aim of this study is to enable people with impaired hearing to perceive their parents' voices and thereby enjoy a better standard of living.
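The preprocessing and classification steps described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the toy 13-dimensional feature vectors stand in for real MFCC frames, the class names are hypothetical, and only the Z-score normalization and kNN voting steps are shown in full (the mel-scale formula included is the standard one underlying MFCC).

```python
import numpy as np

def hz_to_mel(f):
    # Standard mel-scale mapping used by MFCC: models how the human ear
    # perceives pitch (roughly linear below ~1 kHz, logarithmic above).
    return 2595.0 * np.log10(1.0 + f / 700.0)

def zscore(X, mean=None, std=None):
    # Statistical (Z-score) normalization: zero mean and unit variance per
    # feature, so each parameter contributes equally to the prediction.
    mean = X.mean(axis=0) if mean is None else mean
    std = X.std(axis=0) if std is None else std
    return (X - mean) / np.where(std == 0, 1.0, std), mean, std

def knn_predict(train_X, train_y, x, k=3):
    # k-nearest neighbors: label a new sample by majority vote among the k
    # training samples closest to it (Euclidean distance).
    dist = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(dist)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Toy data standing in for MFCC feature vectors (13 coefficients per frame):
# two well-separated clusters for the two parental-voice classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (20, 13)),   # "mother" frames
               rng.normal(4.0, 1.0, (20, 13))])  # "father" frames
y = np.array(["mother"] * 20 + ["father"] * 20)

Xn, mu, sd = zscore(X)
new_frame = rng.normal(4.0, 1.0, 13)             # unseen "father"-like frame
query = (new_frame - mu) / sd                    # reuse training statistics
print(knn_predict(Xn, y, query, k=3))            # prints "father"
```

In a deployed system the predicted label would drive the vibration motor pattern; reusing the training-set mean and standard deviation for the incoming frame (rather than recomputing them) is what keeps the normalization consistent between training and real-time use.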

References

  • Atal, B.S. (1976). Automatic recognition of speakers from their voices, Proceedings of the IEEE, vol. 64, pp. 460-475.
  • Bhattacharyya, S. (2015). Handbook of Research on Advanced Hybrid Intelligent Techniques and Applications, IGI Global.
  • Babu, C.G., Kumar, R.H., and Vanathi, P. (2012). Performance analysis of hybrid robust automatic speech recognition system, in 2012 IEEE International Conference on Signal Processing, Computing and Control (ISPCC), pp. 1-4.
  • Balasubramaniyan, C. and Manivannan, D. (2016). IoT Enabled Air Quality Monitoring System (AQMS) using Raspberry Pi, Indian Journal of Science and Technology, vol. 9.
  • Chang, C.-H., Zhou, Z.-H., Lin, S.-H., Wang, J.-C., and Wang, J.-F. (2012). Intelligent appliance control using a low-cost embedded speech recognizer, in 2012 8th International Conference on Computing and Networking Technology (ICCNT), pp. 311-314.
  • Fezari, M. and Bousbia-Salah, M. (2006). A voice command system for autonomous robots guidance, in 9th IEEE International Workshop on Advanced Motion Control, pp. 261-265.
  • Jiang, Z., Huang, H., Yang, S., Lu, S., and Hao, Z. (2009). Acoustic feature comparison of MFCC and CZT-based cepstrum for speech recognition, in 2009 Fifth International Conference on Natural Computation (ICNC'09), pp. 55-59.
  • Leechor, P., Pornpanomchai, C., and Sukklay, P. (2010). Operation of a radio-controlled car by voice commands, in International Conference on Mechanical and Electronics Engineering (ICMEE), pp. V1-14-V1-17.
  • Muda, L., Begam, M., and Elamvazuthi, I. (2010). Voice recognition algorithms using mel frequency cepstral coefficient (MFCC) and dynamic time warping (DTW) techniques, Journal of Computing, vol. 2, issue 3.
  • Nandyala, S.P., and Kumar, D.T.K. (2010). Real Time Isolated Word Speech Recognition System for Human Computer Interaction, International Journal of Computer Applications, vol. 12, pp. 1-7.
  • Phoophuangpairoj, R. (2011). Using multiple HMM recognizers and the maximum accuracy method to improve voice-controlled robots, Intelligent Signal Processing and Communications Systems (ISPACS), pp. 1-6.
  • Schafer, R.W., and Rabiner, L.R. (1975). Digital representations of speech signals, Proceedings of the IEEE, vol. 63, pp. 662-677.
  • Shearme, J., and Leach, P. (1968). Some experiments with a simple word recognition system, IEEE Transactions on Audio and Electroacoustics, vol. 16, pp. 256-261.
  • Yağanoğlu, M., Bozkurt, F., and Günay, F.B. (2014). EEG tabanlı beyin-bilgisayar arayüzü sistemlerinde öznitelik çıkarma yöntemleri [Feature extraction methods in EEG-based brain-computer interface systems], Mühendislik Bilimleri ve Tasarım Dergisi, vol. 2, pp. 313-318.

There are 14 citations in total.

Details

Primary Language: English
Subjects: Computer Software
Journal Section: Research Articles
Authors: Mete Yağanoğlu, Cemal Köse
Publication Date: March 25, 2018
Acceptance Date: March 6, 2018
Published in Issue: Year 2018, Volume 2, Issue 1

Cite

APA Yağanoğlu, M., & Köse, C. (2018). Real-time Parental Voice Recognition System For Persons Having Impaired Hearing. Bilge International Journal of Science and Technology Research, 2(1), 40-46. https://doi.org/10.30516/bilgesci.350016