Research Article

Gender Determination Using Voice Data

Year 2020, Volume: 8 Issue: 4, 232 - 235, 31.12.2020
https://doi.org/10.18100/ijamec.809476

Abstract

With the rapid advancement of today's technologies, systems increasingly make use of users' voices, through features such as speaker recognition and speech recognition, to make whatever system is being used easier to operate. Organizations offering such systems need less manpower and can serve users faster. Decision-making based on voice features is, however, a challenging process. Gender recognition, one of these steps, makes it possible to address the user according to gender. This study aims to determine gender from voice recordings, both for forensic informatics and to make such processes faster and more accurate. A dataset of 3168 male and female voice samples was used. The voice samples were first subjected to acoustic analysis in R using the seewave and tuneR packages. Artificial neural networks were used in the classification stage. To assess classification accuracy more reliably, the dataset was divided into 10 parts; each part was in turn held out of training and used as the test set, and the average classification success was computed as the arithmetic mean of the 10 results. In the classification performed with artificial neural networks, male and female voices were distinguished from each other with an accuracy of 97.9%.
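The abstract describes two computational steps: extracting acoustic features from the recordings in R, and classifying the resulting feature vectors with an artificial neural network under 10-fold cross-validation. The two sketches below illustrate how such a pipeline might look; they are minimal illustrations under stated assumptions, not the authors' implementation.

First, a feature-extraction sketch using the seewave and tuneR packages named in the abstract. The file name "sample.wav" is hypothetical, and only a few representative spectral measures are computed; the study's full feature set is not reproduced here.

    # Acoustic analysis of a single recording (illustrative only)
    library(tuneR)    # readWave()
    library(seewave)  # meanspec(), specprop(), fund()

    wav <- readWave("sample.wav")   # hypothetical input file
    f   <- wav@samp.rate            # sampling rate in Hz

    # Mean frequency spectrum and its statistical properties
    # (mean frequency, quartiles, spectral entropy, flatness, ...)
    sp    <- meanspec(wav, f = f, plot = FALSE)
    props <- specprop(sp, f = f)

    # Fundamental-frequency track, restricted to the typical human vocal range
    f0 <- fund(wav, f = f, fmax = 280, plot = FALSE)

    features <- c(meanfreq = props$mean, sd = props$sd,
                  Q25 = props$Q25, Q75 = props$Q75, sp.ent = props$sh,
                  meanfun = mean(f0[, 2], na.rm = TRUE))
    features

Second, a sketch of the 10-fold cross-validation scheme with a single-hidden-layer neural network, using the publicly available voice.csv feature table cited in the references. The network size, weight decay, and iteration count are illustrative assumptions, not the settings reported in the paper.

    # 10-fold cross-validation with a small neural network (illustrative settings)
    library(nnet)

    voice <- read.csv("https://raw.githubusercontent.com/primaryobjects/voice-gender/master/voice.csv")
    voice$label <- factor(voice$label)

    # Scale the numeric features; small networks train more reliably on scaled inputs
    num <- sapply(voice, is.numeric)
    voice[num] <- scale(voice[num])

    set.seed(1)
    folds <- sample(rep(1:10, length.out = nrow(voice)))  # assign each sample to one of 10 folds

    acc <- sapply(1:10, function(k) {
      train <- voice[folds != k, ]
      test  <- voice[folds == k, ]
      fit <- nnet(label ~ ., data = train, size = 10, decay = 1e-4,
                  maxit = 500, trace = FALSE)                   # assumed hyper-parameters
      mean(predict(fit, test, type = "class") == test$label)    # accuracy on the held-out fold
    })
    mean(acc)  # arithmetic mean of the 10 fold accuracies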

References

  • Amodei, D., et al. Deep speech 2: End-to-end speech recognition in English and Mandarin. in International Conference on Machine Learning. 2016.
  • Chen, C., et al., A bilevel framework for joint optimization of session compensation and classification for speaker identification. Digital Signal Processing, 2019. 89: p. 104-115.
  • Black, M., et al. Automatic classification of married couples' behavior using audio features. in Eleventh annual conference of the international speech communication association. 2010.
  • Metze, F., et al. Comparison of four approaches to age and gender recognition for telephone applications. in 2007 IEEE International Conference on Acoustics, Speech and Signal Processing-ICASSP'07. 2007. IEEE.
  • Konig, Y., N. Morgan, and C. Chandra, GDNN: a gender-dependent neural network for continuous speech recognition. 1991: International Computer Science Institute.
  • Ramadhan, M.M., et al., Parameter Tuning in Random Forest Based on Grid Search Method for Gender Classification Based on Voice Frequency. DEStech Transactions on Computer Science and Engineering, 2017(cece).
  • Jain, A. and V. Kanhangad, Gender recognition in smartphones using touchscreen gestures. Pattern Recognition Letters, 2019. 125: p. 604-611.
  • Markitantov, M. and O. Verkholyak. Automatic Recognition of Speaker Age and Gender Based on Deep Neural Networks. in International Conference on Speech and Computer. 2019. Springer.
  • Zvarevashe, K. and O.O. Olugbara. Gender voice recognition using random forest recursive feature elimination with gradient boosting machines. in 2018 International Conference on Advances in Big Data, Computing and Data Communication Systems (icABCD). 2018. IEEE.
  • Gupta, P., S. Goel, and A. Purwar. A stacked technique for gender recognition through voice. in 2018 Eleventh International Conference on Contemporary Computing (IC3). 2018. IEEE.
  • Sharma, G. and S. Mala. Framework for gender recognition using voice. in 2020 10th International Conference on Cloud Computing, Data Science & Engineering (Confluence). 2020. IEEE.
  • Buyukyilmaz, M. and A.O. Cibikdiken. Voice gender recognition using deep learning. in 2016 International Conference on Modeling, Simulation and Optimization Technologies and Applications (MSOTA2016). 2016. Atlantis Press.
  • Voice gender dataset. Accessed 08.07.2020. Available from: https://raw.githubusercontent.com/primaryobjects/voice-gender/master/voice.csv.
  • Araya‐Salas, M. and G. Smith‐Vidaurre, warbleR: an R package to streamline analysis of animal acoustic signals. Methods in Ecology and Evolution, 2017. 8(2): p. 184-191.
  • Ertam, F., An effective gender recognition approach using voice data via deeper LSTM networks. Applied Acoustics, 2019. 156: p. 351-358.
  • Akçayol, M., Neuro-fuzzy control of a switched reluctance motor (Bir anahtarlamalı relüktans motorun sinirsel-bulanık denetimi). PhD Thesis, Gazi University, Graduate School of Natural and Applied Sciences, Ankara, 2001.

Details

Primary Language English
Subjects Engineering
Journal Section Research Article
Authors

Yavuz Selim Taşpınar 0000-0002-7278-4241

Mücahid Mustafa Sarıtaş 0000-0001-5451-9092

İlkay Çınar 0000-0003-0611-3316

Murat Koklu 0000-0002-2737-2360

Publication Date December 31, 2020
Published in Issue Year 2020 Volume: 8 Issue: 4

Cite

APA Taşpınar, Y. S., Sarıtaş, M. M., Çınar, İ., Koklu, M. (2020). Gender Determination Using Voice Data. International Journal of Applied Mathematics Electronics and Computers, 8(4), 232-235. https://doi.org/10.18100/ijamec.809476
AMA Taşpınar YS, Sarıtaş MM, Çınar İ, Koklu M. Gender Determination Using Voice Data. International Journal of Applied Mathematics Electronics and Computers. December 2020;8(4):232-235. doi:10.18100/ijamec.809476
Chicago Taşpınar, Yavuz Selim, Mücahid Mustafa Sarıtaş, İlkay Çınar, and Murat Koklu. “Gender Determination Using Voice Data”. International Journal of Applied Mathematics Electronics and Computers 8, no. 4 (December 2020): 232-35. https://doi.org/10.18100/ijamec.809476.
EndNote Taşpınar YS, Sarıtaş MM, Çınar İ, Koklu M (December 1, 2020) Gender Determination Using Voice Data. International Journal of Applied Mathematics Electronics and Computers 8 4 232–235.
IEEE Y. S. Taşpınar, M. M. Sarıtaş, İ. Çınar, and M. Koklu, “Gender Determination Using Voice Data”, International Journal of Applied Mathematics Electronics and Computers, vol. 8, no. 4, pp. 232–235, 2020, doi: 10.18100/ijamec.809476.
ISNAD Taşpınar, Yavuz Selim et al. “Gender Determination Using Voice Data”. International Journal of Applied Mathematics Electronics and Computers 8/4 (December 2020), 232-235. https://doi.org/10.18100/ijamec.809476.
JAMA Taşpınar YS, Sarıtaş MM, Çınar İ, Koklu M. Gender Determination Using Voice Data. International Journal of Applied Mathematics Electronics and Computers. 2020;8:232–235.
MLA Taşpınar, Yavuz Selim et al. “Gender Determination Using Voice Data”. International Journal of Applied Mathematics Electronics and Computers, vol. 8, no. 4, 2020, pp. 232-5, doi:10.18100/ijamec.809476.
Vancouver Taşpınar YS, Sarıtaş MM, Çınar İ, Koklu M. Gender Determination Using Voice Data. International Journal of Applied Mathematics Electronics and Computers. 2020;8(4):232-5.

Cited By

Speech-to-Gender Recognition Based on Machine Learning Algorithms
International Journal of Applied Mathematics Electronics and Computers
https://doi.org/10.18100/ijamec.1221455