Research Article

Unified voice analysis: speaker recognition, age group and gender estimation using spectral features and machine learning classifiers

Year 2024, Issue: 057, pp. 12–26, 30.06.2024
https://doi.org/10.59313/jsr-a.1422792

Abstract

Predicting a speaker's personal traits from voice data has attracted attention in many fields, such as forensic casework, automatic voice response systems, and biomedical applications. In this study, gender and age group prediction was performed on voice data recorded from 24 volunteers. Mel-frequency cepstral coefficients (MFCC) were extracted from the audio data as hybrid time/frequency-domain features, and fundamental frequencies and formants were extracted as frequency-domain features. The extracted features were fused into a feature pool, and age group and gender estimation were carried out with four different machine learning algorithms. According to the results, the Support Vector Machine algorithm classified the participants' age groups with 93% accuracy and their genders with 99% accuracy. The same algorithm also completed the speaker recognition task with 93% accuracy.
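As a rough illustration of the pipeline the abstract describes, the minimal sketch below extracts mean MFCCs and fundamental-frequency statistics with librosa and trains a Support Vector Machine with scikit-learn [56]. It is a sketch under stated assumptions, not the authors' implementation: the MFCC count, the YIN pitch estimator, the omission of formant features, all SVM settings, and the wav_paths/labels inputs are illustrative choices and placeholders.

```python
# Minimal sketch of an MFCC + f0 feature pool with an SVM classifier.
# Assumptions (not from the paper): librosa for extraction, 13 MFCCs,
# YIN for f0, an RBF-kernel SVC; formant features are omitted here.
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(wav_path, n_mfcc=13):
    """Return a fused feature vector: mean MFCCs plus mean/std of f0."""
    y, sr = librosa.load(wav_path, sr=None)                   # keep native sample rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)    # shape (n_mfcc, frames)
    f0 = librosa.yin(y, fmin=50, fmax=400, sr=sr)             # frame-wise pitch track
    return np.concatenate([mfcc.mean(axis=1), [f0.mean(), f0.std()]])

def train_svm(wav_paths, labels):
    """Train and evaluate one task (speaker ID, age group, or gender)."""
    X = np.array([extract_features(p) for p in wav_paths])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))  # scale, then classify
    clf.fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)

# Usage with hypothetical data: clf, acc = train_svm(wav_paths, gender_labels)
```

The same feature pool can be reused for all three tasks by swapping the label vector, which mirrors the study's unified-analysis setup.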

References

  • [1] A. Rana, A. Dumka, R. Singh, M. Rashid, N. Ahmad, and M. K. Panda, “An Efficient Machine Learning Approach for Diagnosing Parkinson’s Disease by Utilizing Voice Features,” Electronics (Basel), vol. 11, no. 22, p. 3782, 2022.
  • [2] E. H. Houssein, A. Hammad, and A. A. Ali, “Human emotion recognition from EEG-based brain–computer interface using machine learning: a comprehensive review,” Neural Comput Appl, vol. 34, no. 15, pp. 12527–12557, 2022.
  • [3] E. Dritsas and M. Trigka, “Stroke risk prediction with machine learning techniques,” Sensors, vol. 22, no. 13, p. 4670, 2022.
  • [4] M. M. Kumbure, C. Lohrmann, P. Luukka, and J. Porras, “Machine learning techniques and data for stock market forecasting: A literature review,” Expert Syst Appl, vol. 197, p. 116659, 2022.
  • [5] N. N. Arslan, D. Ozdemir, and H. Temurtas, “ECG heartbeats classification with dilated convolutional autoencoder,” Signal Image Video Process, vol. 18, no. 1, pp. 417–426, 2024, doi: 10.1007/s11760-023-02737-2.
  • [6] S. B. Kotsiantis, I. Zaharakis, and P. Pintelas, “Supervised machine learning: A review of classification techniques,” Emerging artificial intelligence applications in computer engineering, vol. 160, no. 1, pp. 3–24, 2007.
  • [7] S. Duan, J. Zhang, P. Roe, and M. Towsey, “A survey of tagging techniques for music, speech and environmental sound,” Artif Intell Rev, vol. 42, no. 4, pp. 637–661, 2014, doi: 10.1007/s10462-012-9362-y.
  • [8] S. Jayalakshmy and G. F. Sudha, “GTCC-based BiLSTM deep-learning framework for respiratory sound classification using empirical mode decomposition,” Neural Comput Appl, vol. 33, no. 24, pp. 17029–17040, 2021, doi: 10.1007/s00521-021-06295-x.
  • [9] R. Palaniappan, K. Sundaraj, and N. U. Ahamed, “Machine learning in lung sound analysis: A systematic review,” Biocybern Biomed Eng, vol. 33, no. 3, pp. 129–135, 2013, doi: 10.1016/j.bbe.2013.07.001.
  • [10] M. Tschannen, T. Kramer, G. Marti, M. Heinzmann, and T. Wiatowski, “Heart sound classification using deep structured features,” in 2016 Computing in Cardiology Conference (CinC), 2016, pp. 565–568.
  • [11] M. Xiang et al., “Research of heart sound classification using two-dimensional features,” Biomed Signal Process Control, vol. 79, p. 104190, 2023, doi: 10.1016/j.bspc.2022.104190.
  • [12] S. Esmer, M. K. Uçar, İ. Çil, and M. R. Bozkurt, “Parkinson hastalığı teşhisi için makine öğrenmesi tabanlı yeni bir yöntem,” Düzce Üniversitesi Bilim ve Teknoloji Dergisi, vol. 8, no. 3, pp. 1877–1893, 2020.
  • [13] A. F. R. Nogueira, H. S. Oliveira, J. J. M. Machado, and J. M. R. S. Tavares, “Sound Classification and Processing of Urban Environments: A Systematic Literature Review,” Sensors, vol. 22, no. 22, p. 8608, 2022.
  • [14] Y. R. Pandeya, D. Kim, and J. Lee, “Domestic cat sound classification using learned features from deep neural nets,” Applied Sciences, vol. 8, no. 10, p. 1949, 2018.
  • [15] K. J. Piczak, “Environmental sound classification with convolutional neural networks,” in 2015 IEEE 25th international workshop on machine learning for signal processing (MLSP), IEEE, 2015, pp. 1–6.
  • [16] S. Fagerlund, “Bird species recognition using support vector machines,” EURASIP J Adv Signal Process, vol. 2007, pp. 1–8, 2007.
  • [17] C.-J. Huang, Y.-J. Yang, D.-X. Yang, and Y.-J. Chen, “Frog classification using machine learning techniques,” Expert Syst Appl, vol. 36, no. 2, Part 2, pp. 3737–3743, 2009, doi: 10.1016/j.eswa.2008.02.059.
  • [18] D. W. Armitage and H. K. Ober, “A comparison of supervised learning techniques in the classification of bat echolocation calls,” Ecol Inform, vol. 5, no. 6, pp. 465–473, 2010, doi: 10.1016/j.ecoinf.2010.08.001.
  • [19] J. Xie, M. Towsey, A. Truskinger, P. Eichinski, J. Zhang, and P. Roe, “Acoustic classification of Australian anurans using syllable features,” in 2015 IEEE Tenth International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015, pp. 1–6. doi: 10.1109/ISSNIP.2015.7106924.
  • [20] M. Malfante, J. I. Mars, M. Dalla Mura, and C. Gervaise, “Automatic fish sounds classification,” J Acoust Soc Am, vol. 143, no. 5, pp. 2834–2846, May 2018, doi: 10.1121/1.5036628.
  • [21] A. P. Ribeiro, N. F. F. da Silva, F. N. Mesquita, P. de C. S. Araújo, T. C. Rosa, and J. N. Mesquita-Neto, “Machine learning approach for automatic recognition of tomato-pollinating bees based on their buzzing-sounds,” PLoS Comput Biol, vol. 17, no. 9, Art. no. e1009426, Sep. 2021, doi: 10.1371/journal.pcbi.1009426.
  • [22] U. Haider et al., “Bioacoustics Signal Classification Using Hybrid Feature Space with Machine Learning,” in 2023 15th International Conference on Computer and Automation Engineering (ICCAE), 2023, pp. 376–380. doi: 10.1109/ICCAE56788.2023.10111384.
  • [23] K. J. Piczak, “ESC: Dataset for environmental sound classification,” in Proceedings of the 23rd ACM international conference on Multimedia, 2015, pp. 1015–1018.
  • [24] J. Salamon, C. Jacoby, and J. P. Bello, “A dataset and taxonomy for urban sound research,” in Proceedings of the 22nd ACM international conference on Multimedia, 2014, pp. 1041–1044.
  • [25] Z. Mushtaq, S.-F. Su, and Q.-V. Tran, “Spectral images based environmental sound classification using CNN with meaningful data augmentation,” Applied Acoustics, vol. 172, p. 107581, 2021.
  • [26] R. Mohd Hanifa, K. Isa, and S. Mohamad, “A review on speaker recognition: Technology and challenges,” Computers & Electrical Engineering, vol. 90, p. 107005, 2021, doi: 10.1016/j.compeleceng.2021.107005.
  • [27] S. Furui, “40 Years of Progress in Automatic Speaker Recognition,” in Advances in Biometrics, M. Tistarelli and M. S. Nixon, Eds., Berlin, Heidelberg: Springer Berlin Heidelberg, 2009, pp. 1050–1059.
  • [28] N. Singh, A. Agrawal, and R. Khan, “The development of speaker recognition technology,” IJARET, May 2018.
  • [29] P. Krishnamoorthy, H. S. Jayanna, and S. R. M. Prasanna, “Speaker recognition under limited data condition by noise addition,” Expert Syst Appl, vol. 38, no. 10, pp. 13487–13490, 2011, doi: 10.1016/j.eswa.2011.04.069.
  • [30] S. Bhardwaj, S. Srivastava, M. Hanmandlu, and J. R. P. Gupta, “GFM-Based Methods for Speaker Identification,” IEEE Trans Cybern, vol. 43, no. 3, pp. 1047–1058, 2013, doi: 10.1109/TSMCB.2012.2223461.
  • [31] M. Soleymanpour and H. Marvi, “Text-independent speaker identification based on selection of the most similar feature vectors,” Int J Speech Technol, vol. 20, no. 1, pp. 99–108, 2017, doi: 10.1007/s10772-016-9385-x.
  • [32] S. Sedigh, “Application of polyscale methods for speaker verification,” Master Thesis, The University of Manitoba, Winnipeg, 2018.
  • [33] K. P. Bharath and M. Rajesh Kumar, “ELM speaker identification for limited dataset using multitaper based MFCC and PNCC features with fusion score,” Multimed Tools Appl, vol. 79, no. 39, pp. 28859–28883, 2020, doi: 10.1007/s11042-020-09353-z.
  • [34] U. Ayvaz, H. Gürüler, F. Khan, N. Ahmed, T. Whangbo, and A. Bobomirzaevich, “Automatic speaker recognition using mel-frequency cepstral coefficients through machine learning,” CMC-Computers Materials & Continua, vol. 71, no. 3, 2022.
  • [35] J. I. Ramírez-Hernández, A. Manzo-Martínez, F. Gaxiola, L. C. González-Gurrola, V. C. Álvarez-Oliva, and R. López-Santillán, “A Comparison Between MFCC and MSE Features for Text-Independent Speaker Recognition Using Machine Learning Algorithms,” in Fuzzy Logic and Neural Networks for Hybrid Intelligent System Design, O. Castillo and P. Melin, Eds., Cham: Springer International Publishing, 2023, pp. 123–140. doi: 10.1007/978-3-031-22042-5_7.
  • [36] S. H. Shah, M. S. Saeed, S. Nawaz, and M. H. Yousaf, “Speaker Recognition in Realistic Scenario Using Multimodal Data,” in 2023 3rd International Conference on Artificial Intelligence (ICAI), 2023, pp. 209–213. doi: 10.1109/ICAI58407.2023.10136626.
  • [37] S. Sedigh and W. Kinsner, “A Manitoban Speech Dataset,” IEEE DataPort, January, 2018, doi: 10.21227/H2KM16.
  • [38] G. Sharma, K. Umapathy, and S. Krishnan, “Trends in audio signal feature extraction methods,” Applied Acoustics, vol. 158, p. 107020, 2020, doi: 10.1016/j.apacoust.2019.107020.
  • [39] S. Khalid, T. Khalil, and S. Nasreen, “A survey of feature selection and feature extraction techniques in machine learning,” in 2014 science and information conference, IEEE, 2014, pp. 372–378.
  • [40] K. N. Stevens, “Autocorrelation analysis of speech sounds,” J Acoust Soc Am, vol. 22, no. 6, pp. 769–771, 1950.
  • [41] G. Tzanetakis and P. Cook, “Musical genre classification of audio signals,” IEEE Transactions on Speech and Audio Processing, vol. 10, no. 5, pp. 293–302, 2002.
  • [42] M. Sahidullah and G. Saha, “Design, analysis and experimental evaluation of block based transformation in MFCC computation for speaker recognition,” Speech Commun, vol. 54, no. 4, pp. 543–565, 2012.
  • [43] P. Pedersen, “The mel scale,” Journal of Music Theory, vol. 9, no. 2, pp. 295–308, 1965.
  • [44] Z. K. Abdul and A. K. Al-Talabani, “Mel Frequency Cepstral Coefficient and its Applications: A Review,” IEEE Access, vol. 10, pp. 122136–122158, 2022, doi: 10.1109/ACCESS.2022.3223444.
  • [45] M. Lahat, R. Niederjohn, and D. Krubsack, “A spectral autocorrelation method for measurement of the fundamental frequency of noise-corrupted speech,” IEEE Trans Acoust, vol. 35, no. 6, pp. 741–750, 1987.
  • [46] I. V. Bele, “The speaker’s formant,” Journal of Voice, vol. 20, no. 4, pp. 555–578, 2006.
  • [47] G. Batista and D. F. Silva, “How k-nearest neighbor parameters affect its performance,” in Argentine symposium on artificial intelligence, Citeseer, 2009, pp. 1–12.
  • [48] O. Kramer, “K-Nearest Neighbors,” in Dimensionality Reduction with Unsupervised Nearest Neighbors, O. Kramer, Ed., Berlin, Heidelberg: Springer Berlin Heidelberg, 2013, pp. 13–23. doi: 10.1007/978-3-642-38652-7_2.
  • [49] C. Bentéjac, A. Csörgő, and G. Martínez-Muñoz, “A comparative analysis of gradient boosting algorithms,” Artif Intell Rev, vol. 54, pp. 1937–1967, 2021.
  • [50] A. Natekin and A. Knoll, “Gradient boosting machines, a tutorial,” Front Neurorobot, vol. 7, p. 21, 2013.
  • [51] M. A. Hearst, S. T. Dumais, E. Osuna, J. Platt, and B. Scholkopf, “Support vector machines,” IEEE Intelligent Systems and their applications, vol. 13, no. 4, pp. 18–28, 1998.
  • [52] D. A. Pisner and D. M. Schnyer, “Support vector machine,” in Machine learning, Elsevier, 2020, pp. 101–121.
  • [53] S. Suthaharan, “Support vector machine,” in Machine Learning Models and Algorithms for Big Data Classification: Thinking with Examples for Effective Learning, 2016, pp. 207–235.
  • [54] W. Loh, “Fifty years of classification and regression trees,” International Statistical Review, vol. 82, no. 3, pp. 329–348, 2014.
  • [55] E. Şahin Sadık, H. M. Saraoğlu, S. Canbaz Kabay, M. Tosun, C. Keskinkılıç, and G. Akdağ, “Investigation of the effect of rosemary odor on mental workload using EEG: an artificial intelligence approach,” Signal Image Video Process, vol. 16, no. 2, pp. 497–504, 2022.
  • [56] F. Pedregosa et al., “Scikit-learn: Machine learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
There are 56 references in total.

Details

Primary Language: English
Subjects: Speech Recognition
Section: Research Articles
Authors:

Kaya Akgün (ORCID: 0000-0001-5678-9643)

Şerif Ali Sadık (ORCID: 0000-0003-2883-1431)

Publication Date: June 30, 2024
Submission Date: January 19, 2024
Acceptance Date: February 12, 2024
Published Issue: Year 2024, Issue: 057

How to Cite

IEEE: K. Akgün and Ş. A. Sadık, “Unified voice analysis: speaker recognition, age group and gender estimation using spectral features and machine learning classifiers,” JSR-A, no. 057, pp. 12–26, Jun. 2024, doi: 10.59313/jsr-a.1422792.