Conference Paper

A Comparative Assessment of Text-independent Automatic Speaker Identification Methods Using Limited Data

Year 2021, Issue: 26 - Ejosat Special Issue 2021 (HORA), 217-222, 31.07.2021
https://doi.org/10.31590/ejosat.950218

Abstract

Automatic Speaker Identification (ASI) is an active field of research in signal processing, and various machine learning algorithms have been applied to it. With recent developments in hardware technology and data accumulation, Deep Learning (DL) methods have become the new state-of-the-art approach in several classification and identification tasks. In this paper, we evaluate the performance of traditional methods such as the Gaussian Mixture Model-Universal Background Model (GMM-UBM) and DL-based techniques such as the Factorized Time-Delay Neural Network (FTDNN) and Convolutional Neural Networks (CNN) for text-independent, closed-set automatic speaker identification on two datasets with different conditions. The first, LibriSpeech, consists of clean audio signals from audiobooks, collected from a large number of speakers. The second dataset was collected and prepared by us and contains rather limited speech data with a low signal-to-noise ratio, drawn from real-life conversations between customers and agents in a call center. The duration of the speech signals in the query phase is an important factor affecting the performance of ASI methods. In this work, a CNN architecture is proposed for automatic speaker identification from short speech segments. The design aims to capture the temporal nature of the speech signal in an optimal convolutional neural network with a low number of parameters compared to well-known CNN architectures. We show that the proposed CNN-based algorithm performs better on the large, clean dataset, whereas on the dataset with a limited amount of data, the traditional method outperforms all DL approaches. The proposed model achieves a top-1 accuracy of 99.5% on 1-second voice instances from the LibriSpeech dataset.
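The GMM-UBM baseline compared in the abstract can be illustrated with a minimal sketch: a universal background model is trained on pooled enrollment data, a per-speaker model is then derived from it, and a query segment is assigned to the speaker whose model gives the highest likelihood. This is not the paper's implementation; it is a simplified stand-in (speaker models are warm-started and re-fit from the UBM parameters instead of full MAP adaptation), and the feature extractor is replaced by synthetic "MFCC-like" vectors, so all names and shapes below are illustrative assumptions.

```python
# Simplified closed-set speaker identification in the GMM-UBM spirit.
# NOTE: synthetic features stand in for real MFCCs; warm-started re-fitting
# stands in for MAP adaptation. Purely illustrative, not the paper's method.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def fake_features(center, n=400, dim=13):
    """Stand-in for per-frame MFCC features of one speaker."""
    return rng.normal(loc=center, scale=1.0, size=(n, dim))

# Enrollment data for two hypothetical speakers with distinct distributions.
enroll = {"spk_a": fake_features(-2.0), "spk_b": fake_features(+2.0)}

# 1) Train the Universal Background Model on data pooled over all speakers.
ubm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
ubm.fit(np.vstack(list(enroll.values())))

# 2) Derive one model per speaker, warm-started from the UBM parameters
#    (a crude substitute for MAP mean adaptation).
speaker_models = {}
for spk, feats in enroll.items():
    gmm = GaussianMixture(n_components=4, covariance_type="diag",
                          means_init=ubm.means_, weights_init=ubm.weights_,
                          random_state=0)
    gmm.fit(feats)
    speaker_models[spk] = gmm

# 3) Identify: score a short query against every speaker model and return
#    the speaker with the highest average per-frame log-likelihood.
def identify(query):
    scores = {spk: m.score(query) for spk, m in speaker_models.items()}
    return max(scores, key=scores.get)

# 100 frames roughly corresponds to a 1-second query at a 10 ms hop.
print(identify(fake_features(+2.0, n=100)))  # expected: spk_b
```

The decision rule in step 3 is the closed-set identification setting described in the abstract: the query is always assigned to one of the enrolled speakers, with no rejection option.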

Supporting Institution

Arcelik, Scientific Project Unit (BAP) of Istanbul Technical University

Project Number

MOA-2019-42321

References

  • Beigi, H. (2011). Fundamentals of Speaker Recognition. Springer Publishing Company, Incorporated.
  • Chowdhury, M. F. R., Selouani, S.-A., and O’Shaughnessy, D. (2010). Text-independent distributed speaker identification and verification using GMM-UBM speaker models for mobile communications. In 10th International Conference on Information Science, Signal Processing and their Applications (ISSPA 2010), pages 57–60. IEEE.
  • Chung, J. S., Huh, J., Mun, S., Lee, M., Heo, H. S., Choe, S., Ham, C., Jung, S., Lee, B.-J., and Han, I. (2020). In defence of metric learning for speaker recognition. arXiv preprint arXiv:2003.11982.
  • Jain, A. K., Flynn, P., and Ross, A. A. (2007). Handbook of biometrics. Springer Science & Business Media.
  • Jin, Q. and Waibel, A. (2000). Application of LDA to speaker recognition. In Sixth International Conference on Spoken Language Processing.
  • Kanagasundaram, A., Vogt, R., Dean, D. B., Sridharan, S., and Mason, M. W. (2011). I-vector based speaker recognition on short utterances. In Proceedings of the 12th Annual Conference of the International Speech Communication Association, pages 2341–2344. International Speech Communication Association (ISCA).
  • Kanagasundaram, A., Vogt, R. J., Dean, D. B., and Sridharan, S. (2012). PLDA based speaker recognition on short utterances. In The Speaker and Language Recognition Workshop (Odyssey 2012). ISCA.
  • Kenny, P., Stafylakis, T., Ouellet, P., and Alam, M. J. (2014). JFA-based front ends for speaker recognition. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1705–1709. IEEE.
  • Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1, NIPS’12, pages 1097–1105, USA. Curran Associates Inc.
  • Latinus, M. and Belin, P. (2011). Human voice perception. Current Biology, 21(4):R143 – R145.
  • Li, R., Jiang, J.-Y., Li, J. L., Hsieh, C.-C., and Wang, W. (2020). Automatic speaker recognition with limited data. In Proceedings of the 13th International Conference on Web Search and Data Mining, pages 340–348.
  • Moon, T. K. (1996). The expectation-maximization algorithm. IEEE Signal Processing Magazine, 13(6):47–60.
  • Nagrani, A., Chung, J. S., Xie, W., and Zisserman, A. (2020). VoxCeleb: Large-scale speaker verification in the wild. Computer Speech & Language, 60:101027.
  • Lukic, Y., Vogt, C., Dürr, O., and Stadelmann, T. (2016). Speaker identification and clustering using convolutional neural networks. In 2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP), pages 1–6. IEEE. doi: 10.1109/MLSP.2016.7738816.
  • Panayotov, V., Chen, G., Povey, D., and Khudanpur, S. (2015). LibriSpeech: An ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206–5210. IEEE.
  • Reynolds, D. (1992). A Gaussian Mixture Modeling Approach to Text-independent Speaker Identification. College of Engineering, Georgia Institute of Technology.
  • Senoussaoui, M., Kenny, P., Brümmer, N., Villiers, E. d., and Dumouchel, P. (2011). Mixture of PLDA models in i-vector space for gender independent speaker recognition. In Twelfth Annual Conference of the International Speech Communication Association.
  • Simonyan, K. and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  • Soong, F. K., Rosenberg, A. E., Juang, B., and Rabiner, L. R. (1987). Report: A vector quantization approach to speaker recognition. AT&T Technical Journal, 66(2):14–26.
  • Villalba, J., Chen, N., Snyder, D., Garcia-Romero, D., McCree, A., Sell, G., Borgstrom, J., García-Perera, L. P., Richardson, F., Dehak, R., Torres-Carrasquillo, P. A., and Dehak, N. (2020). State-of-the-art speaker recognition with neural network embeddings in NIST SRE18 and Speakers in the Wild evaluations. Computer Speech & Language, 60:101026.
  • Wolf, J. J. (1969). Acoustic measurements for speaker recognition. The Journal of the Acoustical Society of America, 46(1A):89–90.
  • Zheng, R., Zhang, S., and Xu, B. (2004). Text-independent speaker identification using GMM-UBM and frame level likelihood normalization. In 2004 International Symposium on Chinese Spoken Language Processing, pages 289–292. IEEE.

Details

Primary Language: English
Subjects: Engineering
Section: Articles
Authors

Mandana Fasounaki 0000-0002-8332-7054

Emirhan Burak Yüce 0000-0002-4220-9940

Serkan Öncül 0000-0001-9302-8712

Gökhan İnce 0000-0002-0034-030X

Project Number: MOA-2019-42321
Publication Date: 31 July 2021
Published Issue: Year 2021, Issue: 26 - Ejosat Special Issue 2021 (HORA)

How to Cite

APA Fasounaki, M., Yüce, E. B., Öncül, S., & İnce, G. (2021). A Comparative Assessment of Text-independent Automatic Speaker Identification Methods Using Limited Data. Avrupa Bilim ve Teknoloji Dergisi(26), 217-222. https://doi.org/10.31590/ejosat.950218