Research Article

Emotion Recognition From Turkish Music

Year 2020, Ejosat Special Issue 2020 (ICCEES), 6-12, 05.10.2020
https://doi.org/10.31590/ejosat.802169

Abstract

Recognizing emotion from music is still a very difficult task today. In this study, the general problems of emotion recognition from music were identified, and approaches were developed to overcome these problems and to increase classification success. For this purpose, emotion recognition from Turkish music was performed using various machine learning methods and features obtained from different toolboxes. BayesNet, Sequential Minimal Optimization (SMO), Decision Trees (J48), and Logistic Regression were used as classification methods. These methods were applied to a database constructed for emotion recognition, and their performance was measured. This database is the Turkish Emotional Music Database, consisting of 124 music excerpts of 30 seconds each. To obtain features from the music signals, various toolboxes were used that provide comprehensive solutions to the problems frequently encountered during feature extraction. These toolboxes make it possible to obtain a large number of different features. In addition, correlation-based feature selection (CFS) was used to remove unnecessary features and to increase classifier performance. Classification was performed with the machine learning methods, using the features obtained from each toolbox separately. Ten-fold cross-validation was applied to evaluate and compare the results of the classification stage, and accuracy was used to measure the success of the system. In the study, an emotion recognition accuracy of 94.35% was achieved by applying the feature selection method and using the BayesNet classifier, which yielded better results than all the other classifiers. Finally, the features obtained from all toolboxes were combined and the selection process was repeated on the combined set. After this step, the emotion recognition rate obtained with the BayesNet classifier increased by 1.6 percentage points to 95.96%.
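The pipeline the abstract describes (toolbox features, CFS selection, classification, 10-fold cross-validation) maps directly onto the WEKA API cited in the references (Hall et al., 2009). The Java sketch below is a minimal illustration under stated assumptions, not the authors' actual code: it presumes the toolbox features have already been exported to a hypothetical ARFF file, turkish_emotion_features.arff, with the emotion label as the last attribute.

import java.util.Random;
import weka.attributeSelection.BestFirst;
import weka.attributeSelection.CfsSubsetEval;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.BayesNet;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.supervised.attribute.AttributeSelection;

public class EmotionRecognitionSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical ARFF export of the toolbox features for the 124 excerpts;
        // the last attribute is assumed to be the emotion class label.
        Instances data = new DataSource("turkish_emotion_features.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        // Correlation-based feature selection (CfsSubsetEval) with best-first
        // search, applied as a supervised filter to drop redundant features.
        AttributeSelection cfs = new AttributeSelection();
        cfs.setEvaluator(new CfsSubsetEval());
        cfs.setSearch(new BestFirst());
        cfs.setInputFormat(data);
        Instances reduced = Filter.useFilter(data, cfs);

        // Evaluate a BayesNet classifier with 10-fold cross-validation and
        // report accuracy, the measure used in the study.
        Evaluation eval = new Evaluation(reduced);
        eval.crossValidateModel(new BayesNet(), reduced, 10, new Random(1));
        System.out.printf("Accuracy: %.2f%%%n", eval.pctCorrect());
    }
}

Note that, as in the procedure described in the abstract, CFS is run once on the full feature set before cross-validation; a stricter protocol would repeat the selection inside each fold.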

References

  • Aljanaki, A., Yang, Y. H., & Soleymani, M. (2017). Developing a benchmark for emotional analysis of music. PLoS ONE, 12(3), 1–22. doi: 10.1371/journal.pone.0173392
  • Benito-Gorron, D. de, Lozano-Diez, A., Toledano, D. T., & Gonzalez-Rodriguez, J. (2019). Exploring convolutional, recurrent, and hybrid deep neural networks for speech and music detection in a large audio dataset. EURASIP Journal on Audio, Speech, and Music Processing, (1), 1–18.
  • Eyben, F., & Schuller, B. (2015). openSMILE – The Munich versatile and fast open-source audio feature extractor. ACM SIGMultimedia Records, 6(4), 4–13. doi: 10.1145/2729095.2729097
  • Feng, Y., Zhuang, Y., & Pan, Y. (2003). Popular music retrieval by detecting mood. In: SIGIR Forum (ACM Special Interest Group on Information Retrieval), 375–376.
  • Grekow, J. (2015). Audio features dedicated to the detection of four basic emotions. Computer Information Systems and Industrial Management: CISIM 2015: 14th IFIP TC8 International Conference, September 24-26, Warsaw, Poland.
  • Hall, M., & Smith, L. (1997). Feature subset selection: A correlation-based filter approach. Proceedings of the 4th International Conference on Neural Information Processing and Intelligent Information Systems, New Zealand, 855–858.
  • Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., & Witten, I. (2009). The WEKA data mining software: An update. SIGKDD Explorations, 11(1), 10-18. doi: 10.1145/1656274.1656278
  • Hevner, K. (1936). Experimental studies of the elements of expression in music. The American Journal of Psychology, 48(2), 246-268.
  • Kim, Y. E., Schmidt, E. M., Migneco, R., Morton, B. G., Richardson, P., Scott, J., Speck, J. A., & Turnbull, D. (2010). Music emotion recognition: A state of the art review. Proceedings of the 11th International Society for Music Information Retrieval Conference, August 9-13, Utrecht, Netherlands.
  • Lartillot, O., & Toiviainen, P. (2007). MIR in Matlab (II): A toolbox for musical feature extraction from audio. Proceedings of the 8th International Conference on Music Information Retrieval, September 23-27, Vienna, Austria, 127–130.
  • Le Cessie, S., & van Houwelingen, J. C. (1992). Ridge estimators in logistic regression. Applied Statistics, 41(1), 191-201.
  • Li, T., & Ogihara, M. (2003). Detecting emotion in music. Proceedings of the International Symposium on Music Information Retrieval, (3), 239-240.
  • Mathieu, B., Essid, S., Fillon, T., Prado, J., & Richard, G. (2010). YAAFE, an easy to use and efficient audio feature extraction software. Proceedings of the 11th International Society for Music Information Retrieval Conference, August 9-13, Utrecht, Netherlands, 441–446.
  • McKay, C. (2009). jAudio: Towards a standardized extensible audio music feature extraction system. Course Paper, McGill University, Canada.
  • Panda, R., & Paiva, R. P. (2012). Music emotion classification: Dataset acquisition and comparative analysis. Proceedings of the 15th International Conference on Digital Audio Effects, September 17-21, York, UK.
  • Pearl, J. (1985). Bayesian networks: A model of self-activated memory for evidential reasoning. Proceedings of the Seventh Conference of the Cognitive Science Society, California, USA.
  • Platt, J. (1998). Sequential minimal optimization: A fast algorithm for training support vector machines. Microsoft Research Technical Report MSR-TR-98-14.
  • Quinlan, J. R. (1993). C4.5: Programs for machine learning. Morgan Kaufmann Publishers.
  • Rocha, B., Panda, R., & Paiva, R. P. (2013). Music emotion recognition: The importance of melodic features. 6th International Workshop on Music and Machine Learning, in conjunction with the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, September, Prague, Czech Republic.
  • Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161–1178.
  • Sarkar, R., Choudhury, S., Dutta, S., Roy, A., & Saha, S. K. (2019). Recognition of emotion in music based on deep convolutional neural network. Multimedia Tools and Applications, 79, 765–783.
  • Song, Y., Dixon, S., & Pearce, M. (2012). Evaluation of musical features for emotion classification. Proceedings of the 13th International Society for Music Information Retrieval Conference, October, Porto, Portugal.
  • Tzanetakis, G., & Cook, P. (1999). MARSYAS: A framework for audio analysis. Organised Sound, 4(3), 169-175.
  • Yang, Y. H., Su, Y. F., Lin, Y. C., & Chen, H. H. (2007). Music emotion recognition: The role of individuality. In Proceedings of the ACM International Workshop on Human-Centered Multimedia, 13-21.
There are 24 references in total.

Details

Primary Language: Turkish
Subjects: Engineering
Section: Articles
Authors

Serhat Hızlısoy 0000-0001-8440-5539

Zekeriya Tüfekci 0000-0001-7835-2741

Publication Date: October 5, 2020
Published in Issue: Year 2020, Ejosat Special Issue 2020 (ICCEES)

Cite

APA: Hızlısoy, S., & Tüfekci, Z. (2020). Türkçe Müzikten Duygu Tanıma. Avrupa Bilim ve Teknoloji Dergisi, 6-12. https://doi.org/10.31590/ejosat.802169