Research Article

Türk Müziği Uyaranları Kullanılarak İnsan Duygularının Makine Öğrenmesi Yöntemi İle Tanınması (Recognition of Human Emotions Using Turkish Music Stimuli via Machine Learning)

Year 2020, Volume: 8 Issue: 2, 458 - 474, 28.06.2020
https://doi.org/10.29109/gujsc.687199

Abstract

Music is an audio signal composed of a wide variety of complex components that vary over time and frequency. It is widely accepted in the literature that music evokes a broad range of emotions in the listener. A listener's report that a piece sounds sad or happy, however, may not reflect the emotion actually felt; the electrical fluctuations that occur in the brain while listening can reveal the structure of the perceived emotion more accurately. Detecting human emotions from brain signals has therefore become a current research topic in many fields. This study addresses the problem of recognizing human emotions while listening to music. Turkish music pieces of different genres were played to participants, and the electrical waves produced in their brains were analyzed to recognize happy, sad, relaxing, and tense emotional states. Participants were asked to listen to the pieces in a noise-free environment. For emotion classification, electroencephalography (EEG) signals were first recorded from different channels, and specific features were extracted from these signals. The extracted features were classified using the Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Artificial Neural Network (ANN) machine learning algorithms. Among the algorithms used to train the dataset and classify the emotions, the best accuracy was obtained with the ANN. The findings indicate that the method performs well.
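The classification step the abstract describes (per-channel EEG features fed to a classifier such as KNN) can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the feature vectors and labels below are made-up placeholders standing in for extracted EEG features, and only the KNN classifier is shown.

```python
import math
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Classify feature vector x by majority vote among its k nearest
    training vectors, using Euclidean distance."""
    dists = sorted((math.dist(t, x), lab) for t, lab in zip(train, labels))
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

# Toy stand-in features: each row imitates a 2-dimensional feature
# extracted from an EEG recording (values are illustrative only,
# not data from the study).
train = [(0.2, 0.9), (0.1, 0.8), (0.9, 0.1), (0.8, 0.2)]
labels = ["happy", "happy", "sad", "sad"]

print(knn_predict(train, labels, (0.15, 0.85)))  # → happy
```

In the study's setup the same interface would be used for each emotion class (happy, sad, relaxing, tense), with the real feature vectors in place of the toy rows; SVM or ANN classifiers would slot into the same train/predict role.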

References

  • [1] C. C. Pratt, Music as the language of emotion. Oxford, England: The Library of Congress, 1952.
  • [2] R.-F. Day, C.-H. Lin, W.-H. Huang, and S.-H. Chuang, “Effects of music tempo and task difficulty on multi-attribute decision-making: An eye-tracking approach,” Comput. Human Behav., vol. 25, no. 1, pp. 130–143, Jan. 2009.
  • [3] G. Varotto, P. Fazio, D. R. Sebastiano, G. Avanzini, S. Franceschetti, and F. Panzica, “Music and emotion: An EEG connectivity study in patients with disorders of consciousness,” in 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2012, pp. 5206–5209.
  • [4] D. Huron, “Is Music an Evolutionary Adaptation?,” Ann. N. Y. Acad. Sci., vol. 930, no. 1, pp. 43–61, 2001.
  • [5] I. Peretz and R. J. Zatorre, “Brain Organization for Music Processing,” Annu. Rev. Psychol., vol. 56, no. 1, pp. 89–114, 2005.
  • [6] S. M. Alarcão and M. J. Fonseca, “Emotions Recognition Using EEG Signals: A Survey,” IEEE Trans. Affect. Comput., vol. 10, no. 3, pp. 374–393, 2019.
  • [7] P. J. Lang, “The emotion probe: Studies of motivation and attention.,” American Psychologist, vol. 50, no. 5. American Psychological Association, US, pp. 372–385, 1995.
  • [8] R. W. Picard, Affective Computing. The MIT Press, 2000.
  • [9] L. Shu et al., “A Review of Emotion Recognition Using Physiological Signals,” Sensors (Basel)., vol. 18, no. 7, p. 2074, Jun. 2018.
  • [10] A. M. Bhatti, M. Majid, S. M. Anwar, and B. Khan, “Human emotion recognition and analysis in response to audio music using brain signals,” Comput. Human Behav., vol. 65, pp. 267–275, Dec. 2016.
  • [11] F. Zhang, H. Meng, and M. Li, “Emotion extraction and recognition from music,” 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD). IEEE, 2016.
  • [12] A. Goshvarpour and A. Goshvarpour, “EEG spectral powers and source localization in depressing, sad, and fun music videos focusing on gender differences,” Cogn. Neurodyn., vol. 13, no. 2, pp. 161–173, 2018.
  • [13] A. M. Bhatti, M. Majid, S. M. Anwar, and B. Khan, “Human emotion recognition and analysis in response to audio music using brain signals,” Comput. Human Behav., vol. 65, pp. 267–275, Dec. 2016.
  • [14] C. Shahnaz, Shoaib-Bin-Masud, and S. M. S. Hasan, “Emotion recognition based on wavelet analysis of Empirical Mode Decomposed EEG signals responsive to music videos,” 2016 IEEE Region 10 Conference (TENCON). IEEE, 2016.
  • [15] Y. Liu et al., “What Strikes the Strings of Your Heart?–Multi-Label Dimensionality Reduction for Music Emotion Analysis via Brain Imaging,” IEEE Trans. Auton. Ment. Dev., vol. 7, no. 3, pp. 176–188, 2015.
  • [16] R. Nawaz, H. Nisar, and Y. V. Voon, “The Effect of Music on Human Brain; Frequency Domain and Time Series Analysis Using Electroencephalogram,” IEEE Access, vol. 6, pp. 45191–45205, 2018.
  • [17] J.-L. Hsu, Y.-L. Zhen, T.-C. Lin, and Y.-S. Chiu, “Affective content analysis of music emotion through EEG,” Multimed. Syst., vol. 24, no. 2, pp. 195–210, 2017.
  • [18] G. Balasubramanian, A. Kanagasabai, J. Mohan, and N. P. G. Seshadri, “Music induced emotion using wavelet packet decomposition—An EEG study,” Biomed. Signal Process. Control, vol. 42, pp. 115–128, 2018.
  • [19] M. Yanagimoto and C. Sugimoto, “Recognition of persisting emotional valence from EEG using convolutional neural networks,” 2016 IEEE 9th International Workshop on Computational Intelligence and Applications (IWCIA). IEEE, 2016.
  • [20] C.-Y. Liao, R.-C. Chen, and S.-K. Tai, “Emotion stress detection using EEG signal and deep learning technologies,” 2018 IEEE International Conference on Applied System Invention (ICASI). IEEE, 2018.
  • [21] S. Vaid, P. Singh, and C. Kaur, “EEG Signal Analysis for BCI Interface: A Review,” 2015 Fifth International Conference on Advanced Computing & Communication Technologies. IEEE, 2015.
  • [22] Siuly, “Analysis and classification of EEG signals,” University of Southern Queensland, 2012.
  • [23] B. Farnsworth, “What is EEG (Electroencephalography) and How Does it Work?” [Online]. Available: https://imotions.com/blog/what-is-eeg.
  • [24] S. D. Puthankattil, P. Joseph, U. R. Acharya, and C. Lim, “EEG signal analysis: a survey,” J. Med. Syst., vol. 34, pp. 195–212, Apr. 2010.
  • [25] H. H. Jasper, “The Ten-Twenty Electrode System of the International Federation,” Electroencephalogr. Clin. Neurophysiol., vol. 10, pp. 371–375, 1958.
  • [26] B. S. Atal, “Automatic recognition of speakers from their voices,” Proc. IEEE, vol. 64, no. 4, pp. 460–475, 1976.
  • [27] S. Gupta, J. Jaafar, W. F. Wan Ahmad, and A. Bansal, “Feature Extraction Using MFCC,” Signal Image Process. An Int. J., vol. 4, pp. 101–108, Aug. 2013.
  • [28] S. I.-J. Chien, Y. Ding, and C. Wei, “Dynamic Bus Arrival Time Prediction with Artificial Neural Networks,” J. Transp. Eng., vol. 128, no. 5, pp. 429–438, 2002.
  • [29] S. M. J. Pappu and S. N. Gummadi, “Artificial neural network and regression coupled genetic algorithm to optimize parameters for enhanced xylitol production by Debaryomyces nepalensis in bioreactor,” Biochem. Eng. J., vol. 120, pp. 136–145, 2017.
  • [30] Q. Yang, S. Le Blond, R. Aggarwal, Y. Wang, and J. Li, “New ANN method for multi-terminal HVDC protection relaying,” Electr. Power Syst. Res., vol. 148, pp. 192–201, 2017.
  • [31] D. Niebur and A. J. Germond, “Power flow classification for static security assessment,” Proceedings of the First International Forum on Applications of Neural Networks to Power Systems. IEEE.
  • [32] B. E. Boser, I. M. Guyon, and V. N. Vapnik, “A training algorithm for optimal margin classifiers,” in Proceedings of the Fifth Annual Workshop on Computational Learning Theory (COLT), 1992, pp. 144–152.
  • [33] A. Widodo and B.-S. Yang, “Support vector machine in machine condition monitoring and fault diagnosis,” Mech. Syst. Signal Process., vol. 21, no. 6, pp. 2560–2574, 2007.
  • [34] E. Kabir, Siuly, and Y. Zhang, “Epileptic seizure detection from EEG signals using logistic model trees,” Brain Informatics, vol. 3, no. 2, pp. 93–100, 2016.
  • [35] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. Cambridge, MA: MIT Press, 2001.
  • [36] Y. Zhang, X. Ji, and S. Zhang, “An approach to EEG-based emotion recognition using combined feature extraction method,” Neurosci. Lett., vol. 633, pp. 152–157, 2016.
  • [37] H. Jégou, M. Douze, and C. Schmid, “Product Quantization for Nearest Neighbor Search,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 1, pp. 117–128, 2011.
  • [38] T. Cover and P. Hart, “Nearest neighbor pattern classification,” IEEE Trans. Inf. Theory, vol. 13, no. 1, pp. 21–27, 1967.
  • [39] C. Li et al., “Using the K-Nearest Neighbor Algorithm for the Classification of Lymph Node Metastasis in Gastric Cancer,” Comput. Math. Methods Med., vol. 2012, pp. 1–11, 2012.
There are 39 citations in total.

Details

Primary Language Turkish
Subjects Engineering
Journal Section Tasarım ve Teknoloji (Design and Technology)
Authors

Mehmet Bilal Er 0000-0002-2074-1776

Harun Çiğ 0000-0003-0419-9531

Publication Date June 28, 2020
Submission Date February 10, 2020
Published in Issue Year 2020 Volume: 8 Issue: 2

Cite

APA Er, M. B., & Çiğ, H. (2020). Türk Müziği Uyaranları Kullanılarak İnsan Duygularının Makine Öğrenmesi Yöntemi İle Tanınması. Gazi Üniversitesi Fen Bilimleri Dergisi Part C: Tasarım Ve Teknoloji, 8(2), 458-474. https://doi.org/10.29109/gujsc.687199

    e-ISSN:2147-9526