Research Article

Investigation of the Relation between Emotional State and Acoustic Parameters in the Context of Language

Number: 14, December 31, 2018

Abstract

Acoustic analysis is the most basic method used for speech emotion recognition. Speech recordings are digitized with signal processing methods, and various acoustic features are then extracted through acoustic analysis. The relationship between acoustic features and emotion has been investigated in many studies; however, those studies have mostly focused on recognition accuracy or on the effects of emotions on acoustic features, while the effect of the spoken language on speech emotion recognition has been examined in only a limited number of studies. The purpose of this study is to investigate how the relationship between acoustic features and emotions varies with the spoken language. For this purpose, speech recordings of three emotional states (anger, fear, and neutral) in three different languages (English, German, and Italian) were used, and the change in acoustic features across languages was investigated statistically. According to the results, the effect of anger on acoustic features does not depend on the spoken language. For fear, the effect is highly similar between Italian and German but shows low similarity with English.
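To make the pipeline the abstract describes concrete, the sketch below shows one way such an analysis could look in Python: per-recording acoustic features are extracted and then compared between an emotional condition and neutral speech with a statistical test, which, repeated per language, would reveal any language dependence. The file names, the feature set (mean F0 and RMS energy), and the choice of the Mann–Whitney U test are illustrative assumptions for this sketch, not the study's actual method; the paper's tooling may differ (Praat and openSMILE are common choices for such feature extraction).

# Minimal sketch of a per-language emotion/acoustics comparison.
# Assumptions: librosa for feature extraction, mean F0 and mean RMS
# energy as the features, and hypothetical WAV file lists per emotion.
import numpy as np
import librosa
from scipy import stats

def acoustic_features(path):
    """Return (mean F0 in Hz, mean RMS energy) for one recording."""
    y, sr = librosa.load(path, sr=16000)
    # pyin marks unvoiced frames as NaN, so average voiced frames only.
    f0, _, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)
    return float(np.nanmean(f0)), float(librosa.feature.rms(y=y).mean())

# Hypothetical recordings for one language's corpus (names are placeholders).
anger_files = ["anger_01.wav", "anger_02.wav", "anger_03.wav"]
neutral_files = ["neutral_01.wav", "neutral_02.wav", "neutral_03.wav"]

anger = np.array([acoustic_features(p) for p in anger_files])
neutral = np.array([acoustic_features(p) for p in neutral_files])

# Test each feature for a shift between anger and neutral speech;
# running the same comparison per language shows whether the emotion's
# acoustic effect is language-dependent.
for i, name in enumerate(["mean F0", "mean RMS"]):
    u, p = stats.mannwhitneyu(anger[:, i], neutral[:, i])
    print(f"{name}: U={u:.1f}, p={p:.4f}")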


Details

Primary Language: English
Subjects: Engineering
Journal Section: Research Article
Publication Date: December 31, 2018
Submission Date: July 26, 2018
Acceptance Date: November 27, 2018
Published in Issue: Year 2018, Number 14

APA
Özseven, T. (2018). Investigation of the Relation between Emotional State and Acoustic Parameters in the Context of Language. Avrupa Bilim Ve Teknoloji Dergisi, 14, 241-244. https://doi.org/10.31590/ejosat.448095
