<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.4 20241031//EN"
        "https://jats.nlm.nih.gov/publishing/1.4/JATS-journalpublishing1-4.dtd">
<article  article-type="research-article"        dtd-version="1.4">
            <front>

                <journal-meta>
                                    <journal-id></journal-id>
            <journal-title-group>
                                                                                    <journal-title>Balkan Journal of Electrical and Computer Engineering</journal-title>
            </journal-title-group>
                            <issn pub-type="ppub">2147-284X</issn>
                                        <issn pub-type="epub">2147-284X</issn>
                                                                                            <publisher>
                    <publisher-name>MUSA YILMAZ</publisher-name>
                </publisher>
                    </journal-meta>
                <article-meta>
                                        <article-id pub-id-type="doi">10.17694/bajece.419557</article-id>
                                                                <article-categories>
                                            <subj-group  xml:lang="en">
                                                            <subject>Engineering</subject>
                                                    </subj-group>
                                            <subj-group  xml:lang="tr">
                                                            <subject>Mühendislik</subject>
                                                    </subj-group>
                                    </article-categories>
                                                                                                                                                        <title-group>
                                                                                                                                                            <article-title>Speech Emotion Classification and Recognition with different methods for Turkish Language</article-title>
                                                                                                    </title-group>
            
                                                    <contrib-group content-type="authors">
                                                                        <contrib contrib-type="author">
                                                                <name>
                                    <surname>Bakır</surname>
                                    <given-names>Cigdem</given-names>
                                </name>
                                                            </contrib>
                                                    <contrib contrib-type="author">
                                                                <name>
                                    <surname>Yuzkat</surname>
                                    <given-names>Mecit</given-names>
                                </name>
                                                            </contrib>
                                                                                </contrib-group>
                        
                                        <pub-date pub-type="pub" iso-8601-date="20180430">
                    <day>30</day>
                    <month>04</month>
                    <year>2018</year>
                </pub-date>
                                        <volume>6</volume>
                                        <issue>2</issue>
                                        <fpage>122</fpage>
                                        <lpage>128</lpage>
                        
                        <history>
                                    <date date-type="received" iso-8601-date="20150822">
                        <day>22</day>
                        <month>08</month>
                        <year>2015</year>
                    </date>
                                                    <date date-type="accepted" iso-8601-date="20171116">
                        <day>16</day>
                        <month>11</month>
                        <year>2017</year>
                    </date>
                            </history>
                                        <permissions>
                    <copyright-statement>Copyright © 2013, Balkan Journal of Electrical and Computer Engineering</copyright-statement>
                    <copyright-year>2013</copyright-year>
                    <copyright-holder>Balkan Journal of Electrical and Computer Engineering</copyright-holder>
                </permissions>
            
                                                                                                                        <abstract><p>In several applications, emotion recognition from the speech signal has been a research topic for many years, and many systems have been developed to determine emotions from the speech signal. To solve the speaker emotion recognition problem, a hybrid model is proposed to classify five speech emotions: anger, sadness, fear, happiness and neutral. The aim of this study was to realize an automatic voice and speech emotion recognition system using a hybrid model, taking Turkish sound forms and properties into consideration. Approximately 3000 Turkish voice samples of words and clauses with differing lengths were collected from 25 males and 25 females; an authentic and unique Turkish database was thus used in this study. Features of these voice samples were obtained using Mel Frequency Cepstral Coefficients (MFCC) and Mel Frequency Discrete Wavelet Coefficients (MFDWC). Moreover, spectral features of these voice samples were obtained using a Support Vector Machine (SVM). The resulting feature vectors were trained with methods such as the Gaussian Mixture Model (GMM), Artificial Neural Network (ANN), Dynamic Time Warping (DTW), Hidden Markov Model (HMM) and a hybrid model (GMM combined with SVM). The hybrid model was realized by combining SVM and GMM: in the first stage, subsets of the spectral feature vectors were obtained with the SVM; in the second phase, training and test sets were formed from these spectral features. In the test phase, the owner of a given voice sample was identified taking the trained voice samples into consideration. Results and performances of the classification algorithms employed in the study are also demonstrated in a comparative manner.</p></abstract>
                                                            
            
                                                                                        <kwd-group>
                                                    <kwd>MFCC</kwd>
                                                    <kwd>MFDWC</kwd>
                                                    <kwd>emotion</kwd>
                                                    <kwd>HMM</kwd>
                                                    <kwd>hybrid model</kwd>
                                            </kwd-group>
                            
                                                                                                                                                    </article-meta>
    </front>
    <back>
                            <ref-list>
                                    <ref id="ref1">
                        <label>1</label>
                        <mixed-citation publication-type="journal">Mohammad Shami, Werner Verhelst, “An evaluation of the robustness of existing supervised machine learning approaches to the classification of emotions in speech”, Speech Communication, 2007, 49(3), p.201-212.</mixed-citation>
                    </ref>
                                    <ref id="ref2">
                        <label>2</label>
                        <mixed-citation publication-type="journal">Lijiang Chen, Xia Mao, Yuli Xue, Lee Lung Cheng, “Speech emotion recognition: Features and classification models”, Digital Signal Processing, 2012, 22(6), p.1154-1160.</mixed-citation>
                    </ref>
                                    <ref id="ref3">
                        <label>3</label>
                        <mixed-citation publication-type="journal">Ling He, Margaret Lech, Namunu C. Maddage, Nicholas B. Allen, “Study of empirical mode decomposition and spectral analysis for stress and emotion classification in natural speech”, Biomedical Signal Processing and Control, 2011, 6(2), p.139-146.</mixed-citation>
                    </ref>
                                    <ref id="ref4">
                        <label>4</label>
                        <mixed-citation publication-type="confproc">Tim Polzehl, Shiva Sundaram, Hamed Ketabdar, Michael Wagner and Florian Metze, “Emotion Classification in Children’s Speech Using Fusion of Acoustic and Linguistic Features”, Interspeech 2009: 10th Annual Conference of the International Speech Communication Association, 2009.</mixed-citation>
                    </ref>
                                    <ref id="ref5">
                        <label>5</label>
                        <mixed-citation publication-type="confproc">Halicioglu, Tin Lay Nwe, Foo Say Wei and Liyanage C. De Silva, “Speech Based Emotion Classification”, TENCON 2001: Proceedings of the IEEE Region 10 International Conference on Electrical and Electronic Technology, 2001.</mixed-citation>
                    </ref>
                                    <ref id="ref6">
                        <label>6</label>
                        <mixed-citation publication-type="journal">Jasmine Bhaskar, Sruthi K. and Prema Nedungadi, “Hybrid Approach for Emotion Classification of Audio Conversation Based on Text and Speech Mining”, Procedia Computer Science, 2015, 46, p.635-643.</mixed-citation>
                    </ref>
                                    <ref id="ref7">
                        <label>7</label>
                        <mixed-citation publication-type="confproc">Jinkyu Lee and Ivan Tashev, “High-level Feature Representation using Recurrent Neural Network for Speech Emotion Recognition”, Interspeech 2015, 2015.</mixed-citation>
                    </ref>
                                    <ref id="ref8">
                        <label>8</label>
                        <mixed-citation publication-type="journal">S. Oh and C. Suen, “A class-modular feed forward neural network for handwriting recognition”, Pattern Recognition, 2002, 35(1), p.229-244.</mixed-citation>
                    </ref>
                                    <ref id="ref9">
                        <label>9</label>
                        <mixed-citation publication-type="journal">Dimitrios Ververidis and Constantine Kotropoulos, “Emotional speech recognition: Resources, features, and methods”, Speech Communication, 2006, 48(9), p.1162-1181.</mixed-citation>
                    </ref>
                                    <ref id="ref10">
                        <label>10</label>
                        <mixed-citation publication-type="journal">D.A. Reynolds and R.C. Rose, “Robust Text-Independent Speaker Identification Using Gaussian Mixture Speaker Models”, IEEE Trans. Speech Audio Proc., 1995, 3, p.72-83.</mixed-citation>
                    </ref>
                                    <ref id="ref11">
                        <label>11</label>
                        <mixed-citation publication-type="journal">Seok Oh and Ching Suen, “A class-modular feed forward neural network for handwriting recognition”, Pattern Recognition, 2002, 35(1), p.229-244.</mixed-citation>
                    </ref>
                                    <ref id="ref12">
                        <label>12</label>
                        <mixed-citation publication-type="confproc">Lihang Li, Dongqing Chen, Sarang Lakare et al., “Image segmentation approach to extract colon lumen through colonic material tagging and hidden Markov random field model for virtual colonoscopy”, Medical Imaging, 2002.</mixed-citation>
                    </ref>
                                    <ref id="ref13">
                        <label>13</label>
                        <mixed-citation publication-type="journal">Edmondo Trentin and Marco Gori, “A survey of hybrid ANN/HMM models for automatic speech recognition”, Neurocomputing, 2001, 37, p.91-126.</mixed-citation>
                    </ref>
                                    <ref id="ref14">
                        <label>14</label>
                        <mixed-citation publication-type="journal">Lindasalwa Muda and Mumtaj Begam, “Voice Recognition Algorithms using Mel Frequency Cepstral Coefficient (MFCC) and Dynamic Time Warping (DTW) Techniques”, Journal of Computing, 2010, 2(3), p.138-143, ISSN 2151-9617.</mixed-citation>
                    </ref>
                                    <ref id="ref15">
                        <label>15</label>
                        <mixed-citation publication-type="confproc">Hao Hu, Ming-Xing Xu and Wei Wu, “GMM Supervector Based SVM with Spectral Features for Speech Emotion Recognition”, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2007.</mixed-citation>
                    </ref>
                                    <ref id="ref16">
                        <label>16</label>
                        <mixed-citation publication-type="journal">Cigdem Bakir, “Automatic Speaker Gender Identification for the German Language”, Balkan Journal of Electrical &amp; Computer Engineering, 2015, 4(2), p.79-83.</mixed-citation>
                    </ref>
                                    <ref id="ref17">
                        <label>17</label>
                        <mixed-citation publication-type="confproc">Cigdem Bakir, “Automatic Voice and Speech Recognition System for the German Language”, 1st International Conference on Engineering Technology and Applied Sciences, 2016, p.131-134.</mixed-citation>
                    </ref>
                                    <ref id="ref18">
                        <label>18</label>
                        <mixed-citation publication-type="journal">Lindasalwa Muda, Mumtaj Begam and I. Elamvazuthi, “Voice Recognition Algorithms using Mel Frequency Cepstral Coefficient (MFCC) and Dynamic Time Warping (DTW) Techniques”, Journal of Computing, 2010, 2(3), p.138-143, ISSN 2151-9617.</mixed-citation>
                    </ref>
                                    <ref id="ref19">
                        <label>19</label>
                        <mixed-citation publication-type="confproc">M., Fahid M. and M.A, “Robust Voice conversion systems using MFDWC”, 2008 International Symposium on Telecommunications, 2008, p.778-781.</mixed-citation>
                    </ref>
                            </ref-list>
                    </back>
    </article>
