Research Article


Music emotion classification for Turkish songs using lyrics

Year 2018, Volume: 24, Issue: 2, 292-301, 30.04.2018

Abstract

Music has grown into an important part of people's daily lives. In the digital age, where large music collections are created and made easily accessible every day, people spend ever more time on activities that involve music. Consequently, music retrieval has shifted from catalogue-based searches to searches based on emotion tags, making music information access easier and more effective. This study aims to build a model that automatically recognizes the emotion perceived in a song from its lyrics, using machine learning algorithms for text classification. For this purpose, 300 songs were selected and annotated by human taggers according to their perceived emotions. After text preprocessing, in which stemming of the Turkish words is an essential step, unigram, bigram, and trigram word features were extracted from the song lyrics. Term-by-document matrices were then created, with term frequencies and tf-idf scores used as the index representations. These matrices were fed into five different classification algorithms to find the combination that achieves the highest accuracy, with recall and precision used as comparison metrics. The best results were obtained with the Multinomial Naïve Bayes classifier on a term-by-document matrix of unigram features, stemmed with the Zemberek long-stemming method and indexed by term frequency. For this combination, the obtained recall and precision values are 43.7 and 46.9, respectively.
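The pipeline described in the abstract — n-gram term-frequency features fed to a Multinomial Naïve Bayes classifier — can be sketched with scikit-learn, which the authors cite. This is an illustrative reimplementation under assumptions, not the authors' code: the lyrics and emotion labels below are toy placeholders, and Zemberek long-stemming is assumed to have been applied to the lyrics beforehand.

```python
# Sketch of the best-performing combination from the abstract:
# unigram term-frequency features + Multinomial Naive Bayes.
# Lyrics are hypothetical placeholders standing in for Zemberek-stemmed text.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import precision_score, recall_score

lyrics = [
    "seni seviyorum ask guzel",     # placeholder "happy" lyrics
    "huzun yalniz gece karanlik",   # placeholder "sad" lyrics
    "mutlu gun gunes dans",
    "ayrilik aci gozyasi huzun",
]
labels = ["happy", "sad", "happy", "sad"]  # perceived-emotion annotations

# Term-by-document matrix indexed by raw term frequency (unigrams only);
# ngram_range=(1, 2) or (1, 3) would add the bigram/trigram features,
# and TfidfVectorizer would give the tf-idf index representation instead.
vectorizer = CountVectorizer(ngram_range=(1, 1))
X = vectorizer.fit_transform(lyrics)

clf = MultinomialNB().fit(X, labels)
pred = clf.predict(X)

# Recall and precision are the comparison metrics used in the study.
print(recall_score(labels, pred, pos_label="happy"))
print(precision_score(labels, pred, pos_label="happy"))
```

In the study itself, the matrices were built from 300 annotated Turkish songs and evaluated across five classifiers; this sketch only shows the shape of the winning configuration.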

References

  • Yang YH, Chen HH. “Machine recognition of music emotion: A review”. ACM Transactions on Intelligent Systems and Technology, 3(3), 1-30, 2012.
  • Casey MA, Veltkamp R, Goto M, Leman M, Rhodes C, Slaney M. “Content-based music information retrieval: Current directions and future challenges”. Proceedings of the IEEE, 96(4), 668-696, 2008.
  • Song Y, Dixon S, Pearce M. “A survey of music recommendation systems and future perspectives”. 9th International Symposium on Computer Music Modeling and Retrieval, London, UK, 19-22 June 2012.
  • Lehtiniemi A, Ojala J. “Evaluating MoodPic-A concept for collaborative mood music playlist creation”. 17th International Conference on Information Visualisation (IV), London, UK, 15-18 July 2013.
  • Dulačka P, Bieliková M. “Validation of music metadata via game with a purpose”. 8th International Conference on Semantic Systems, Kristiansand, Norway, 03-06 September 2012.
  • Okada K, Karlsson BF, Sardinha L, Noleto T. “ContextPlayer: Learning contextual music preferences for situational recommendations”. Asia 2013 Symposium on Mobile Graphics and Interactive Applications, Hong Kong, China, 19-22 November 2013.
  • Lamere P. “Social tagging and music information retrieval”. Journal of New Music Research, 37(2), 101-114, 2008.
  • West K, Cox S, Lamere P. “Incorporating machine-learning into music similarity estimation”. 1st ACM Workshop on Audio and Music Computing Multimedia, Santa Barbara, CA, USA, 23-27 October 2006.
  • Liu JY, Liu SY, Yang YH. “LJ2M dataset: Toward better understanding of music listening behavior and user mood”. International Conference on Multimedia and Expo (ICME), Chengdu, China, 14-18 July 2014.
  • Xia M, Huang Y, Duan W, Whinston A. “Ballot box communication in online communities”. Communications of the ACM, 52(9), 249-254, 2009.
  • Urbano J, Morato J, Marrero M, Martin D. “Crowdsourcing preference judgments for evaluation of music similarity tasks”. ACM SIGIR Workshop on Crowdsourcing for Search Evaluation, Geneva, Switzerland, 23 July 2010.
  • Feng Y, Zhuang Y, Pan Y. “Popular music retrieval by detecting mood”. 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Toronto, Canada, 28 July-1 August 2003.
  • Echonest. “Acoustic Attributes”. http://developer.echonest.com/acoustic-attributes.html (01.11.2014).
  • The MIR Group of the Vienna University of Technology. “Audio Feature Extraction Webservice”. http://www.ifs.tuwien.ac.at/mir/webservice (11.01.2015).
  • Corthaut N, Govaerts S, Verbert K, Duval E. “Connecting the dots: Music metadata generation, schemas and applications”. 9th International Society for Music Information Retrieval Conference, Philadelphia, USA, 14-18 September 2008.
  • Hauger D, Schedl M, Košir A, Tkalcic M. “The million musical tweets dataset: What can we learn from microblogs”. 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 4-8 November 2013.
  • Hu X, Downie JS. “When lyrics outperform audio for music mood classification: A feature analysis”. International Society for Music Information Retrieval Conference (ISMIR), Utrecht, Netherlands, 9-13 August 2010.
  • Cho YH, Lee KJ. “Automatic affect recognition using natural language processing techniques and manually built affect lexicon”. IEICE Transactions on Information and Systems, 89(12), 2964-2971, 2006.
  • Logan B, Salomon A. “Music similarity function based on signal analysis”. International Conference on Multimedia and Expo (ICME), Tokyo, Japan, 22-25 August 2001.
  • Fell M, Sporleder C. “Lyrics-based analysis and classification of music”. 25th International Conference on Computational Linguistics, Dublin, Ireland, 23-29 August 2014.
  • Kim M, Kwon HC. “Lyrics-based emotion classification using feature selection by partial syntactic analysis”. 23rd IEEE International Conference on Tools with Artificial Intelligence, Boca Raton, Florida USA, 9-9 November 2011.
  • Howard S, Silla Jr CN, Johnson CG. “Automatic lyrics-based music genre classification in a multilingual setting”. Thirteenth Brazilian Symposium on Computer Music, Vitória, Brazil, 31 August-3 September 2011.
  • Türkmenoglu C, Tantug AC. “Sentiment analysis in Turkish media”. International Conference on Machine Learning (ICML), Beijing, China, 21-26 June 2014.
  • Vural AG, Cambazoglu BB, Senkul P, Tokgoz ZO. A Framework for Sentiment Analysis in Turkish: Application to Polarity Detection of Movie Reviews in Turkish. Editors: Gelenbe E, Lent R. Computer and Information Sciences III, 437-445, London, UK, Springer, 2013.
  • Dehkharghani R, Saygin Y, Yanikoglu B, Oflazer K. “SentiTurkNet: A Turkish polarity lexicon for sentiment analysis”. Language Resources and Evaluation, 50(3), 667-685, 2016.
  • Tunalı V, Bilgin TT. “Türkçe metinlerin kümelenmesinde farklı kök bulma yöntemlerinin etkisinin araştırılması”. Elektrik, Elektronik ve Bilgisayar Mühendisliği Sempozyumu (LECO 2012), Bursa, Turkey, 29 November-01 December 2012.
  • Aizawa A. “An information-theoretic perspective of tf–idf measures”. Information Processing & Management, 39(1), 45-65, 2003.
  • Tzanetakis G, Cook P. “Musical genre classification of audio signals”. IEEE Transactions on Speech and Audio Processing, 10(5), 293-302, 2002.
  • Hamel P, Eck D. “Learning features from music audio with deep belief networks”. 11th International Society for Music Information Retrieval Conference, Utrecht, Netherlands, 9-13 August 2010.
  • McKay C, Fujinaga I. “Automatic genre classification using large high-level musical feature sets”. International Society for Music Information Retrieval (ISMIR), Barcelona, Spain, 10-15 October 2004.
  • Barthet M, Fazekas G, Sandler M. “Music emotion recognition: from content-to context-based models”. International Symposium on Computer Music Modeling and Retrieval, London, UK, 19-22 June 2012.
  • Laurier C, Meyers O, Serrà J, Blech M, Herrera P, Serra X. “Indexing music by mood: Design and integration of an automatic content-based annotator”. Multimedia Tools and Applications, 48(1), 161-184, 2010.
  • Bischoff K, Firan CS, Paiu R, Nejdl W, Laurier C, Sordo M. “Music mood and theme classification-a hybrid approach”. International Society for Music Information Retrieval (ISMIR), Kobe, Japan, 26-30 October 2009.
  • Patra BG, Das D, Bandyopadhyay S. “Unsupervised approach to Hindi music mood classification”. Mining Intelligence and Knowledge Exploration, Tamil Nadu, India, 18-20 December 2013.
  • Dewi KC, Harjoko A. “Kid's song classification based on mood parameters using k-nearest neighbor classification method and self organizing map”. International Conference on Distributed Framework and Applications (DFmA), Jogjakarta, Indonesia, 2-3 August 2010.
  • Weninger F, Eyben F, Schuller B. “On-line continuous-time music mood regression with deep recurrent neural networks”. IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP), Florence, Italy, 4-9 May 2014.
  • Chi CY, Wu YS, Chu WR, Wu DC, Hsu JJ, Tsai RH. “The power of words: Enhancing music mood estimation with textual input of lyrics”. Affective Computing and Intelligent Interaction and Workshops (ACII 2009), Amsterdam, Netherlands, 10-12 September 2009.
  • Vikipedia. “Kategori: Türk Pop Şarkıcıları”. http://tr.wikipedia.org/w/index.php?title=Kategori:T%C3%BCrk_pop_%C5%9Fark%C4%B1c%C4%B1lar%C4%B1 (01.11.2014).
  • Russell JA. “A circumplex model of affect”. Journal of Personality and Social Psychology, 39(6), 1161-1178, 1980.
  • Laurier C, Grivolla J, Herrer P. “Multimodal music mood classification using audio and lyrics”. 7th International Conference on Machine Learning and Applications, San Diego, California, USA, 11-13 December 2008.
  • Song Y, Dixon S, Pearce M. “Evaluation of musical features for emotion classification”. 13th International Society for Music Information Retrieval Conference, Porto, Portugal, 8-12 October 2012.
  • Cohen J. “A coefficient of agreement for nominal scales”. Educational and Psychological Measurement, 20(1), 37-46, 1960.
  • Landis JR, Koch GG. “The measurement of observer agreement for categorical data”. Biometrics, 33(1), 159-174, 1977.
  • Fleiss JL, Nee JC, Landis JR. “Large sample variance of kappa in the case of different sets of raters”. Psychological Bulletin, 86(5), 974-977, 1979.
  • Songlyrics Know The Words. “Lyrics”. www.songlyrics.com (20.11.2014).
  • Python Software Foundation. “Python Stopwords”. https://pypi.python.org/pypi/stop-words (08.01.2015).
  • Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M. “Scikit-learn: Machine learning in Python”. The Journal of Machine Learning Research, 12, 2825-2830, 2011.
  • Bigand E, Vieillard S, Madurell F, Marozeau J, Dacquet A. “Multidimensional scaling of emotional responses to music: The effect of musical expertise and of the duration of the excerpts”. Cognition & Emotion, 19(8), 1113-1139, 2005.
There are 48 references in total.

Details

Primary Language: English
Subjects: Engineering
Section: Article
Authors

Ahmet Onur Durahim 0000-0002-0198-3307

Abide Coşkun Setirek 0000-0002-4575-3271

Birgül Başarır Özel 0000-0002-4336-2752

Hanife Kebapçı 0000-0001-7311-6838

Publication Date: 30 April 2018
Published in Issue: Year 2018, Volume: 24, Issue: 2

How to Cite

APA Durahim, A. O., Coşkun Setirek, A., Başarır Özel, B., Kebapçı, H. (2018). Music emotion classification for Turkish songs using lyrics. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi, 24(2), 292-301.
AMA Durahim AO, Coşkun Setirek A, Başarır Özel B, Kebapçı H. Music emotion classification for Turkish songs using lyrics. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi. April 2018;24(2):292-301.
Chicago Durahim, Ahmet Onur, Abide Coşkun Setirek, Birgül Başarır Özel, and Hanife Kebapçı. “Music Emotion Classification for Turkish Songs Using Lyrics”. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi 24, no. 2 (April 2018): 292-301.
EndNote Durahim AO, Coşkun Setirek A, Başarır Özel B, Kebapçı H (01 April 2018) Music emotion classification for Turkish songs using lyrics. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi 24 2 292–301.
IEEE A. O. Durahim, A. Coşkun Setirek, B. Başarır Özel, and H. Kebapçı, “Music emotion classification for Turkish songs using lyrics”, Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi, vol. 24, no. 2, pp. 292–301, 2018.
ISNAD Durahim, Ahmet Onur et al. “Music Emotion Classification for Turkish Songs Using Lyrics”. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi 24/2 (April 2018), 292-301.
JAMA Durahim AO, Coşkun Setirek A, Başarır Özel B, Kebapçı H. Music emotion classification for Turkish songs using lyrics. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi. 2018;24:292–301.
MLA Durahim, Ahmet Onur, et al. “Music Emotion Classification for Turkish Songs Using Lyrics”. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi, vol. 24, no. 2, 2018, pp. 292-301.
Vancouver Durahim AO, Coşkun Setirek A, Başarır Özel B, Kebapçı H. Music emotion classification for Turkish songs using lyrics. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi. 2018;24(2):292-301.





Creative Commons License
This journal is licensed under a Creative Commons Attribution 4.0 International License.