Research Article

Long Short-Term Memory Based on Working Memory Connections for Turkish Text Mining
(Türkçe Metin Madenciliği için Çalışan Bellek Bağlantıları Tabanlı Uzun Kısa Süreli Bellek Mimarisi)

Year 2022, Issue 34, 239-246, 31.03.2022
https://doi.org/10.31590/ejosat.1080239

Abstract

Text classification is a natural language processing task in which text documents are assigned to one of a set of predefined class labels. It is used in many natural language processing problems, including sentiment analysis, topic labeling, question answering, and dialogue act classification, and it has many application areas, such as the filtering and organization of news texts and the filtering of spam e-mail content. In recent years, deep neural network-based architectures and neural language models have been used extensively for text classification. Long short-term memory (LSTM) architectures employ gating mechanisms to mitigate the exploding and vanishing gradients observed in conventional recurrent neural networks when learning long-term dependencies. For this reason, LSTM and its derivative architectures are widely used in many sequence modeling tasks. In LSTM-based architectures, however, although the memory cell holds the essential information, it is not allowed to influence the gating mechanism directly. In this study, the performance of the recurrent neural network, long short-term memory, gated recurrent unit, peephole-based long short-term memory, and working memory connections-based long short-term memory architectures is evaluated comparatively for Turkish sentiment analysis. For the representation of the corpus, the word2vec, fastText, and GloVe word embedding methods were evaluated. The experimental analysis shows that the long short-term memory architecture based on working memory connections achieves higher classification accuracy for sentiment analysis on Turkish text documents than the peephole-based long short-term memory, long short-term memory, and gated recurrent unit architectures.
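
The gating behavior described above can be made concrete with a short sketch. The snippet below implements a single LSTM time step in NumPy and shows where working memory connections enter the computation: each gate additionally receives the squashed content of the memory cell, tanh(c_prev), through full weight matrices, whereas a peephole LSTM would add the raw cell state through diagonal weights and a vanilla LSTM would omit the term entirely. This is a minimal illustrative sketch based on the description in the abstract and on Landi et al. (2021), not the authors' implementation; all parameter names and dimensions are assumptions.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step_wm(x_t, h_prev, c_prev, W, U, V, b):
        """One LSTM time step with working memory connections (sketch).

        Each gate sees the input x_t, the previous hidden state h_prev and,
        additionally, tanh(c_prev): the squashed content of the memory cell.
        For simplicity all three gates use c_prev here; variants that let the
        output gate see the updated cell c_t are also possible.
        W, U, V, b are dicts keyed by 'i' (input), 'f' (forget), 'o' (output)
        and 'g' (candidate); V has no 'g' entry because the candidate update
        receives no working memory connection in this sketch.
        """
        wm = np.tanh(c_prev)  # cell content made visible to the gates

        i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + V['i'] @ wm + b['i'])  # input gate
        f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + V['f'] @ wm + b['f'])  # forget gate
        o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + V['o'] @ wm + b['o'])  # output gate
        g_t = np.tanh(W['g'] @ x_t + U['g'] @ h_prev + b['g'])                # candidate cell

        c_t = f_t * c_prev + i_t * g_t  # memory cell update
        h_t = o_t * np.tanh(c_t)        # hidden state
        return h_t, c_t

    # Minimal usage example with random parameters (hypothetical sizes).
    rng = np.random.default_rng(0)
    d_in, d_hid = 8, 16
    W = {k: rng.normal(scale=0.1, size=(d_hid, d_in)) for k in 'ifog'}
    U = {k: rng.normal(scale=0.1, size=(d_hid, d_hid)) for k in 'ifog'}
    V = {k: rng.normal(scale=0.1, size=(d_hid, d_hid)) for k in 'ifo'}
    b = {k: np.zeros(d_hid) for k in 'ifog'}

    h, c = np.zeros(d_hid), np.zeros(d_hid)
    h, c = lstm_step_wm(rng.normal(size=d_in), h, c, W, U, V, b)
    print(h.shape, c.shape)  # (16,) (16,)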

Supporting Institution

İzmir Katip Çelebi University

Project Number

2022-GAP-MÜMF-0030

Acknowledgments

This research was supported by the Scientific Research Projects Coordination Unit (BAP) of İzmir Kâtip Çelebi University (Project No: 2022-GAP-MÜMF-0030).

References

  • Li, Q., Peng, H., Li, J., Xia, C., Yang, R., Sun, L., ... & He, L. (2020). A survey on text classification: From shallow to deep learning. arXiv preprint arXiv:2008.00364.
  • Onan, A., Korukoğlu, S., & Bulut, H. (2016). Ensemble of keyword extraction methods and classifiers in text classification. Expert Systems with Applications, 57, 232-247.
  • Fersini, E., Messina, E., & Pozzi, F. A. (2014). Sentiment analysis: Bayesian ensemble learning. Decision Support Systems, 68, 26-38.
  • Onan, A., Korukoğlu, S., & Bulut, H. (2016). A multiobjective weighted voting ensemble classifier based on differential evolution algorithm for text sentiment classification. Expert Systems with Applications, 62, 1-16.
  • Medhat, W., Hassan, A., & Korashy, H. (2014). Sentiment analysis algorithms and applications: A survey. Ain Shams Engineering Journal, 5(4), 1093-1113.
  • Onan, A., & Korukoğlu, S. (2016). Makine öğrenmesi yöntemlerinin görüş madenciliğinde kullanılması üzerine bir literatür araştırması. Pamukkale University Journal of Engineering Sciences, 22(2).
  • Chatterjee, A., Gupta, U., Chinnakotla, M. K., Srikanth, R., Galley, M., & Agrawal, P. (2019). Understanding emotions in text using deep learning and big data. Computers in Human Behavior, 93, 309-317.
  • Almeida, F., & Xexéo, G. (2019). Word embeddings: A survey. arXiv preprint arXiv:1901.09069.
  • Zhang, L., Wang, S., & Liu, B. (2018). Deep learning for sentiment analysis: A survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 8(4), e1253.
  • Xu, J., Chen, D., Qiu, X., & Huang, X. (2016). Cached long short-term memory neural networks for document-level sentiment classification. arXiv preprint arXiv:1610.04989.
  • Rao, G., Huang, W., Feng, Z., & Cong, Q. (2018). LSTM with sentence representations for document-level sentiment classification. Neurocomputing, 308, 49-57.
  • Al-Smadi, M., Talafha, B., Al-Ayyoub, M., & Jararweh, Y. (2019). Using long short-term memory deep neural networks for aspect-based sentiment analysis of Arabic reviews. International Journal of Machine Learning and Cybernetics, 10(8), 2163-2175.
  • Lu, C., Huang, H., Jian, P., Wang, D., & Guo, Y. D. (2017, May). A P-LSTM neural network for sentiment classification. In Pacific-Asia Conference on Knowledge Discovery and Data Mining (pp. 524-533). Springer, Cham.
  • Ma, Y., Peng, H., Khan, T., Cambria, E., & Hussain, A. (2018). Sentic LSTM: a hybrid network for targeted aspect-based sentiment analysis. Cognitive Computation, 10(4), 639-650.
  • Landi, F., Baraldi, L., Cornia, M., & Cucchiara, R. (2021). Working memory connections for LSTM. Neural Networks, 144, 334-341.
  • Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
  • Onan, A., & Toçoğlu, M. A. (2021). Weighted word embeddings and clustering‐based identification of question topics in MOOC discussion forum posts. Computer Applications in Engineering Education, 29(4), 675-689.
  • Bojanowski, P., Grave, E., Joulin, A., & Mikolov, T. (2017). Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5, 135-146.
  • Pennington, J., Socher, R., & Manning, C. D. (2014, October). GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 1532-1543).
  • Gutiérrez, G., Canul-Reich, J., Zezzatti, A. O., Margain, L., & Ponce, J. (2018). Mining: Students comments about teacher performance assessment using machine learning algorithms. International Journal of Combinatorial Optimization Problems and Informatics, 9(3), 26.
  • Li, X., & Wu, X. (2015, April). Constructing long short-term memory based deep recurrent neural networks for large vocabulary speech recognition. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 4520-4524). IEEE.
  • Li, X., Peng, L., Yao, X., Cui, S., Hu, Y., You, C., & Chi, T. (2017). Long short-term memory neural network for air pollutant concentration predictions: Method development and evaluation. Environmental Pollution, 231, 997-1004.
  • Chung, J., Gulcehre, C., Cho, K., & Bengio, Y. (2014). Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
  • Rojas‐Barahona, L. M. (2016). Deep learning for sentiment analysis. Language and Linguistics Compass, 10(12), 701-719.

Details

Primary Language: Turkish
Subjects: Engineering
Section: Articles
Authors

Aytuğ Onan (ORCID: 0000-0002-9434-5880)

Project Number: 2022-GAP-MÜMF-0030
Early View Date: January 30, 2022
Publication Date: March 31, 2022
Published Issue: Year 2022, Issue 34

How to Cite

APA Onan, A. (2022). Türkçe Metin Madenciliği için Çalışan Bellek Bağlantıları Tabanlı Uzun Kısa Süreli Bellek Mimarisi. Avrupa Bilim Ve Teknoloji Dergisi(34), 239-246. https://doi.org/10.31590/ejosat.1080239