Research Article

Keyword Extraction from Kazakh News Dataset with BERT

Year 2022, Volume 9, Issue 4, 1193–1200, 31.12.2022
https://doi.org/10.31202/ecjse.1131826

Abstract

Keywords provide a concise and precise description of a document's content. Because keywords matter and manual annotation is laborious, automatic keyword extraction makes the process fast and easy. This paper presents keyword extraction from a Kazakh news dataset. Model performance results were obtained with the BERT-base-uncased and BERT-base-multilingual-uncased pre-trained language models on the newly compiled Kazakh News Dataset (KND). The compiled dataset consists of 7060 records collected from the web pages anatili.kazgazeta.kz, Bilimdinews.kz, and zhasalash.kz using the BeautifulSoup and Requests libraries. These pages mostly contain news, historical, and literary texts. Each record includes the publication name or news title, the author of the publication or the news subject, and the URL on the Kazakh news site. Evaluation of the training results showed that the F-score of BERT-base-multilingual-uncased was higher than that of the BERT-base-uncased model.
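The abstract names the Requests and BeautifulSoup libraries and the source news sites used to build the dataset. A minimal sketch of such a collection step, under stated assumptions, is given below; the HTML selectors, record fields, and CSV output are illustrative guesses, since the paper does not publish its crawler.

# Hedged sketch of the data-collection step described in the abstract.
# Selectors and output fields are assumptions made for illustration only.
import csv

import requests
from bs4 import BeautifulSoup

# Source sites named in the abstract.
SEED_URLS = [
    "https://anatili.kazgazeta.kz",
    "https://bilimdinews.kz",
    "https://zhasalash.kz",
]

def fetch_article(url: str) -> dict:
    """Download one page and extract a title and body text (assumed selectors)."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    title_tag = soup.find("h1")        # assumption: headline is the first <h1>
    paragraphs = soup.find_all("p")    # assumption: body text sits in <p> tags
    return {
        "url": url,
        "title": title_tag.get_text(strip=True) if title_tag else "",
        "text": " ".join(p.get_text(strip=True) for p in paragraphs),
    }

def save_dataset(rows: list, path: str = "knd.csv") -> None:
    """Write the collected records (URL, title, text) to a CSV file."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["url", "title", "text"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    save_dataset([fetch_article(url) for url in SEED_URLS])

The abstract also compares the bert-base-uncased and bert-base-multilingual-uncased checkpoints. One common way to fine-tune such checkpoints for keyword extraction is token classification over B/I/O keyword tags; the sketch below uses the Hugging Face transformers library, and both the tagging scheme and the use of that library are assumptions, not details taken from the paper.

# Hedged sketch: loading the two compared checkpoints with a token-classification
# head for keyword tagging. The B/I/O label scheme is an assumed framing.
from transformers import AutoModelForTokenClassification, AutoTokenizer

LABELS = ["O", "B-KEY", "I-KEY"]  # assumed tag set for keyword spans

def load_keyword_tagger(checkpoint: str):
    """Return a tokenizer and a BERT encoder with a fresh token-classification head."""
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForTokenClassification.from_pretrained(checkpoint, num_labels=len(LABELS))
    return tokenizer, model

# The two checkpoints compared in the paper.
for name in ("bert-base-uncased", "bert-base-multilingual-uncased"):
    tokenizer, model = load_keyword_tagger(name)
    # ... fine-tune on the Kazakh News Dataset (KND), then report precision, recall, F-score ...

Comparing the multilingual checkpoint against the English-only one is a natural baseline pair here: bert-base-uncased contains no Kazakh in its pre-training data, which is consistent with the higher F-score the abstract reports for the multilingual model.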

Supporting Institution

-

Project Number

-

Acknowledgements

-



Details

Primary Language: English
Subjects: Engineering
Section: Articles
Authors

Aiman Abibullayeva 0000-0003-2449-2540

Aydın Çetin 0000-0002-8669-823X

Project Number: -
Publication Date: 31 December 2022
Submission Date: 16 June 2022
Acceptance Date: 7 September 2022
Published in Issue: Year 2022, Volume 9, Issue 4

Cite

IEEE: A. Abibullayeva and A. Çetin, “Keyword Extraction from Kazakh News Dataset with BERT”, ECJSE, vol. 9, no. 4, pp. 1193–1200, 2022, doi: 10.31202/ecjse.1131826.