Research Article

A Turkish Question Answering System Based on Deep Learning Neural Networks

Year 2021, Volume: 4 Issue: 2, 65 - 75, 23.09.2021
https://doi.org/10.38016/jista.815823

Abstract

In the domain of Natural Language Processing (NLP), despite the progress made for some widely studied languages, many other languages still pose difficulties for particular NLP tasks. In this scope, the present study explores these challenges by proposing a question answering (QA) system for the Turkish language. In particular, the system generates the best answers, in terms of content and length, to questions posed over a set of documents related to the banking sector. To achieve this goal, the system relies on advanced artificial intelligence algorithms and large data sets. More specifically, the BERT algorithm is used to build the language model, which is then fine-tuned for a machine reading for question answering (MRQA) task. Various experiments were conducted with both original and translated data sets in order to address the challenges that arise in morphologically complex languages such as Turkish. As a result, the system achieves high performance over a wider range of questions than existing QA systems for Turkish. Finally, the proposed methodology is not specific to Turkish and can be adapted to other languages for performing various NLP tasks.
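
The abstract describes fine-tuning a BERT language model for an MRQA task but does not specify an implementation. The sketch below illustrates one common way such extractive QA fine-tuning is set up, assuming the Hugging Face Transformers library, the public BERTurk checkpoint dbmdz/bert-base-turkish-cased (Schweter, 2020), and a single made-up SQuAD-style example; the hyperparameters and data are illustrative placeholders, not the authors' actual configuration.

```python
# Minimal sketch: fine-tuning a Turkish BERT model for extractive QA (MRQA-style).
# Model ID, training example, and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_name = "dbmdz/bert-base-turkish-cased"  # BERTurk (Schweter, 2020)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
model.train()

# One SQuAD-style example: question, context, and answer span in character offsets.
question = "Kredi başvurusu nereden yapılır?"
context = "Kredi başvurusu internet bankacılığı üzerinden veya şubeden yapılabilir."
answer_start = context.index("internet bankacılığı")
answer_end = answer_start + len("internet bankacılığı")

# Tokenize the (question, context) pair, keeping character offsets so the answer
# span can be mapped to start/end token indices, as extractive QA training requires.
enc = tokenizer(
    question, context,
    return_offsets_mapping=True, truncation=True, max_length=384, return_tensors="pt",
)
offsets = enc.pop("offset_mapping")[0]
sequence_ids = enc.sequence_ids(0)  # None = special token, 0 = question, 1 = context

start_token = end_token = 0
for idx, (start, end) in enumerate(offsets.tolist()):
    if sequence_ids[idx] != 1:
        continue  # only context tokens can contain the answer
    if start <= answer_start < end:
        start_token = idx
    if start < answer_end <= end:
        end_token = idx

# One optimization step on this single example (a real run would batch a full dataset).
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
outputs = model(
    **enc,
    start_positions=torch.tensor([start_token]),
    end_positions=torch.tensor([end_token]),
)
outputs.loss.backward()
optimizer.step()
print(f"loss: {outputs.loss.item():.4f}")
```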

References

  • Devlin, J., Chang, M.-W., Lee, K., Toutanova, K., 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:1810.04805.
  • Çelebi, E., Günel, B., Şen, B., 2011. Automatic question answering for Turkish with pattern parsing, in: INISTA 2011 - 2011 International Symposium on INnovations in Intelligent SysTems and Applications. pp. 389–393. https://doi.org/10.1109/INISTA.2011.5946098
  • Amasyalı, M.F., Diri, B., 2005. Bir soru cevaplama sistemi: Baybilmiş. Türkiye Bilişim Vakfı Bilgisayar Bilimleri ve Mühendisliği Dergisi 1.
  • Er, N.P., Cicekli, I., 2013. A factoid question answering system using answer pattern matching, in: Proceedings of the Sixth International Joint Conference on Natural Language Processing. pp. 854–858.
  • Biricik, G., Solmaz, S., Özdemir, E., Amasyalı, M.F., 2013. A Turkish Automatic Question Answering System with Question Multiplexing: Ben Bilirim. International Journal of Research in Information Technology (IJRIT) 1, 46–51.
  • Akın, A.A., Akın, M.D., 2007. Zemberek, an open source NLP framework for Turkic languages. Structure 10, 1–5.
  • Oflazer, K., Say, B., Hakkani-Tür, D.Z., Tür, G., 2003. Building a Turkish Treebank. pp. 261–277. https://doi.org/10.1007/978-94-010-0201-1_15
  • Eryiğit, G., Oflazer, K., 2006. Statistical dependency parsing for Turkish, in: 11th Conference of the European Chapter of the Association for Computational Linguistics.
  • Peters, M.E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., Zettlemoyer, L., 2018. Deep contextualized word representations, in: NAACL HLT 2018 - 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference. pp. 2227–2237. https://doi.org/10.18653/v1/n18-1202
  • Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., 2018. Improving Language Understanding by Generative Pre-Training. OpenAI 1–12.
  • Fisch, A., Talmor, A., Jia, R., Seo, M., Choi, E., Chen, D., 2019. MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension. pp. 1–13. https://doi.org/10.18653/v1/d19-5801
  • Antoun, W., Baly, F., Hajj, H., 2020. AraBERT: Transformer-based Model for Arabic Language Understanding. arXiv preprint arXiv:2003.00104.
  • Martin, L., Muller, B., Suárez, P.J.O., Dupont, Y., Romary, L., de la Clergerie, É.V., Seddah, D., Sagot, B., 2019. CamemBERT: a Tasty French Language Model. arXiv preprint arXiv:1911.03894.
  • Farahani, Mehrdad, Gharachorloo, M., Farahani, Marzieh, Manthouri, M., 2020. ParsBERT: Transformer-based Model for Persian Language Understanding. arXiv preprint arXiv:2005.12515.
  • Cui, Y., Che, W., Liu, T., Qin, B., Yang, Z., Wang, S., Hu, G., 2019. Pre-Training with Whole Word Masking for Chinese BERT. arXiv preprint arXiv:1906.08101.
  • Rajpurkar, P., Zhang, J., Lopyrev, K., Liang, P., 2016. SQuAD: 100,000+ questions for machine comprehension of text, in: EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings. pp. 2383–2392.
  • Trischler, A., Wang, T., Yuan, X., Harris, J., Sordoni, A., Bachman, P., Suleman, K., 2016. NewsQA: A machine comprehension dataset. arXiv preprint arXiv:1611.09830.
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I., 2017. Attention is all you need. Advances in Neural Information Processing Systems 2017-December, 5999–6009.
  • Wiedemann, G., Remus, S., Chawla, A., Biemann, C., 2019. Does BERT Make Any Sense? Interpretable Word Sense Disambiguation with Contextualized Embeddings. arXiv preprint arXiv:1909.10430.
  • Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K., Klingner, J., Shah, A., Johnson, M., Liu, X., Kaiser, Ł., Gouws, S., Kato, Y., Kudo, T., Kazawa, H., Stevens, K., et al., 2016. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. arXiv preprint arXiv:1609.08144.
  • Schweter, S., 2020. BERTurk - BERT models for Turkish. https://doi.org/10.5281/zenodo.3770924

Derin Öğrenme Sinir Ağlarına Dayalı Türkçe Soru Cevaplama Sistemi (A Turkish Question Answering System Based on Deep Learning Neural Networks)

Abstract

In the field of Natural Language Processing (NLP), despite some progress made for widely used languages, difficulties persist in other languages for certain NLP tasks. In this scope, the present study aims to address these challenges by proposing a question answering (QA) system for Turkish. Using documents selected from the banking sector, the system produces the best answers, in terms of content and length, to the questions asked. To achieve this goal, the system employs advanced artificial intelligence algorithms and large data sets. More specifically, the BERT algorithm was used to build the language model, and a fine-tuning procedure was then applied to improve the system's machine reading for question answering (MRQA) capability. In this work, various experiments were carried out using original data sets and data sets translated from English, in order to address the challenges arising from morphologically complex languages such as Turkish. As a result, the system achieved high performance over a wider range than other QA systems for Turkish. The proposed methodology is not specific to Turkish and can also be adapted to other languages for various NLP tasks.
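
On the inference side, the abstract states that the system produces answers to questions over banking documents; a minimal sketch of how a fine-tuned extractive QA model would be queried is shown below, assuming the Transformers pipeline API and a hypothetical locally saved checkpoint (the path, question, and context are invented for illustration and are not artifacts released with the paper).

```python
# Minimal inference sketch: answering a Turkish banking question with a
# fine-tuned extractive QA model via the Transformers question-answering pipeline.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="./berturk-finetuned-mrqa",      # hypothetical local fine-tuned checkpoint
    tokenizer="./berturk-finetuned-mrqa",
)

context = (
    "Vadesiz mevduat hesabı, müşterinin istediği anda para yatırıp "
    "çekebildiği, faiz getirisi olmayan bir hesap türüdür."
)
question = "Vadesiz mevduat hesabı nedir?"

result = qa(question=question, context=context)
print(result["answer"], result["score"])  # extracted answer span and its confidence
```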


Details

Primary Language English
Subjects Artificial Intelligence
Journal Section Research Articles
Authors

Cavide Balkı Gemirter 0000-0003-3534-3129

Dionysis Goularas 0000-0002-4802-2802

Publication Date September 23, 2021
Submission Date October 25, 2020
Published in Issue Year 2021 Volume: 4 Issue: 2

Cite

APA Gemirter, C. B., & Goularas, D. (2021). A Turkish Question Answering System Based on Deep Learning Neural Networks. Journal of Intelligent Systems: Theory and Applications, 4(2), 65-75. https://doi.org/10.38016/jista.815823
AMA Gemirter CB, Goularas D. A Turkish Question Answering System Based on Deep Learning Neural Networks. JISTA. September 2021;4(2):65-75. doi:10.38016/jista.815823
Chicago Gemirter, Cavide Balkı, and Dionysis Goularas. “A Turkish Question Answering System Based on Deep Learning Neural Networks”. Journal of Intelligent Systems: Theory and Applications 4, no. 2 (September 2021): 65-75. https://doi.org/10.38016/jista.815823.
EndNote Gemirter CB, Goularas D (September 1, 2021) A Turkish Question Answering System Based on Deep Learning Neural Networks. Journal of Intelligent Systems: Theory and Applications 4 2 65–75.
IEEE C. B. Gemirter and D. Goularas, “A Turkish Question Answering System Based on Deep Learning Neural Networks”, JISTA, vol. 4, no. 2, pp. 65–75, 2021, doi: 10.38016/jista.815823.
ISNAD Gemirter, Cavide Balkı - Goularas, Dionysis. “A Turkish Question Answering System Based on Deep Learning Neural Networks”. Journal of Intelligent Systems: Theory and Applications 4/2 (September 2021), 65-75. https://doi.org/10.38016/jista.815823.
JAMA Gemirter CB, Goularas D. A Turkish Question Answering System Based on Deep Learning Neural Networks. JISTA. 2021;4:65–75.
MLA Gemirter, Cavide Balkı and Dionysis Goularas. “A Turkish Question Answering System Based on Deep Learning Neural Networks”. Journal of Intelligent Systems: Theory and Applications, vol. 4, no. 2, 2021, pp. 65-75, doi:10.38016/jista.815823.
Vancouver Gemirter CB, Goularas D. A Turkish Question Answering System Based on Deep Learning Neural Networks. JISTA. 2021;4(2):65-7.
