Performance Evaluation of Transformer-Based Pre-Trained Language Models for Turkish Question-Answering
Abstract
Natural language processing (NLP) has made significant progress with the introduction of Transformer-based architectures, which have revolutionized tasks such as question-answering (QA). While English is a primary focus of NLP research owing to its abundance of high-resource datasets, low-resource languages such as Turkish present unique challenges, including linguistic complexity and limited data availability. This study evaluates the performance of Transformer-based pre-trained language models on Turkish QA tasks and provides insights into their strengths and limitations for future improvements. Using the SQuAD-TR dataset, a machine-translated Turkish version of SQuAD 2.0, variations of the mBERT, BERTurk, ConvBERTurk, DistilBERTurk, and ELECTRA Turkish pre-trained models were fine-tuned. The fine-tuned models were then tested on the XQuAD-TR dataset and evaluated using the Exact Match (EM) Rate and F1 Score metrics. Among the tested models, ConvBERTurk Base (cased) performed best, achieving an EM Rate of 57.81512% and an F1 Score of 71.58769%. In contrast, the DistilBERTurk Base (cased) and ELECTRA TR Small (cased) models performed poorly due to their smaller size and fewer parameters. The results indicate that case-sensitive models generally outperform case-insensitive ones; their ability to distinguish proper nouns and abbreviations more effectively improved their performance. Moreover, models specifically adapted for Turkish performed better on QA tasks than the multilingual mBERT model.
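The EM and F1 metrics used in the evaluation can be sketched as follows. This is a minimal illustration of SQuAD-style answer scoring, not the authors' exact evaluation script: normalization is simplified (lowercasing, punctuation and whitespace stripping), and the English-specific article removal of the original SQuAD script is omitted since it does not apply to Turkish.

```python
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace (simplified SQuAD normalization)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    return " ".join(text.split())

def exact_match(prediction: str, reference: str) -> int:
    """1 if the normalized prediction equals the normalized reference, else 0."""
    return int(normalize(prediction) == normalize(reference))

def f1_score(prediction: str, reference: str) -> float:
    """Token-level F1 between a predicted and a reference answer span."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

Corpus-level EM Rate and F1 Score, as reported in the study, are then the averages of these per-question scores over the test set (taking the maximum over reference answers when a question has several).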
Keywords
Ethical Statement
Ethics committee approval was not required for this study because it did not involve animal or human subjects.
Details
Primary Language
English
Subjects
Information Systems Development Methodologies and Practice
Journal Section
Research Article
Publication Date
March 15, 2025
Submission Date
December 5, 2024
Acceptance Date
January 15, 2025
Published in Issue
Year 2025 Volume: 8 Number: 2
APA
İncidelen, M., & Aydoğan, M. (2025). Performance Evaluation of Transformer-Based Pre-Trained Language Models for Turkish Question-Answering. Black Sea Journal of Engineering and Science, 8(2), 323-329. https://doi.org/10.34248/bsengineering.1596832