Research Article

RETRIEVAL-AUGMENTED GENERATION IN TURKISH NATURAL LANGUAGE UNDERSTANDING: A COMPARATIVE STUDY OF LARGE LANGUAGE MODELS

Year 2025, Volume: 11, Issue: 2, 56-65, 31.12.2025
https://doi.org/10.22531/muglajsci.1781095

Abstract

Large Language Models (LLMs) have markedly advanced natural language processing. Nevertheless, owing to the limited availability of training data, they may fail to generate current and accurate information, particularly for low-resource languages. The Retrieval-Augmented Generation (RAG) methodology, designed to address this challenge, improves the accuracy and reliability of model outputs by leveraging external knowledge sources. This study comparatively evaluated four distinct LLMs (Qwen-14B, Gemma3-12B, LLaMA3.1-8B, and DeepSeek-R1-14B) within a RAG framework on a Turkish question-answer dataset. Experimental results demonstrate that the RAG methodology markedly enhances factual accuracy, response consistency, and contextual relevance in Turkish question-answering systems. Moreover, the LLaMA3.1-8B model achieved the most balanced performance in terms of precision and recall. The findings illustrate the relevance of RAG-based applications for Turkish and offer valuable insights for advancing knowledge-assisted generation methods. This study addresses a significant gap in the literature by demonstrating the viability of RAG-based systems in morphologically rich, low-resource languages such as Turkish, and it serves as a foundational reference for subsequent Turkish natural language processing research.
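To make the pipeline concrete, the following is a minimal sketch of the retrieval step in such a RAG system, assuming the components cited in the reference list (the paraphrase-multilingual-MiniLM-L12-v2 sentence encoder and FAISS for dense similarity search). The document chunks and the question below are illustrative placeholders; the paper's actual corpus, chunking strategy, hyperparameters, and prompting setup are not reproduced here.

```python
# Sketch of dense retrieval for Turkish RAG, assuming the encoder and index
# library named in the references. Illustrative only, not the authors' code.
import faiss
from sentence_transformers import SentenceTransformer

# Multilingual sentence encoder (covers Turkish), cited in the reference list.
encoder = SentenceTransformer(
    "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
)

# Hypothetical document chunks standing in for the Turkish knowledge source.
chunks = [
    "TÜBİTAK, Türkiye'nin bilimsel ve teknolojik araştırma kurumudur.",
    "Geri Çağırma-Artırılmış Üretim, harici bilgi kaynaklarını kullanır.",
]

# Embed the chunks and build an exact inner-product FAISS index; with
# normalized embeddings, inner product equals cosine similarity.
emb = encoder.encode(chunks, normalize_embeddings=True)
index = faiss.IndexFlatIP(emb.shape[1])
index.add(emb)

# Retrieve the top-k chunks for a Turkish question. In the full system, the
# retrieved text would be prepended to the prompt of one of the evaluated
# LLMs (Qwen-14B, Gemma3-12B, LLaMA3.1-8B, or DeepSeek-R1-14B) for generation.
query = encoder.encode(["TÜBİTAK nedir?"], normalize_embeddings=True)
scores, ids = index.search(query, 1)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {chunks[i]}")
```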

References

  • Zhou, H., Hu, C., Yuan, Y., Cui, Y., Jin, Y., Chen, C., Wu, H., Yuan, D., Jiang, L., Wu, D., Liu, X., Zhang, J., Wang, X., and Liu, J., "Large language model (LLM) for telecommunications: A comprehensive survey on principles, key techniques, and opportunities", IEEE Communications Surveys & Tutorials, Vol. 27, No. 3, 1955-2005, 2025.
  • Yao, Y., Duan, J., Xu, K., Cai, Y., Sun, Z., and Zhang, Y., "A survey on large language model (LLM) security and privacy: The good, the bad, and the ugly", High-Confidence Computing, Vol. 4, No. 2, 100211, 2024.
  • Li, X., Wang, S., Zeng, S., Wu, Y., and Yang, Y., "A survey on LLM-based multi-agent systems: Workflow, infrastructure, and challenges", Vicinagearth, Vol. 1, No. 1, 9, 2024.
  • Yue, M., "A survey of large language model agents for question answering", arXiv preprint arXiv:2503.19213, 2025.
  • Gao, M., Hu, X., Yin, X., Ruan, J., Pu, X., and Wan, X., "LLM-based NLG evaluation: Current status and challenges", Computational Linguistics, 1-27, 2025.
  • Laskar, M. T. R., Alqahtani, S., Bari, M. S., Rahman, M., Khan, M. A. M., Khan, H., Jahan, I., Bhuiyan, A., Tan, C. W., Parvez, M. R., and others, "A systematic survey and critical review on evaluating large language models: Challenges, limitations, and recommendations", arXiv preprint arXiv:2407.04069, 2024.
  • Matarazzo, A., and Torlone, R., "A survey on large language models with some insights on their capabilities and limitations", arXiv preprint arXiv:2501.04040, 2025.
  • Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Kuksa, P., Minervini, P., Yih, W.-t., Rocktäschel, T., Riedel, S., and Kiela, D., "Retrieval-augmented generation for knowledge-intensive NLP tasks", Advances in Neural Information Processing Systems (NeurIPS), 2020.
  • Guţu, B. M., and Popescu, N., "Exploring data analysis methods in generative models: From fine-tuning to RAG implementation", Computers, Vol. 13, No. 12, 327, 2024.
  • Zhong, T., Yang, Z., Liu, Z., Zhang, R., Liu, Y., Sun, H., Pan, Y., Li, Y., Zhou, Y., Jiang, H., and others, "Opportunities and challenges of large language models for low-resource languages in humanities research", arXiv preprint arXiv:2412.04497, 2024.
  • Joshua, C., Banerjee, A., Kaplan, M. A., Willie, A., Ria, P., and Kalluri, W. A., "Efficient multi-lingual LLM deployment for low-resource languages", 2024.
  • Cekinel, R. F., Karagoz, P., and Coltekin, C., "Cross-lingual learning vs. low-resource fine-tuning: A case study with fact-checking in Turkish", 2024. [Online]. Available: https://arxiv.org/abs/2403.00411
  • Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W.-t., Rocktäschel, T., and others, "Retrieval-augmented generation for knowledge-intensive NLP tasks", Advances in Neural Information Processing Systems, Vol. 33, 9459-9474, 2020.
  • Gupta, S., Ranjan, R., and Singh, S. N., "A comprehensive survey of retrieval-augmented generation (RAG): Evolution, current landscape and future directions", 2024. [Online]. Available: https://arxiv.org/abs/2410.12837
  • Sharma, S., Yoon, D. S., Dernoncourt, F., Sultania, D., Bagga, K., Zhang, M., Bui, T., and Kotte, V., "Retrieval augmented generation for domain-specific question answering", arXiv preprint arXiv:2404.14760, 2024.
  • Shi, Y., Xu, S., Yang, T., Liu, Z., Liu, T., Li, X., and Liu, N., "MKRAG: Medical knowledge retrieval augmented generation for medical question answering", AMIA Annual Symposium Proceedings, Vol. 2024, 1011, 2025.
  • Alan, A. Y., Karaarslan, E., and Aydın, Ö., "Improving LLM reliability with RAG in religious question-answering: MufassirQAS", Turkish Journal of Engineering, Vol. 9, No. 3, 544-559, 2025.
  • Bikmaz, E., Briman, M., and Arslan, S., "Bridging the language gap in RAG: A case study on Turkish retrieval and generation", Researcher, Vol. 5, No. 1, 38-49, 2025.
  • Yüksel, A., Köksal, A., Şenel, L. K., Korhonen, A., and Schütze, H., "TurkishMMLU: Measuring massive multitask language understanding in Turkish", arXiv preprint arXiv:2407.12402, 2024.
  • Kesgin, H. T., Yuce, M. K., Dogan, E., Uzun, M. E., Uz, A., Seyrek, H. E., Zeer, A., and Amasyali, M. F., "Introducing cosmosGPT: Monolingual training for Turkish language models", 2024 International Conference on Innovations in Intelligent Systems and Applications (INISTA), IEEE, 1-6, 2024.
  • Bai, J., Bai, S., Chu, Y., Cui, Z., Dang, K., Deng, X., Fan, Y., Ge, W., Han, Y., Huang, F., and others, "Qwen technical report", arXiv preprint arXiv:2309.16609, 2023.
  • Google AI Blog, "Gemma 3: Google’s next LLM", 2025. [Online]. Available: https://ai.googleblog.com/2025/03/introducing-Gemma-3-our-most-capable-models.html
  • IBM News (Think), "Meta releases new LLaMA 3.1 models, including highly anticipated 405B parameter variant", 2024. [Online]. Available: https://www.ibm.com/think/news/meta-releases-LLaMA-3-1-models-405b-parameter-variant
  • DeepSeek-AI, "DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning", arXiv preprint arXiv:2501.12948, 2025.
  • Vake, D., Vičič, J., and Tošić, A., "Bridging the question-answer gap in retrieval-augmented generation: Hypothetical prompt embeddings", IEEE Access, 2025.
  • TÜBİTAK, "Türkiye Bilimsel ve Teknolojik Araştırma Kurumu resmi web sitesi [Official website of the Scientific and Technological Research Council of Türkiye]", 2025. [Online]. Available: https://www.tubitak.gov.tr. Accessed: 07.09.2025.
  • Mao, K., Liu, Z., Qian, H., Mo, F., Deng, C., and Dou, Z., "RAG-Studio: Towards in-domain adaptation of retrieval augmented generation through self-alignment", Findings of the Association for Computational Linguistics: EMNLP 2024, 725-735, 2024.
  • Arslan, M., Ghanem, H., Munawar, S., and Cruz, C., "A survey on RAG with LLMs", Procedia Computer Science, Vol. 246, 3781-3790, 2024.
  • Reimers, N., and Gurevych, I., "paraphrase-multilingual-MiniLM-L12-v2", 2025. [Online]. Available: https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
  • "Sentence-BERT: Sentence embeddings using Siamese BERT-networks", Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, 2019.
  • Facebook AI Research, "Faiss: A library for efficient similarity search and clustering of dense vectors", 2025. [Online]. Available: https://github.com/facebookresearch/faiss
  • Johnson, J., Douze, M., and Jégou, H., "Billion-scale similarity search with GPUs", IEEE Transactions on Big Data, 2019.
  • Ahmed, M. A., "From text to understanding the inner text: LLMs and translation accuracy and fluency", International Journal of Language and Literary Studies, Vol. 7, No. 2, 139-156, 2025.
  • Qwen Team (Alibaba Cloud), "Qwen-14B (base model)", 2025. [Online]. Available: https://huggingface.co/Qwen/Qwen-14B
  • Kopanov, K., and Atanasova, T., "A comparative pattern analysis of Qwen 2.5 and Gemma 3 text generation", WSEAS Transactions on Information Science and Applications, Vol. 22, 604-615, 2025.
  • Kamath, A., Ferret, J., Pathak, S., Vieillard, N., Merhej, R., Perrin, S., Matejovicova, T., Ramé, A., Rivière, M., Rouillard, L., and others, "Gemma 3 technical report", CoRR, 2025.
  • Kassianik, P., Saglam, B., Chen, A., Nelson, B., Vellore, A., Aufiero, M., Burch, F., Kedia, D., Zohary, A., Weerawardhena, S., and others, "LLaMA-3.1-FoundationAI-SecurityLLM-Base-8B technical report", arXiv preprint arXiv:2504.21039, 2025.
  • DeepSeek Research Team, "DeepSeek R1-14B (model)", 2025. [Online]. Available: https://huggingface.co/DeepSeek-ai/DeepSeek-R1-Distill-Qwen-14B
  • Yu, H., Gan, A., Zhang, K., Tong, S., Liu, Q., and Liu, Z., "Evaluation of retrieval-augmented generation: A survey", CCF Conference on Big Data, Springer, 102-120, 2024.
  • Ammar, A., Koubaa, A., Nacar, O., and Boulila, W., "Optimizing retrieval-augmented generation: Analysis of hyperparameter impact on performance and efficiency", arXiv preprint arXiv:2505.08445, 2025.
  • Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J., "BLEU: A method for automatic evaluation of machine translation", Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 311-318, 2002.
  • Lin, C.-Y., "ROUGE: A package for automatic evaluation of summaries", Text Summarization Branches Out, 74-81, 2004.
  • Rajpurkar, P., Zhang, J., Lopyrev, K., and Liang, P., "SQuAD: 100,000+ questions for machine comprehension of text", arXiv preprint arXiv:1606.05250, 2016.
  • Şakar, T., and Emekci, H., "Maximizing RAG efficiency: A comparative analysis of RAG methods", Natural Language Processing, Vol. 31, No. 1, 1-25, 2025.
  • Hladěna, J., Šteflovič, K., Čech, P., Štekerová, K., and Žváčková, A., "The effect of chunk size on the RAG performance", Computer Science Online Conference, Springer, 317-326, 2025.
  • Stäbler, M., Turnbull, S., Müller, T., Langdon, C., Marx-Goméz, J., and Köster, F., "The impact of chunking strategies on domain-specific information retrieval in RAG systems", 2025 IEEE International Conference on Omni-layer Intelligent Systems (COINS), IEEE, 1-6, 2025.


Details

Primary Language: English
Subjects: Reinforcement Learning
Section: Research Article
Authors

Ercan Atagün 0000-0001-5196-5732

Merve Güllü 0000-0001-7442-1332

Serdar Biroğul 0000-0003-4966-5970

Necaattin Barışçı 0000-0002-8762-5091

Submission Date: September 9, 2025
Acceptance Date: November 3, 2025
Publication Date: December 31, 2025
Published Issue: Year 2025, Volume: 11, Issue: 2

How to Cite

APA Atagün, E., Güllü, M., Biroğul, S., & Barışçı, N. (2025). RETRIEVAL-AUGMENTED GENERATION IN TURKISH NATURAL LANGUAGE UNDERSTANDING: A COMPARATIVE STUDY OF LARGE LANGUAGE MODELS. Mugla Journal of Science and Technology, 11(2), 56-65. https://doi.org/10.22531/muglajsci.1781095
AMA Atagün E, Güllü M, Biroğul S, Barışçı N. RETRIEVAL-AUGMENTED GENERATION IN TURKISH NATURAL LANGUAGE UNDERSTANDING: A COMPARATIVE STUDY OF LARGE LANGUAGE MODELS. MJST. December 2025;11(2):56-65. doi:10.22531/muglajsci.1781095
Chicago Atagün, Ercan, Merve Güllü, Serdar Biroğul, and Necaattin Barışçı. “RETRIEVAL-AUGMENTED GENERATION IN TURKISH NATURAL LANGUAGE UNDERSTANDING: A COMPARATIVE STUDY OF LARGE LANGUAGE MODELS”. Mugla Journal of Science and Technology 11, no. 2 (December 2025): 56-65. https://doi.org/10.22531/muglajsci.1781095.
EndNote Atagün E, Güllü M, Biroğul S, Barışçı N (01 December 2025) RETRIEVAL-AUGMENTED GENERATION IN TURKISH NATURAL LANGUAGE UNDERSTANDING: A COMPARATIVE STUDY OF LARGE LANGUAGE MODELS. Mugla Journal of Science and Technology 11 2 56–65.
IEEE E. Atagün, M. Güllü, S. Biroğul, and N. Barışçı, “RETRIEVAL-AUGMENTED GENERATION IN TURKISH NATURAL LANGUAGE UNDERSTANDING: A COMPARATIVE STUDY OF LARGE LANGUAGE MODELS”, MJST, vol. 11, no. 2, pp. 56–65, 2025, doi: 10.22531/muglajsci.1781095.
ISNAD Atagün, Ercan et al. “RETRIEVAL-AUGMENTED GENERATION IN TURKISH NATURAL LANGUAGE UNDERSTANDING: A COMPARATIVE STUDY OF LARGE LANGUAGE MODELS”. Mugla Journal of Science and Technology 11/2 (December 2025), 56-65. https://doi.org/10.22531/muglajsci.1781095.
JAMA Atagün E, Güllü M, Biroğul S, Barışçı N. RETRIEVAL-AUGMENTED GENERATION IN TURKISH NATURAL LANGUAGE UNDERSTANDING: A COMPARATIVE STUDY OF LARGE LANGUAGE MODELS. MJST. 2025;11:56–65.
MLA Atagün, Ercan et al. “RETRIEVAL-AUGMENTED GENERATION IN TURKISH NATURAL LANGUAGE UNDERSTANDING: A COMPARATIVE STUDY OF LARGE LANGUAGE MODELS”. Mugla Journal of Science and Technology, vol. 11, no. 2, 2025, pp. 56-65, doi:10.22531/muglajsci.1781095.
Vancouver Atagün E, Güllü M, Biroğul S, Barışçı N. RETRIEVAL-AUGMENTED GENERATION IN TURKISH NATURAL LANGUAGE UNDERSTANDING: A COMPARATIVE STUDY OF LARGE LANGUAGE MODELS. MJST. 2025;11(2):56-65.

Mugla Journal of Science and Technology (MJST) is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.