Efficient Adaptation of Large Language Models for Sentiment Analysis: A Fine-Tuning Approach
Abstract
This study presents a systematic comparative analysis of sentiment classification on financial news headlines using two transformer architectures, Mistral-7B and GPT-2, fine-tuned with advanced adaptation techniques: Quantized Low-Rank Adaptation (QLoRA) and Low-Rank Adaptation (LoRA). Utilising a large-scale Finance News dataset, the models are rigorously evaluated for their ability to accurately classify headlines into positive, neutral, and negative sentiments while also considering computational efficiency. Beyond overall accuracy, we report macro-averaged precision, recall, and F1-score, thereby providing a fuller picture of the models' class-wise behaviour. Empirical findings demonstrate that the Mistral-7B-based configurations substantially outperform those based on GPT-2: Mistral-7B-QLoRA achieves the highest accuracy (0.881), followed closely by Mistral-7B-LoRA (0.878), while the GPT-2 models perform markedly worse (0.519 for GPT-2-LoRA and 0.517 for GPT-2-QLoRA). Detailed analyses, incorporating confusion matrices and standard evaluation metrics, underscore the superior balance of classification performance and resource efficiency offered by Mistral-7B. The study also discusses limitations, including the focus on a single financial dataset, and outlines directions for future research, such as evaluating additional architectures and adaptation techniques across diverse domains. This work contributes to the advancement of fine-tuning strategies for large language models, offering practical insights for optimising sentiment analysis pipelines in resource-constrained environments.
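To make the adaptation setup concrete, the sketch below shows how a QLoRA configuration of the kind described in the abstract can be assembled with the Hugging Face transformers, peft, and bitsandbytes libraries: the base model is loaded in 4-bit NF4 precision and kept frozen, and small low-rank adapter matrices are attached for training. The checkpoint name, adapter rank, and other hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal QLoRA sketch for 3-class headline sentiment classification,
# built on the Hugging Face transformers / peft / bitsandbytes stack.
# All hyperparameter values here are illustrative assumptions.
import torch
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    BitsAndBytesConfig,
)
from peft import LoraConfig, get_peft_model

MODEL_NAME = "mistralai/Mistral-7B-v0.1"  # assumed base checkpoint

# 4-bit NF4 quantization of the frozen base model: the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # Mistral ships without a pad token

base_model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=3,  # positive / neutral / negative headline sentiment
    quantization_config=bnb_config,
    device_map="auto",
)
base_model.config.pad_token_id = tokenizer.pad_token_id

# LoRA attaches small trainable low-rank matrices to the attention
# projections while the quantized base weights stay frozen.
lora_config = LoraConfig(
    r=16,                 # adapter rank (illustrative)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="SEQ_CLS",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights train
```

For the plain LoRA variant, the same LoraConfig would be applied to a full-precision base model (the BitsAndBytesConfig is simply omitted); the macro-averaged precision, recall, and F1-score quoted above can then be computed on held-out headlines with, for example, scikit-learn's classification_report.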
Keywords
Details
Primary Language
English
Subjects
Computer Software
Section
Research Article
Early View Date
November 27, 2025
Publication Date
December 1, 2025
Submission Date
March 9, 2025
Acceptance Date
May 31, 2025
Published Issue
Year 2025, Volume: 15, Issue: 4
APA
Bayat Toksöz, S., & Işık, G. (2025). Efficient Adaptation of Large Language Models for Sentiment Analysis: A Fine-Tuning Approach. Journal of the Institute of Science and Technology, 15(4), 1149-1164. https://doi.org/10.21597/jist.1648466
AMA
1. Bayat Toksöz S, Işık G. Efficient Adaptation of Large Language Models for Sentiment Analysis: A Fine-Tuning Approach. Iğdır Üniv. Fen Bil Enst. Der. 2025;15(4):1149-1164. doi:10.21597/jist.1648466
Chicago
Bayat Toksöz, Seda, and Gültekin Işık. 2025. “Efficient Adaptation of Large Language Models for Sentiment Analysis: A Fine-Tuning Approach”. Journal of the Institute of Science and Technology 15 (4): 1149-64. https://doi.org/10.21597/jist.1648466.
EndNote
Bayat Toksöz S, Işık G (December 1, 2025) Efficient Adaptation of Large Language Models for Sentiment Analysis: A Fine-Tuning Approach. Journal of the Institute of Science and Technology 15 4 1149–1164.
IEEE
[1] S. Bayat Toksöz and G. Işık, “Efficient Adaptation of Large Language Models for Sentiment Analysis: A Fine-Tuning Approach”, Iğdır Üniv. Fen Bil Enst. Der., vol. 15, no. 4, pp. 1149–1164, Dec. 2025, doi: 10.21597/jist.1648466.
ISNAD
Bayat Toksöz, Seda - Işık, Gültekin. “Efficient Adaptation of Large Language Models for Sentiment Analysis: A Fine-Tuning Approach”. Journal of the Institute of Science and Technology 15/4 (December 1, 2025): 1149-1164. https://doi.org/10.21597/jist.1648466.
JAMA
1. Bayat Toksöz S, Işık G. Efficient Adaptation of Large Language Models for Sentiment Analysis: A Fine-Tuning Approach. Iğdır Üniv. Fen Bil Enst. Der. 2025;15(4):1149–1164.
MLA
Bayat Toksöz, Seda, and Gültekin Işık. “Efficient Adaptation of Large Language Models for Sentiment Analysis: A Fine-Tuning Approach”. Journal of the Institute of Science and Technology, vol. 15, no. 4, December 2025, pp. 1149-64, doi:10.21597/jist.1648466.
Vancouver
1. Bayat Toksöz S, Işık G. Efficient Adaptation of Large Language Models for Sentiment Analysis: A Fine-Tuning Approach. Iğdır Üniv. Fen Bil Enst. Der. 2025 Dec 1;15(4):1149-64. doi:10.21597/jist.1648466
Cited By
Benchmarking QLoRA-Fine-Tuned LLaMA and DeepSeek Models for Sentiment Analysis on Movie Reviews and Twitter Data
Computational Systems and Artificial Intelligence
https://doi.org/10.69882/adba.csai.2026015