Research Article

Are Large Language Models Rational or Behavioral? A Comparative Analysis of Investor Behavior

Year 2025, Volume: 13 Issue: 4, 1556 - 1582, 30.10.2025
https://doi.org/10.29130/dubited.1711955

Abstract

This study analyzes the capacity of different Large Language Model (LLM)-based AI applications to understand behavioral finance theories and interpret investor psychology scenarios. Five applications (ChatGPT 4o, Deepseek, Gemini 2.0 Flash, QwenChat 2.5 Max, and Copilot) were evaluated on ten original scenarios. The research aimed to establish whether LLMs can move beyond the rational-investor model when making sense of financial decision-making processes, and whether they can provide insights consistent with behavioral finance principles. The results show that the applications deliver successful analyses in certain cases but differ significantly in data-source diversity, contextual sensitivity, and algorithmic approach. In addition, some limitations were observed in the consistency and explainability of the models' interpretations of investor behavior. In conclusion, for LLM-based systems to adopt behavioral finance approaches more effectively, their development processes need to place greater emphasis on ethical principles, expert input, and data quality.
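The behavioral-finance lens the study applies rests on prospect theory (Kahneman & Tversky, 1979). As a minimal illustration of the loss-aversion asymmetry the scenarios probe, the value function can be sketched as follows; the parameterization is Tversky and Kahneman's later cumulative-prospect-theory estimate, not a figure from this article:

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value of a gain/loss x relative to a reference point.

    Concave over gains, convex and steeper over losses; lam > 1 encodes
    loss aversion (losses loom larger than equivalent gains).
    """
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta


# A loss of 100 hurts more than a gain of 100 pleases: |value(-100)| > value(100).
gain, loss = value(100), value(-100)
```

The same asymmetry underlies biases such as the disposition effect (Shefrin & Statman, 1985) that the evaluation scenarios target.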


Are Large Language Models Rational or Behavioral? A Comparative Analysis of Investor Behavior Interpretation

Year 2025, Volume: 13 Issue: 4, 1556 - 1582, 30.10.2025
https://doi.org/10.29130/dubited.1711955

Abstract

This study evaluates the ability of Large Language Model (LLM)-based AI applications to understand and interpret the fundamental theories of behavioral finance. The responses of five current LLM applications (ChatGPT 4o, Deepseek, Gemini 2.0 Flash, QwenChat 2.5 Max, and Copilot) were comparatively analyzed across ten distinct scenarios involving behavioral biases and investment decision-making. The findings show how each model performs along dimensions such as conceptual depth, psychological insight, level of strategic recommendation, and originality. The results indicate that while the applications produce successful analyses in certain cases, they differ significantly in data-source diversity, contextual sensitivity, and algorithmic approach. In particular, notable discrepancies were observed in explainability, consistency, and theory-based interpretive capacity. The study concludes that LLM-based systems have the potential to assess investment decisions not only through a rational framework but also from a behavioral perspective. Accordingly, the research offers both theoretical and practical contributions to the development of AI-based financial decision support systems.
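The comparative setup the abstract describes can be outlined in code. Everything below is a hypothetical reconstruction for illustration only: the scoring rubric, the placeholder model calls, and the scenario objects are assumptions, not the study's actual materials or protocol (in the study itself, responses would come from the live applications and be rated by the researcher):

```python
from dataclasses import dataclass

MODELS = ["ChatGPT 4o", "Deepseek", "Gemini 2.0 Flash", "QwenChat 2.5 Max", "Copilot"]
DIMENSIONS = ["conceptual_depth", "psychological_insight",
              "strategic_recommendation", "originality"]


@dataclass
class Scenario:
    sid: int
    prompt: str
    target_bias: str  # e.g. "loss aversion", "disposition effect"


def ask_model(model: str, scenario: Scenario) -> str:
    """Placeholder for querying the given application with the scenario prompt."""
    return f"{model}'s answer to scenario {scenario.sid}"


def score(answer: str) -> dict:
    """Placeholder rubric: a rater would assign a score per dimension."""
    return {dim: 3 for dim in DIMENSIONS}


def evaluate(scenarios):
    """Total each model's scores per dimension across all scenarios."""
    results = {}
    for model in MODELS:
        totals = {dim: 0 for dim in DIMENSIONS}
        for sc in scenarios:
            for dim, pts in score(ask_model(model, sc)).items():
                totals[dim] += pts
        results[model] = totals
    return results


scenarios = [Scenario(i, f"Scenario {i}", "loss aversion") for i in range(1, 11)]
table = evaluate(scenarios)
```

The resulting table (five models by four dimensions) mirrors the comparative structure of the analysis: per-dimension totals make the models' relative strengths directly comparable.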

Ethical Statement

This study does not involve human or animal participants. All procedures followed scientific and ethical principles, and all referenced studies are appropriately cited.

References

  • Ariely, D., & Berns, G. S. (2010). Neuromarketing: the hope and hype of neuroimaging in business. Nature Reviews Neuroscience, 11(4), 284-292. https://doi.org/10.1038/nrn2795
  • Barberis, N., & Thaler, R. (2003). A survey of behavioral finance. In G. M. Constantinides, M. Harris, & R. M. Stulz (Eds.), Handbook of the Economics of Finance (Vol. 1B, Financial Markets and Asset Pricing, pp. 1053–1128). Elsevier. https://doi.org/10.1016/S1574-0102(03)01027-6
  • Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623). https://doi.org/10.1145/3442188.3445922
  • Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., Arx, S., Bernstein, M. S., Bohg, J., Bosselut, A., Brunskill, E., Brynjolfsson, E., Buch, S., Card, D., Castellon, R., Chatterji, N., Chen, A., Creel, K., Davis, J. Q., Demszky, D., … Liang, P. (2021). On the opportunities and risks of foundation models. arXiv preprint. https://doi.org/10.48550/arXiv.2108.07258
  • Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., … Amodei, D. (2020). Language models are few-shot learners. Advances in neural information processing systems, 33, 1877-1901. https://doi.org/10.48550/arXiv.2005.14165
  • Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y. T., Li, Y., Lundberg, S., Nori, H., Palangi, H., Ribeiro, M. T. & Zhang, Y. (2023). Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint. https://doi.org/10.48550/arXiv.2303.12712
  • Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H. P. O., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G., Ray, A., Puri, R., Krueger, G., Petrov, M., Khlaaf, H., Sastry, G., Mishkin, P., Chan, B., Gray, S., Ryder, … Zaremba, W. (2021). Evaluating large language models trained on code. arXiv preprint. https://doi.org/10.48550/arXiv.2107.03374
  • Chomsky, N. (1957). Syntactic structures. Mouton & Co.
  • Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2019, June). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Vol. 1, Long and Short Papers) (pp. 4171–4186). Association for Computational Linguistics. https://doi.org/10.48550/arXiv.1810.04805
  • Floridi, L., & Cowls, J. (2022). A unified framework of five principles for AI in society. In R. Stouffs, S. Roudavski, & B. Davis (Eds.), Machine Learning and the City: Applications in Architecture and Urban Design (pp. 149–164). Springer. https://doi.org/10.1002/9781119815075.ch45
  • Howard, J., & Ruder, S. (2018). Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 328–339). Association for Computational Linguistics. https://doi.org/10.18653/v1/P18-1031
  • Jurafsky, D., & Martin, J. H. (2009). Speech and Language Processing (2nd ed.). Prentice Hall.
  • Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291. https://doi.org/10.2307/1914185
  • Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., Gray, S., Radford, A., Wu, J. & Amodei, D. (2020). Scaling laws for neural language models. arXiv preprint. https://doi.org/10.48550/arXiv.2001.08361
  • Kwiatkowski, T., Palomaki, J., Redfield, O., Collins, M., Parikh, A., Alberti, C., Epstein, D., Polosukhin, I., Devlin, J., Lee, K., Toutanova, K., Jones, L., Kelcey, M., Chang, M. W., Dai, A. M., Uszkoreit, J., Le, Q. & Petrov, S. (2019). Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7, 453-466. https://doi.org/10.1162/tacl_a_00276
  • Lipton, Z. C. (2018). The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue, 16(3), 31-57. https://doi.org/10.1145/3233231
  • Liu, Y., Gu, J., Goyal, N., Li, X., Edunov, S., Ghazvininejad, M., Lewis, M. & Zettlemoyer, L. (2020). Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8, 726-742. https://doi.org/10.48550/arXiv.2001.08210
  • Loewenstein, G. F., Weber, E. U., Hsee, C. K., & Welch, N. (2001). Risk as feelings. Psychological Bulletin, 127(2), 267-286. https://doi.org/10.1037/0033-2909.127.2.267
  • Maynez, J., Narayan, S., Bohnet, B., & McDonald, R. (2020). On faithfulness and factuality in abstractive summarization. arXiv preprint. https://doi.org/10.48550/arXiv.2005.00661
  • Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training [Technical report]. OpenAI. https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf
  • Shefrin, H., & Statman, M. (1985). The disposition to sell winners too early and ride losers too long: Theory and evidence. The Journal of Finance, 40(3), 777-790. https://doi.org/10.2307/2327802
  • Strubell, E., Ganesh, A., & McCallum, A. (2020, April). Energy and policy considerations for modern deep learning research. Proceedings of the AAAI Conference on Artificial Intelligence, 34(9), 13693-13696. https://doi.org/10.48550/arXiv.1906.02243
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L. & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30, 5998–6008. https://doi.org/10.48550/arXiv.1706.03762
  • Zhou, Y., Ni, Y., Gan, Y., Yin, Z., Liu, X., Zhang, J., Liu, S., Qiu, X., Ye, G. & Chai, H. (2024). Are LLMS rational investors? A study on detecting and reducing the financial bias in LLMs. arXiv preprint. https://doi.org/10.48550/arXiv.2402.12713
There are 24 references in total.

Details

Primary Language English
Subjects Deep Learning
Journal Section Articles
Authors

Özkan Şahin (ORCID: 0000-0001-5341-1274)

Publication Date October 30, 2025
Submission Date June 2, 2025
Acceptance Date July 24, 2025
Published in Issue Year 2025 Volume: 13 Issue: 4

Cite

APA Şahin, Ö. (2025). Are Large Language Models Rational or Behavioral? A Comparative Analysis of Investor Behavior Interpretation. Duzce University Journal of Science and Technology, 13(4), 1556-1582. https://doi.org/10.29130/dubited.1711955
AMA Şahin Ö. Are Large Language Models Rational or Behavioral? A Comparative Analysis of Investor Behavior Interpretation. DUBİTED. October 2025;13(4):1556-1582. doi:10.29130/dubited.1711955
Chicago Şahin, Özkan. “Are Large Language Models Rational or Behavioral? A Comparative Analysis of Investor Behavior Interpretation”. Duzce University Journal of Science and Technology 13, no. 4 (October 2025): 1556-82. https://doi.org/10.29130/dubited.1711955.
EndNote Şahin Ö (October 1, 2025) Are Large Language Models Rational or Behavioral? A Comparative Analysis of Investor Behavior Interpretation. Duzce University Journal of Science and Technology 13 4 1556–1582.
IEEE Ö. Şahin, “Are Large Language Models Rational or Behavioral? A Comparative Analysis of Investor Behavior Interpretation”, DUBİTED, vol. 13, no. 4, pp. 1556–1582, 2025, doi: 10.29130/dubited.1711955.
ISNAD Şahin, Özkan. “Are Large Language Models Rational or Behavioral? A Comparative Analysis of Investor Behavior Interpretation”. Duzce University Journal of Science and Technology 13/4 (October 2025), 1556-1582. https://doi.org/10.29130/dubited.1711955.
JAMA Şahin Ö. Are Large Language Models Rational or Behavioral? A Comparative Analysis of Investor Behavior Interpretation. DUBİTED. 2025;13:1556–1582.
MLA Şahin, Özkan. “Are Large Language Models Rational or Behavioral? A Comparative Analysis of Investor Behavior Interpretation”. Duzce University Journal of Science and Technology, vol. 13, no. 4, 2025, pp. 1556-82, doi:10.29130/dubited.1711955.
Vancouver Şahin Ö. Are Large Language Models Rational or Behavioral? A Comparative Analysis of Investor Behavior Interpretation. DUBİTED. 2025;13(4):1556-82.