Evaluation of GPT-3 AI Language Model in Research Paper Writing
Year 2023, Volume: 18, Issue: 2, 311-318, 01.09.2023
Oğuzhan Katar, Dilek Özkan, Özal Yıldırım, U Rajendra Acharya
Abstract
Artificial intelligence (AI) has helped to obtain accurate, fast, and robust results without human error. Hence, it has been used in various applications in our daily lives. The Turing test has been a fundamental problem that AI systems aim to overcome. Recently developed natural language processing (NLP) models have shown significant performance. AI language models, used in translation, digital assistants, and sentiment analysis, have improved the quality of our lives. They can scan thousands of documents in seconds and report on them using appropriate sentence structures. The generative pre-trained transformer (GPT)-3 is a popular, recently developed model that has been used for many applications. Users of this model have obtained surprising results in various applications and shared them on social media platforms. This study aims to evaluate the performance of the GPT-3 model in writing an academic article. Hence, we chose AI-based tools in academic article writing as the subject of the article. Organized queries on GPT-3 created the flow of this article. In this article, we have made an effort to highlight the advantages and limitations of using GPT-3 for research paper writing. The authors feel that it can be used as an adjunct tool while writing research papers.
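The workflow described above (driving the article's flow with organized queries to GPT-3) can be sketched as a small script. The prompt wording, the section names, and the use of OpenAI's legacy `Completion.create` endpoint with the `text-davinci-003` model are illustrative assumptions, not the authors' actual setup.

```python
# Sketch of issuing an "organized query" to GPT-3 per paper section.
# Prompt wording and model choice are assumptions for illustration only.
import os


def build_section_prompt(topic: str, section: str) -> str:
    """Compose a structured prompt asking the model to draft one paper section."""
    return (
        f"Write the {section} section of an academic article on the topic "
        f"'{topic}'. Use a formal tone and complete sentences."
    )


def draft_section(topic: str, section: str) -> str:
    """Send the prompt to GPT-3; requires an OPENAI_API_KEY and network access.

    Uses the pre-1.0 `openai` package interface (an assumption here).
    """
    import openai  # pip install "openai<1.0"
    openai.api_key = os.environ["OPENAI_API_KEY"]
    response = openai.Completion.create(
        model="text-davinci-003",       # a GPT-3 family completion model
        prompt=build_section_prompt(topic, section),
        max_tokens=512,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()


if __name__ == "__main__":
    # Show the prompt that would be sent; no API call is made here.
    print(build_section_prompt("AI-based tools in academic writing", "Introduction"))
```

Issuing one such query per section (introduction, related work, discussion, and so on) and assembling the responses mirrors the "organized queries" approach the abstract describes.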
References
- McCulloch W S, Pitts W. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biology 1943; 5: 115-133.
- LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE 1998; 86(11): 2278-2324.
- Krizhevsky A, Sutskever I, Hinton G E. Imagenet classification with deep convolutional neural networks. Communications of the ACM 2017; 60(6): 84-90.
- Devlin J, Chang M W, Lee K, Toutanova K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint 2018; arXiv:1810.04805.
- Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R R, Le Q V. XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems 2019; 32.
- Brown T, Mann B, Ryder N, Subbiah M, Kaplan J D, Dhariwal P, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems 2020; 33: 1877-1901.
- Radford A, Wu J, Child R, Luan D, Amodei D, Sutskever I. Language models are unsupervised multitask learners. OpenAI blog 2019; 1(8): 9.
- LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015; 521(7553): 436–444.
- Bengio Y, Courville A, Vincent P. Representation learning: a review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence 2013; 35(8): 1798-1828.
- Goodfellow I, Bengio Y, Courville A. Deep learning. MIT Press 2016.
- Esteva A, Kuprel B, Novoa R A, Ko J, Swetter S M, Blau H M, Thrun S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017; 542(7639): 115–118.
- Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez A N, et al. Attention is all you need. Advances in Neural Information Processing Systems 2017; 30.
- Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation 1997; 9(8): 1735-1780.
- Transformer G G P, Thunström A O, Steingrimsson S. Can GPT-3 write an academic paper on itself, with minimal human input? 2022.