Research Article

TRANSFORMER ARCHITECTURE BASED TEXT SUMMARIZATION AND NATURAL LANGUAGE UNDERSTANDING

Year 2024, Volume: 32, Issue: 1, 1140-1151, 22.04.2024
https://doi.org/10.31796/ogummf.1303569

Abstract

Accessing useful information in rapidly growing, high-volume information sources is becoming increasingly difficult. Text summarization methods, developed as a response to this problem, play an important role in extracting key information from large documents. Various techniques exist for filtering documents and extracting the relevant information. This study presents a comparative analysis of traditional approaches and state-of-the-art methods on the BBC News and CNN/DailyMail datasets. It offers valuable insights that can help researchers advance their work, and it assists practitioners in selecting the techniques best suited to their specific use cases.
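For context, the "traditional approaches" evaluated in the study are typically extractive: they score sentences in the source document and return the highest-scoring ones verbatim, rather than generating new text. The following Python sketch shows a minimal frequency-based extractive baseline of this kind; it is an illustrative example under simplified assumptions (naive sentence splitting, a toy stopword list), not the paper's exact method.

```python
# Minimal sketch of a frequency-based extractive summarizer, a classic
# "traditional" baseline. Illustrative only; not the paper's exact method.
import re
from collections import Counter

# Toy stopword list for illustration; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "is", "are", "was", "were", "in", "on", "of",
             "to", "and", "for", "it", "that", "this", "with", "as", "by"}

def frequency_summary(text: str, n_sentences: int = 2) -> str:
    # Naive sentence segmentation on terminal punctuation.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if w not in STOPWORDS)

    def score(sentence: str) -> float:
        # Average content-word frequency, normalized by sentence length.
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    # Pick the top-n sentences, then restore original document order.
    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return " ".join(s for s in sentences if s in top)

if __name__ == "__main__":
    doc = ("Transformers dominate abstractive summarization. "
           "Extractive methods select salient sentences verbatim. "
           "Frequency-based scoring is a classic extractive baseline. "
           "It remains a useful point of comparison for neural models.")
    print(frequency_summary(doc))
```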

NLP TRANSFORMERS: ANALYSIS OF LLMS AND TRADITIONAL APPROACHES FOR ENHANCED TEXT SUMMARIZATION

Year 2024, Volume: 32, Issue: 1, 1140-1151, 22.04.2024
https://doi.org/10.31796/ogummf.1303569

Abstract

As the amount of available information continues to grow, finding relevant information has become increasingly challenging. As a solution, text summarization has emerged as a vital method for extracting essential information from lengthy documents. Various techniques are available for filtering documents and extracting the pertinent information. In this study, a comparative analysis is conducted to evaluate traditional approaches and state-of-the-art methods on the BBC News and CNN/DailyMail datasets. The study offers valuable insights for researchers to advance their research and helps practitioners select the most suitable techniques for their specific use cases.
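On the state-of-the-art side, a comparison like the one described here is commonly run by generating summaries with a pretrained transformer and scoring them against reference summaries with ROUGE (Lin, 2004). The sketch below assumes the Hugging Face transformers package and Google's rouge_score package are installed; the checkpoint (sshleifer/distilbart-cnn-12-6, a distilled BART fine-tuned on CNN/DailyMail) and the example texts are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch: abstractive summarization with a pretrained transformer,
# scored with ROUGE. Assumes `pip install transformers rouge-score torch`.
from transformers import pipeline
from rouge_score import rouge_scorer

# Illustrative checkpoint: a distilled BART fine-tuned on CNN/DailyMail.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)

article = ("The local council approved a new cycling plan on Monday. "
           "The plan adds 40 km of protected lanes over three years "
           "and is funded by a regional transport grant.")
reference = "Council approves three-year plan adding 40 km of protected bike lanes."

generated = summarizer(article, max_length=40, min_length=10,
                       do_sample=False)[0]["summary_text"]

scores = scorer.score(reference, generated)  # target first, prediction second
for name, s in scores.items():
    print(f"{name}: P={s.precision:.3f} R={s.recall:.3f} F1={s.fmeasure:.3f}")
```

ROUGE F1 (the fmeasure field) is the figure most summarization papers report; precision and recall additionally expose whether a model tends to over- or under-generate.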

References

  • Abdel-Salam, S., & Rafea, A. (2022). Performance study on extractive text summarization using BERT models. Information, 13(2), 67. https://doi.org/10.3390/info13020067.
  • Abdelaleem, N. M., Kader, H. A., & Salem, R. (2019). A brief survey on text summarization techniques. International Journal of Electronics and Information Engineering, 10(2), 103-116.
  • Altmami, N. I., & Menai, M. E. B. (2022). Automatic summarization of scientific articles: A survey. Journal of King Saud University-Computer and Information Sciences, 34(4), 1011-1028.
  • Bansal, S., Kamper, H., Livescu, K., Lopez, A., & Goldwater, S. (2018). Low-resource speech-to-text translation. arXiv preprint arXiv:1803.09164. https://doi.org/10.48550/arXiv.1803.09164.
  • Bhandari, M., Gour, P., Ashfaq, A., Liu, P., & Neubig, G. (2020). Re-evaluating evaluation in text summarization. arXiv preprint arXiv:2010.07100. https://doi.org/10.48550/arXiv.2010.07100.
  • Cagliero, L., Garza, P., & Baralis, E. (2019). ELSA: A multilingual document summarization algorithm based on frequent itemsets and latent semantic analysis. ACM Transactions on Information Systems (TOIS), 37(2), 1-33. https://doi.org/10.1145/3298987.
  • Cai, T., Shen, M., Peng, H., Jiang, L., & Dai, Q. (2019). Improving transformer with sequential context representations for abstractive text summarization. In Natural Language Processing and Chinese Computing: 8th CCF International Conference, NLPCC 2019, Dunhuang, China, October 9–14, 2019, Proceedings, Part I (pp. 512-524). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-32233-5_40.
  • El-Kassas, W. S., Salama, C. R., Rafea, A. A., & Mohamed, H. K. (2021). Automatic text summarization: A comprehensive survey. Expert Systems with Applications, 165, 113679. https://doi.org/10.1016/j.eswa.2020.113679.
  • Goutte, C., & Gaussier, E. (2005). A probabilistic interpretation of precision, recall and F-score, with implication for evaluation. In Advances in Information Retrieval: 27th European Conference on IR Research, ECIR 2005, Santiago de Compostela, Spain, March 21-23, 2005. Proceedings 27 (pp. 345-359). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-540-31865-1_25.
  • Greene, D., & Cunningham, P. (2006). Practical solutions to the problem of diagonal dominance in kernel document clustering. In Proceedings of the 23rd international conference on Machine learning (pp. 377-384). https://doi.org/10.1145/1143844.1143892.
  • Gupta, S., & Gupta, S. K. (2019). Abstractive summarization: An overview of the state of the art. Expert Systems with Applications, 121, 49-65. https://doi.org/10.1016/j.eswa.2018.12.011.
  • Hermann, K. M., Kocisky, T., Grefenstette, E., Espeholt, L., Kay, W., Suleyman, M., & Blunsom, P. (2015). Teaching machines to read and comprehend. Advances in Neural Information Processing Systems, 28.
  • Isikdemir, Y. E., & Yavuz, H. S. (2022). The scalable fuzzy inference-based ensemble method for sentiment analysis. Computational Intelligence and Neuroscience, 2022.
  • Joshi, A., Fidalgo, E., Alegre, E., & Fernández-Robles, L. (2019). SummCoder: An unsupervised framework for extractive text summarization based on deep auto-encoders. Expert Systems with Applications, 129, 200-215. https://doi.org/10.1016/j.eswa.2019.03.045.
  • Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., ... & Zettlemoyer, L. (2019). BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. https://doi.org/10.48550/arXiv.1910.13461.
  • Li, C., Xu, W., Li, S., & Gao, S. (2018). Guiding generation for abstractive text summarization based on key information guide network. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers) (pp. 55-60). https://doi.org/10.18653/v1/N18-2009.
  • Lin, C. Y. (2004). ROUGE: A package for automatic evaluation of summaries. In Text summarization branches out (pp. 74-81).
  • Liu, S. H., Chen, K. Y., & Chen, B. (2020). Enhanced language modeling with proximity and sentence relatedness information for extractive broadcast news summarization. ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), 19(3), 1-19. https://doi.org/10.1145/3377407.
  • Liu, Y., & Lapata, M. (2019). Text summarization with pretrained encoders. arXiv preprint arXiv:1908.08345. https://doi.org/10.48550/arXiv.1908.08345.
  • Liu, Y. (2019). Fine-tune BERT for extractive summarization. arXiv preprint arXiv:1903.10318. https://doi.org/10.48550/arXiv.1903.10318.
  • Manojkumar, V. K., Mathi, S., & Gao, X. Z. (2023). An Experimental Investigation on Unsupervised Text Summarization for Customer Reviews. Procedia Computer Science, 218, 1692-1701. https://doi.org/10.1016/j.procs.2023.01.147.
  • Nazari, N., & Mahdavi, M. A. (2019). A survey on automatic text summarization. Journal of AI and Data Mining, 7(1), 121-135. https://doi.org/10.22044/jadm.2018.6139.1726.
  • Papineni, K., Roukos, S., Ward, T., & Zhu, W. J. (2002). BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics (pp. 311-318).
  • Ramesh, A., Srinivasa, K. G., & Pramod, N. (2014). SentenceRank—a graph based approach to summarize text. In The Fifth International Conference on the Applications of Digital Information and Web Technologies (ICADIWT 2014) (pp. 177-182). IEEE. https://doi.org/10.1109/ICADIWT.2014.6814680.
  • Ramina, M., Darnay, N., Ludbe, C., & Dhruv, A. (2020). Topic level summary generation using BERT induced Abstractive Summarization Model. In 2020 4th International Conference on Intelligent Computing and Control Systems (ICICCS) (pp. 747-752). IEEE. https://doi.org/10.1109/ICICCS48265.2020.9120997.
  • Rodríguez-Vidal, J., Carrillo-de-Albornoz, J., Amigó, E., Plaza, L., Gonzalo, J., & Verdejo, F. (2020). Automatic generation of entity-oriented summaries for reputation management. Journal of Ambient Intelligence and Humanized Computing, 11, 1577-1591. https://doi.org/10.1007/s12652-019-01255-9.
  • See, A., Liu, P. J., & Manning, C. D. (2017). Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368. https://doi.org/10.48550/arXiv.1704.04368.
  • Shi, T., Keneshloo, Y., Ramakrishnan, N., & Reddy, C. K. (2021). Neural abstractive text summarization with sequence-to-sequence models. ACM Transactions on Data Science, 2(1), 1-37. https://doi.org/10.1145/3419106.
  • Syed, A. A., Gaol, F. L., & Matsuo, T. (2021). A survey of the state-of-the-art models in neural abstractive text summarization. IEEE Access, 9, 13248-13265. https://doi.org/10.1109/ACCESS.2021.3052783.
  • Verma, P., & Om, H. (2019). A novel approach for text summarization using optimal combination of sentence scoring methods. Sādhanā, 44, 1-15. https://doi.org/10.1007/s12046-019-1082-4.
  • Verma, S., Gupta, N., Anil, B. C., & Chauhan, R. (2022). A Novel Framework for Ancient Text Translation Using Artificial Intelligence. ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal, 11(4), 411-425. https://doi.org/10.14201/adcaij.28380.
  • Vhatkar, A., Bhattacharyya, P., & Arya, K. (2020). Knowledge graph and deep neural network for extractive text summarization by utilizing triples. In Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation (pp. 130-136).
  • Wang, Q., Liu, P., Zhu, Z., Yin, H., Zhang, Q., & Zhang, L. (2019). A text abstraction summary model based on BERT word embedding and reinforcement learning. Applied Sciences, 9(21), 4701. https://doi.org/10.3390/app9214701.
  • Widjanarko, A., Kusumaningrum, R., & Surarso, B. (2018). Multi document summarization for the Indonesian language based on latent Dirichlet allocation and significance sentence. In 2018 International Conference on Information and Communications Technology (ICOIACT) (pp. 520-524). IEEE. https://doi.org/10.1109/ICOIACT.2018.8350668.
  • Widyassari, A. P., Rustad, S., Shidik, G. F., Noersasongko, E., Syukur, A., & Affandy, A. (2022). Review of automatic text summarization techniques & methods. Journal of King Saud University-Computer and Information Sciences, 34(4), 1029-1046. https://doi.org/10.1016/j.jksuci.2020.05.006.
  • Wu, Z., Lei, L., Li, G., Huang, H., Zheng, C., Chen, E., & Xu, G. (2017). A topic modeling based approach to novel document automatic summarization. Expert Systems with Applications, 84, 12-23. https://doi.org/10.1016/j.eswa.2017.04.054.
  • Yao, K., Zhang, L., Luo, T., & Wu, Y. (2018). Deep reinforcement learning for extractive document summarization. Neurocomputing, 284, 52-62. https://doi.org/10.1016/j.neucom.2018.01.020.
  • Zhang, X., Wei, F., & Zhou, M. (2019). HIBERT: Document level pre-training of hierarchical bidirectional transformers for document summarization. arXiv preprint arXiv:1905.06566. https://doi.org/10.48550/arXiv.1905.06566.
  • Zhang, J., Zhao, Y., Saleh, M., & Liu, P. (2020). PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In International Conference on Machine Learning (pp. 11328-11339). PMLR.
  • Zheng, J., Zhao, Z., Song, Z., Yang, M., Xiao, J., & Yan, X. (2020). Abstractive meeting summarization by hierarchical adaptive segmental network learning with multiple revising steps. Neurocomputing, 378, 179-188. https://doi.org/10.1016/j.neucom.2019.10.019.
  • Zhong, M., Liu, P., Chen, Y., Wang, D., Qiu, X., & Huang, X. (2020). Extractive summarization as text matching. arXiv preprint arXiv:2004.08795. https://doi.org/10.48550/arXiv.2004.08795.
There are 41 references in total.

Details

Primary Language: English
Subjects: Computer Software
Section: Research Articles
Authors

Yunus Emre Işıkdemir (ORCID: 0000-0001-7022-2854)

Early View Date: April 22, 2024
Publication Date: April 22, 2024
Acceptance Date: March 12, 2024
Published Issue: Year 2024, Volume: 32, Issue: 1

How to Cite

APA Işıkdemir, Y. E. (2024). NLP TRANSFORMERS: ANALYSIS OF LLMS AND TRADITIONAL APPROACHES FOR ENHANCED TEXT SUMMARIZATION. Eskişehir Osmangazi Üniversitesi Mühendislik Ve Mimarlık Fakültesi Dergisi, 32(1), 1140-1151. https://doi.org/10.31796/ogummf.1303569
AMA Işıkdemir YE. NLP TRANSFORMERS: ANALYSIS OF LLMS AND TRADITIONAL APPROACHES FOR ENHANCED TEXT SUMMARIZATION. ESOGÜ Müh Mim Fak Derg. April 2024;32(1):1140-1151. doi:10.31796/ogummf.1303569
Chicago Işıkdemir, Yunus Emre. “NLP TRANSFORMERS: ANALYSIS OF LLMS AND TRADITIONAL APPROACHES FOR ENHANCED TEXT SUMMARIZATION”. Eskişehir Osmangazi Üniversitesi Mühendislik Ve Mimarlık Fakültesi Dergisi 32, no. 1 (April 2024): 1140-51. https://doi.org/10.31796/ogummf.1303569.
EndNote Işıkdemir YE (01 April 2024) NLP TRANSFORMERS: ANALYSIS OF LLMS AND TRADITIONAL APPROACHES FOR ENHANCED TEXT SUMMARIZATION. Eskişehir Osmangazi Üniversitesi Mühendislik ve Mimarlık Fakültesi Dergisi 32 1 1140–1151.
IEEE Y. E. Işıkdemir, “NLP TRANSFORMERS: ANALYSIS OF LLMS AND TRADITIONAL APPROACHES FOR ENHANCED TEXT SUMMARIZATION”, ESOGÜ Müh Mim Fak Derg, vol. 32, no. 1, pp. 1140–1151, 2024, doi: 10.31796/ogummf.1303569.
ISNAD Işıkdemir, Yunus Emre. “NLP TRANSFORMERS: ANALYSIS OF LLMS AND TRADITIONAL APPROACHES FOR ENHANCED TEXT SUMMARIZATION”. Eskişehir Osmangazi Üniversitesi Mühendislik ve Mimarlık Fakültesi Dergisi 32/1 (April 2024), 1140-1151. https://doi.org/10.31796/ogummf.1303569.
JAMA Işıkdemir YE. NLP TRANSFORMERS: ANALYSIS OF LLMS AND TRADITIONAL APPROACHES FOR ENHANCED TEXT SUMMARIZATION. ESOGÜ Müh Mim Fak Derg. 2024;32:1140–1151.
MLA Işıkdemir, Yunus Emre. “NLP TRANSFORMERS: ANALYSIS OF LLMS AND TRADITIONAL APPROACHES FOR ENHANCED TEXT SUMMARIZATION”. Eskişehir Osmangazi Üniversitesi Mühendislik Ve Mimarlık Fakültesi Dergisi, vol. 32, no. 1, 2024, pp. 1140-51, doi:10.31796/ogummf.1303569.
Vancouver Işıkdemir YE. NLP TRANSFORMERS: ANALYSIS OF LLMS AND TRADITIONAL APPROACHES FOR ENHANCED TEXT SUMMARIZATION. ESOGÜ Müh Mim Fak Derg. 2024;32(1):1140-51.
