Research Article

CODE GENERATION USING TRANSFORMER BASED LANGUAGE MODEL

Year 2022, Issue: 049, 49-61, 30.06.2022

Abstract

Machine Learning has attracted researchers over the last decades and has been applied to problems in many fields. Deep Learning, a subfield of Machine Learning, has begun to be used to solve complex and hard problems as computer technologies have improved. Natural language processing remains a challenging task that still needs improvement for applications such as code generation. Recently, general-purpose transformer-based autoregressive language models have achieved promising results on natural language generation tasks. Generating code from natural utterances with deep learning methods is a promising direction for reducing mental effort and development time. In this study, a layered approach for generating Cascading Style Sheets (CSS) rules is proposed. Abstract data are obtained from natural utterances using a large-scale language model. This information is then encoded into an Abstract Syntax Tree, and finally the Abstract Syntax Tree is decoded to generate the CSS rules. An experimental procedure is constructed to measure the performance of the proposed method. Using pre-trained transformers and generated training data for CSS rules, different tests are applied to different datasets and accuracies are obtained. Promising results are achieved for CSS code generation using structural and natural prompt designs, with overall accuracies of 46.98% and 66.07%, respectively.
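As a rough illustration of the layered pipeline described above, the Python sketch below mocks the language-model stage with a small dictionary standing in for the abstract data extracted from a natural utterance, encodes it into a minimal Abstract Syntax Tree node, and decodes that node into a CSS rule. All names (CssNode, encode_ast, decode_css, lm_output) are hypothetical and do not reflect the paper's actual implementation.

# Minimal, hypothetical sketch of the layered approach:
# (mocked) language-model output -> abstract data -> AST -> CSS rule.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class CssNode:
    """AST node: a selector with a list of (property, value) declarations."""
    selector: str
    declarations: List[Tuple[str, str]] = field(default_factory=list)

def encode_ast(abstract_data: dict) -> CssNode:
    """Encode the abstract data extracted from the LM output into an AST node."""
    return CssNode(
        selector=abstract_data["selector"],
        declarations=list(abstract_data["declarations"].items()),
    )

def decode_css(node: CssNode) -> str:
    """Decode the AST node back into a textual CSS rule."""
    body = "\n".join(f"  {prop}: {value};" for prop, value in node.declarations)
    return f"{node.selector} {{\n{body}\n}}"

if __name__ == "__main__":
    # Stand-in for the large-scale language model: in the paper, this abstract
    # data would be produced from a natural utterance such as
    # "make every paragraph's text blue and bold".
    lm_output = {"selector": "p", "declarations": {"color": "blue", "font-weight": "bold"}}
    print(decode_css(encode_ast(lm_output)))

Running the sketch prints "p { color: blue; font-weight: bold; }" formatted over multiple lines; the point is only to show how an intermediate tree representation separates what the language model extracts from how the final CSS text is rendered.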

Supporting Institution

Eskisehir Technical University

Project Number

21GAP084

Acknowledgements

This work was supported by Eskisehir Technical University scientific research projects under project number 21GAP084. We thank Greg Brockman of OpenAI for providing academic access to the GPT-3 beta program.

References

  • [1] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I., (2017), Attention is all you need. Advances in Neural Information Processing Systems, 30.
  • [2] Yin, P., and Neubig, G., (2019), Reranking for neural semantic parsing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.
  • [3] Zhang, J., Wang, X., Zhang, H., Sun, H., Wang, K., and Liu, X., (2019), A novel neural source code representation based on abstract syntax tree. In 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE), 783-794, IEEE.
  • [4] Uma, M., Sneha, V., Sneha, G., Bhuvana, J., and Bharathi, B., (2019), Formation of SQL from natural language query using NLP. In 2019 International Conference on Computational Intelligence in Data Science (ICCIDS), 1-5, IEEE.
  • [5] Galassi, A., Lippi, M., and Torroni, P., (2020), Attention in natural language processing. IEEE Transactions on Neural Networks and Learning Systems, 32, 4291-4308.
  • [6] Sun, Z., Zhu, Q., Mou, L., Xiong, Y., Li, G., and Zhang, L., (2019), A grammar-based structural CNN decoder for code generation. In Proceedings of the AAAI Conference on Artificial Intelligence, 7055-7062.
  • [7] Shiv, V., and Quirk, C., (2019), Novel positional encodings to enable tree-based transformers. Advances in Neural Information Processing Systems, 32.
  • [8] Sun, Z., Zhu, Q., Xiong, Y., Sun, Y., Mou, L., and Zhang, L., (2020), Treegen: A tree-based transformer architecture for code generation. In Proceedings of the AAAI Conference on Artificial Intelligence, 8984-8991.
  • [9] Quirk, C., Mooney, R., and Galley, M., (2015), Language to code: Learning semantic parsers for if-this-then-that recipes. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 878-888.
  • [10] Kim, S., Zhao, J., Tian, Y., and Chandra, S., (2021), Code prediction by feeding trees to transformers. In 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE), 150-162.
  • [11] Ferraro, G., and Suominen, H., (2020), Transformer semantic parsing. In Proceedings of the The 18th Annual Workshop of the Australasian Language Technology Association, 121-126.
  • [12] Shin, R., Lin, C. H., Thomson, S., Chen, C., Roy, S., Platanios, E. A., Pauls, A., Klein, D., Eisner, J., and Van Durme, B., (2021), Constrained language models yield few-shot semantic parsers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 7699-7715.
  • [13] Shah, M., Shenoy, R., and Shankarmani, R., (2021), Natural language to python source code using transformers. In 2021 International Conference on Intelligent Technologies (CONIT), 1-4, IEEE.
  • [14] Svyatkovskiy, A., Deng, S. K., Fu, S., and Sundaresan, N., (2020), Intellicode compose: Code generation using transformer. In Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 1433-1443.
  • [15] Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., and Askell, A., (2020), Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901.
  • [16] Liddy, E. D., (2001), Natural language processing. In Encyclopedia of Library and Information Science, (2nd Ed.), NY: Marcel Dekker, Inc.
  • [17] Egonmwan, E., and Chali, Y., (2019), Transformer and seq2seq model for paraphrase generation. In Proceedings of the 3rd Workshop on Neural Generation and Translation, 249-255.
  • [18] Narayanan, D., Shoeybi, M., Casper, J., LeGresley, P., Patwary, M., Korthikanti, V., Vainbrand, D., Kashinkunti, P., Bernauer, J., Catanzaro, B., Phanishayee, A., and Zaharia, M., (2021), Efficient large-scale language model training on GPU clusters using Megatron-LM. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, Article 58, Association for Computing Machinery, St. Louis, Missouri.
  • [19] Xu, F. F., Alon, U., Neubig, G., and Hellendoorn, V. J., (2022), A systematic evaluation of large language models of code. arXiv preprint arXiv:2202.13169.
  • [20] Poesia, G., Polozov, O., Le, V., Tiwari, A., Soares, G., Meek, C., and Gulwani, S., (2022), Synchromesh: Reliable code generation from pre-trained language models. arXiv preprint arXiv:2201.11227.
  • [21] Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H. P. d. O., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., and Brockman, G., (2021), Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.

Details

Primary Language: English
Subjects: Engineering
Section: Research Articles
Authors

Umut Can Alaçam (ORCID: 0000-0002-6376-0352)

Çağla Gökgöz (ORCID: 0000-0003-1214-3546)

Cahit Perkgöz (ORCID: 0000-0003-0424-7046)

Project Number: 21GAP084
Publication Date: 30 June 2022
Submission Date: 9 March 2022
Published in Issue: Year 2022, Issue: 049

Cite

IEEE: U. C. Alaçam, Ç. Gökgöz, and C. Perkgöz, “CODE GENERATION USING TRANSFORMER BASED LANGUAGE MODEL”, JSR-A, no. 049, pp. 49–61, June 2022.