Research Article

Performance Trade-Off for Bert Based Multi-Domain Multilingual Chatbot Architectures

Year 2021, Volume: 1 Issue: 2, 144 - 149, 30.12.2021

Abstract

Text classification is a natural language processing (NLP) problem that aims to assign previously unseen texts to predefined classes. In this study, the Bidirectional Encoder Representations from Transformers (BERT) architecture is used for text classification. The classification specifically targets a chatbot that gives automated responses to website visitors' queries. BERT is trained so that multiple separate models for different chatbots on a server are replaced with a single model, reducing the need for RAM and storage. Moreover, since a pre-trained multilingual BERT model is used, the system further reduces resource requirements and handles multiple chatbots in multiple languages simultaneously. The model determines a class for a given input text; each class corresponds to a specific answer in a database, from which the bot selects its reply. For multiple chatbots, a special masking operation restricts the selected response to the answer bank of the corresponding chatbot. We tested the proposed model on 13 simultaneous classification problems over a data set in three languages, Turkish, English, and German, with 333 classes in total. We report the accuracies for individually trained models and for the proposed model, together with the savings in system resources.
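A minimal sketch of the masking idea described in the abstract is given below, assuming a single shared multilingual BERT classifier served through the Hugging Face Transformers library; the model name bert-base-multilingual-cased, the BOT_CLASS_IDS mapping, and the helper classify_for_bot are illustrative assumptions rather than the authors' implementation.

# Sketch: masking a shared multilingual BERT classifier so that, for a given
# chatbot, only that bot's own answer classes can be predicted.
# Assumptions (not from the paper): model name, BOT_CLASS_IDS mapping, helper names.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_CLASSES = 333  # total number of classes across the 13 chatbots (from the abstract)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=NUM_CLASSES
)

# Hypothetical mapping: which global class indices belong to which chatbot.
BOT_CLASS_IDS = {
    "bot_tr_support": [0, 1, 2, 3],   # e.g. a Turkish support bot
    "bot_en_sales": [4, 5, 6],        # e.g. an English sales bot
    # ... one entry per chatbot
}

def classify_for_bot(text: str, bot_id: str) -> int:
    """Return the predicted class index, restricted to the given bot's classes."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits.squeeze(0)  # shape: (NUM_CLASSES,)

    # Masking step: suppress every class that does not belong to this chatbot,
    # so the argmax can only land inside the bot's own answer bank.
    mask = torch.full_like(logits, float("-inf"))
    mask[BOT_CLASS_IDS[bot_id]] = 0.0
    return int(torch.argmax(logits + mask).item())

The predicted class index would then be looked up in the corresponding chatbot's answer database to produce the reply.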

References

  • [1] P. Muangkammuen, N. Intiruk, and K. R. Saikaew, “Automated Thai-FAQ Chatbot using RNN-LSTM,” 2018 22nd International Computer Science and Engineering Conference (ICSEC), 2018.
  • [2] S. Ozan and D. E. Tasar, “Auto-tagging of short conversational sentences using natural language processing methods,” 2021 29th Signal Processing and Communications Applications Conference (SIU), 2021.
  • [3] D. E. Taşar, Ş. Ozan, U. Özdil, M. F. Akca, O. Ölmez, S. Gülüm, S. Kutal, and C. Belhan, “Auto-tagging of Short Conversational Sentences using Transformer Methods,” arXiv preprint arXiv:2106.01735, 2021.
  • [4] D. E. Taşar, Ş. Ozan, M. F. Akca, O. Ölmez, S. Gülüm, S. Kutal, and C. Belhan, “Çok Alanlı Chatbot Mimarilerinde Avantajlı Performans ve Bellek Takası” (Advantageous Performance and Memory Trade-Off in Multi-Domain Chatbot Architectures), presented at ICADA, Online, 26-28 Nov. 2021.
  • [5] E. S. Tellez, S. Miranda-Jiménez, M. Graff, D. Moctezuma, R. R. Suárez, and O. S. Siordia, “A simple approach to multilingual polarity classification in Twitter,” Pattern Recognition Letters, vol. 94, pp. 68–74, 2017.
  • [6] X. Ni, J.-T. Sun, J. Hu, and Z. Chen, “Cross lingual text classification by mining multilingual topics from Wikipedia,” Proceedings of the Fourth ACM International Conference on Web Search and Data Mining (WSDM '11), 2011.
  • [7] H. Schwenk and X. Li, “A corpus for multilingual document classification in eight languages,” arXiv preprint arXiv:1805.09821, 2018.
  • [8] X. Zhou, X. Wan, and J. Xiao, “Attention-based LSTM network for cross-lingual sentiment classification,” Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 2016.
  • [9] I. Casanueva, T. Temcinas, D. Gerz, M. Henderson, and I. Vulic, “Efficient Intent Detection with Dual Sentence Encoders,” 2020.
  • [10] J. Lehmann, R. Isele, M. Jakob, A. Jentzsch, D. Kontokostas, P. N. Mendes, S. Hellmann, et al., “DBpedia – a large-scale, multilingual knowledge base extracted from Wikipedia,” Semantic Web, vol. 6, no. 2, pp. 167–195, 2015.
  • [11] X. Liu, P. Swietojanski, and V. Rieser, “Benchmarking Natural Language Understanding Services for building Conversational Agents,” Proceedings of the Tenth International Workshop on Spoken Dialogue Systems Technology (IWSDS), Springer, 2019.
  • [12] Ishant, “Emotions in text,” Kaggle, 18-Nov-2020. [Online]. Available: https://www.kaggle.com/ishantjuyal/emotions-in-text. [Accessed: 12-July-2021].
  • [13] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for language understanding,” arXiv preprint arXiv:1810.04805, 2018.
There are 13 citations in total.

Details

Primary Language English
Subjects Artificial Intelligence
Journal Section Research Articles
Authors

Davut Emre Taşar

Şükrü Ozan

Seçilay Kutal

Oğuzhan Ölmez

Semih Gülüm

Fatih Akca

Ceren Belhan

Publication Date December 30, 2021
Submission Date November 30, 2021
Published in Issue Year 2021 Volume: 1 Issue: 2

Cite

IEEE D. E. Taşar, Ş. Ozan, S. Kutal, O. Ölmez, S. Gülüm, F. Akca, and C. Belhan, “Performance Trade-Off for Bert Based Multi-Domain Multilingual Chatbot Architectures”, Journal of Artificial Intelligence and Data Science, vol. 1, no. 2, pp. 144–149, 2021.

All articles published by JAIDA are licensed under a Creative Commons Attribution 4.0 International License.
