Research Article

Designing An Information Framework For Semantic Search

Year 2021, Issue: 32, 682 - 689, 31.12.2021
https://doi.org/10.31590/ejosat.1043441

Abstract

New-generation information retrieval procedures provide powerful tools for reshaping the design of search engines. Although semantic analysis is gradually being adopted in professional applications, the complex structure of the knowledge behind the information calls for successive data-driven learning models. Most text models in use today rely on lexical features, and search engines built on lexical methods lack contextual and semantic information. Deep learning methods overcome this barrier: neural models can capture the contextual information of different content types such as text, images, and video, yielding more accurate results. In this study, search engines are examined from a broad perspective covering both lexical and semantic features. Semantic search methods were evaluated and compared with lexical methods on datasets of scientific documents. Because scientific documents form relatively well-structured collections with little irrelevant content, the study could focus on comparing semantic search methods and neural models without handling out-of-context data or semantic conflicts. The experiments show that semantic search outperforms lexical search. We conclude that current information retrieval tasks require new semantic perspectives in which multimodal data is processed with deep learning strategies.
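
To make the lexical-versus-semantic contrast above concrete, the sketch below scores a toy corpus with a BM25 lexical baseline and with a Sentence-BERT-style bi-encoder (both approaches appear in the reference list). This is a minimal illustration, not the paper's actual pipeline: the corpus, the query, and the "all-MiniLM-L6-v2" checkpoint are assumptions made here purely for demonstration.

    # Minimal lexical-vs-semantic retrieval sketch (illustrative only, not the
    # authors' experimental setup). BM25 scores by weighted term overlap; the
    # bi-encoder scores by cosine similarity of contextual sentence embeddings.
    from rank_bm25 import BM25Okapi                              # pip install rank-bm25
    from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

    corpus = [
        "BM25 ranks documents by weighted overlap of query terms.",
        "Dense retrievers embed texts in a shared vector space.",
        "TREC-COVID is a pandemic-era scientific test collection.",
    ]
    query = "Which approach matches questions to passages by meaning?"

    # Lexical baseline: whitespace tokenization feeds the BM25 scorer.
    bm25 = BM25Okapi([doc.lower().split() for doc in corpus])
    lexical_scores = bm25.get_scores(query.lower().split())

    # Semantic search: a pre-trained Sentence-BERT-style bi-encoder with a
    # MiniLM backbone; "all-MiniLM-L6-v2" is one public checkpoint, assumed
    # here for illustration.
    model = SentenceTransformer("all-MiniLM-L6-v2")
    doc_embeddings = model.encode(corpus, convert_to_tensor=True)
    query_embedding = model.encode(query, convert_to_tensor=True)
    semantic_scores = util.cos_sim(query_embedding, doc_embeddings)[0]

    for doc, lex, sem in zip(corpus, lexical_scores, semantic_scores):
        print(f"BM25={lex:5.2f}  cosine={float(sem):5.2f}  {doc}")

Since query terms such as "meaning" and "matches" never occur in the corpus, the lexical scores should stay near zero, while the embedding model can still rank the dense-retrieval sentence highly; that gap is precisely the missing contextual and semantic information the abstract attributes to lexical methods.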

References

  • Bajaj, P., Campos, D., Craswell, N., Deng, L., Gao, J., Liu, X., Majumder, R., McNamara, A., Mitra, B., Nguyen, T., Rosenberg, M., Song, X., Stoica, A., Tiwary, S., Wang, T. (2016). MS MARCO: A Human Generated MAchine Reading COmprehension Dataset.
  • Bojanowski, P., Grave, E., Joulin, A., Mikolov, T. (2016). Enriching Word Vectors with Subword Information.
  • Boteva, V., Gholipour, D., Sokolov, A., & Riezler, S. (2016). A full-text learning to rank dataset for medical information retrieval. Lecture Notes in Computer Science, 716-722. doi:10.1007/978-3-319-30671-1_58
  • Clark, K., Luong, M., Le, Q., Manning, C. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.
  • Cohan, A., Feldman, S., Beltagy, I., Downey, D., Weld, D. (2020). SPECTER: Document-level Representation Learning using Citation-informed Transformers.
  • Devlin, J., Chang, M., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.
  • Hofstätter, S., Althammer, S., Schröder, M., Sertkan, M., & Hanbury, A. (2020). Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation.
  • Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V. (2019). RoBERTa: A Robustly Optimized BERT Pretraining Approach.
  • Macdonald, C., & Tonellotto, N. (2020). Declarative Experimentation in Information Retrieval using PyTerrier. Proceedings of the 2020 ACM SIGIR International Conference on Theory of Information Retrieval (ICTIR '20). doi:10.1145/3409256.3409829
  • Maia, M., Handschuh, S., Freitas, A., Davis, B., McDermott, R., Zarrouk, M., & Balahur, A. (2018). WWW'18 Open Challenge: Financial Opinion Mining and Question Answering. Companion Proceedings of The Web Conference 2018 (WWW '18). doi:10.1145/3184558.3192301
  • Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space.
  • Reimers, N., Gurevych, I. (2019). Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.
  • Robertson, S., Walker, S., & Beaulieu, M. (2000). Experimentation as a way of life: Okapi at TREC. Information Processing & Management, 36(1), 95-108. doi:10.1016/s0306-4573(99)00046-1
  • Robertson, S., & Zaragoza, H. (2009). The Probabilistic Relevance Framework: BM25 and Beyond. Foundations and Trends® in Information Retrieval, 3(4), 333-389. doi:10.1561/1500000019
  • Sanh, V., Debut, L., Chaumond, J., Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter.
  • Song, K., Tan, X., Qin, T., Lu, J., Liu, T. (2020). MPNet: Masked and Permuted Pre-training for Language Understanding.
  • Thakur, N., Reimers, N., Rücklé, A., Srivastava, A., Gurevych, I. (2021). BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models.
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A., Kaiser, L., & Polosukhin, I. (2017). Attention Is All You Need.
  • Voorhees, E., Alam, T., Bedrick, S., Demner-Fushman, D., Hersh, W., Lo, K., Roberts, K., Soboroff, I., Wang, L. (2021). TREC-COVID: Constructing a Pandemic Information Retrieval Test Collection.
  • Wachsmuth, H., Syed, S., & Stein, B. (2018). Retrieval of the best counterargument without prior topic knowledge. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). doi:10.18653/v1/p18-1023
  • Wadden, D., Lin, S., Lo, K., Wang, L., van Zuylen, M., Cohan, A., & Hajishirzi, H. (2020). Fact or Fiction: Verifying Scientific Claims. Retrieved November 28, 2021, from the arXiv database.
  • Wang, W., Wei, F., Dong, L., Bao, H., Yang, N., Zhou, M. (2020). MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers.
There are 22 references in total.

Details

Primary Language English
Subjects Engineering
Section Articles
Authors

İsmail Burak Parlak 0000-0002-0887-4226

Alper Mıtıncık 0000-0003-3602-5237

Publication Date December 31, 2021
Published in Issue Year 2021, Issue: 32

Cite

APA Parlak, İ. B., & Mıtıncık, A. (2021). Designing An Information Framework For Semantic Search. Avrupa Bilim ve Teknoloji Dergisi, (32), 682-689. https://doi.org/10.31590/ejosat.1043441