Text clustering is the process of grouping similar texts of varying length into the same cluster. Text clustering methods are an important tool for data analysis and for extracting information from text data, and many studies have addressed the task with different approaches and methods. In this study, the pre-trained models BERT (Bidirectional Encoder Representations from Transformers), RoBERTa (Robustly Optimized BERT Pretraining Approach), ALBERT (A Lite BERT), and MPNet (Masked and Permuted Pre-training for Language Understanding) were compared against TF-IDF (term frequency-inverse document frequency), a traditional statistical feature extraction method, for text representation. After the feature extraction stage, performance was measured by clustering with the K-means, BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies), Agglomerative clustering, and Mini-batch K-means algorithms. The evaluation shows that the pre-trained models yield superior clustering results compared to the traditional method.
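The pipeline the abstract describes can be sketched in a few lines of Python. The sketch below is illustrative, not the paper's code: the sample corpus, the cluster count, the `all-mpnet-base-v2` checkpoint, and the use of the silhouette score as the quality measure are all assumptions, since the abstract does not specify datasets, models beyond their families, or evaluation metrics. It contrasts a TF-IDF representation with a sentence-transformer embedding, feeding each into the four clustering algorithms named above.

```python
# Minimal sketch of the representation-vs-clustering comparison, assuming
# the sentence-transformers and scikit-learn libraries are installed.
# Corpus, cluster count, model name, and metric are illustrative only.
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans, Birch, AgglomerativeClustering, MiniBatchKMeans
from sklearn.metrics import silhouette_score

texts = [
    "The stock market rallied after the announcement.",
    "Shares climbed sharply following the news.",
    "The team won the championship game last night.",
    "A late goal decided the final match.",
]
n_clusters = 2  # assumed; the paper's cluster counts depend on its datasets

# Traditional statistical representation: TF-IDF vectors.
tfidf_features = TfidfVectorizer().fit_transform(texts).toarray()

# Pre-trained representation: sentence embeddings (an MPNet-based model is
# shown; BERT, RoBERTa, and ALBERT variants can be swapped in the same way).
model = SentenceTransformer("all-mpnet-base-v2")
embedding_features = model.encode(texts)

clusterers = {
    "K-means": KMeans(n_clusters=n_clusters, n_init=10, random_state=0),
    "BIRCH": Birch(n_clusters=n_clusters),
    "Agglomerative": AgglomerativeClustering(n_clusters=n_clusters),
    "Mini-batch K-means": MiniBatchKMeans(n_clusters=n_clusters, n_init=10,
                                          random_state=0),
}

for rep_name, features in [("TF-IDF", tfidf_features),
                           ("MPNet", embedding_features)]:
    for clu_name, clusterer in clusterers.items():
        labels = clusterer.fit_predict(features)
        # Silhouette score as one example clustering quality measure.
        score = silhouette_score(features, labels)
        print(f"{rep_name} + {clu_name}: silhouette = {score:.3f}")
```

Running each clustering algorithm on both feature matrices, as above, is what allows a like-for-like comparison between the traditional and pre-trained representations.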
Natural language processing, sentence transformers, pre-trained models, text clustering, representation
| Primary Language | English |
|---|---|
| Subjects | Computer Software |
| Journal Section | Research Articles |
| Authors | |
| Publication Date | December 30, 2024 |
| Submission Date | November 1, 2024 |
| Acceptance Date | December 6, 2024 |
| Published in Issue | Year 2024 Volume: 5 Issue: 2 |