Feature selection is a dimensionality reduction technique used to select the features that are relevant to a machine learning task. Reducing dataset size by eliminating redundant and irrelevant features plays a pivotal role in improving the performance of machine learning algorithms, speeding up the learning process, and building simpler models. This need for feature selection has attracted considerable interest among researchers, and feature selection has found a wide range of application domains, including text mining, pattern recognition, cybersecurity, bioinformatics, and big data. As a result, a substantial body of literature has been published on feature selection over the years, and a wide variety of feature selection methods have been proposed. The quality of a feature selection algorithm is measured not only by the quality of the models built on the features it selects, or by the clustering tendency of those features, but also by its stability. This study therefore focuses on feature selection and feature selection stability. In the pages that follow, the general concepts and methods of feature selection, feature selection stability, stability measures, and the causes of and remedies for instability are discussed.
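To make the notion of stability concrete, below is a minimal sketch, not taken from the paper itself: it runs a filter selector on bootstrap resamples of a dataset and reports the average pairwise Jaccard similarity of the selected feature subsets. The choice of scikit-learn's `SelectKBest` with `f_classif` as the selector, and Jaccard similarity as the measure, are illustrative assumptions; the study surveys many alternative selectors and stability measures.

```python
# Sketch: estimating feature selection stability via bootstrap resampling.
# Assumptions (not prescribed by the paper): SelectKBest/f_classif as the
# selector, Jaccard similarity as the stability measure.
from itertools import combinations

import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif


def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two feature subsets."""
    return len(a & b) / len(a | b)


def selection_stability(X, y, k=10, n_runs=20, seed=0):
    """Average pairwise Jaccard similarity of the feature subsets
    chosen by SelectKBest across bootstrap resamples of (X, y)."""
    rng = np.random.default_rng(seed)
    subsets = []
    for _ in range(n_runs):
        idx = rng.integers(0, len(X), size=len(X))  # bootstrap sample
        selector = SelectKBest(f_classif, k=k).fit(X[idx], y[idx])
        subsets.append(set(np.flatnonzero(selector.get_support())))
    pairs = list(combinations(subsets, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)


# Synthetic data: a stability score near 1 means the selector picks
# nearly the same features regardless of sampling perturbations.
X, y = make_classification(n_samples=300, n_features=50,
                           n_informative=5, random_state=0)
print(f"stability (mean Jaccard): {selection_stability(X, y):.3f}")
```

Perturbing the training data and comparing the resulting feature subsets is one common way stability is operationalized; the survey covers a variety of such measures and their properties.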
Keywords: Feature selection, Dimensionality reduction, Types of feature selection, Feature selection stability, Stability measures
Primary Language | English |
---|---|
Subjects | Engineering |
Journal Section | Computer Engineering |
Publication Date | December 1, 2023 |
Published in Issue | Year 2023 |