Support Vector Machine (SVM) is a supervised machine learning method used for classification and regression. It is grounded in Vapnik-Chervonenkis (VC) theory and the Structural Risk Minimization (SRM) principle. Owing to this strong theoretical foundation, SVM achieves high performance compared with many other machine learning methods. Selecting the hyperparameters and the kernel function is an important task when applying SVM. In this study, the effect of hyperparameter tuning and of sample size on SVM classification accuracy was investigated for different kernel functions. For this purpose, UCI datasets of different sizes and with different correlations were simulated. Grid search with 10-fold cross-validation was used to tune the hyperparameters. SVM classification was then performed using three kernel functions, and the resulting classification accuracies were examined.
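The tuning procedure the abstract describes (grid search over kernel functions and hyperparameters, scored by 10-fold cross-validation) can be sketched with scikit-learn. This is a minimal illustration, not the authors' code: the paper's simulated datasets are not available, so a synthetic dataset from `make_classification` stands in, and the specific grid values (`C`, `gamma`, degree) are assumptions rather than the values used in the study.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for the study's simulated data (sizes/correlations unknown).
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Scaling inside the pipeline keeps each CV fold free of test-set leakage.
pipe = Pipeline([("scale", StandardScaler()), ("svm", SVC())])

# Example grid over three kernel functions; the exact values are illustrative.
param_grid = {
    "svm__kernel": ["linear", "rbf", "poly"],
    "svm__C": [0.1, 1, 10],
    "svm__gamma": ["scale", 0.01, 0.1],
}

# Grid search scored by 10-fold cross-validation, as in the abstract.
search = GridSearchCV(pipe, param_grid, cv=10, scoring="accuracy")
search.fit(X_tr, y_tr)

print("best parameters:", search.best_params_)
print("test accuracy:  ", search.score(X_te, y_te))
```

With this setup, `best_params_` reports the kernel and hyperparameter combination that maximized mean cross-validated accuracy, and the held-out score gives an unbiased estimate of the tuned model's classification accuracy.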
| Primary Language | English |
|---|---|
| Subjects | Applied Mathematics |
| Journal Section | Research Article |
| Authors | |
| Publication Date | March 30, 2021 |
| Submission Date | February 8, 2021 |
| Published in Issue | Year 2021, Issue 34 |
As of 2021, JNT is licensed under a Creative Commons Attribution-NonCommercial 4.0 International Licence (CC BY-NC).