Research Article



A Sequential Minimal Optimization Algorithm with Second-Order-Like Information to Solve a Non-Smooth Support Vector Regression Constrained Dual Problem

Year 2021, Volume: 26 Issue: 3, 1111 - 1120, 31.12.2021
https://doi.org/10.17482/uumfd.974353

Abstract

The ε-insensitive Support Vector Regression (ε-SVR) is formulated with the ε-insensitive regularized ℓ1 error loss function; it inherits the robustness of the ℓ1 loss while also being insensitive to small errors. In addition, regularizing the error loss provides control over the flatness of the solution. In this study, the ε-SVR dual problem is derived as a non-smooth convex piecewise quadratic problem under equality and inequality constraints, with the advantage of having half as many optimization variables as the classical smooth SVR dual problem. This convex non-smooth dual optimization problem is solved with an efficient Sequential Minimal Optimization (SMO) algorithm that uses a working set selection (WSS) based on minimizing an upper bound on the difference between consecutive loss function values. Previously, the non-smooth dual ε-SVR problem was solved with an SMO algorithm whose WSS relied on first-order information, selecting the pair that most violates the Karush-Kuhn-Tucker (KKT) conditions. The proposed WSS instead employs second-order-like information, and its superiority over the first-order counterpart for this non-smooth optimization problem is demonstrated on a set of real-world datasets. The results are also compared with the classical smooth SVR.
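
The following Python sketch is illustrative only and is not the paper's algorithm: it shows (i) the ε-insensitive ℓ1 loss that underlies ε-SVR and (ii) the general idea of second-order working-set selection in SMO, in the spirit of Fan et al. (2005) for a standard smooth SVM dual with a box constraint and one equality constraint. The function names, the simplified constraints, and the linear-kernel demo data are assumptions for illustration; the non-smooth piecewise-quadratic dual treated in the paper and its WSS differ in detail.

```python
# Illustrative sketch (assumed names and simplified constraints), not the paper's exact method.
import numpy as np


def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """Epsilon-insensitive l1 loss: zero inside the eps-tube, linear outside it."""
    return np.maximum(0.0, np.abs(y_true - y_pred) - eps)


def select_pair_second_order(alpha, grad, K, y, C, tau=1e-12):
    """Pick a working pair (i, j) for an SMO step on a smooth SVM-type dual.

    i : the most KKT-violating index (first-order information).
    j : among the remaining violators, the index with the largest estimated
        objective decrease, which uses the curvature K_ii + K_jj - 2*K_ij
        (second-order information), as in Fan et al. (2005).
    Returns (None, None) when no violating pair exists.
    """
    n = len(alpha)
    # Indices whose variables may still move "up" or "down" inside the box [0, C].
    up = [t for t in range(n) if (y[t] > 0 and alpha[t] < C) or (y[t] < 0 and alpha[t] > 0)]
    down = [t for t in range(n) if (y[t] > 0 and alpha[t] > 0) or (y[t] < 0 and alpha[t] < C)]
    if not up or not down:
        return None, None
    i = max(up, key=lambda t: -y[t] * grad[t])
    g_i = -y[i] * grad[i]
    best_j, best_gain = None, 0.0
    for t in down:
        diff = g_i - (-y[t] * grad[t])
        if diff <= 0:                      # t does not form a violating pair with i
            continue
        curv = max(K[i, i] + K[t, t] - 2.0 * K[i, t], tau)  # guard non-positive curvature
        gain = diff * diff / (2.0 * curv)  # estimated decrease of the quadratic objective
        if gain > best_gain:
            best_gain, best_j = gain, t
    return (i, best_j) if best_j is not None else (None, None)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, C = 6, 1.0
    X = rng.normal(size=(n, 2))
    y = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])
    K = X @ X.T                      # linear-kernel Gram matrix for the demo
    alpha = np.zeros(n)
    grad = -np.ones(n)               # gradient of 0.5*a'(yy'*K)a - 1'a at a = 0
    print("selected pair:", select_pair_second_order(alpha, grad, K, y, C))
    print("losses:", eps_insensitive_loss(np.array([1.0, 2.0]), np.array([1.05, 2.5]), eps=0.1))
```

The point of the sketch is the contrast drawn in the abstract: a first-order rule would stop after choosing the maximal violator i, whereas the second-order rule also weighs each candidate partner j by the curvature of the objective along the (i, j) direction, which is what "second-order like information" refers to here.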

References

  • 1. Abe, S. (2015) Optimizing working sets for training support vector regressors by Newton's method, International Joint Conference on Neural Networks (IJCNN), Killarney, Ireland. doi:10.1109/IJCNN.2015.7280309
  • 2. Abe, S. (2016) Fusing sequential minimal optimization and Newton's method for support vector training, International Journal of Machine Learning and Cybernetics, 7(3), 345–364. doi:10.1007/s13042-014-0265-x
  • 3. Barbero, A., Lopez, J. and Dorronsoro, J.R. (2009) Cycle-breaking acceleration of SVM training, Neurocomputing, 72(7–9), 1398–1406. doi:10.1016/j.neucom.2008.12.014
  • 4. Barbero, A. and Dorronsoro, J.R. (2011) Momentum sequential minimal optimization: an accelerated method for support vector machine training, International Joint Conference on Neural Networks (IJCNN). doi:10.1109/IJCNN.2011.6033245
  • 5. Boser, B., Guyon, I. and Vapnik, V. (1992) A training algorithm for optimal margin classifiers, Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152. doi:10.1145/130385.130401
  • 6. Bottou, L. and Lin, C.J. (2007) Support vector machine solvers, in Large Scale Kernel Machines, MIT Press, Cambridge.
  • 7. Cortes, C. and Vapnik, V. (1995) Support-vector networks, Machine Learning, 20, 273–297.
  • 8. Fan, R.E., Chen, P.H. and Lin, C.J. (2005) Working set selection using second order information for training support vector machines, Journal of Machine Learning Research, 6, 1889–1918.
  • 9. Flake, G.W. and Lawrence, S. (2001) Efficient SVM regression training with SMO, Machine Learning, 46(1), 271–290. doi:10.1023/A:1012474916001
  • 10. Guo, J., Takahashi, N. and Nishi, T. (2006) A novel sequential minimal optimization algorithm for support vector regression, Lecture Notes in Computer Science, 4232, 827–836. doi:10.1007/11893028_92
  • 11. Keerthi, S.S., Shevade, S.K., Bhattacharyya, C. and Murthy, K.R.K. (2001) Improvements to Platt's SMO algorithm for SVM classifier design, Neural Computation, 13(3), 637–649. doi:10.1162/089976601300014493
  • 12. Kocaoğlu, A. (2019) An Efficient SMO Algorithm for Solving Non-smooth Problem Arising in ε-Insensitive Support Vector Regression, Neural Processing Letters, 50, 933–955. doi:10.1007/s11063-018-09975-3
  • 13. Kumar, B., Sinha, A., Chakrabarti, S. and Vyas, O.P. (2021) A fast learning algorithm for One-Class Slab Support Vector Machines, arXiv:2011.03243. https://arxiv.org/abs/2011.03243 doi:10.1016/j.knosys.2021.107267
  • 14. Lin, C.J. (2001) On the convergence of the decomposition method for support vector machines, IEEE Transactions on Neural Networks, 12(6), 1288–1298. doi:10.1109/72.963765
  • 15. Lopez, J. and Dorronsoro, J.R. (2012) Simple proof of convergence of the SMO algorithm for different SVM variants, IEEE Transactions on Neural Networks and Learning Systems, 23(7), 1142–1147. doi:10.1109/TNNLS.2012.2195198
  • 16. Noronha, D.H., Torquato, M.F. and Fernandes, M.A.C. (2019) A parallel implementation of sequential minimal optimization on FPGA, Microprocessors and Microsystems, 69, 138–151. doi:10.1016/j.micpro.2019.06.007
  • 17. Platt, J.C. (1998) Fast training of support vector machines using sequential minimal optimization, in Kernel methods: support vector machines, MIT Press, Cambridge.
  • 18. Ruszczynski, A. (2006) Nonlinear Optimization, Princeton University Press, Princeton, NJ. ISBN 978-0691119151. MR 2199043.
  • 19. Shawe-Taylor, J. and Sun, S. (2011) A review of optimization methodologies in support vector machines, Neurocomputing, 74(17), 3609–3618. doi:10.1016/j.neucom.2011.06.026
  • 20. Smola, A.J. and Schölkopf, B. (2004) A tutorial on support vector regression, Statistics and Computing, 14(3), 199–222. doi:10.1023/B:STCO.0000035301.49549.88
  • 21. Takahashi, N., Guo, J. and Nishi, T. (2006) Global convergence of SMO algorithm for support vector regression, IEEE Transactions on Neural Networks, 19(6), 971–982. doi:10.1109/TNN.2007.915116
  • 22. Vapnik, V.N. (1998) Statistical Learning Theory, Wiley, New York, USA.

Details

Primary Language Turkish
Subjects Electrical Engineering
Journal Section Research Articles
Authors

Aykut Kocaoğlu 0000-0001-5151-0463

Publication Date December 31, 2021
Submission Date July 25, 2021
Acceptance Date November 13, 2021
Published in Issue Year 2021 Volume: 26 Issue: 3

Cite

APA Kocaoğlu, A. (2021). DÜZGÜN OLMAYAN DESTEK VEKTÖR REGRESYONU KISITLI İKİNCİL PROBLEMİNİ ÇÖZMEK İÇİN İKİNCİ DERECEDEN BENZER BİLGİLERE SAHİP BİR ARDIŞIK ASGARİ ENİYİLEME ALGORİTMASI. Uludağ Üniversitesi Mühendislik Fakültesi Dergisi, 26(3), 1111-1120. https://doi.org/10.17482/uumfd.974353
