Research Article

Vote-Based: Ensemble Approach

Year 2021, Volume 25, Issue 3, 858-866, 30.06.2021
https://doi.org/10.16984/saufenbilder.901960

Abstract

Vote-based classification is an ensemble learning method in which individual classifiers are built on multiple weighted subsets of the training data and their predictions are combined by voting. In designing the method, training, validation, and test sets are used within an ensemble framework to develop an efficient and robust binary classification model. Ensemble learning is a prominent and broad research area of Machine Learning (ML) and image recognition that helps improve predictive performance, and in most cases an ensemble algorithm outperforms individual ML algorithms. Unlike existing methods, the proposed technique builds a vote-based ensemble classifier that integrates the advantages of three ML classifiers: Artificial Neural Network (ANN), Naive Bayes (NB), and Logistic Model Tree (LMT). The paper proposes an ensemble framework and evaluates it on datasets from the UCI ML repository through a performance analysis. The experimental results indicate that the proposed method produces more accurate results than the base learners in terms of accuracy, area under the curve (AUC), precision, recall, and F-measure.
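The paper builds its vote-based ensemble in WEKA [22]; the sketch below is only a rough, illustrative rendering of the same workflow in Python with scikit-learn, not the author's implementation. It combines an ANN, Naive Bayes, and a tree learner by soft voting and reports the metrics listed above on a UCI dataset. Because scikit-learn provides no Logistic Model Tree, a plain decision tree stands in for LMT; the dataset choice, split ratio, and all hyperparameters are assumptions made for the example.

# Illustrative sketch of a vote-based ensemble (ANN + NB + tree learner).
# NOTE: the paper uses WEKA with LMT; scikit-learn has no LMT, so a
# DecisionTreeClassifier is used here as a stand-in.
from sklearn.datasets import load_breast_cancer            # a binary UCI dataset
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import VotingClassifier
from sklearn.neural_network import MLPClassifier           # ANN
from sklearn.naive_bayes import GaussianNB                  # NB
from sklearn.tree import DecisionTreeClassifier             # stand-in for LMT
from sklearn.metrics import (accuracy_score, roc_auc_score,
                             precision_score, recall_score, f1_score)

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# Soft voting averages the class probabilities of the three base learners.
ensemble = VotingClassifier(
    estimators=[
        ("ann", make_pipeline(StandardScaler(),
                              MLPClassifier(hidden_layer_sizes=(50,),
                                            max_iter=1000, random_state=42))),
        ("nb", GaussianNB()),
        ("tree", DecisionTreeClassifier(random_state=42)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)

# Evaluate with the metrics used in the paper.
y_pred = ensemble.predict(X_test)
y_prob = ensemble.predict_proba(X_test)[:, 1]
print("Accuracy :", accuracy_score(y_test, y_pred))
print("AUC      :", roc_auc_score(y_test, y_prob))
print("Precision:", precision_score(y_test, y_pred))
print("Recall   :", recall_score(y_test, y_pred))
print("F-measure:", f1_score(y_test, y_pred))

With voting="hard" the ensemble would instead take a majority vote over the predicted class labels, which is equally consistent with the "vote-based" description in the abstract.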

Supporting Institution

Ege University

References

  • [1] M. A. Shehab and N. Kahraman, “A weighted voting ensemble of efficient regularized extreme learning machine,” Comput. Electr. Eng., vol. 85, 2020.
  • [2] J. Cao, S. Kwong, R. Wang, X. Li, K. Li, and X. Kong, “Class-specific soft voting based multiple extreme learning machines ensemble,” Neurocomputing, vol. 149, no. Part A, pp. 275–284, 2015.
  • [3] A. S. Khwaja, A. Anpalagan, M. Naeem, and B. Venkatesh, “Joint bagged-boosted artificial neural networks: Using ensemble machine learning to improve short-term electricity load forecasting,” Electr. Power Syst. Res., vol. 179, p. 106080, 2020.
  • [4] P. J. G. Nieto, E. García-Gonzalo, and J. C. Á. Antón, “A comparison of several machine learning techniques for the centerline segregation prediction in continuous cast steel slabs and evaluation of its performance,” J. Comput. Appl. Math., vol. 330, pp. 877–895, 2018.
  • [5] S. Lee and C. H. Jun, “Fast incremental learning of logistic model tree using least angle regression,” Expert Syst. Appl., vol. 97, pp. 137–145, 2018.
  • [6] H. Liu and L. Zhang, “Advancing Ensemble Learning Performance through data transformation and classifiers fusion in granular computing context,” Expert Syst. Appl., vol. 131, pp. 20–29, 2019.
  • [7] S. Shen, M. Sadoughi, M. Li, Z. Wang, and C. Hu, “Deep convolutional neural networks with ensemble learning and transfer learning for capacity estimation of lithium-ion batteries,” Appl. Energy, vol. 260, p. 114296, 2020.
  • [8] A. A. Abro, E. Taşci, and A. Ugur, “A Stacking-based Ensemble Learning Method for Outlier Detection,” Balk. J. Electr. Comput. Eng., vol. 8, no. 2, pp. 181–185, 2020.
  • [9] A. A. Aburomman and M. B. I. Reaz, “A novel SVM-kNN-PSO ensemble method for intrusion detection system,” vol. 38, pp. 360–372, 2016.
  • [10] F. Xu, Z. Pan, and R. Xia, “E-commerce product review sentiment classification based on a naïve Bayes continuous learning framework,” Inf. Process. Manag., p. 102221, 2020.
  • [11] S. S. Panesar, R. N. D'Souza, F. Yeh, and J. C. Fernandez-Miranda, “Machine Learning Versus Logistic Regression Methods for 2-Year Mortality Prognostication in a Small, Heterogeneous Glioma Database,” World Neurosurg. X, vol. 2, p. 100012, 2019.
  • [12] A. A. Abro, M. Alci, and F. Hassan, “Theoretical Approach of Predictive Analytics on Big Data with Scope of Machine Learning.”
  • [13] W. Chen et al., “A comparative study of logistic model tree, random forest, and classification and regression tree models for spatial prediction of landslide susceptibility,” Catena, vol. 151, pp. 147–160, 2017.
  • [14] A. Kumar and A. Halder, “Ensemble-based active learning using fuzzy-rough approach for cancer sample classification,” Eng. Appl. Artif. Intell., vol. 91, p. 103591, 2020.
  • [15] X. Zheng, W. Chen, Y. You, Y. Jiang, M. Li, and T. Zhang, “Ensemble deep learning for automated visual classification using EEG signals,” Pattern Recognit., vol. 102, p. 107147, 2020.
  • [16] B. K. Singh, “Investigations on Impact of Feature Normalization Techniques on Classifier's Performance in Breast Tumor Classification,” pp. 10–15, 2017.
  • [17] L. Fan, K. L. Poh, and P. Zhou, “A sequential feature extraction approach for naïve Bayes classification of microarray data,” Expert Syst. Appl., vol. 36, no. 6, pp. 9919–9923, 2009.
  • [18] E. Lella and G. Vessio, “Ensembling complex network ‘perspectives’ for mild cognitive impairment detection with artificial neural networks,” Pattern Recognit. Lett., vol. 136, pp. 168–174, 2020.
  • [19] R. Moraes, J. F. Valiati, and W. P. Gavião Neto, “Document-level sentiment classification: An empirical comparison between SVM and ANN,” Expert Syst. Appl., vol. 40, no. 2, pp. 621–633, 2013.
  • [20] N. Landwehr, M. Hall, and E. Frank, “Logistic model trees,” Mach. Learn., vol. 59, no. 1–2, pp. 161–205, 2005.
  • [21] UCI Machine Learning Repository, 2018, https://archive.ics.uci.edu/ml/index.php
  • [22] E. Frank, M. A. Hall, and I. H. Witten, The WEKA Workbench. Online Appendix for “Data Mining: Practical Machine Learning Tools and Techniques,” Fourth Edition, Morgan Kaufmann, 2016.
  • [23] T. Fawcett, “An introduction to ROC analysis,” Pattern Recognit. Lett., vol. 27, no. 8, pp. 861–874, 2006.
  • [24] L. A. Bull, K. Worden, R. Fuentes, G. Manson, E. J. Cross, and N. Dervilis, “Outlier ensembles: A robust method for damage detection and unsupervised feature extraction from high-dimensional data,” J. Sound Vib., vol. 453, pp. 126–150, 2019.
  • [25] T. Fawcett, “ROC graphs: Notes and practical considerations for researchers,” Mach. Learn., vol. 31, no. 1, pp. 1–38, 2004.
  • [26] A. A. Abro, M. A. Yimer, and Z. Bhatti, “Identifying the Machine Learning Techniques for Classification of Target Datasets,” Sukkur IBA J. Comput. Math. Sci., vol. 4, no. 1, 2020.

Details

Primary Language English
Subjects Artificial Intelligence
Section Research Article
Authors

Abdul Ahad Abro 0000-0002-3591-9231

Publication Date June 30, 2021
Submission Date March 23, 2021
Acceptance Date May 31, 2021
Published in Issue Year 2021, Volume 25, Issue 3

Cite

APA Abro, A. A. (2021). Vote-Based: Ensemble Approach. Sakarya University Journal of Science, 25(3), 858-866. https://doi.org/10.16984/saufenbilder.901960
AMA Abro AA. Vote-Based: Ensemble Approach. SAUJS. June 2021;25(3):858-866. doi:10.16984/saufenbilder.901960
Chicago Abro, Abdul Ahad. “Vote-Based: Ensemble Approach”. Sakarya University Journal of Science 25, no. 3 (June 2021): 858-66. https://doi.org/10.16984/saufenbilder.901960.
EndNote Abro AA (01 June 2021) Vote-Based: Ensemble Approach. Sakarya University Journal of Science 25 3 858–866.
IEEE A. A. Abro, “Vote-Based: Ensemble Approach”, SAUJS, vol. 25, no. 3, pp. 858–866, 2021, doi: 10.16984/saufenbilder.901960.
ISNAD Abro, Abdul Ahad. “Vote-Based: Ensemble Approach”. Sakarya University Journal of Science 25/3 (June 2021), 858-866. https://doi.org/10.16984/saufenbilder.901960.
JAMA Abro AA. Vote-Based: Ensemble Approach. SAUJS. 2021;25:858–866.
MLA Abro, Abdul Ahad. “Vote-Based: Ensemble Approach”. Sakarya University Journal of Science, vol. 25, no. 3, 2021, pp. 858-66, doi:10.16984/saufenbilder.901960.
Vancouver Abro AA. Vote-Based: Ensemble Approach. SAUJS. 2021;25(3):858-66.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.