<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.4 20241031//EN"
        "https://jats.nlm.nih.gov/publishing/1.4/JATS-journalpublishing1-4.dtd">
<article article-type="research-article" dtd-version="1.4">
            <front>

                <journal-meta>
                                                                <journal-id>saujs</journal-id>
            <journal-title-group>
                                                                                    <journal-title>Sakarya University Journal of Science</journal-title>
            </journal-title-group>
                                        <issn pub-type="epub">2147-835X</issn>
                                                                                            <publisher>
                    <publisher-name>Sakarya University</publisher-name>
                </publisher>
                    </journal-meta>
                <article-meta>
                                        <article-id pub-id-type="doi">10.16984/saufenbilder.901960</article-id>
                                                                <article-categories>
                                            <subj-group  xml:lang="en">
                                                            <subject>Artificial Intelligence</subject>
                                                    </subj-group>
                                            <subj-group  xml:lang="tr">
                                                            <subject>Yapay Zeka</subject>
                                                    </subj-group>
                                    </article-categories>
                                                                                                                                                        <title-group>
                                                                                                                                                            <article-title>Vote-Based: Ensemble Approach</article-title>
                                                                                                    </title-group>
            
                                                    <contrib-group content-type="authors">
                                                                        <contrib contrib-type="author">
                                                                    <contrib-id contrib-id-type="orcid">
                                        https://orcid.org/0000-0002-3591-9231</contrib-id>
                                                                <name>
                                    <surname>Abro</surname>
                                    <given-names>Abdul Ahad</given-names>
                                </name>
                                                                    <aff>ILMA University</aff>
                                                            </contrib>
                                                                                </contrib-group>
                        
                                        <pub-date pub-type="pub" iso-8601-date="2021-06-30">
                    <day>30</day>
                    <month>06</month>
                    <year>2021</year>
                </pub-date>
                                        <volume>25</volume>
                                        <issue>3</issue>
                                        <fpage>858</fpage>
                                        <lpage>866</lpage>
                        
                        <history>
                                    <date date-type="received" iso-8601-date="2021-03-23">
                        <day>23</day>
                        <month>03</month>
                        <year>2021</year>
                    </date>
                                                    <date date-type="accepted" iso-8601-date="2021-05-31">
                        <day>31</day>
                        <month>05</month>
                        <year>2021</year>
                    </date>
                            </history>
                                        <permissions>
                    <copyright-statement>Copyright © 1997, Sakarya University Journal of Science</copyright-statement>
                    <copyright-year>1997</copyright-year>
                    <copyright-holder>Sakarya University Journal of Science</copyright-holder>
                </permissions>
            
                                                                                                                        <abstract><p>Vote-based is an ensemble learning method in which each individual classifier is trained on numerous weighted categories of the training dataset. In designing the method, training, validation and test sets are applied within an ensemble approach to develop an efficient and robust binary classification model. Ensemble learning is among the most prominent and broad research areas of Machine Learning (ML) and image recognition, as it helps enhance predictive performance. In most cases, an ensemble learning algorithm yields better performance than individual ML algorithms. Unlike existing methods, the proposed technique aggregates an ensemble classifier, known as vote-based, to employ and integrate the advantages of the ML classifiers Artificial Neural Network (ANN), Naive Bayes (NB) and Logistic Model Tree (LMT). This paper proposes an ensemble framework that evaluates datasets from the UCI ML repository through performance analysis. The experimental outcomes indicate that the proposed method provides more accurate results than the base learner approaches in terms of accuracy, area under the curve (AUC), precision, recall, and F-measure.</p></abstract>
                                                            
            
                                                                                        <kwd-group>
                                                    <kwd>Machine Learning</kwd>
                                                    <kwd>Artificial Neural Network</kwd>
                                                    <kwd>Ensemble Learning</kwd>
                                                    <kwd>Data Mining</kwd>
                                                    <kwd>Classification</kwd>
                                            </kwd-group>
                            
                                                                                                                                                <funding-group specific-use="FundRef">
                    <award-group>
                                                    <funding-source>
                                <named-content content-type="funder_name">Ege University</named-content>
                            </funding-source>
                                                                    </award-group>
                </funding-group>
                                </article-meta>
    </front>
    <back>
                            <ref-list>
                                    <ref id="ref1">
                        <label>1</label>
                        <mixed-citation publication-type="journal">M. A. Shehab and N. Kahraman, “A weighted voting ensemble of efficient regularized extreme learning machine,” Comput. Electr. Eng., vol. 85, 2020.</mixed-citation>
                    </ref>
                                    <ref id="ref2">
                        <label>2</label>
                        <mixed-citation publication-type="journal">J. Cao, S. Kwong, R. Wang, X. Li, K. Li, and X. Kong, “Class-specific soft voting based multiple extreme learning machines ensemble,” Neurocomputing, vol. 149, no. Part A, pp. 275–284, 2015.</mixed-citation>
                    </ref>
                                    <ref id="ref3">
                        <label>3</label>
                        <mixed-citation publication-type="journal">A. S. Khwaja, A. Anpalagan, M. Naeem, and B. Venkatesh, “Joint bagged-boosted artificial neural networks: Using ensemble machine learning to improve short-term electricity load forecasting,” Electr. Power Syst. Res., vol. 179, p. 106080, 2020.</mixed-citation>
                    </ref>
                                    <ref id="ref4">
                        <label>4</label>
                        <mixed-citation publication-type="journal">P. J. G. Nieto, E. García-Gonzalo, and J. C. Á. Antón, “A comparison of several machine learning techniques for the centerline segregation prediction in continuous cast steel slabs and evaluation of its performance,” J. Comput. Appl. Math., vol. 330, pp. 877–895, 2018.</mixed-citation>
                    </ref>
                                    <ref id="ref5">
                        <label>5</label>
                        <mixed-citation publication-type="journal">S. Lee and C. H. Jun, “Fast incremental learning of logistic model tree using least angle regression,” Expert Syst. Appl., vol. 97, pp. 137–145, 2018.</mixed-citation>
                    </ref>
                                    <ref id="ref6">
                        <label>6</label>
                        <mixed-citation publication-type="journal">H. Liu and L. Zhang, “Advancing ensemble learning performance through data transformation and classifiers fusion in granular computing context,” Expert Syst. Appl., vol. 131, pp. 20–29, 2019.</mixed-citation>
                    </ref>
                                    <ref id="ref7">
                        <label>7</label>
                        <mixed-citation publication-type="journal">S. Shen, M. Sadoughi, M. Li, Z. Wang, and C. Hu, “Deep convolutional neural networks with ensemble learning and transfer learning for capacity estimation of lithium-ion batteries,” Appl. Energy, vol. 260, p. 114296, 2020.</mixed-citation>
                    </ref>
                                    <ref id="ref8">
                        <label>8</label>
                        <mixed-citation publication-type="journal">A. A. Abro, E. Taşcı, and A. Ugur, “A stacking-based ensemble learning method for outlier detection,” Balk. J. Electr. Comput. Eng., vol. 8, no. 2, pp. 181–185, 2020.</mixed-citation>
                    </ref>
                                    <ref id="ref9">
                        <label>9</label>
                        <mixed-citation publication-type="journal">A. A. Aburomman and M. B. I. Reaz, “A novel SVM-kNN-PSO ensemble method for intrusion detection system,” Appl. Soft Comput., vol. 38, pp. 360–372, 2016.</mixed-citation>
                    </ref>
                                    <ref id="ref10">
                        <label>10</label>
                        <mixed-citation publication-type="journal">F. Xu, Z. Pan, and R. Xia, “E-commerce product review sentiment classification based on a naïve Bayes continuous learning framework,” Inf. Process. Manag., p. 102221, 2020.</mixed-citation>
                    </ref>
                                    <ref id="ref11">
                        <label>11</label>
                        <mixed-citation publication-type="journal">S. S. Panesar, R. N. D’Souza, F. Yeh, and J. C. Fernandez-Miranda, “Machine learning versus logistic regression methods for 2-year mortality prognostication in a small, heterogeneous glioma database,” World Neurosurg. X, vol. 2, p. 100012, 2019.</mixed-citation>
                    </ref>
                                    <ref id="ref12">
                        <label>12</label>
                        <mixed-citation publication-type="journal">A. A. Abro, M. Alci, and F. Hassan, “Theoretical approach of predictive analytics on big data with scope of machine learning.”</mixed-citation>
                    </ref>
                                    <ref id="ref13">
                        <label>13</label>
                        <mixed-citation publication-type="journal">W. Chen et al., “A comparative study of logistic model tree, random forest, and classification and regression tree models for spatial prediction of landslide susceptibility,” Catena, vol. 151, pp. 147–160, 2017.</mixed-citation>
                    </ref>
                                    <ref id="ref14">
                        <label>14</label>
                        <mixed-citation publication-type="journal">A. Kumar and A. Halder, “Ensemble-based active learning using fuzzy-rough approach for cancer sample classification,” Eng. Appl. Artif. Intell., vol. 91, p. 103591, 2020.</mixed-citation>
                    </ref>
                                    <ref id="ref15">
                        <label>15</label>
                        <mixed-citation publication-type="journal">X. Zheng, W. Chen, Y. You, Y. Jiang, M. Li, and T. Zhang, “Ensemble deep learning for automated visual classification using EEG signals,” Pattern Recognit., vol. 102, p. 107147, 2020.</mixed-citation>
                    </ref>
                                    <ref id="ref16">
                        <label>16</label>
                        <mixed-citation publication-type="journal">B. K. Singh, “Investigations on impact of feature normalization techniques on classifier’s performance in breast tumor classification,” pp. 10–15, 2017.</mixed-citation>
                    </ref>
                                    <ref id="ref17">
                        <label>17</label>
                        <mixed-citation publication-type="journal">L. Fan, K. L. Poh, and P. Zhou, “A sequential feature extraction approach for naïve Bayes classification of microarray data,” Expert Syst. Appl., vol. 36, no. 6, pp. 9919–9923, 2009.</mixed-citation>
                    </ref>
                                    <ref id="ref18">
                        <label>18</label>
                        <mixed-citation publication-type="journal">E. Lella and G. Vessio, “Ensembling complex network ‘perspectives’ for mild cognitive impairment detection with artificial neural networks,” Pattern Recognit. Lett., vol. 136, pp. 168–174, 2020.</mixed-citation>
                    </ref>
                                    <ref id="ref19">
                        <label>19</label>
                        <mixed-citation publication-type="journal">R. Moraes, J. F. Valiati, and W. P. Gavião Neto, “Document-level sentiment classification: An empirical comparison between SVM and ANN,” Expert Syst. Appl., vol. 40, no. 2, pp. 621–633, 2013.</mixed-citation>
                    </ref>
                                    <ref id="ref20">
                        <label>20</label>
                        <mixed-citation publication-type="journal">N. Landwehr, M. Hall, and E. Frank, “Logistic model trees,” Mach. Learn., vol. 59, no. 1–2, pp. 161–205, 2005.</mixed-citation>
                    </ref>
                                    <ref id="ref21">
                        <label>21</label>
                        <mixed-citation publication-type="website">UCI Machine Learning Repository, 2018, https://archive.ics.uci.edu/ml/index.php</mixed-citation>
                    </ref>
                                    <ref id="ref22">
                        <label>22</label>
                        <mixed-citation publication-type="book">E. Frank, M. A. Hall, and I. H. Witten, “The WEKA Workbench. Online Appendix for ‘Data Mining: Practical Machine Learning Tools and Techniques’,” Morgan Kaufmann, Fourth Edition, 2016.</mixed-citation>
                    </ref>
                                    <ref id="ref23">
                        <label>23</label>
                        <mixed-citation publication-type="journal">T. Fawcett, “An introduction to ROC analysis,” Pattern Recognit. Lett., vol. 27, no. 8, pp. 861–874, 2006.</mixed-citation>
                    </ref>
                                    <ref id="ref24">
                        <label>24</label>
                        <mixed-citation publication-type="journal">L. A. Bull, K. Worden, R. Fuentes, G. Manson, E. J. Cross, and N. Dervilis, “Outlier ensembles: A robust method for damage detection and unsupervised feature extraction from high-dimensional data,” J. Sound Vib., vol. 453, pp. 126–150, 2019.</mixed-citation>
                    </ref>
                                    <ref id="ref25">
                        <label>25</label>
                        <mixed-citation publication-type="journal">T. Fawcett, “ROC graphs: Notes and practical considerations for researchers,” Mach. Learn., vol. 31, no. 1, pp. 1–38, 2004.</mixed-citation>
                    </ref>
                                    <ref id="ref26">
                        <label>26</label>
                        <mixed-citation publication-type="journal">A. A. Abro, M. A. Yimer, and Z. Bhatti, “Identifying the machine learning techniques for classification of target datasets,” Sukkur IBA J. Comput. Math. Sci., vol. 4, no. 1, 2020.</mixed-citation>
                    </ref>
                            </ref-list>
                    </back>
    </article>
