Research Article

Ensemble Learning Approach on MNIST/Fashion MNIST Dataset Using Weighted Majority Voting

Year 2025, Volume: 8, Issue: 5, 2291-2310, 15.12.2025
https://doi.org/10.47495/okufbed.1635821

Abstract

In deep learning and machine learning, a model may fail to achieve the desired results no matter how complex it is made. Indeed, an overly complex model can lose its ability to generalize and run into overfitting. Ensemble learning approaches, which combine multiple models, improve generalization and guard against overfitting. This study presents an ensemble model created by combining several deep learning models (ResNet, Inception, VGG16) and machine learning algorithms (kNN, Random Forest, SVM) using the weighted majority voting method. The weight of each model is adjusted automatically according to its performance on the training set. This approach was observed to improve overall prediction performance by leveraging the strengths of the individual models while providing a structure resistant to overfitting. The proposed ensemble was evaluated in experiments on the MNIST handwritten digit dataset, which contains 70,000 digit images of 28x28 pixels, and achieved significantly better results than the individual models, reaching a test accuracy of 97.88%.
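The combination rule described in the abstract can be sketched as follows. This is a minimal illustration of weighted majority voting, not the authors' implementation: the `weighted_majority_vote` helper, the toy per-model predictions, and the weights (standing in for training-set accuracies) are all hypothetical.

```python
# Minimal sketch of weighted majority voting over class-label predictions.
# Assumption: each base model has already produced a 1-D array of predicted
# labels; its weight (here, a stand-in for training-set accuracy) scales its vote.
import numpy as np

def weighted_majority_vote(predictions, weights, n_classes):
    """Combine per-model label predictions with per-model weights.

    predictions: list of 1-D int arrays, one per model, shape (n_samples,)
    weights: list of floats, one per model
    Returns the label with the highest total weighted vote per sample.
    """
    n_samples = len(predictions[0])
    scores = np.zeros((n_samples, n_classes))
    for preds, w in zip(predictions, weights):
        # Each model adds its weight to the score of the class it predicts.
        scores[np.arange(n_samples), preds] += w
    return scores.argmax(axis=1)

# Toy example: three "models" voting on 4 samples over 3 classes.
m1 = np.array([0, 1, 2, 1])
m2 = np.array([0, 1, 1, 1])
m3 = np.array([1, 2, 2, 1])
weights = [0.95, 0.90, 0.60]  # hypothetical training accuracies
final = weighted_majority_vote([m1, m2, m3], weights, n_classes=3)
print(final.tolist())  # -> [0, 1, 2, 1]
```

On sample 2 the two weaker votes for class 2 (0.95 + 0.60) outweigh the single 0.90 vote for class 1, which is exactly how accuracy-derived weights let stronger models dominate ties without silencing the rest.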

References

  • Breiman L. Random forests. Machine Learning 2001; 45: 5-32.
  • Cayir A., Yenidogan I., Dag H. Feature extraction based on deep learning for some traditional machine learning methods. 2018 3rd International Conference on Computer Science and Engineering (UBMK), IEEE 2018; 494–497.
  • Dogan A., Birant DA. Weighted majority voting ensemble approach for classification. 4th International Conference on Computer Science and Engineering (UBMK), IEEE 2019; 366–371.
  • Dong X., Yu Z., Cao W., Shi Y., Ma Q. A survey on ensemble learning. Frontiers of Computer Science 2020; 14(2): 241–258.
  • Fan GF., Guo YH., Zheng JM., Hong WC. Application of the weighted k-nearest neighbor algorithm for short-term load forecasting. Energies 2019; 12(5): 916.
  • Guo Y., Wang X., Xiao P., Xu X. An ensemble learning framework for convolutional neural network based on multiple classifiers. Soft Computing 2020; 24(5): 3727–3735.
  • He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition. Conference on Computer Vision and Pattern Recognition. IEEE 2016; 770-778.
  • Hearst MA., Dumais ST., Osuna E., Platt J., Scholkopf B. Support vector machines. IEEE Intelligent Systems and Their Applications 1998; 13(4): 18-28.
  • Kaynar O., Arslan H., Görmez Y., Işık YE. Makine öğrenmesi ve öznitelik seçim yöntemleriyle saldırı tespiti. Bilişim Teknolojileri Dergisi 2018; 11(2): 175-185.
  • Mukti IZ., Biswas D. Transfer learning based plant diseases detection using ResNet50. 4th International Conference on Electrical Information and Communication Technology (EICT) 2019; 1-6.
  • Peterson LE. K-nearest neighbor. Scholarpedia 2009; 4(2): 1883.
  • Shehab M., Kahraman N. A weighted voting ensemble of efficient regularized extreme learning machine. Computers & Electrical Engineering 2020; 85: 106639.
  • Simonyan K., Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 2014.
  • Szegedy C., Vanhoucke V., Ioffe S., Shlens J., Wojna Z. Rethinking the inception architecture for computer vision. IEEE Conference on Computer Vision and Pattern Recognition 2016; 2818-2826.
  • Thamaraiselvan S., Venugopal V., Vekkot S. Interpretable handwritten digit classification: analyzing feature extraction and explainable AI on the extended MNIST Dataset. 13th International Conference on System Modeling and Advancement in Research Trends (SMART) 2024; 591-597.
  • Vasheghani S., Sharifi S. Dynamic ensemble learning for robust image classification: A model-specific selection strategy. Available at SSRN 5215134.
  • Wang P., Fan E., Wang P. Comparative analysis of image classification algorithms based on traditional machine learning and deep learning. Pattern Recognition Letters 2021; 141: 61-67.
  • Wang W., Tian J., Zhang C., Luo Y., Wang X., Li J. An improved deep learning approach and its applications on colonic polyp images detection. BMC Medical Imaging 2020; 20: 1-14.
  • Wardoyo R., Musdholifah A., Pradipta GA., Sanjaya INH. Weighted majority voting by statistical performance analysis on ensemble multiclassifier. Fifth International Conference on Informatics and Computing (ICIC), IEEE 2020; 1-8.
  • Xia X., Xu C., Nan B. Inception-v3 for flower classification. 2nd International Conference on Image, Vision and Computing (ICIVC), IEEE 2017; 783-787.
  • Xiao H., Rasul K., Vollgraf R. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747 2017.
  • Zhang X., Liu S., Wang X., Li Y. A fragmented neural network ensemble method and its application to image classification. Scientific Reports 2024; 14(1): 1–16.
  • Zhou ZH. Ensemble methods: foundations and algorithms. CRC Press 2012.


Details

Primary Language: English
Subjects: Deep Learning
Section: Research Article
Authors

Buğra Hatipoğlu 0000-0003-2813-5612

Hüseyin Aydilek 0000-0003-3051-4259

Mustafa Yasin Erten 0000-0002-5140-1213

Fikret Yalçınkaya 0000-0002-2174-918X

Murat Lüy 0000-0002-2378-0009

Submission Date: February 8, 2025
Acceptance Date: June 22, 2025
Publication Date: December 15, 2025
Published Issue: Year 2025, Volume: 8, Issue: 5

Cite

APA Hatipoğlu, B., Aydilek, H., Erten, M. Y., … Yalçınkaya, F. (2025). Ensemble Learning Approach on MNIST/Fashion MNIST Dataset Using Weighted Majority Voting. Osmaniye Korkut Ata Üniversitesi Fen Bilimleri Enstitüsü Dergisi, 8(5), 2291-2310. https://doi.org/10.47495/okufbed.1635821
AMA Hatipoğlu B, Aydilek H, Erten MY, Yalçınkaya F, Lüy M. Ensemble Learning Approach on MNIST/Fashion MNIST Dataset Using Weighted Majority Voting. Osmaniye Korkut Ata Üniversitesi Fen Bilimleri Enstitüsü Dergisi. December 2025;8(5):2291-2310. doi:10.47495/okufbed.1635821
Chicago Hatipoğlu, Buğra, Hüseyin Aydilek, Mustafa Yasin Erten, Fikret Yalçınkaya, and Murat Lüy. “Ensemble Learning Approach on MNIST/Fashion MNIST Dataset Using Weighted Majority Voting”. Osmaniye Korkut Ata Üniversitesi Fen Bilimleri Enstitüsü Dergisi 8, no. 5 (December 2025): 2291-2310. https://doi.org/10.47495/okufbed.1635821.
EndNote Hatipoğlu B, Aydilek H, Erten MY, Yalçınkaya F, Lüy M (01 December 2025) Ensemble Learning Approach on MNIST/Fashion MNIST Dataset Using Weighted Majority Voting. Osmaniye Korkut Ata Üniversitesi Fen Bilimleri Enstitüsü Dergisi 8 5 2291–2310.
IEEE B. Hatipoğlu, H. Aydilek, M. Y. Erten, F. Yalçınkaya, and M. Lüy, “Ensemble Learning Approach on MNIST/Fashion MNIST Dataset Using Weighted Majority Voting”, Osmaniye Korkut Ata Üniversitesi Fen Bilimleri Enstitüsü Dergisi, vol. 8, no. 5, pp. 2291–2310, 2025, doi: 10.47495/okufbed.1635821.
ISNAD Hatipoğlu, Buğra et al. “Ensemble Learning Approach on MNIST/Fashion MNIST Dataset Using Weighted Majority Voting”. Osmaniye Korkut Ata Üniversitesi Fen Bilimleri Enstitüsü Dergisi 8/5 (December 2025), 2291-2310. https://doi.org/10.47495/okufbed.1635821.
JAMA Hatipoğlu B, Aydilek H, Erten MY, Yalçınkaya F, Lüy M. Ensemble Learning Approach on MNIST/Fashion MNIST Dataset Using Weighted Majority Voting. Osmaniye Korkut Ata Üniversitesi Fen Bilimleri Enstitüsü Dergisi. 2025;8:2291–2310.
MLA Hatipoğlu, Buğra et al. “Ensemble Learning Approach on MNIST/Fashion MNIST Dataset Using Weighted Majority Voting”. Osmaniye Korkut Ata Üniversitesi Fen Bilimleri Enstitüsü Dergisi, vol. 8, no. 5, 2025, pp. 2291-310, doi:10.47495/okufbed.1635821.
Vancouver Hatipoğlu B, Aydilek H, Erten MY, Yalçınkaya F, Lüy M. Ensemble Learning Approach on MNIST/Fashion MNIST Dataset Using Weighted Majority Voting. Osmaniye Korkut Ata Üniversitesi Fen Bilimleri Enstitüsü Dergisi. 2025;8(5):2291-310.


* International Peer Reviewed Journal

* No article processing charge or similar fee is requested from the author(s) (free submission and publication).

* Published five issues a year, in January, March, June, September, and December.

* The journal publishes articles in Turkish and English.

* The journal is open access.

Creative Commons License

This website is licensed under a Creative Commons Attribution 4.0 International License.