Research Article

Benchmarking Deep Learning Models for Breast Cancer Detection: A Comparison of Vision Transformers and CNNs

Year 2025, Volume: 13 Issue: 3, pp. 108–119, 30.09.2025
https://doi.org/10.21541/apjess.1663864

Abstract

Breast cancer is a major global health issue, and accurate early detection is critical for improving patient outcomes. Deep learning-based image classification techniques, particularly convolutional neural networks (CNNs) and transformer-based models, have shown remarkable success in medical imaging. This study evaluates and compares the performance of Vision Transformers (ViTs) with well-established CNN architectures, including AlexNet, ResNet-50, and VGG-19, for breast cancer image classification. The research aims to investigate whether ViTs can outperform conventional deep learning models in this domain and to analyze their strengths and limitations. The study uses a publicly available breast cancer dataset comprising 9,248 images categorized into benign, malignant, and normal classes. The dataset is preprocessed by resizing all images to 224×224 pixels, normalizing pixel intensity values, and applying data augmentation techniques. All models are trained under the same conditions, with 80% of the data used for training, 10% for validation, and 10% for testing. Performance is evaluated using accuracy, precision, recall, and F1-score. Experimental results indicate that ResNet-50 achieves the highest classification accuracy (93.62%), outperforming the other models overall. AlexNet, despite having the smallest parameter count, delivers competitive accuracy (88.32%) while being computationally efficient. VGG-19, known for its depth, achieves 87.51% accuracy but has the highest computational cost. ViTs, although promising, achieve a lower accuracy of 87.46%, suggesting that transformer-based architectures may require larger datasets and further optimization to surpass traditional CNNs in medical image classification tasks. These findings indicate that CNN-based models, particularly ResNet-50, remain the most effective approach for breast cancer classification on this dataset. However, ViTs present a potential alternative, and future research should explore hybrid models integrating CNN and transformer-based architectures to enhance classification performance.
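To make the experimental setup concrete, the sketch below reproduces the pipeline described in the abstract (224×224 resizing, intensity normalization, augmentation, an 80/10/10 split, identical training conditions for all four models, and accuracy, precision, recall, and F1 evaluation) in PyTorch/torchvision. The directory name, hyperparameters, random seed, and use of ImageNet-pretrained weights are illustrative assumptions rather than the authors' published settings, and ViT-B/16 stands in for the unspecified ViT variant.

```python
# Minimal sketch of the benchmarking pipeline, assuming PyTorch/torchvision
# and scikit-learn; directory layout, seed, and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, models, transforms
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Preprocessing: resize to 224x224, normalize intensities, light augmentation.
# (In practice the validation/test splits should use a transform without
# augmentation; a single transform is kept here for brevity.)
tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("breast_cancer_images", transform=tf)  # benign/malignant/normal

# 80/10/10 train/validation/test split, as in the study.
n = len(dataset)
n_train, n_val = int(0.8 * n), int(0.1 * n)
train_set, val_set, test_set = random_split(
    dataset, [n_train, n_val, n - n_train - n_val],
    generator=torch.Generator().manual_seed(42))

def build_model(name, num_classes=3):
    """Instantiate one of the four benchmarked architectures and swap
    its classification head for the 3-class problem."""
    if name == "alexnet":
        m = models.alexnet(weights="DEFAULT")
        m.classifier[6] = nn.Linear(4096, num_classes)
    elif name == "resnet50":
        m = models.resnet50(weights="DEFAULT")
        m.fc = nn.Linear(m.fc.in_features, num_classes)
    elif name == "vgg19":
        m = models.vgg19(weights="DEFAULT")
        m.classifier[6] = nn.Linear(4096, num_classes)
    elif name == "vit":
        m = models.vit_b_16(weights="DEFAULT")  # assuming the ViT-B/16 variant
        m.heads.head = nn.Linear(m.heads.head.in_features, num_classes)
    return m

def train(model, loader, device, epochs=10, lr=1e-4):
    """Identical training loop for every model ('same conditions')."""
    model.to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x.to(device)), y.to(device)).backward()
            opt.step()

def evaluate(model, loader, device):
    """Accuracy plus macro-averaged precision/recall/F1 on a held-out split."""
    model.eval()
    preds, labels = [], []
    with torch.no_grad():
        for x, y in loader:
            preds.extend(model(x.to(device)).argmax(1).cpu().tolist())
            labels.extend(y.tolist())
    p, r, f1, _ = precision_recall_fscore_support(
        labels, preds, average="macro", zero_division=0)
    return accuracy_score(labels, preds), p, r, f1

# Benchmark all four models on the same splits.
# (Per-epoch validation on val_set is omitted for brevity.)
device = "cuda" if torch.cuda.is_available() else "cpu"
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = DataLoader(test_set, batch_size=32)
for name in ["alexnet", "resnet50", "vgg19", "vit"]:
    model = build_model(name)
    train(model, train_loader, device)
    acc, p, r, f1 = evaluate(model, test_loader, device)
    print(f"{name}: acc={acc:.4f} prec={p:.4f} rec={r:.4f} f1={f1:.4f}")
```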

References

  • M. Arnold et al., “Current and future burden of breast cancer: Global statistics for 2020 and 2040,” The Breast, vol. 66, pp. 15–23, 2022.
  • T. Iguchi, T. Sato, T. Nakajima, S. Miyagawa, and N. Takasugi, “New frontiers of developmental endocrinology opened by researchers connecting irreversible effects of sex hormones on developing organs,” Differentiation, vol. 118, pp. 4–23, 2021.
  • L. Wilkinson and T. Gathani, “Understanding breast cancer as a global health concern,” Br. J. Radiol., vol. 95, no. 1130, p. 20211033, 2022.
  • Y. Xu, M. Gong, Y. Wang, Y. Yang, S. Liu, and Q. Zeng, “Global trends and forecasts of breast cancer incidence and deaths,” Sci. Data, vol. 10, no. 1, p. 334, 2023.
  • L. Tabár et al., “A new approach to breast cancer terminology based on the anatomic site of tumour origin: The importance of radiologic imaging biomarkers,” Eur. J. Radiol., vol. 149, p. 110189, 2022.
  • Y. A. Yousef et al., “Ocular and periocular metastasis in breast cancer: Clinical characteristics, prognostic factors and treatment outcome,” Cancers, vol. 16, no. 8, p. 1518, 2024.
  • H. Aljuaid, N. Alturki, N. Alsubaie, L. Cavallaro, and A. Liotta, “Computer-aided diagnosis for breast cancer classification using deep neural networks and transfer learning,” Comput. Methods Programs Biomed., vol. 223, p. 106951, 2022.
  • E. Elyan et al., “Computer vision and machine learning for medical image analysis: Recent advances, challenges, and way forward,” Artif. Intell. Surg., vol. 2, no. 1, pp. 24–45, 2022.
  • N. Papandrianos, E. Papageorgiou, A. Anagnostis, and K. Papageorgiou, “Bone metastasis classification using whole body images from prostate cancer patients based on convolutional neural networks application,” PLoS One, vol. 15, no. 8, p. e0237213, 2020.
  • M. F. Ijaz and M. Woźniak, “Recent advances in deep learning and medical imaging for cancer treatment,” Cancers, vol. 16, no. 4, p. 700, 2024.
  • P. K. Mall et al., “A comprehensive review of deep neural networks for medical image processing: Recent developments and future opportunities,” Healthcare Anal., p. 100216, 2023.
  • V. K. Reshma et al., “Detection of breast cancer using histopathological image classification dataset with deep learning techniques,” Biomed. Res. Int., vol. 2022, p. 8363850, 2022.
  • W. Wang et al., “Pyramid vision transformer: A versatile backbone for dense prediction without convolutions,” in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2021, pp. 568–578.
  • H. Gan, M. Shen, Y. Hua, C. Ma, and T. Zhang, “From patch to pixel: A transformer-based hierarchical framework for compressive image sensing,” IEEE Trans. Comput. Imaging, vol. 9, pp. 133–146, 2023.
  • L. Gaur, U. Bhatia, N. Z. Jhanjhi, G. Muhammad, and M. Masud, “Medical image-based detection of COVID-19 using deep convolution neural networks,” Multimedia Syst., vol. 29, no. 3, pp. 1729–1738, 2023.
  • S. S. Skandha et al., “A novel genetic algorithm-based approach for compression and acceleration of deep learning convolution neural network: An application in computer tomography lung cancer data,” Neural Comput. Appl., vol. 34, no. 23, pp. 20915–20937, 2022.
  • S. Igarashi, Y. Sasaki, T. Mikami, H. Sakuraba, and S. Fukuda, “Anatomical classification of upper gastrointestinal organs under various image capture conditions using AlexNet,” Comput. Biol. Med., vol. 124, p. 103950, 2020.
  • H. Wang et al., “Scientific discovery in the age of artificial intelligence,” Nature, vol. 620, no. 7972, pp. 47–60, 2023.
  • O. Köpüklü, S. Hörmann, F. Herzog, H. Cevikalp, and G. Rigoll, “Dissected 3D CNNs: Temporal skip connections for efficient online video processing,” Comput. Vis. Image Underst., vol. 215, p. 103318, 2022.
  • S. Kumar, B. Gupta, R. Grover, and M. Chhabra, “Effective machine learning model for disease recognition and categorization of mango leaves,” in Proc. 2024 Int. Conf. Comput. Intell. Comput. Appl. (ICCICA), 2024, pp. 292–296.
  • J. Xiao, J. Wang, S. Cao, and B. Li, “Application of a novel and improved VGG-19 network in the detection of workers wearing masks,” in J. Phys.: Conf. Ser., vol. 1518, no. 1, p. 012041, IOP Publishing, 2020.
  • F. Rofii, G. Priyandoko, and M. I. Fanani, “Modeling of convolutional neural networks for detection and classification of three vehicle classes,” in J. Phys.: Conf. Ser., vol. 1908, no. 1, p. 012018, IOP Publishing, 2021.
  • A. A. E. Raj, N. B. Ahmad, S. A. Durai, and R. Renugadevi, “Integrating VGG 19 U‐Net for breast thermogram segmentation and hybrid enhancement with optimized classifier selection: A novel approach to breast cancer diagnosis,” Int. J. Imaging Syst. Technol., vol. 34, no. 6, p. e23210, 2024.
  • A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Adv. Neural Inf. Process. Syst., vol. 25, 2012.
  • K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2016, pp. 770–778.
  • K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
  • A. Dosovitskiy et al., “An image is worth 16×16 words: Transformers for image recognition at scale,” arXiv preprint arXiv:2010.11929, 2020.
  • H. Touvron et al., “Training data-efficient image transformers & distillation through attention,” in Proc. Int. Conf. Mach. Learn., 2021.
  • “Breast cancer image classification,” Kaggle, Nov. 10, 2024. [Online]. Available: https://www.kaggle.com/datasets/vishnuvamsi05799/breast-cancer-image-classification.
  • A. B. Amjoud and M. Amrouch, “Object detection using deep learning, CNNs and vision transformers: A review,” IEEE Access, vol. 11, pp. 35479–35516, 2023.
  • M. M. Naseer et al., “Intriguing properties of vision transformers,” Adv. Neural Inf. Process. Syst., vol. 34, pp. 23296–23308, 2021.
  • O. G. Ajayi, E. Iwendi, and O. O. Adetunji, “Optimizing crop classification in precision agriculture using AlexNet and high-resolution UAV imagery,” Technol. Agron., vol. 4, no. 1, 2024.
  • W. Ketwongsa, S. Boonlue, and U. Kokaew, “A new deep learning model for the classification of poisonous and edible mushrooms based on improved AlexNet convolutional neural network,” Appl. Sci., vol. 12, no. 7, p. 3409, 2022.
  • L. Zhang, Y. Bian, P. Jiang, and F. Zhang, “A transfer residual neural network based on ResNet-50 for detection of steel surface defects,” Appl. Sci., vol. 13, no. 9, p. 5260, 2023.
  • A. V. Ikechukwu, S. Murali, R. Deepu, and R. C. Shivamurthy, “ResNet-50 vs VGG-19 vs training from scratch: A comparative analysis of the segmentation and classification of pneumonia from chest X-ray images,” Glob. Transitions Proc., vol. 2, no. 2, pp. 375–381, 2021.
  • D. Wang, A. Khosla, R. Gargeya, H. Irshad, and A. H. Beck, “Deep learning for identifying metastatic breast cancer,” arXiv preprint arXiv:1606.05718, 2016.

Details

Primary Language English
Subjects Deep Learning, Neural Networks, Machine Learning (Other)
Journal Section Research Articles
Authors

Uğur Demiroğlu (ORCID: 0000-0002-0000-8411)

Bilal Şenol (ORCID: 0000-0002-3734-8807)

Early Pub Date September 30, 2025
Publication Date September 30, 2025
Submission Date March 23, 2025
Acceptance Date May 29, 2025
Published in Issue Year 2025 Volume: 13 Issue: 3

Cite

IEEE U. Demiroğlu and B. Şenol, “Benchmarking Deep Learning Models for Breast Cancer Detection: A Comparison of Vision Transformers and CNNs”, APJESS, vol. 13, no. 3, pp. 108–119, 2025, doi: 10.21541/apjess.1663864.

Academic Platform Journal of Engineering and Smart Systems