Research Article

A Novel Hybrid Attention VGG Method For Benign and Malignant Breast Cancer Classification

Year 2025, Volume: 16, Issue: 1, pp. 27–47
https://doi.org/10.24012/dumf.1502403

Abstract

Breast cancer is among the most widespread cancers worldwide, and early detection is crucial for effective treatment. While early detection does not cure cancer or prevent its recurrence, it significantly improves treatment outcomes. Regular breast cancer check-ups, including mammograms, play a vital role in early detection. The type of the observed tumour is also crucial. Our study therefore applied a range of deep learning methods to classify breast cancer cells as benign or malignant. The problem addressed is the classification of tumour images as either benign or malignant. We used the augmented MIAS and INBREAST datasets and implemented fourteen deep learning models, adjusting various hyperparameter values. In addition, we trained a new model of our own design, the Hybrid Attention VGG16 model, on the same datasets using the same batch-size and learning-rate settings as the other models. Our experiments show that while models such as VGG16, VGG19, ResNet50, ResNet101, EfficientNetV2B0 and EfficientNetV2L performed better at particular hyperparameter values, our proposed Hybrid Attention VGG model achieved among the highest performances across many hyperparameter settings and on both datasets, especially the augmented INBREAST dataset. With its unique skip connection and attention mechanism, the proposed model surpasses the accuracy of models employed in earlier studies, as a comparison with the literature demonstrates.
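The abstract describes a model that combines an attention mechanism with a skip connection on top of VGG16 features. The paper's exact architecture is not reproduced here; the following is a minimal NumPy sketch of one common way such a block can work (squeeze-and-excitation-style channel attention followed by a residual addition), with random placeholder weights standing in for learned parameters. The function name `channel_attention_block` and all shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def channel_attention_block(features, reduction=4):
    """Channel attention with a residual (skip) connection.
    `features` has shape (H, W, C); weights are random placeholders
    standing in for learned parameters."""
    h, w, c = features.shape
    # Squeeze: global average pooling over the spatial dimensions.
    squeezed = features.mean(axis=(0, 1))            # shape (C,)
    # Excite: two placeholder dense layers with a bottleneck.
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c, c // reduction)) * 0.1
    w2 = rng.standard_normal((c // reduction, c)) * 0.1
    hidden = np.maximum(squeezed @ w1, 0.0)          # ReLU
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2)))     # sigmoid, in (0, 1)
    # Reweight each channel, then add the skip connection.
    return features * gates + features

feature_map = np.ones((7, 7, 8))
out = channel_attention_block(feature_map)
print(out.shape)  # (7, 7, 8)
```

Because the gates lie in (0, 1) and the skip connection adds the input back, the block rescales channel responses without ever discarding the original features, which is the usual motivation for pairing attention with a residual path.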

References

  • [1] L. Tsochatzidis, L. Costaridou, and I. Pratikakis, ‘Deep Learning for Breast Cancer Diagnosis from Mammograms—A Comparative Study’, Journal of Imaging, vol. 5, no. 3, 2019.
  • [2] J. Arevalo, F. A. González, R. Ramos-Pollán, J. L. Oliveira, and M. A. Guevara Lopez, ‘Representation learning for mammography mass lesion classification with convolutional neural networks’, Computer Methods and Programs in Biomedicine, vol. 127, pp. 248–257, 2016.
  • [3] D. Moura et al., ‘Benchmarking Datasets for Breast Cancer Computer-Aided Diagnosis (CADx)’, 11 2013, vol. 8258, pp. 326–333.
  • [4] B. Huynh, H. Li, and M. Giger, ‘Digital mammographic tumor classification using transfer learning from deep convolutional neural networks’, Journal of Medical Imaging (Bellingham, Wash. ), vol. 3, p. 034501, 07 2016.
  • [5] A. Elbagoury, ‘Breast Infrared Thermography Segmentation Based on Adaptive Tuning of a Fully Convolutional Network’, Current Medical Imaging Reviews, vol. 16, pp. 611–621, 05 2020.
  • [6] Z. Jiao, X. Gao, Y. Wang, and J. Li, ‘A Deep Feature Based Framework for Breast Masses Classification’, Neurocomputing, vol. 197, 03 2016.
  • [7] F. F. Ting, Y. J. Tan, and K. S. Sim, ‘Convolutional neural network improvement for breast cancer classification’, Expert Systems with Applications, vol. 120, pp. 103–115, 2019.
  • [8] A. Rampun, B. Scotney, P. Morrow, and H. Wang, ‘Breast Mass Classification in Mammograms using Ensemble Convolutional Neural Networks’, 09 2018, pp. 1–6.
  • [9] V. R. and M. S, ‘Discrete wavelet transform based principal component averaging fusion for medical images’, AEU - International Journal of Electronics and Communications, vol. 69, pp. 896–902, 04 2015.
  • [10] D. Ragab, M. Sharkas, S. Marshall, and J. Ren, ‘Breast cancer detection using deep convolutional neural networks and support vector machines’, PeerJ, vol. 7, p. e6201, 01 2019.
  • [11] H.-C. Shin et al., ‘Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning’, IEEE Transactions on Medical Imaging, vol. 35, 02 2016.
  • [12] P. Ballester and R. Araujo, ‘On the Performance of GoogLeNet and AlexNet Applied to Sketches’, AAAI, vol. 30, no. 1, Feb. 2016.
  • [13] J. Dai, Y. Li, K. He, and J. Sun, “R-FCN: Object Detection via Region-based Fully Convolutional Networks.,” in NIPS, 2016, pp. 379–387.
  • [14] O. Russakovsky et al., “ImageNet Large Scale Visual Recognition Challenge.,” CoRR, vol. abs/1409.0575, 2014.
  • [15] R. S. Lee, F. Gimenez, A. Hoogi, K. K. Miyake, M. Gorovoy, and D. L. Rubin, ‘A curated mammography data set for use in computer-aided detection and diagnosis research’, Scientific data, vol. 4, p. 170177, Dec. 2017.
  • [16] C. Sun, A. Shrivastava, S. Singh, and A. Gupta, ‘Revisiting Unreasonable Effectiveness of Data in Deep Learning Era’, in 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 843–852.
  • [17] S. C. Wong, A. Gatt, V. Stamatescu, and M. D. McDonnell, ‘Understanding data augmentation for classification: when to warp?’, CoRR, vol. abs/1609.08764, 2016.
  • [18] M.-L. Huang and T.-Y. Lin, ‘Dataset of breast mammography images with masses’, Data in Brief, vol. 31, p. 105928, 2020.
  • [19] F. Zhuang et al., ‘A Comprehensive Survey on Transfer Learning’, arXiv [cs.LG]. 2020.
  • [20] K. He, X. Zhang, S. Ren, and J. Sun, ‘Deep Residual Learning for Image Recognition’, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.
  • [21] S. Liu and W. Deng, ‘Very deep convolutional neural network based image classification using small training sample size’, 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR), pp. 730–734, 2015.
  • [22] V. Sudha and D. Ganeshbabu, ‘A Convolutional Neural Network Classifier VGG-19 Architecture for Lesion Detection and Grading in Diabetic Retinopathy Based on Deep Learning’, Computers, Materials & Continua, vol. 66, pp. 827–842, 01 2020.
  • [23] K. He, X. Zhang, S. Ren, and J. Sun, ‘Deep Residual Learning for Image Recognition’, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.
  • [24] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, ‘Densely Connected Convolutional Networks’, in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2261–2269.
  • [25] K. He, X. Zhang, S. Ren, and J. Sun, ‘Deep Residual Learning for Image Recognition’, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.
  • [26] Y.-M. Chung, C.-S. Hu, A. Lawson, and C. D. Smyth, ‘TopoResNet: A hybrid deep learning architecture and its application to skin lesion classification’, CoRR, vol. abs/1905.08607, 2019.
  • [27] M. Koç and R. Özdemir, ‘Enhancing Facial Expression Recognition in the Wild with Deep Learning Methods Using a New Dataset: RidNet’, RidNet. Bilecik Seyh Edebali University Journal of Science, vol. 6, no. 2, pp. 384–396, 2019.
  • [28] A. G. Howard et al., “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications.,” CoRR, vol. abs/1704.04861, 2017.
  • [29] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, ‘MobileNetV2: Inverted Residuals and Linear Bottlenecks’, in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 4510–4520.
  • [30] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le, ‘Learning Transferable Architectures for Scalable Image Recognition’, in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 8697–8710.
  • [31] M. Wang, B. Liu, and H. Foroosh, ‘Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial “Bottleneck” Structure’, arXiv: Computer Vision and Pattern Recognition, 2016.
  • [32] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, ‘Rethinking the Inception Architecture for Computer Vision’, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2818–2826.
  • [33] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi, ‘Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning’, AAAI Conference on Artificial Intelligence, vol. 31, 02 2016.
  • [34] F. Chollet, ‘Xception: Deep Learning with Depthwise Separable Convolutions’, in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 1800–1807.
  • [35] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le, ‘Learning Transferable Architectures for Scalable Image Recognition’, arXiv [cs.CV]. 2018.
  • [36] S. Duraisamy and S. Emperumal, ‘Computer-aided mammogram diagnosis system using deep learning convolutional fully complex-valued relaxation neural network classifier’, IET Computer Vision, vol. 11, no. 8, pp. 656–662, July 2017.
  • [37] N. Dhungel, G. Carneiro, and A. P. Bradley, "A deep learning approach for analysing masses in mammograms with minimal user intervention," Med. Image Anal., vol. 37, pp. 114-128, 2017.
  • [38] Jaffar, M. A., "Deep learning based computer aided diagnosis system for breast mammograms," International Journal of Advanced Computer Science and Applications, vol. 8, no. 7, pp. 286-290, 2017.
  • [39] E. M. F. El Houby and N. I. R. Yassin, ‘Malignant and nonmalignant classification of breast lesions in mammograms using convolutional neural networks’, Biomed. Signal Process. Control, vol. 70, no. 102954, p. 102954, Sep. 2021.
  • [40] P. Kaur, G. Singh, and P. Kaur, ‘Intellectual detection and validation of automated mammogram breast cancer images by multi-class SVM using deep learning classification’, Inform. Med. Unlocked, vol. 16, no. 100151, p. 100151, 2019.
  • [41] M. Tan and Q. V. Le, ‘EfficientNetV2: Smaller Models and Faster Training’, CoRR, vol. abs/2104.00298, 2021.

Classification of Benign and Malignant Breast Cancer Using Deep Learning Algorithms and the MIAS Mammography Dataset


Abstract

Breast cancer is one of the most common types of cancer worldwide. Early diagnosis is vital for successful treatment. Early diagnosis does not cure cancer or prevent future recurrence, but it plays an important role in treatment. Regular breast cancer screening, especially by mammography, contributes greatly to early diagnosis. The characteristics of the diagnosed tumour also matter. In this study we therefore used various deep learning techniques to classify different breast cancer cells as benign or malignant. The problem addressed is the classification of tumour images as benign or malignant. We used mammography images from the MIAS dataset and applied ten deep learning models: VGG16, VGG19, DenseNet121, DenseNet169, ResNet50, ResNet101, MobileNet, MobileNetV2, InceptionV3 and InceptionResNetV2. Our analyses showed that certain models outperformed the others in classification accuracy and other performance metrics. For the MIAS mammogram dataset, the model with the lowest accuracy was InceptionV3, while the model with the highest accuracy was ResNet50. ResNet50, the most successful model, reached an accuracy of 0.9691.
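The models above are ranked by classification accuracy, the fraction of test images whose predicted class (benign = 0, malignant = 1) matches the ground truth. A minimal sketch of that metric, with hypothetical labels and predictions chosen purely for illustration:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Hypothetical benign(0)/malignant(1) labels and model outputs.
y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]
print(accuracy(y_true, y_pred))  # 0.75
```

A reported value such as ResNet50's 0.9691 means roughly 97 of every 100 test mammograms were assigned the correct class by this measure.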


Details

Primary Language English
Subjects Deep Learning
Section Articles
Authors

Mustafa Salih Bahar 0000-0002-1625-9362

Early View Date March 26, 2025
Publication Date
Submission Date June 18, 2024
Acceptance Date November 19, 2024
Published Issue Year 2025, Volume: 16, Issue: 1

Cite

IEEE M. S. Bahar, “A Novel Hybrid Attention VGG Method For Benign and Malignant Breast Cancer Classification”, DÜMF MD, vol. 16, no. 1, pp. 27–47, 2025, doi: 10.24012/dumf.1502403.
All articles published by DUJE are licensed under the Creative Commons Attribution 4.0 International License. This permits anyone to copy, redistribute, remix, transmit and adapt the work, provided the original work and source are properly cited.