Polyp Segmentation with Deep Learning: Utilizing DeeplabV3+ Architecture and Various CNN Backbones
Year 2024, Volume 15, Issue 4, pp. 797–805
Yaren Akgöl, Buket Toptaş
Abstract
Polyps are abnormal tissue growths that often serve as early indicators of various cancers. Early detection is crucial in treating diseases such as colorectal cancer, which carries a high mortality rate, so automated diagnostic systems that can detect these cancers efficiently are in significant demand. This article introduces a deep learning model based on the DeeplabV3+ architecture, augmented with four different convolutional neural network backbones. The enhanced architectures were evaluated on the publicly available Kvasir-SEG and CVC-ClinicDB datasets for the task of polyp segmentation. Experiments show that the best results on the Kvasir-SEG dataset were achieved with the ResNet50 backbone, while the highest performance on the CVC-ClinicDB dataset was obtained with the SqueezeNet backbone.
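Backbone comparisons of this kind are typically scored with region-overlap metrics such as the Dice coefficient and IoU. The abstract does not list the metrics used, so the following is only an illustrative sketch of how such scores are computed from binary masks, not the authors' evaluation code:

```python
def dice_and_iou(pred, gt):
    """Dice coefficient and IoU for two binary masks.

    pred, gt: flat sequences of 0/1 values of equal length
    (e.g. a flattened predicted mask and its ground-truth mask).
    """
    tp = sum(p and g for p, g in zip(pred, gt))  # overlapping foreground pixels
    pred_fg, gt_fg = sum(pred), sum(gt)          # foreground pixel counts
    union = pred_fg + gt_fg - tp
    # Convention: two empty masks count as a perfect match.
    dice = 2 * tp / (pred_fg + gt_fg) if (pred_fg + gt_fg) else 1.0
    iou = tp / union if union else 1.0
    return dice, iou
```

For example, a prediction `[1, 1, 0, 0]` against ground truth `[1, 0, 0, 0]` overlaps in one pixel, giving Dice = 2/3 and IoU = 1/2. In practice the per-image scores would be averaged over each test set to rank the four backbones.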