Research Article

Supporting Institution

Dicle Üniversitesi

Project Number

DÜBAP Project No: MÜHENDİSLİK.22.001

Thanks

We thank Dicle Üniversitesi Bilimsel Araştırma Projeleri Koordinatörlüğü (Dicle University Scientific Research Projects Coordination Office) for its support.

Classification of Precancerous Colorectal Lesions via ConvNeXt on Histopathological Images

Year 2023, Volume: 11 Issue: 2, 129 - 137, 04.06.2023
https://doi.org/10.17694/bajece.1240284

Abstract

In this translational study, precancerous colorectal lesions are classified with the ConvNeXt method on the MHIST histopathological image dataset. ConvNeXt is a modernized ResNet-50 architecture that adopts design and training choices inspired by Swin Transformers and ResNeXt. The performance of the ConvNeXt models is benchmarked under different scenarios such as ‘full data’, ‘gradually increasing difficulty based data’, and ‘k-shot data’. Across these scenarios and metrics, the ConvNeXt models outperform almost all previous studies on MHIST that use ResNet models, vision transformers, weight distillation, self-supervised learning, and curriculum learning strategies. The ConvNeXt model trained with ‘full data’ yields the best results, with an accuracy of 0.8890, an AUC of 0.9391, an F1 score of 0.9121, and a Cohen’s kappa of 0.7633.
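As a rough illustration of the ‘full data’ scenario described in the abstract, the sketch below fine-tunes an ImageNet-pretrained ConvNeXt-Tiny from torchvision on a binary histopathology task and reports the same four metrics (accuracy, AUC, F1, Cohen’s kappa). This is not the authors’ code: the data-folder layout (data/train and data/test with one sub-folder per class), the choice of the ConvNeXt-Tiny variant, and all hyperparameters (epochs, learning rate, batch size) are assumptions made purely for illustration; the real MHIST release ships its labels in a CSV file, so the dataset loading would need to be adapted.

```python
# Minimal sketch (assumptions noted above): fine-tune ConvNeXt-Tiny for a
# binary HP-vs-SSA-style task and compute accuracy, AUC, F1 and Cohen's kappa.
# Requires torchvision >= 0.13 and scikit-learn.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from sklearn.metrics import accuracy_score, roc_auc_score, f1_score, cohen_kappa_score

device = "cuda" if torch.cuda.is_available() else "cpu"

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Hypothetical folder layout; MHIST itself uses a CSV of labels instead.
train_ds = datasets.ImageFolder("data/train", transform=tf)
test_ds = datasets.ImageFolder("data/test", transform=tf)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)
test_dl = DataLoader(test_ds, batch_size=32)

# ImageNet-pretrained ConvNeXt-Tiny; replace the final linear layer for 2 classes.
model = models.convnext_tiny(weights=models.ConvNeXt_Tiny_Weights.IMAGENET1K_V1)
model.classifier[2] = nn.Linear(model.classifier[2].in_features, 2)
model = model.to(device)

opt = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):  # epoch count is illustrative only
    model.train()
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# Evaluation with the four metrics reported in the abstract.
model.eval()
probs, labels = [], []
with torch.no_grad():
    for x, y in test_dl:
        p = torch.softmax(model(x.to(device)), dim=1)[:, 1].cpu()
        probs.extend(p.tolist())
        labels.extend(y.tolist())
preds = [int(p >= 0.5) for p in probs]
print("accuracy:", accuracy_score(labels, preds))
print("AUC     :", roc_auc_score(labels, probs))
print("F1      :", f1_score(labels, preds))
print("kappa   :", cohen_kappa_score(labels, preds))
```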

References

  • [1] S. Famitha and M. Moorthi, “Intelligent and novel multi-type cancer prediction model using optimized ensemble learning,” Comput. Methods Biomech. Biomed. Engin., 2022, doi: 10.1080/10255842.2022.2081504.
  • [2] D. M. Metter, T. J. Colgan, S. T. Leung, C. F. Timmons, and J. Y. Park, “Trends in the US and Canadian pathologist workforces from 2007 to 2017,” JAMA Netw. Open, vol. 2, no. 5, pp. 1–11, 2019, doi: 10.1001/jamanetworkopen.2019.4337.
  • [3] I. Mármol, C. Sánchez-de-Diego, A. P. Dieste, E. Cerrada, and M. J. R. Yoldi, “Colorectal carcinoma: A general overview and future perspectives in colorectal cancer,” Int. J. Mol. Sci., vol. 18, no. 1, 2017, doi: 10.3390/ijms18010197.
  • [4] J. Wei et al., “A Petri Dish for Histopathology Image Analysis,” Jan. 2021, [Online]. Available: http://arxiv.org/abs/2101.12355
  • [5] J. Wei et al., “Learn like a Pathologist: Curriculum Learning by Annotator Agreement for Histopathology Image Classification,” Sep. 2020, [Online]. Available: http://arxiv.org/abs/2009.13698
  • [6] Y. Wang, Q. Yao, J. T. Kwok, and L. M. Ni, “Generalizing from a Few Examples,” ACM Comput. Surv., vol. 53, no. 3, pp. 1–34, 2021, doi: 10.1145/3386252.
  • [7] V. Dumoulin et al., “Comparing Transfer and Meta Learning Approaches on a Unified Few-Shot Classification Benchmark,” Apr. 2021, [Online]. Available: http://arxiv.org/abs/2104.02638
  • [8] X. X. Yin, S. Hadjiloucas, Y. Zhang, and Z. Tian, “MRI radiogenomics for intelligent diagnosis of breast tumors and accurate prediction of neoadjuvant chemotherapy responses-a review,” Comput. Methods Programs Biomed., vol. 214, p. 106510, 2022, doi: 10.1016/j.cmpb.2021.106510.
  • [9] D. Pandey, X. Yin, H. Wang, and Y. Zhang, “Accurate vessel segmentation using maximum entropy incorporating line detection and phase-preserving denoising,” Comput. Vis. Image Underst., vol. 155, pp. 162–172, 2017, doi: 10.1016/j.cviu.2016.12.005.
  • [10] J. Wei, L. Torresani, J. Wei, and S. Hassanpour, “Calibrating Histopathology Image Classifiers using Label Smoothing,” Jan. 2022, [Online]. Available: http://arxiv.org/abs/2201.11866
  • [11] Y. Bengio, J. Louradour, R. Collobert, and J. Weston, “Curriculum learning,” in Proc. 26th Int. Conf. on Machine Learning (ICML), 2009, pp. 41–48.
  • [12] C. L. Srinidhi and A. L. Martel, “Improving Self-supervised Learning with Hardness-aware Dynamic Curriculum Learning: An Application to Digital Pathology.” [Online]. Available: https://github.com/srinidhiPY/
  • [13] X. Wang et al., “TransPath: Transformer-Based Self-supervised Learning for Histopathological Image Classification,” Lecture Notes in Computer Science, vol. 12908, pp. 186–195, 2021, doi: 10.1007/978-3-030-87237-3_18.
  • [14] S. B. Yengec-Tasdemir, “Classification of Colorectal Polyps from Histopathological Images using Ensemble of ConvNeXt Variants,” 2022.
  • [15] R. Zhang et al., “HistoKT: Cross Knowledge Transfer in Computational Pathology,” Jan. 2022, [Online]. Available: http://arxiv.org/abs/2201.11246
  • [16] I. D. Nagtegaal et al., “The 2019 WHO classification of tumours of the digestive system,” Histopathology, vol. 76, no. 2, pp. 182–188, 2020, doi: 10.1111/his.13975.
  • [17] N. A. C. S. Wong, L. P. Hunt, M. R. Novelli, N. A. Shepherd, and B. F. Warren, “Observer agreement in the diagnosis of serrated polyps of the large bowel,” Histopathology, vol. 55, no. 1, pp. 63–66, 2009, doi: 10.1111/j.1365-2559.2009.03329.x.
  • [18] M. Raghu, C. Zhang, J. Kleinberg, and S. Bengio, “Transfusion: Understanding Transfer Learning for Medical Imaging,” Feb. 2019, [Online]. Available: http://arxiv.org/abs/1902.07208
  • [19] Z. Liu et al., “Swin Transformer: Hierarchical Vision Transformer using Shifted Windows,” in Proc. IEEE/CVF Int. Conf. on Computer Vision (ICCV), 2021, pp. 9992–10002, doi: 10.1109/ICCV48922.2021.00986.
  • [20] Z. Liu, H. Mao, C.-Y. Wu, C. Feichtenhofer, T. Darrell, and S. Xie, “A ConvNet for the 2020s,” 2022, [Online]. Available: http://arxiv.org/abs/2201.03545
  • [21] H. Touvron, M. Cord, M. Douze, F. Massa, A. Sablayrolles, and H. Jégou, “Training data-efficient image transformers & distillation through attention,” 2020, [Online]. Available: http://arxiv.org/abs/2012.12877
  • [22] H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz, “MixUp: Beyond empirical risk minimization,” 6th Int. Conf. Learn. Represent. ICLR 2018 - Conf. Track Proc., pp. 1–13, 2018.
  • [23] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He, “Aggregated residual transformations for deep neural networks,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 5987–5995, doi: 10.1109/CVPR.2017.634.
  • [24] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization,” Int. J. Comput. Vis., vol. 128, no. 2, pp. 336–359, 2020, doi: 10.1007/s11263-019-01228-7.
There are 24 citations in total.

Details

Primary Language English
Subjects Artificial Intelligence
Journal Section Research Articles
Authors

Mehmet Nergiz (ORCID: 0000-0002-0867-5518)

Project Number DÜBAP Project No: MÜHENDİSLİK.22.001
Early Pub Date May 26, 2023
Publication Date June 4, 2023
Published in Issue Year 2023 Volume: 11 Issue: 2

Cite

APA Nergiz, M. (2023). Classification of Precancerous Colorectal Lesions via ConvNeXt on Histopathological Images. Balkan Journal of Electrical and Computer Engineering, 11(2), 129-137. https://doi.org/10.17694/bajece.1240284

All articles published by BAJECE are licensed under the Creative Commons Attribution 4.0 International License. This permits anyone to copy, redistribute, remix, transmit, and adapt the work, provided the original work and source are appropriately cited.