Research Article
Upper and lower extremity bone segmentation with Mask R-CNN

Year 2024, Volume: 13 Issue: 1, 358 - 365, 24.03.2024
https://doi.org/10.17798/bitlisfen.1413650

Abstract

Most medical image processing studies use medical images to detect and measure the structure of organs and bones. Segmentation of image data is essential for delimiting the region of interest and for reducing the volume of data to be processed. Working with image data creates a workload that grows rapidly with the size and number of images, and machine learning methods demand high computing power. Our study aims to achieve high accuracy in bone segmentation, the first step in medical object detection studies. In many situations, such as fracture assessment and age estimation, the humerus and radius of the upper extremity and the femur and tibia of the lower extremity of the human skeleton provide diagnostic data. For our bone segmentation study on X-ray images, 160 images from 100 patients were compiled from accessible databases. A segmentation result with an average accuracy of 0.981 was obtained using the Mask R-CNN method with a ResNet-50 backbone.
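The reported 0.981 can be read as an average pixel-level agreement between predicted and ground-truth masks. As a minimal sketch (not the authors' evaluation code; the function names and toy masks below are hypothetical illustrations), two common binary-mask metrics can be computed as:

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    """Fraction of pixels where the predicted binary mask matches ground truth."""
    return float((pred == gt).mean())

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union (Jaccard index) of two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union else 1.0

# Toy 2x3 masks: 1 = bone pixel, 0 = background.
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 0]])
print(pixel_accuracy(pred, gt))  # 5 of 6 pixels agree -> 0.8333...
print(iou(pred, gt))             # overlap 2, union 3 -> 0.6666...
```

Pixel accuracy is inflated when the background dominates the image, which is why segmentation papers often report IoU or Dice alongside it.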

Ethical Statement

The study complies with research and publication ethics.

References

  • [1] Y. Ma and Y. Luo, “Bone fracture detection through the two-stage system of Crack-Sensitive Convolutional Neural Network,” Inform. Med. Unlocked, vol. 22, no. 100452, p. 100452, 2021.
  • [2] E. Yahalomi, M. Chernofsky, and M. Werman, “Detection of distal radius fractures trained by a small set of X-ray images and faster R-CNN,” in Advances in Intelligent Systems and Computing, Cham: Springer International Publishing, 2019, pp. 971–981.
  • [3] T. Urakawa, Y. Tanaka, S. Goto, H. Matsuzawa, K. Watanabe, and N. Endo, “Detecting intertrochanteric hip fractures with orthopedist-level accuracy using a deep convolutional neural network,” Skeletal Radiol., vol. 48, no. 2, pp. 239–244, 2019.
  • [4] H. Çetiner, “Cataract disease classification from fundus images with transfer learning based deep learning model on two ocular disease datasets,” Gümüshane Üniversitesi Fen Bilimleri Enstitüsü Dergisi, vol. 13, no. 2, 2023.
  • [5] V. Kaya and İ. Akgül, “Classification of skin cancer using VGGNet model structures,” Gümüşhane Üniversitesi Fen Bilimleri Dergisi, vol. 13, no. 1, pp. 190–198, 2023.
  • [6] R. C. Gonzalez, R. E. Woods, and S. L. Eddins, Digital Image Processing (trans. Ruan Qiuqi). Beijing: Publishing House of Electronics Industry, 2007.
  • [7] D. Wang et al., “A novel dual-network architecture for mixed-supervised medical image segmentation,” Comput. Med. Imaging Graph., vol. 89, no. 101841, p. 101841, 2021.
  • [8] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” arXiv [cs.CV], 2015.
  • [9] J. Bullock, C. Cuesta-Lazaro, and A. Quera-Bofarull, “XNet: a convolutional neural network (CNN) implementation for medical x-ray image segmentation suitable for small datasets,” in Medical Imaging 2019: Biomedical Applications in Molecular, Structural, and Functional Imaging, 2019.
  • [10] M. Drozdzal et al., “Learning normalized inputs for iterative estimation in medical image segmentation,” Med. Image Anal., vol. 44, pp. 1–13, 2018.
  • [11] A. Omar, “Lung CT Parenchyma Segmentation using VGG-16 based SegNet Model,” Int. J. Comput. Appl., vol. 178, no. 44, pp. 10–13, 2019.
  • [12] H. Lee et al., “Fully automated deep learning system for bone age assessment,” J. Digit. Imaging, vol. 30, no. 4, pp. 427–441, 2017.
  • [13] F. La Rosa, A deep learning approach to bone segmentation in CT scans, Università di Bologna, Alma Mater Studiorum, 2017.
  • [14] E. Smistad, T. L. Falch, M. Bozorgi, A. C. Elster, and F. Lindseth, “Medical image segmentation on GPUs – A comprehensive review,” Med. Image Anal., vol. 20, no. 1, pp. 1–18, 2015.
  • [15] K. O’Shea and R. Nash, “An Introduction to Convolutional Neural Networks,” arXiv [cs.NE], 2015.
  • [16] A. Amidi and S. Amidi, Stanford Convolutional Neural Networks Handbook. Palo Alto, CA: Stanford University.
  • [17] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014.
  • [18] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” in 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2961–2969.
  • [19] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [20] Stanford University, “LERA – Lower Extremity RAdiographs,” Stanford Center for Artificial Intelligence in Medicine & Imaging. [Online]. Available: https://aimi.stanford.edu/lera-lower-extremity-radiographs. [Accessed: 12-Oct-2023].
  • [21] Y. He et al., “Deep learning-based classification of primary bone tumors on radiographs: A preliminary study,” EBioMedicine, vol. 62, no. 103121, p. 103121, 2020.
  • [22] F. R. Eweje et al., “Deep learning for classification of bone lesions on routine MRI,” EBioMedicine, vol. 68, no. 103402, p. 103402, 2021.
  • [23] V. Chianca et al., “Radiomic machine learning classifiers in spine bone tumors: A multi-software, multi-scanner study,” Eur. J. Radiol., vol. 137, no. 109586, p. 109586, 2021.
  • [24] D. M. Anisuzzaman, H. Barzekar, L. Tong, J. Luo, and Z. Yu, “A deep learning study on osteosarcoma detection from histological images,” Biomed. Signal Process. Control, vol. 69, no. 102931, p. 102931, 2021.
  • [25] R. Karthik, R. Menaka, and H. M, “Learning distinctive filters for COVID-19 detection from chest X-ray using shuffled residual CNN,” Appl. Soft Comput., vol. 99, no. 106744, p. 106744, 2021.
  • [26] S. Thakur and A. Kumar, “X-ray and CT-scan-based automated detection and classification of COVID-19 using convolutional neural networks (CNN),” Biomed. Signal Process. Control, vol. 69, no. 102920, p. 102920, 2021.
  • [27] B. Felfeliyan, A. Hareendranathan, G. Kuntze, J. L. Jaremko, and J. L. Ronsky, “Improved-Mask R-CNN: Towards an accurate generic MSK MRI instance segmentation platform (data from the Osteoarthritis Initiative),” Comput. Med. Imaging Graph., vol. 97, no. 102056, p. 102056, 2022.
  • [28] National Library of Medicine, “MedPix,” [Online]. Available: https://medpix.nlm.nih.gov/search?allen=true&allt=true&alli=true&query=tibia. [Accessed: 10-Dec-2023].
  • [29] A. Aslam and E. Curry, “A survey on object detection for the internet of multimedia things (IoMT) using deep learning and event-based middleware: Approaches, challenges, and future directions,” Image Vis. Comput., vol. 106, no. 104095, p. 104095, 2021.
There are 29 references in total.

Details

Primary Language English
Subjects Artificial Intelligence (Other)
Journal Section Research Article
Authors

Ayhan Aydın 0000-0001-9127-0951

Caner Özcan 0000-0002-2854-4005

Early Pub Date March 21, 2024
Publication Date March 24, 2024
Submission Date January 2, 2024
Acceptance Date March 15, 2024
Published in Issue Year 2024 Volume: 13 Issue: 1

Cite

IEEE A. Aydın and C. Özcan, “Upper and lower extremity bone segmentation with Mask R-CNN”, Bitlis Eren Üniversitesi Fen Bilimleri Dergisi, vol. 13, no. 1, pp. 358–365, 2024, doi: 10.17798/bitlisfen.1413650.

Bitlis Eren University
Journal of Science Editor
Bitlis Eren University Graduate Institute
Bes Minare Mah. Ahmet Eren Bulvari, Merkez Kampus, 13000 BITLIS