Research Article

PAPSMEAR IMAGE SEGMENTATION WITH CONTRASTIVE LEARNING BASED GENERATIVE ADVERSARIAL NETWORKS

Year 2022, Volume 7, Issue 1, 36-46, 06.06.2022
https://doi.org/10.53070/bbd.1038007

Abstract

Automatic detection of the presence of cervical cancer in PapSmear images is an active field of study. The distribution of objects in PapSmear images changes constantly. In this study, PapSmear image segmentation was performed using Generative Adversarial Networks (GANs) and patch-based methods from contrastive learning. The compared methods are CycleGAN, CUT, FastCUT, DCLGAN, and SimDCL. All of the methods work on unpaired images and were developed by building on one another; DCLGAN and SimDCL combine the CUT and CycleGAN approaches. The methods differ in their cost functions and in the number of networks they use. In this study the methods were examined in detail, and their similarities and differences were observed. After segmentation, the results are reported using both visual comparisons and quantitative metrics; FID, KID, PSNR, and LPIPS were used as measurement metrics. Experimental studies showed that DCLGAN and SimDCL performed better than the other compared methods in PapSmear segmentation, while CycleGAN was the least successful.
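
To make the patch-based contrastive learning idea behind CUT, FastCUT, DCLGAN, and SimDCL concrete, the sketch below shows a minimal InfoNCE-style patch loss together with the PSNR metric listed above. This is an illustrative PyTorch sketch under assumed feature shapes and a temperature of 0.07, not the authors' implementation; FID, KID, and LPIPS additionally rely on features from pretrained networks and are usually computed with dedicated tools.

    import torch
    import torch.nn.functional as F

    def patch_nce_loss(feat_q, feat_k, tau=0.07):
        # feat_q: (N, C) features of N patches sampled from the translated image.
        # feat_k: (N, C) features of the same patch locations in the source image.
        # The i-th query/key pair is the positive; all other pairs serve as negatives.
        feat_q = F.normalize(feat_q, dim=1)
        feat_k = F.normalize(feat_k, dim=1)
        logits = feat_q @ feat_k.t() / tau                  # (N, N) cosine similarities
        targets = torch.arange(feat_q.size(0), device=feat_q.device)
        return F.cross_entropy(logits, targets)             # InfoNCE over the patch batch

    def psnr(img_a, img_b, max_val=255.0):
        # Peak signal-to-noise ratio between two images of identical shape.
        mse = torch.mean((img_a.float() - img_b.float()) ** 2)
        return 10.0 * torch.log10(max_val ** 2 / mse)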

Supporting Institution

İnönü University Scientific Research Projects (BAP) Unit

Project Number

FDK-2021-2675

Acknowledgements

This study was funded by the İnönü University Scientific Research and Coordination Unit under project number "FDK-2021-2675". We extend our thanks to İnönü University.

References

  • Liu, M. Y., Breuel, T., & Kautz, J. (2017). Unsupervised image-to-image translation networks. In Advances in neural information processing systems (pp. 700-708).
  • Zhou, Y. F., Jiang, R. H., Wu, X., He, J. Y., Weng, S., & Peng, Q. (2019). Branchgan: Unsupervised mutual image-to-image transfer with a single encoder and dual decoders. IEEE Transactions on Multimedia, 21(12), 3136-3149.
  • Huang, X., Liu, M. Y., Belongie, S., & Kautz, J. (2018). Multimodal unsupervised image-to-image translation. In Proceedings of the European conference on computer vision (ECCV) (pp. 172-189).
  • Lin, J., Chen, Z., Xia, Y., Liu, S., Qin, T., & Luo, J. (2019). Exploring explicit domain supervision for latent space disentanglement in unpaired image-to-image translation. IEEE transactions on pattern analysis and machine intelligence, 43(4), 1254-1266.
  • Park, T., Efros, A. A., Zhang, R., & Zhu, J. Y. (2020, August). Contrastive learning for unpaired image-to-image translation. In European Conference on Computer Vision (pp. 319-345). Springer, Cham.
  • Han, J., Shoeiby, M., Petersson, L., & Armin, M. A. (2021). Dual Contrastive Learning for Unsupervised Image-to-Image Translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 746-755).
  • Yurt, M., Dar, S. U., Erdem, A., Erdem, E., Oguz, K. K., & Çukur, T. (2021). Mustgan: Multi-stream generative adversarial networks for MR image synthesis. Medical Image Analysis, 70, 101944.
  • Yao, S., Tan, J., Chen, Y., & Gu, Y. (2021). A weighted feature transfer gan for medical image synthesis. Machine Vision and Applications, 32(1), 1-11.
  • Chabra, S. (2016). Cervical cancer preventable, treatable, but continues to kill women. Cervical Cancer, 1(112), 2.
  • Mustafa, W. A., Halim, A., Jamlos, M. A., & Idrus, S. Z. S. (2020, April). A Review: Pap Smear Analysis Based on Image Processing Approach. In Journal of Physics: Conference Series (Vol. 1529, No. 2, p. 022080). IOP Publishing.
  • Fekri-Ershad, S. (2019). Pap smear classification using combination of global significant value, texture statistical features and time series features. Multimedia Tools and Applications, 78(22), 31121-31136.
  • Gautam, S., Jith, N., Sao, A. K., Bhavsar, A., & Natarajan, A. (2018). Considerations for a PAP smear image analysis system with CNN features. arXiv preprint arXiv:1806.09025.
  • Yi, Z., Zhang, H., Tan, P., & Gong, M. (2017). Dualgan: Unsupervised dual learning for image-to-image translation. In Proceedings of the IEEE international conference on computer vision (pp. 2849-2857).
  • Zhang, R., Isola, P., Efros, A. A., Shechtman, E., & Wang, O. (2018). The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 586-595).
  • Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., & Hochreiter, S. (2017). GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in neural information processing systems, 30.


Details

Primary Language Turkish
Subjects Artificial Intelligence
Section PAPERS
Authors

Sara Altun 0000-0003-2877-7105

Muhammed Fatih Talu 0000-0003-1166-8404

Project Number FDK-2021-2675
Early View Date June 5, 2022
Publication Date June 6, 2022
Submission Date December 19, 2021
Acceptance Date January 29, 2022
Published Issue Year 2022, Volume 7, Issue 1

How to Cite

APA Altun, S., & Talu, M. F. (2022). KONTRASTLI ÖĞRENME TABANLI ÇEKİŞMELİ ÜRETKEN AĞLAR İLE PAPSMEAR GÖRÜNTÜ BÖLÜTLEME. Computer Science, 7(1), 36-46. https://doi.org/10.53070/bbd.1038007

The Creative Commons Attribution 4.0 International License is applied to all research papers published by JCS, and a Digital Object Identifier (DOI) is assigned to each published paper.