Conference Paper

The Implementation of DCGAN in the Data Augmentation for the Sperm Morphology Datasets

Year 2021, Issue 26 - Ejosat Special Issue 2021 (HORA), 307-314, 31.07.2021
https://doi.org/10.31590/ejosat.952561

Abstract

A large amount of data is the key requirement for training a neural network efficiently. Using a small training set causes low model accuracy on the test set and makes the model hard to apply in practice. As in many other problems, sperm morphology datasets are too limited to train neural network-based deep models for the automatic evaluation of sperm morphometry. Data augmentation mitigates this problem by utilizing the available data more effectively. Standard data augmentation techniques focus only on spatial changes over the images and can therefore produce only a restricted amount of informative and distinct new data. To create more distinctive and diverse data than regular spatial domain-based augmentation techniques, a deep learning-based data augmentation technique known as a generative model is trained in this study for the sperm morphology datasets. The deep convolutional generative adversarial network (DCGAN) was optimized and applied to three well-known sperm morphometry datasets: SMIDS, HuSHeM, and SCIAN-Morpho. Each dataset was individually augmented to a sample size of 1000 by the proposed approach. A graphical user interface was designed to optimize the network with different parameters and to observe the generated data. The Fréchet Inception Distance (FID) score was used to evaluate the similarity of the generated images to the original images. The FID results indicate that the most similar generated images were obtained for SMIDS, with an average FID score of 29.06. The worst performance (average FID = 53.46) was obtained for the SCIAN-Morpho dataset, which has low resolution and data imbalance problems. Lastly, the proposed DCGAN-based approach achieved an average FID score of 44.25 for the HuSHeM dataset.
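
As a concrete illustration of the augmentation model described above, the block below is a minimal DCGAN sketch in PyTorch, assuming 64x64 RGB sperm-head crops and a 100-dimensional latent vector; the layer widths follow the generic DCGAN design of Radford et al. (2016) rather than the exact configuration tuned through the paper's interface.

    # Minimal DCGAN sketch (assumed 64x64 RGB inputs, 100-dim latent vector).
    import torch
    import torch.nn as nn

    LATENT_DIM = 100  # assumed size of the random noise vector z

    class Generator(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                # z: (N, 100, 1, 1) -> (N, 512, 4, 4)
                nn.ConvTranspose2d(LATENT_DIM, 512, 4, 1, 0, bias=False),
                nn.BatchNorm2d(512), nn.ReLU(True),
                # -> (N, 256, 8, 8)
                nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),
                nn.BatchNorm2d(256), nn.ReLU(True),
                # -> (N, 128, 16, 16)
                nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),
                nn.BatchNorm2d(128), nn.ReLU(True),
                # -> (N, 64, 32, 32)
                nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),
                nn.BatchNorm2d(64), nn.ReLU(True),
                # -> (N, 3, 64, 64); tanh maps pixels to [-1, 1]
                nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),
                nn.Tanh(),
            )

        def forward(self, z):
            return self.net(z)

    class Discriminator(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                # (N, 3, 64, 64) -> (N, 64, 32, 32); LeakyReLU per the DCGAN guidelines
                nn.Conv2d(3, 64, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, True),
                nn.Conv2d(64, 128, 4, 2, 1, bias=False),
                nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
                nn.Conv2d(128, 256, 4, 2, 1, bias=False),
                nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),
                nn.Conv2d(256, 512, 4, 2, 1, bias=False),
                nn.BatchNorm2d(512), nn.LeakyReLU(0.2, True),
                # (N, 512, 4, 4) -> single real/fake probability
                nn.Conv2d(512, 1, 4, 1, 0, bias=False), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.net(x).view(-1)

    # Example usage (hypothetical): sample 16 synthetic images from random noise.
    # g = Generator(); fake = g(torch.randn(16, LATENT_DIM, 1, 1))

For the reported similarity scores, FID compares the Inception-v3 feature statistics of real (r) and generated (g) images as FID = ||mu_r - mu_g||^2 + Tr(Sigma_r + Sigma_g - 2(Sigma_r Sigma_g)^(1/2)), where mu and Sigma are the feature means and covariances; lower values indicate that the generated distribution is closer to the real one.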

Acknowledgments

This paper was accepted at the HORA 2021 conference for publication in this journal. Our conference paper ID is 114. The related reviewers, who are the conference organizers, have been added.

References

  • Balayev, K., et al. (2020). Synthetic data generation with DCGAN. GitHub. https://github.com/Kamran017/Synthetic-Data-Generation-With-DCGAN
  • Barışkan, M. A., Orman, Z., & Şamlı, R. (2020). Common generative adversarial network types and practical applications. Avrupa Bilim ve Teknoloji Dergisi, 585–590.
  • Chang, V., Garcia, A., Hitschfeld, N., & Härtel, S. (2017). Gold-standard for computer-assisted morphological sperm analysis. Computers in Biology and Medicine, 83, 143–150.
  • Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, & K. Q. Weinberger (Eds.), Advances in Neural Information Processing Systems (27).
  • Ilhan, H. O., Sigirci, I. O., Serbes, G., & Aydin, N. (2020). A fully automated hybrid human sperm detection and classification system based on mobile-net and the performance comparison with conventional methods. Medical & Biological Engineering & Computing, 58(5), 1047–1068.
  • Ilhan, H. O., & Aydin, N. (2018). A novel data acquisition and analyzing approach to spermiogram tests. Biomedical Signal Processing and Control, 41, 129–139.
  • Kapoor, D. A. (2021). The changing landscape of urologic practice. Urologic Clinics, 48(2).
  • Karras, T., Aila, T., Laine, S., & Lehtinen, J. (2018). Progressive growing of GANs for improved quality, stability, and variation. https://arxiv.org/abs/1710.10196
  • Karras, T., Laine, S., & Aila, T. (2019). A style-based generator architecture for generative adversarial networks. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 4396–4405.
  • Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., & Shi, W. (2017). Photo-realistic single image super-resolution using a generative adversarial network. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 105–114.
  • Mirza, M., & Osindero, S. (2014). Conditional generative adversarial nets. http://arxiv.org/abs/1411.1784
  • Radford, A., Metz, L., & Chintala, S. (2016). Unsupervised representation learning with deep convolutional generative adversarial networks. https://arxiv.org/abs/1511.06434
  • Rubin, M., Stein, O., Giryes, R., Roitshtain, D., & Shaked, N. T. (2018). Quantitative phase maps of live cells classified by transfer learning and generative adversarial network (GAN). Imaging and Applied Optics 2018 (3D, AO, AIO, COSI, DH, IS, LACSEA, LS&C, MATH, PcAOP), CW2E.7.
  • Rubin, M., Stein, O., Turko, N. A., Nygate, Y., Roitshtain, D., Karako, L., Barnea, I., Giryes, R., & Shaked, N. T. (2019). TOP-GAN: Stain-free cancer cell classification using deep learning with a small training set. Medical Image Analysis, 57, 176–185.
  • Shaker, F., Monadjemi, S. A., Alirezaie, J., & Naghsh-Nilchi, A. R. (2017). A dictionary learning approach for human sperm heads classification. Computers in Biology and Medicine, 91, 181–190.
  • Tortumlu, O. L., & Ilhan, H. O. (2020). The analysis of mobile platform based CNN networks in the classification of sperm morphology. 2020 Medical Technologies Congress (TIPTEKNO), 1–4.
  • Vasconcelos, C. N., & Vasconcelos, B. N. (2017). Increasing deep learning melanoma classification by classical and expert knowledge based image transforms. http://arxiv.org/abs/1702.07025
  • Yeh, R. A., Chen, C., Lim, T. Y., Schwing, A. G., Hasegawa-Johnson, M., & Do, M. N. (2017). Semantic image inpainting with deep generative models. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 6882–6890.
  • Yüzkat, M., Ilhan, H. O., & Aydın, N. (2020). Morphological classification of low quality sperm images using deep learning networks. 2020 Medical Technologies Congress (TIPTEKNO), 1–4.
  • Zhang, H., Xu, T., Li, H., Zhang, S., Wang, X., Huang, X., & Metaxas, D. (2017). StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. 2017 IEEE International Conference on Computer Vision (ICCV), 5908–5916.
  • Zhao, J., Mathieu, M., & LeCun, Y. (2017). Energy-based generative adversarial network. https://arxiv.org/abs/1609.03126
  • Zheng, Z., & Liu, J. (2020). P2-GAN: Efficient style transfer using single style image. https://arxiv.org/abs/2001.07466


Details

Primary Language: English
Subjects: Engineering
Section: Articles
Authors

Kamran Balayev 0000-0002-0056-8152

Nihad Guluzade 0000-0003-0482-2303

Sercan Aygün 0000-0002-4615-7914

Hamza O. Ilhan 0000-0002-1753-2703

Publication Date: July 31, 2021
Published in Issue: Year 2021, Issue 26 - Ejosat Special Issue 2021 (HORA)

How to Cite

APA: Balayev, K., Guluzade, N., Aygün, S., & Ilhan, H. O. (2021). The Implementation of DCGAN in the Data Augmentation for the Sperm Morphology Datasets. Avrupa Bilim ve Teknoloji Dergisi, (26), 307-314. https://doi.org/10.31590/ejosat.952561
