Research Article

Image-to-Image Translation with CNN Based Perceptual Similarity Metrics

Year 2024, Volume 9, Issue 1, 84-98, 06.06.2024
https://doi.org/10.53070/bbd.1429596

Abstract

Image-to-image translation is the process of transforming images between different domains. Generative Adversarial Networks (GANs) and Convolutional Neural Networks (CNNs) are widely used techniques in image translation. This study aims to find the most effective loss function for GAN architectures and thereby synthesize better images. To this end, experimental results were obtained by changing the loss function of the Pix2Pix method, one of the basic GAN architectures. The existing loss function used in the Pix2Pix method is the Mean Absolute Error (MAE), known as the L_1 metric. In this study, the effect of the convolution-based perceptual similarity metrics CONTENT, LPIPS, and DISTS on the loss function of the Pix2Pix architecture was examined. In addition, their effects on image-to-image translation were analyzed by blending each perceptual similarity metric (L_1_CONTENT, L_1_LPIPS, and L_1_DISTS) with the original L_1 loss at a 50% ratio. Performance analyses of the methods were carried out on the Cityscapes, Denim2Mustache, Maps, and Papsmear datasets. Visual results were evaluated with conventional (FSIM, HaarPSI, MS-SSIM, PSNR, SSIM, VIFp, and VSI) and recent (FID and KID) image comparison metrics. As a result, it was observed that better results are obtained when convolution-based methods are used instead of conventional methods in the loss function of GAN architectures, and that the LPIPS and DISTS metrics are promising candidates for the loss functions of future GAN architectures.
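To make the loss-function modification concrete, the following is a minimal sketch, not the authors' implementation, of how the 50/50 blend of the L_1 loss with a perceptual term such as LPIPS described in the abstract could be written in PyTorch. It assumes the publicly available lpips package of Zhang et al. (2018); the generator_loss function, the vgg backbone choice, and the lambda_rec weight (100 is the default in the original Pix2Pix paper, and this study may differ) are illustrative assumptions.

```python
# Minimal sketch of a blended Pix2Pix generator loss, assuming PyTorch and the
# `lpips` package (pip install lpips); not the authors' exact implementation.
import torch
import torch.nn as nn
import lpips

l1_loss = nn.L1Loss()
# LPIPS perceptual distance; 'vgg' and 'alex' backbones are available.
perceptual = lpips.LPIPS(net='vgg')

def generator_loss(fake, real, adv_term, lambda_rec=100.0):
    """Adversarial term plus a 50/50 blend of L_1 and LPIPS reconstruction.

    fake, real: image tensors scaled to [-1, 1], shape (N, 3, H, W).
    adv_term:   the usual Pix2Pix adversarial loss already computed
                for the generator.
    lambda_rec: reconstruction weight (100 in the original Pix2Pix paper).
    """
    rec = 0.5 * l1_loss(fake, real) + 0.5 * perceptual(fake, real).mean()
    return adv_term + lambda_rec * rec
```

Swapping `perceptual` for a DISTS implementation (for example, `piq.DISTS()` from the piq package) would give the L_1_DISTS variant under the same weighting.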

References

  • Zhu, X. X., Tuia, D., Mou, L., Xia, G. S., Zhang, L., Xu, F., & Fraundorfer, F. (2017). Deep learning in remote sensing: A comprehensive review and list of resources. IEEE Geoscience and Remote Sensing Magazine, 5(4), 8-36.
  • Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial nets. Advances in neural information processing systems, 27.
  • Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., & Fei-Fei, L. (2014). Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (pp. 1725-1732).
  • Koushik, J. (2016). Understanding convolutional neural networks. arXiv preprint arXiv:1605.09081.
  • Guo, Y., Liu, Y., Oerlemans, A., Lao, S., Wu, S., & Lew, M. S. (2016). Deep learning for visual understanding: A review. Neurocomputing, 187, 27-48.
  • Van Den Oord, A., Kalchbrenner, N., & Kavukcuoglu, K. (2016, June). Pixel recurrent neural networks. In International conference on machine learning (pp. 1747-1756). PMLR.
  • Van den Oord, A., Kalchbrenner, N., Espeholt, L., Vinyals, O., & Graves, A. (2016). Conditional image generation with pixelcnn decoders. Advances in neural information processing systems, 29.
  • Salimans, T., Karpathy, A., Chen, X., & Kingma, D. P. (2017). Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517.
  • Chen, L., Zhang, H., Xiao, J., Nie, L., Shao, J., Liu, W., & Chua, T. S. (2017). Sca-cnn: Spatial and channel-wise attention in convolutional networks for image captioning. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 5659-5667).
  • Liu, M. Y., Breuel, T., & Kautz, J. (2017). Unsupervised image-to-image translation networks. Advances in neural information processing systems, 30.
  • Liu, M. Y., & Tuzel, O. (2016). Coupled generative adversarial networks. Advances in neural information processing systems, 29.
  • Kingma, D. P., & Welling, M. (2013). Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114.
  • Wang, T. C., Liu, M. Y., Zhu, J. Y., Tao, A., Kautz, J., & Catanzaro, B. (2018). High-resolution image synthesis and semantic manipulation with conditional gans. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 8798-8807).
  • Liu, Y., De Nadai, M., Yao, J., Sebe, N., Lepri, B., & Alameda-Pineda, X. (2020). Gmm-unit: Unsupervised multi-domain and multi-modal image-to-image translation via attribute gaussian mixture modeling. arXiv preprint arXiv:2003.06788.
  • Royer, A., Bousmalis, K., Gouws, S., Bertsch, F., Mosseri, I., Cole, F., & Murphy, K. (2020). Xgan: Unsupervised image-to-image translation for many-to-many mappings. In Domain Adaptation for Visual Understanding (pp. 33-49). Cham: Springer International Publishing.
  • Frid-Adar, M., Diamant, I., Klang, E., Amitai, M., Goldberger, J., & Greenspan, H. (2018). GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification. Neurocomputing, 321, 321-331.
  • Isola, P., Zhu, J. Y., Zhou, T., & Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1125-1134).
  • Zhu, J. Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision (pp. 2223-2232).
  • Zhang, R., Isola, P., Efros, A. A., Shechtman, E., & Wang, O. (2018). The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 586-595).
  • Ding, K., Ma, K., Wang, S., & Simoncelli, E. P. (2020). Image quality assessment: Unifying structure and texture similarity. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(5), 2567-2581.
  • Gatys, L. A., Ecker, A. S., & Bethge, M. (2015). A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576.
  • Zhang, L., Zhang, L., Mou, X., & Zhang, D. (2011). FSIM: A feature similarity index for image quality assessment. IEEE Transactions on Image Processing, 20(8), 2378-2386.
  • Reisenhofer, R., Bosse, S., Kutyniok, G., & Wiegand, T. (2018). A Haar wavelet-based perceptual similarity index for image quality assessment. Signal Processing: Image Communication, 61, 33-43.
  • Fardo, F. A., Conforto, V. H., de Oliveira, F. C., & Rodrigues, P. S. (2016). A formal evaluation of PSNR as quality measurement parameter for image segmentation algorithms. arXiv preprint arXiv:1605.07116.
  • Wang, Z., Simoncelli, E. P., & Bovik, A. C. (2003, November). Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003 (Vol. 2, pp. 1398-1402). IEEE.
  • Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600-612.
  • Sheikh, H. R., & Bovik, A. C. (2006). Image information and visual quality. IEEE Transactions on Image Processing, 15(2), 430-444.
  • Zhang, L., Shen, Y., & Li, H. (2014). VSI: A visual saliency-induced index for perceptual image quality assessment. IEEE Transactions on Image Processing, 23(10), 4270-4281.
  • Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., & Hochreiter, S. (2017). Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30.
  • Bińkowski, M., Sutherland, D. J., Arbel, M., & Gretton, A. (2018). Demystifying mmd gans. arXiv preprint arXiv:1801.01401.
  • Choi, Y., Uh, Y., Yoo, J., & Ha, J. W. (2020). Stargan v2: Diverse image synthesis for multiple domains. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 8188-8197).
  • Ding, K., Liu, Y., Zou, X., Wang, S., & Ma, K. (2021, October). Locally adaptive structure and texture similarity for image quality assessment. In Proceedings of the 29th ACM International Conference on Multimedia (pp. 2483-2491).
  • Sim, K., Yang, J., Lu, W., & Gao, X. (2020). MaD-DLS: mean and deviation of deep and local similarity for image quality assessment. IEEE Transactions on Multimedia, 23, 4037-4048.
  • Borasinski, S., Yavuz, E., & Béhuret, S. (2022). Paired Image-to-Image Translation Quality Assessment Using Multi-Method Fusion. arXiv preprint arXiv:2205.04186.
  • Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., & Fan, J. (2022). Contour-enhanced CycleGAN framework for style transfer from scenery photos to Chinese landscape paintings. Neural Computing and Applications, 34(20), 18075-18096.
  • Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  • Suzuki, A., Akutsu, H., Naruko, T., Tsubota, K., & Aizawa, K. (2021). Learned Image Compression with Super-Resolution Residual Modules and DISTS Optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1906-1910).
  • Chuan, P. M., Son, L. H., Ali, M., Khang, T. D., Huong, L. T., & Dey, N. (2018). Link prediction in co-authorship networks based on hybrid content similarity metric. Applied Intelligence, 48, 2470-2486.
  • Radford, A., Metz, L., & Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.
  • Ronneberger, O., Fischer, P., & Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18 (pp. 234-241). Springer International Publishing.
  • Li, C., & Wand, M. (2016). Precomputed real-time texture synthesis with markovian generative adversarial networks. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part III 14 (pp. 702-716). Springer International Publishing.
  • Ding, K., Ma, K., Wang, S., & Simoncelli, E. P. (2021). Comparison of full-reference image quality models for optimization of image processing systems. International Journal of Computer Vision, 129, 1258-1281.
  • Mihelich, M., Dognin, C., Shu, Y., & Blot, M. (2020, June). A characterization of mean squared error for estimator with bagging. In International Conference on Artificial Intelligence and Statistics (pp. 288-297). PMLR.
  • Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., ... & Schiele, B. (2016). The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3213-3223).
  • Şahin, E., & Talu, M. F. (2021). Bıyık Deseni Üretiminde Çekişmeli Üretici Ağların Performans Karşılaştırması [Performance comparison of generative adversarial networks in mustache pattern generation]. Bitlis Eren Üniversitesi Fen Bilimleri Dergisi, 10(4), 1575-1589.
  • Altun, S., & Talu, M. F. (2022). A new approach for Pap-Smear image generation with generative adversarial networks. Journal of the Faculty of Engineering and Architecture of Gazi University, 37(3), 1401-1410.

Details

Primary Language English
Subjects Image Processing, Deep Learning
Section PAPERS
Authors

Sara Altun Güven 0000-0003-2877-7105

Emrullah Şahin 0000-0002-3390-6285

Muhammed Fatih Talu 0000-0003-1166-8404

Publication Date June 6, 2024
Submission Date January 31, 2024
Acceptance Date March 13, 2024
Published in Issue Year 2024

Cite

APA Altun Güven, S., Şahin, E., & Talu, M. F. (2024). Image-to-Image Translation with CNN Based Perceptual Similarity Metrics. Computer Science, 9(1), 84-98. https://doi.org/10.53070/bbd.1429596

The Creative Commons Attribution 4.0 International License is applied to all research papers published by JCS, and a Digital Object Identifier (DOI) is assigned to each published paper.