Research Article

Generative Adversarial Network for Generating Synthetic Infrared Image from Visible Image

Year 2022, Volume: 10, Issue: 2, 286-299, 30.06.2022
https://doi.org/10.29109/gujsc.1011486

Abstract

One of the most important developments in deep learning in recent years is the Generative Adversarial Network (GAN), which offers great convenience and flexibility for image-to-image translation tasks. This study aims to obtain thermal images from visible-band colour images using the Pix2Pix network, a Conditional Generative Adversarial Network (cGAN). For this purpose, a dataset was prepared by capturing facial images at different angles in the visible and infrared bands. Image processing methods were applied to this dataset to match the image pairs pixel by pixel. Synthetic thermal face images were then obtained from the network trained on these paired visible and long-wavelength infrared (LWIR) facial images. Batch Normalization and Instance Normalization were applied in the generator and discriminator networks of the Pix2Pix GAN, and their effects on the outputs were examined. The same procedure was also tested on the Google Maps dataset to demonstrate its behaviour on a different dataset. Similarity values between the synthetic outputs and the real images in both experiments were calculated with several image quality metrics. The performance of the resulting model in reproducing infrared-band details is reported in the conclusion section.
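As a rough illustration of the normalization comparison described in the abstract, the PyTorch sketch below shows how a Pix2Pix-style encoder/decoder block and a PatchGAN-style discriminator can switch between Batch Normalization and Instance Normalization. The layer sizes, channel counts, and the toy single-block generator are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (assumed layer sizes, not the paper's exact network) of swapping
# Batch Normalization and Instance Normalization in Pix2Pix-style building blocks.
import torch
import torch.nn as nn


def norm_layer(kind: str, channels: int) -> nn.Module:
    """Select the normalization compared in the study: 'batch' or 'instance'."""
    if kind == "batch":
        return nn.BatchNorm2d(channels)
    if kind == "instance":
        return nn.InstanceNorm2d(channels)
    raise ValueError(f"unknown normalization: {kind}")


class Down(nn.Module):
    """Encoder block: stride-2 convolution + normalization + LeakyReLU."""
    def __init__(self, in_ch, out_ch, norm="batch"):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
            norm_layer(norm, out_ch),
            nn.LeakyReLU(0.2, inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class Up(nn.Module):
    """Decoder block: transposed convolution + normalization + ReLU."""
    def __init__(self, in_ch, out_ch, norm="batch"):
        super().__init__()
        self.block = nn.Sequential(
            nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
            norm_layer(norm, out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator over concatenated (visible, thermal) pairs."""
    def __init__(self, in_ch=3 + 1, norm="batch"):
        super().__init__()
        self.model = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            Down(64, 128, norm),
            Down(128, 256, norm),
            nn.Conv2d(256, 1, 4, 1, 1),   # per-patch real/fake logits
        )

    def forward(self, visible, thermal):
        return self.model(torch.cat([visible, thermal], dim=1))


if __name__ == "__main__":
    # Toy forward pass: a visible RGB image mapped toward a 1-channel LWIR image.
    x = torch.randn(2, 3, 256, 256)           # visible-band input batch
    enc = Down(3, 64, norm="instance")        # switch to "batch" to compare
    dec = Up(64, 1, norm="instance")
    fake_lwir = torch.tanh(dec(enc(x)))       # toy single-channel "thermal" output
    d = PatchDiscriminator(norm="instance")
    print(fake_lwir.shape, d(x, fake_lwir).shape)
```

The `norm` argument is the only switch between the two configurations, which mirrors how the comparison in the study can be run without touching the rest of the generator or discriminator.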

References

  • [1] McCarthy J., What is Artificial Intelligence?, Stanford University, Available at: http://www-formal.stanford.edu/jmc/whatisai.pdf, (Accessed: 2021-10-05), 2.
  • [2] Goodfellow I. J., Pouget-Abadie J., Mirza M., Xu B., Warde-Farley D., Ozair S., Courville A. C., and Bengio Y., Generative Adversarial Nets, In Proceedings of NIPS, (2014) 2672–2680.
  • [3] Isola P., Zhu J., Zhou T., Efros A., Image-to-Image Translation with Conditional Adversarial Networks, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2017).
  • [4] Demirhan A., Kılıç Y. A., Güler İ., Artificial Intelligence Applications in Medicine, Yoğun Bakım Dergisi 9(1):31-41, (2010).
  • [5] Elmas Ç., Yapay Zeka Uygulamaları, Ankara: Seçkin Yayıncılık, (2018), pp. 479.
  • [6] Balcı O., Terzi Ş. B., Balaban Ö., Map Generation & Manipulation With Generative Adversarial Network, Journal of Computational Design, 1 (3), (2020) 95-114
  • [7] Turhan C. G., Bilge H. S., Variational Autoencoded Compositional Pattern Generative Adversarial Network for Handwritten
  • [8] Mirza M., Osindero S., Conditional Generative Adversarial Nets, arXiv preprint arXiv:1411.1784, (2014).
  • [9] Mino A., Spanakis G., LoGAN: Generating Logos with a Generative Adversarial Neural Network Conditioned on Color, 17th IEEE International Conference on Machine Learning and Applications (ICMLA), (2018) 966.
  • [10] Altun S., Talu M.F., Review of Generative Adversarial Networks for Image-to-Image Translation and Image Synthesis, Avrupa Bilim ve Teknoloji Dergisi, Ejosat Özel Sayı 2021 (HORA), (2021) 53-60.
  • [11] Liu M., Breuel T., Kautz J., Unsupervised Image-to-Image Translation Networks, arXiv preprint arXiv:1703.00848, (2017).
  • [12] Ioffe S., Szegedy C., Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, ICML'15: Proceedings of the 32nd International Conference on International Conference on Machine Learning, Volume 37 (2015) 448–456.
  • [13] Radford A., Metz L., Chintala S., Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, arXiv:1511.06434, (2015).
  • [14] Salimans T., Goodfellow I., Zaremba W., Cheung V., Radford A., Chen X., Improved Techniques for Training GANs, arXiv:1606.03498, (2016).
  • [15] Ulyanov D., Lebedev V., Vedaldi A., Lempitsky V., Texture Networks: Feed-forward Synthesis of Textures and Stylized Images, ICML'16: Proceedings of the 33rd International Conference on International Conference on Machine Learning, Volume 48 (2016) 1349–1357
  • [16] Ulyanov D., Vedaldi A., Lempitsky V., Instance Normalization: The Missing Ingredient for Fast Stylization, arXiv preprint arXiv:1607.08022, (2016).

Details

Primary Language English
Subjects Engineering
Section Design and Technology
Authors

Utku Ulusoy 0000-0003-0490-8115

Koray Yılmaz 0000-0002-0536-4618

Gülay Özşahin 0000-0002-7312-697X

Publication Date June 30, 2022
Submission Date October 18, 2021
Published Issue Year 2022, Volume: 10, Issue: 2

Cite

APA Ulusoy, U., Yılmaz, K., & Özşahin, G. (2022). Generative Adversarial Network for Generating Synthetic Infrared Image from Visible Image. Gazi University Journal of Science Part C: Design and Technology, 10(2), 286-299. https://doi.org/10.29109/gujsc.1011486

e-ISSN: 2147-9526