Research Article

Automatic Mustache Pattern Production on Denim Fabric with Generative Adversarial Networks

Year 2022, Volume 7, Issue 1, pp. 1-9, 06.06.2022
https://doi.org/10.53070/bbd.1019451

Abstract

Mustache patterns on denim fabrics are created with a laser beam device. For this device to draw the desired mustache pattern, a pattern image must first be prepared. Transferring the mustache patterns on sample jeans received from a customer into such an image requires, on average, 2-3 hours of work by personnel specialized in Photoshop. This slows production and introduces human error. In this study, a new approach is proposed that automatically detects the mustache patterns in sample jeans received from the customer and generates the pattern image. The approach uses an updated version of the Pix2Pix architecture, a member of the generative adversarial network (GAN) family, to generate mustache pattern images. Training was carried out on a dataset built from jeans and mustache pattern images, eliminating the variation in generated patterns that depends on individual personnel. Experimental results show that the generation time of a mustache pattern image falls below one second, while the generation accuracy is around 89%. In future work, we aim to increase accuracy by standardizing the images in the dataset.
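The reported accuracy of around 89% raises the question of how a generated pattern image is scored against its reference; the reference list below cites both SSIM and MAE, two standard image-comparison metrics. The sketch below is an illustrative implementation only, not the authors' evaluation code: the `evaluate_pattern` function name, the 0.05 pixel tolerance, the assumption that images are float arrays scaled to [0, 1], and the single-window ("global") SSIM variant are all assumptions made for demonstration. The standard SSIM of Wang et al. averages the statistic over local windows.

```python
import numpy as np

def evaluate_pattern(generated, reference, tolerance=0.05):
    """Compare a generated pattern image against its reference image.

    Both inputs are float arrays with values in [0, 1]. Returns
    (mae, pixel_accuracy, ssim), where ssim is a simplified global
    (single-window) SSIM rather than the windowed standard version.
    """
    generated = np.asarray(generated, dtype=np.float64)
    reference = np.asarray(reference, dtype=np.float64)

    # Mean absolute error over all pixels.
    abs_diff = np.abs(generated - reference)
    mae = float(abs_diff.mean())

    # Fraction of pixels whose error is within the tolerance
    # (an illustrative stand-in for a per-pixel accuracy score).
    pixel_accuracy = float((abs_diff <= tolerance).mean())

    # Global SSIM with the usual stabilizing constants for
    # a dynamic range of 1.0 (C1 = (0.01)^2, C2 = (0.03)^2).
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    mu_g, mu_r = generated.mean(), reference.mean()
    var_g = ((generated - mu_g) ** 2).mean()
    var_r = ((reference - mu_r) ** 2).mean()
    cov = ((generated - mu_g) * (reference - mu_r)).mean()
    ssim = ((2 * mu_g * mu_r + c1) * (2 * cov + c2)) / (
        (mu_g ** 2 + mu_r ** 2 + c1) * (var_g + var_r + c2)
    )
    return mae, pixel_accuracy, float(ssim)
```

A perfect reproduction yields MAE 0, pixel accuracy 1, and SSIM 1; an inverted image scores poorly on all three, which makes the metrics easy to sanity-check.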

Supporting Institution

İnönü University Scientific Research Projects Coordination Unit

Project Number

FKP-2021-2144

Acknowledgments

We thank the İnönü University Scientific Research Projects Coordination Unit (BAP) for supporting this work under project FKP-2021-2144.

References

  • Zou, X., Wong, W. K., & Mo, D. (2018). Fashion meets AI technology. In International Conference on Artificial Intelligence on Textile and Apparel (pp. 255-267). Springer, Cham.
  • Jucienė, M., Urbelis, V., Juchnevičienė, Ž., & Čepukonė, L. (2014). The effect of laser technological parameters on the color and structure of denim fabric. Textile Research Journal, 84(6), 662-670.
  • Zhong, T., Dhandapani, R., Liang, D., Wang, J., Wolcott, M. P., Van Fossen, D., & Liu, H. (2020). Nanocellulose from recycled indigo-dyed denim fabric and its application in composite films. Carbohydrate Polymers, 240, 116283.
  • Golden Laser. (2021). Jeans Laser Engraving Machine. Retrieved from: https://www.goldenlaser.cc/jeans-laser-engraving-machine.html Accessed 21 June 2021
  • Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial nets. Advances in neural information processing systems, 27.
  • Radford, A., Metz, L., & Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.
  • Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., ... & Shi, W. (2016). Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4681-4690).
  • Zhu, J. Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision (pp. 2223-2232).
  • Karras, T., Aila, T., Laine, S., & Lehtinen, J. (2017). Progressive growing of GANs for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196.
  • Park, T., Liu, M. Y., Wang, T. C., & Zhu, J. Y. (2019). Semantic image synthesis with spatially-adaptive normalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2337-2346).
  • Wang, X., Xie, L., Dong, C., & Shan, Y. (2021). Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 1905-1914).
  • Huang, H., Yu, P. S., & Wang, C. (2018). An introduction to image synthesis with generative adversarial nets. arXiv preprint arXiv:1803.04469.
  • Goodfellow, I. (2016). NIPS 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160.
  • Atay, M. (2021). Generative Adversarial Networks. Retrieved from: https://colab.research.google.com/github/deeplearningturkiye/cekismeli-uretici-aglar-generative-adversarial-networks-gan/blob/master/gan-notebook-fresh.ipynb#scrollTo=toO2cXNQFYI5 Accessed 12 October 2021
  • Mirza, M., & Osindero, S. (2014). Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784.
  • Isola, P., Zhu, J. Y., Zhou, T., & Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1125-1134).
  • Neurohive. (2021). Pix2Pix – Image-to-Image Translation Network. Retrieved from: https://neurohive.io/en/popular-networks/pix2pix-image-to-image-translation/#:~:text=Pix2pix%20architecture,to%20get%20the%20output%20image. Accessed 2 October 2021
  • Hu, J., Yu, W., & Yu, Z. (2017). Image-to-Image Translation with Conditional-GAN.
  • Nilsson, J., & Akenine-Möller, T. (2020). Understanding SSIM. arXiv preprint arXiv:2006.13846.
  • Ravuri, S., & Vinyals, O. (2019). Classification accuracy score for conditional generative models. arXiv preprint arXiv:1905.10887.
  • Willmott, C. J., & Matsuura, K. (2005). Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Climate research, 30(1), 79-82.



Details

Primary Language: English
Subjects: Artificial Intelligence, Software Engineering
Section: PAPERS
Authors

Emrullah Şahin 0000-0002-3390-6285

Muhammed Fatih Talu 0000-0003-1166-8404

Project Number: FKP-2021-2144
Early View Date: June 5, 2022
Publication Date: June 6, 2022
Submission Date: November 5, 2021
Acceptance Date: December 3, 2021
Published Issue: Year 2022, Volume 7, Issue 1

How to Cite

APA: Şahin, E., & Talu, M. F. (2022). Automatic Mustache Pattern Production on Denim Fabric with Generative Adversarial Networks. Computer Science, 7(1), 1-9. https://doi.org/10.53070/bbd.1019451

The Creative Commons Attribution 4.0 International License is applied to all research papers published by JCS, and a Digital Object Identifier (DOI) is assigned to each published paper.