Research Article

A Hybrid Framework for Matching Printing Design Files to Product Photos

Year 2020, Volume: 8 Issue: 2, 170 - 180, 30.04.2020
https://doi.org/10.17694/bajece.677326

Abstract

We propose a real-time image matching framework that is hybrid in the sense that it combines hand-crafted features with deep features obtained from a well-tuned deep convolutional network. The matching problem we concentrate on is specific to a particular application: matching printing design files to product photographs. Printing designs are template image files created with a design tool and are therefore pristine image signals. Photographs of the printed product, however, suffer from many unwanted effects, such as uncontrolled shooting angle, uncontrolled illumination, occlusions, color deficiencies introduced by printing, camera noise, and optical blur. To study this problem, we create an image set of printing designs and their corresponding product photos in collaboration with an actual printing facility. Using this image set, we benchmark various hand-crafted and deep features for matching performance and propose a framework in which deep learning contributes as much as possible while preserving real-time operation on an ordinary desktop computer.
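To make the idea of a hybrid descriptor concrete, the sketch below illustrates one possible design-to-photo matching pipeline in Python: a hand-crafted global descriptor (HOG via OpenCV) is concatenated with deep features pooled from a pretrained VGG-16 [6], and candidate design files are ranked against a product photo by cosine similarity. This is only a minimal sketch of the general approach described in the abstract, not the authors' implementation; the choice of VGG-16, HOG, the 224x224 preprocessing, and all function names here are assumptions made for illustration.

# Illustrative sketch only: a hybrid (hand-crafted + deep) descriptor for
# ranking printing design files against a product photo. Feature and model
# choices are assumptions, not the paper's exact pipeline.
import cv2
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T

# Pretrained VGG-16 backbone [6]; its final feature map is global-average-pooled
# into a fixed-length deep descriptor.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
preprocess = T.Compose([
    T.ToPILImage(), T.Resize((224, 224)), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def deep_descriptor(bgr):
    # BGR (OpenCV) -> RGB, then pool the (1, 512, 7, 7) feature map to (512,).
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        fmap = vgg(preprocess(rgb).unsqueeze(0))
    return fmap.mean(dim=(2, 3)).squeeze(0).numpy()

def handcrafted_descriptor(bgr):
    # A simple hand-crafted global descriptor: HOG over a resized grayscale image.
    gray = cv2.cvtColor(cv2.resize(bgr, (128, 128)), cv2.COLOR_BGR2GRAY)
    hog = cv2.HOGDescriptor((128, 128), (32, 32), (16, 16), (16, 16), 9)
    return hog.compute(gray).ravel()

def hybrid_descriptor(bgr):
    # L2-normalize each part so neither feature type dominates the concatenation.
    parts = [deep_descriptor(bgr), handcrafted_descriptor(bgr)]
    return np.concatenate([p / (np.linalg.norm(p) + 1e-8) for p in parts])

def rank_designs(photo_path, design_paths):
    # Return design files sorted by cosine similarity to the product photo.
    q = hybrid_descriptor(cv2.imread(photo_path))
    scores = []
    for path in design_paths:
        d = hybrid_descriptor(cv2.imread(path))
        cos = float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d) + 1e-8))
        scores.append((path, cos))
    return sorted(scores, key=lambda s: s[1], reverse=True)

In practice, the deep descriptors of the (clean) design files could be computed once and cached, so that only the query photo needs a forward pass at matching time, which is what keeps such a pipeline feasible in real time on an ordinary desktop computer.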

Supporting Institution

TUBITAK - TEYDEB

Project Number

7170364

Thanks

The authors would like to thank the owner of the project, Şans Printing Industries (bidolubaski.com), for their support and hard work.

References

  • [1] T. Dharani and I. L. Aroquiaraj, "A survey on content based image retrieval," in 2013 International Conference on Pattern Recognition, Informatics and Mobile Engineering, pp. 485-490, Feb 2013.
  • [2] Y. Liu, D. Zhang, G. Lu, and W.-Y. Ma, "A survey of content-based image retrieval with high-level semantics," Pattern Recognition, vol. 40, no. 1, pp. 262 - 282, 2007.
  • [3] J. Sivic and A. Zisserman, "Video Google: a text retrieval approach to object matching in videos," in Proceedings Ninth IEEE International Conference on Computer Vision, pp. 1470-1477 vol.2, Oct 2003.
  • [4] H. Wang, Y. Cai, Y. Zhang, H. Pan, W. Lv, and H. Han, "Deep learning for image retrieval: What works and what doesn't," in 2015 IEEE International Conference on Data Mining Workshop (ICDMW), pp. 1576-1583, Nov 2015.
  • [5] J. Yosinski, J. Clune, A. Nguyen, T. Fuchs, and H. Lipson, "Understanding neural networks through deep visualization," in Deep Learning Workshop, 31st International Conference on Machine Learning (ICML), 2015.
  • [6] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in Proc. of Workshop at Int. Conf. on Learning Representations (ICLR) Workshops, 2015.
  • [7] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-9, June 2015.
  • [8] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems 25 (F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, eds.), pp. 1097-1105, Curran Associates, Inc., 2012.
  • [9] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun, "OverFeat: Integrated recognition, localization and detection using convolutional networks," CoRR, vol. abs/1312.6229, 2013.
  • [10] A. Babenko, A. Slesarev, A. Chigorin, and V. Lempitsky, "Neural codes for image retrieval," in Computer Vision - ECCV 2014 (D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, eds.), (Cham), pp. 584-599, Springer International Publishing, 2014.
  • [11] V. Chandrasekhar, J. Lin, O. Morère, H. Goh, and A. Veillard, "A practical guide to CNNs and Fisher vectors for image instance retrieval," Signal Process., vol. 128, pp. 426-439, Nov. 2016.
  • [12] I. Melekhov, J. Kannala, and E. Rahtu, "Siamese network features for image matching," in 2016 23rd International Conference on Pattern Recognition (ICPR), pp. 378-383, Dec 2016.
  • [13] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, "DeepFace: Closing the gap to human-level performance in face verification," in 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1701-1708, June 2014.
  • [14] T. Lin, Y. Cui, S. Belongie, and J. Hays, "Learning deep representations for ground-to-aerial geolocalization," in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5007-5015, June 2015.
  • [15] D. Cai, X. Gu, and C. Wang, "A revisit on deep hashings for large-scale content based image retrieval," CoRR, vol. abs/1711.06016, 2017.
  • [16] R. Datta, J. Li, and J. Z. Wang, "Content-based image retrieval: Approaches and trends of the new age," in Proceedings of the 7th ACM SIGMM International Workshop on Multimedia Information Retrieval, MIR '05, (New York, NY, USA), pp. 253-262, ACM, 2005.
  • [17] P. Clough, H. Müller, T. Deselaers, M. Grubinger, T. M. Lehmann, J. R. Jensen, and W. Hersh, "The CLEF 2005 cross-language image retrieval track," CEUR Workshop Proceedings, vol. 1171, pp. 535-557, September 2005.
  • [18] G. Schaefer, "UCID-Raw - a colour image database in raw format," in VipIMAGE 2017, (Cham), pp. 179-184, Springer International Publishing, 2018.
  • [19] T. Ahmad, P. Campr, M. Cadik, and G. Bebis, "Comparison of semantic segmentation approaches for horizon/sky line detection," 2017 International Joint Conference on Neural Networks (IJCNN), May 2017.
  • [20] F. Jiang, A. Grigorev, S. Rho, Z. Tian, Y. Fu, W. Jifara, A. Khan, and S. Liu, "Medical image semantic segmentation based on deep learning," Neural Computing and Applications, July 2017.
  • [21] M. Siam, S. Elkerdawy, M. Jägersand, and S. Yogamani, "Deep semantic segmentation for automated driving: Taxonomy, roadmap and challenges," in 20th IEEE International Conference on Intelligent Transportation Systems, ITSC 2017, Yokohama, Japan, October 16-19, 2017, pp. 1-8, 2017.
  • [22] M. Thoma, "A survey of semantic segmentation," arXiv preprint arXiv:1602.06541, 2016.
  • [23] M. H. Saffar, M. Fayyaz, M. Sabokrou, and M. Fathy, "Semantic video segmentation: A review on recent approaches," 2018.
  • [24] H. Yu, Z. Yang, L. Tan, Y. Wang, W. Sun, M. Sun, and Y. Tang, "Methods and datasets on semantic segmentation: A review," Neurocomputing, vol. 304, pp. 82 - 103, 2018.
  • [25] Y. Guo, Y. Liu, T. Georgiou, and M. S. Lew, "A review of semantic segmentation using deep neural networks," International Journal of Multimedia Information Retrieval, vol. 7, pp. 87-93, Jun 2018.
  • [26] A. Garcia-Garcia, S. Orts-Escolano, S. Oprea, V. Villena-Martinez, and J. G. Rodríguez, "A review on deep learning techniques applied to semantic segmentation," CoRR, vol. abs/1704.06857, 2017.
  • [27] E. Shelhamer, J. Long, and T. Darrell, "Fully convolutional networks for semantic segmentation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, pp. 640-651, Apr. 2017.
  • [28] A. Vedaldi and K. Lenc, "MatConvNet: Convolutional neural networks for MATLAB," in Proceedings of the 23rd ACM International Conference on Multimedia, MM '15, pp. 689-692, 2015.
  • [29] H. Zhang, J. E. Fritts, and S. A. Goldman, "Image segmentation evaluation: A survey of unsupervised methods," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 260 - 280, 2008.
  • [30] J. Harel, C. Koch, and P. Perona, "Graph-based visual saliency," in Proc. Advances in Neural Information Processing Systems, (NIPS), pp. 545-552, 2006.
  • [31] Y. Xu, J. Li, J. Chen, G. Shen, and Y. Gao, "A novel approach for visual saliency detection and segmentation based on objectness and top-down attention," in 2017 2nd International Conference on Image, Vision and Computing (ICIVC), pp. 361-365, June 2017.
  • [32] J. Lankinen, V. Kangas, and J. Kamarainen, "A comparison of local feature detectors and descriptors for visual object categorization by intra-class repeatability and matching," in Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), pp. 780-783, Nov 2012.
  • [33] A. Oliva and A. Torralba, "Modeling the shape of the scene: A holistic representation of the spatial envelope," Int. J. Comput. Vision, vol. 42, pp. 145-175, May 2001.
  • [34] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 1, pp. 886-893 vol. 1, June 2005.
  • [35] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, pp. 91-110, Nov 2004.
  • [36] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded up robust features," in Computer Vision - ECCV 2006 (A. Leonardis, H. Bischof, and A. Pinz, eds.), (Berlin, Heidelberg), pp. 404-417, Springer Berlin Heidelberg, 2006.
  • [37] K. He, X. Zhang, S. Ren, and J. Sun, "Spatial pyramid pooling in deep convolutional networks for visual recognition," CoRR, vol. abs/1406.4729, 2014.
  • [38] C. Hentschel and H. Sack, "Does one size really fit all?: Evaluating classifiers in bag-of-visual-words classification," in Proceedings of the 14th International Conference on Knowledge Technologies and Data-driven Business, iKNOW '14, (New York, NY, USA), pp. 7:1-7:8, ACM, 2014.
  • [39] L. I. Kuncheva, "On the optimality of naïve Bayes with dependent binary features," Pattern Recogn. Lett., vol. 27, pp. 830-837, May 2006.
  • [40] K. Lenc and A. Vedaldi, "Understanding image representations by measuring their equivariance and equivalence," in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 991-999, 2015.
There are 40 citations in total.

Details

Primary Language English
Subjects Artificial Intelligence
Journal Section Research Articles
Authors

Alper Kaplan

Erdem Akagunduz (ORCID: 0000-0002-0792-7306)

Project Number 7170364
Publication Date April 30, 2020
Published in Issue Year 2020 Volume: 8 Issue: 2

Cite

APA Kaplan, A., & Akagunduz, E. (2020). A Hybrid Framework for Matching Printing Design Files to Product Photos. Balkan Journal of Electrical and Computer Engineering, 8(2), 170-180. https://doi.org/10.17694/bajece.677326

All articles published by BAJECE are licensed under the Creative Commons Attribution 4.0 International License. This permits anyone to copy, redistribute, remix, transmit, and adapt the work, provided the original work and source are appropriately cited.