Research Article


Effects of Data Augmentation Techniques on Face Recognition System Based on Deep Learning

Year 2021, , 76 - 80, 30.04.2021
https://doi.org/10.47769/izufbed.880581

Abstract

In the last decade, Deep Learning, particularly Convolutional Neural Networks, has been the fastest-growing area of Machine Learning and Deep Neural Networks. Among the many Deep Neural Networks, Convolutional Neural Networks are today one of the main tools used for image analysis and classification tasks. Convolutional Neural Network-based models achieve high performance in face recognition tasks. The performance of these models depends on their architecture and on the chosen hyper-parameters. In addition, the size of the dataset on which the models are trained has a large impact on performance. The main goal of this study is to perform data augmentation using the affine transform method and to analyze the effect of this augmentation technique on a face recognition system based on Convolutional Neural Networks. The face recognition system was implemented with the Support Vector Machines and K-Nearest Neighbor classification algorithms, and the performance of the two algorithms was then compared. All experiments in our study were carried out on the Labeled Faces in the Wild (LFW) dataset. The obtained results demonstrate that the applied data augmentation technique increases the performance of the face recognition system by 1.8% in the face verification task, and in the classification task by 2.2% for Support Vector Machines and 2.5% for K-Nearest Neighbor. Finally, the face recognition system achieved 94.4% accuracy in the verification phase, and in the classification task it achieved 97.1% with Support Vector Machines and 96.3% with K-Nearest Neighbor.
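The affine-transform augmentation described above can be sketched as follows. This is a minimal NumPy illustration of the idea (rotation, shift, and shear applied to face images, with each transformed copy added to the training set under the original label); it is not the paper's implementation, which uses a library augmenter such as Keras's ImageDataGenerator, and the parameter values are illustrative assumptions.

```python
import numpy as np

def affine_augment(image, angle_deg=0.0, tx=0, ty=0, shear=0.0):
    """Apply a simple affine transform (rotation, shift, shear) to a 2-D
    grayscale image via inverse mapping with nearest-neighbor sampling."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    a = np.deg2rad(angle_deg)
    # Forward affine matrix: rotation combined with a horizontal shear.
    m = np.array([[np.cos(a), -np.sin(a) + shear],
                  [np.sin(a),  np.cos(a)]])
    inv = np.linalg.inv(m)
    ys, xs = np.mgrid[0:h, 0:w]
    # Map each output pixel back to its source coordinate (centered, shifted).
    coords = np.stack([ys - cy - ty, xs - cx - tx]).reshape(2, -1)
    src = inv @ coords
    sy = np.rint(src[0] + cy).astype(int)
    sx = np.rint(src[1] + cx).astype(int)
    ok = (sy >= 0) & (sy < h) & (sx >= 0) & (sx < w)
    out = np.zeros_like(image)
    out.reshape(-1)[ok] = image[sy[ok], sx[ok]]
    return out

def augment_dataset(images, labels, params):
    """Expand a dataset by appending one transformed copy of every image
    per parameter set, keeping the original label."""
    aug_imgs, aug_labels = list(images), list(labels)
    for img, lab in zip(images, labels):
        for p in params:
            aug_imgs.append(affine_augment(img, **p))
            aug_labels.append(lab)
    return np.array(aug_imgs), np.array(aug_labels)
```

With, say, three parameter sets, each identity's training images are quadrupled, which is the mechanism by which augmentation enlarges the effective dataset size discussed in the abstract.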

References

  • Reference 1 Fukushima, K. (1988). Neocognitron: A hierarchical neural network capable of visual pattern recognition. Neural Networks, 1(2), 119-130.
  • Reference 2 LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
  • Reference 3 Pawar, K. B., Mirajkar, F., Biradar, V., & Fatima, R. (2017, September). A novel practice for face classification. In 2017 International Conference on Current Trends in Computer, Electrical, Electronics and Communication (CTCEEC) (pp. 822-825). IEEE.
  • Reference 4 Kanade, T. (1974). Picture processing system by computer complex and recognition of human faces.
  • Reference 5 Cox, I. J., Ghosn, J., & Yianilos, P. N. (1996, June). Feature-based face recognition using mixture-distance. In Proceedings CVPR IEEE Computer Society Conference on Computer Vision and Pattern Recognition (pp. 209-216). IEEE.
  • Reference 6 Schroff, F., Kalenichenko, D., & Philbin, J. (2015). FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 815-823).
  • Reference 7 Parkhi, O. M., Vedaldi, A., & Zisserman, A. (2015). Deep face recognition.
  • Reference 8 Taigman, Y., Yang, M., Ranzato, M. A., & Wolf, L. (2014). DeepFace: Closing the gap to human-level performance in face verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1701-1708).
  • Reference 9 Kortylewski, A., Schneider, A., Gerig, T., Egger, B., Morel-Forster, A., & Vetter, T. (2018). Training deep face recognition systems with synthetic data. arXiv preprint arXiv:1802.05891.
  • Reference 10 Lv, J. J., Shao, X. H., Huang, J. S., Zhou, X. D., & Zhou, X. (2017). Data augmentation for face recognition. Neurocomputing, 230, 184-196.
  • Reference 11 Huang, G. B., Mattar, M., Berg, T., & Learned-Miller, E. (2008, October). Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In Workshop on Faces in 'Real-Life' Images: Detection, Alignment, and Recognition.
  • Reference 12 https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator
  • Reference 13 Déniz, O., Bueno, G., Salido, J., & De la Torre, F. (2011). Face recognition using histograms of oriented gradients. Pattern Recognition Letters, 32(12), 1598-1603.
  • Reference 14 Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., ... & Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1-9).
  • Reference 15 https://cmusatyalab.github.io/openface/models-and-accuracies/#pre-trained-models
  • Reference 16 https://github.com/timesler/facenet-pytorch/blob/master/models/inception_resnet_v1.py
  • Reference 17 Danielsson, P. E. (1980). Euclidean distance mapping. Computer Graphics and Image Processing, 14(3), 227-248.
There are 17 citations in total.

Details

Primary Language Turkish
Subjects Engineering
Journal Section Articles
Authors

Erdal Alimovski 0000-0003-0909-2047

Gökhan Erdemir 0000-0003-4095-6333

Publication Date April 30, 2021
Submission Date February 15, 2021
Acceptance Date March 17, 2021
Published in Issue Year 2021

Cite

APA Alimovski, E., & Erdemir, G. (2021). Veri Artırma Tekniklerinin Derin Öğrenmeye Dayalı Yüz Tanıma Sisteminde Etkisi. İstanbul Sabahattin Zaim Üniversitesi Fen Bilimleri Enstitüsü Dergisi, 3(1), 76-80. https://doi.org/10.47769/izufbed.880581


This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.