Research Article

An Investigation of the Performance of Different Convolutional Neural Network Architectures in Facial Expression Analysis

Year 2020, Volume: 11, Issue: 1, pp. 123–133, 27.03.2020
https://doi.org/10.24012/dumf.679793

Abstract

Convolutional Neural Networks (CNNs) have been widely used as feature extractors in many recent studies. Unlike hand-crafted feature extraction algorithms, CNNs extract features automatically, without requiring human intervention, and with their help the state of the art has been advanced in many problems and application areas. In this study, the performance of CNNs with different architectural properties is examined on the task of facial expression analysis. First, the different CNN architectures are introduced and the aspects in which they differ from one another are explained. Training and validation were then carried out on all network architectures using the FER2013 dataset, and the resulting accuracy and loss curves for each architecture are presented. Finally, the reasons behind the architectures' differing performance on facial expression analysis are discussed, and suggestions for future work are given.
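All of the architectures compared in the paper share the same basic feature-extraction step the abstract describes: a learned convolution, a nonlinearity, and pooling. As a minimal, self-contained sketch (not the authors' implementation), the snippet below runs that step in NumPy on a 48×48 grayscale input, matching the FER2013 image size; the hand-set edge kernel is only an illustrative stand-in for the filters a CNN would learn during training.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # Each output pixel is the kernel's dot product with one image patch.
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

def relu(x):
    """Elementwise nonlinearity used by all the compared architectures."""
    return np.maximum(x, 0)

def max_pool(fmap, size=2):
    """Non-overlapping max pooling; discards rows/cols that don't fit."""
    h = fmap.shape[0] - fmap.shape[0] % size
    w = fmap.shape[1] - fmap.shape[1] % size
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A FER2013-style 48x48 grayscale face (random stand-in data here).
rng = np.random.default_rng(0)
image = rng.random((48, 48))

# 3x3 vertical-edge kernel; in a trained CNN such kernels are learned, not hand-set.
kernel = np.array([[1.0, 0.0, -1.0]] * 3)

features = max_pool(relu(conv2d(image, kernel)))
print(features.shape)  # (23, 23): (48-3+1)=46 after convolution, halved by pooling
```

Stacking many such convolution/pooling stages, with the kernels fitted by backpropagation, is what lets the networks in the study extract facial-expression features automatically rather than relying on hand-designed descriptors.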

References

  • [1] C. Darwin, The expression of the emotions in man and animals. London: John Murray, 1872.
  • [2] P. Ekman et al., “Universals and Cultural Differences in the Judgments of Facial Expressions of Emotion,” J. Pers. Soc. Psychol., vol. 53, no. 4, pp. 712–717, 1987.
  • [3] S. Li and W. Deng, “Deep Facial Expression Recognition: A Survey,” arXiv preprint arXiv:1804.08348, pp. 1–22, 2018.
  • [4] G. Sandbach, S. Zafeiriou, M. Pantic, and L. Yin, “Static and dynamic 3D facial expression recognition: A comprehensive survey,” Image Vis. Comput., vol. 30, no. 10, pp. 683–697, 2012.
  • [5] M. Ramzan, H. U. Khan, S. M. Awan, A. Ismail, M. Ilyas, and A. Mahmood, “A Survey on State-of-the-Art Drowsiness Detection Techniques,” IEEE Access, vol. 7, pp. 61904–61919, 2019.
  • [6] A. M. Barreto, “Application of facial expression studies on the field of marketing,” Emot. Expr. brain face, no. June, pp. 163–189, 2017.
  • [7] P. M. Blom et al., “Towards personalised gaming via facial expression recognition,” Proc. 10th AAAI Conf. Artif. Intell. Interact. Digit. Entertain. AIIDE 2014, pp. 30–36, 2014.
  • [8] L. Zhang, M. Jiang, D. Farid, and M. A. Hossain, “Intelligent facial emotion recognition and semantic-based topic detection for a humanoid robot,” Expert Syst. Appl., vol. 40, no. 13, pp. 5160–5168, 2013.
  • [9] Y. Zhou and B. E. Shi, “Photorealistic facial expression synthesis by the conditional difference adversarial autoencoder,” 2017 7th Int. Conf. Affect. Comput. Intell. Interact. ACII 2017, vol. 2018-January, pp. 370–376, 2017.
  • [10] G. Muhammad, M. Alsulaiman, S. U. Amin, A. Ghoneim, and M. F. Alhamid, “A Facial-Expression Monitoring System for Improved Healthcare in Smart Cities,” IEEE Access, vol. 5, pp. 10871–10881, 2017.
  • [11] L. Wang, R. F. Li, K. Wang, and J. Chen, “Feature representation for facial expression recognition based on FACS and LBP,” Int. J. Autom. Comput., vol. 11, no. 5, pp. 459–468, 2014.
  • [12] Y. Chang, C. Hu, R. Feris, and M. Turk, “Manifold based analysis of facial expression,” Image Vis. Comput., vol. 24, no. 6, pp. 605–614, 2006.
  • [13] R. Shbib and S. Zhou, “Facial Expression Analysis using Active Shape Model,” Int. J. Signal Process. Image Process. Pattern Recognit., vol. 8, no. 1, pp. 9–22, 2015.
  • [14] U. Tekguc, H. Soyel, and H. Demirel, “Feature selection for person-independent 3D facial expression recognition using NSGA-II,” Comput. Inf. …, pp. 35–38, 2009.
  • [15] H. Soyel and H. Demirel, “Facial expression recognition based on discriminative scale invariant feature transform,” Electron. Lett., vol. 46, no. 5, pp. 343–345, 2010.
  • [16] D. Al Chanti and A. Caplier, “Improving bag-of-Visual-Words towards effective facial expressive image classification,” VISIGRAPP 2018 - Proc. 13th Int. Jt. Conf. Comput. Vision, Imaging Comput. Graph. Theory Appl., vol. 5, pp. 145–152, 2018.
  • [17] A. T. Lopes, E. de Aguiar, A. F. De Souza, and T. Oliveira-Santos, “Facial expression recognition with Convolutional Neural Networks: Coping with few data and the training sample order,” Pattern Recognit., vol. 61, pp. 610–628, 2017.
  • [18] V. Tümen, Ö. F. Söylemez, and B. Ergen, “Facial emotion recognition on a dataset using Convolutional Neural Network,” IDAP 2017 - Int. Artif. Intell. Data Process. Symp., 2017.
  • [19] I. J. Goodfellow et al., “Challenges in representation learning: A report on three machine learning contests,” Neural Networks, vol. 64, pp. 59–63, 2015.
  • [20] Y. Tang, “Deep Learning using Linear Support Vector Machines,” 2013.
  • [21] M. I. Georgescu, R. T. Ionescu, and M. Popescu, “Local learning with deep and handcrafted features for facial expression recognition,” IEEE Access, vol. 7, pp. 64827–64836, 2019.
  • [22] S. Han et al., “DSD: Dense-sparse-dense training for deep neural networks,” 5th Int. Conf. Learn. Represent. ICLR 2017 - Conf. Track Proc., 2019.
  • [23] T. Connie, M. Al-Shabi, W. P. Cheah, and M. Goh, “Facial expression recognition using a hybrid CNN-SIFT aggregator,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 10607 LNAI, pp. 139–149, 2017.
  • [24] B. K. Kim, J. Roh, S. Y. Dong, and S. Y. Lee, “Hierarchical committee of deep convolutional neural networks for robust facial expression recognition,” J. Multimodal User Interfaces, vol. 10, no. 2, pp. 173–189, 2016.
  • [25] Z. Yu and C. Zhang, “Image based static facial expression recognition with multiple deep network learning,” ICMI 2015 - Proc. 2015 ACM Int. Conf. Multimodal Interact., pp. 435–442, 2015.
  • [26] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE, vol. 86, no. 11, pp. 2278–2323, 1998.
  • [27] O. Russakovsky et al., “ImageNet Large Scale Visual Recognition Challenge,” Int. J. Comput. Vis., vol. 115, no. 3, pp. 211–252, 2015.
  • [28] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” in Neural Information Processing Systems, F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, Eds. Curran Associates, Inc., 2012, pp. 1–9.
  • [29] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” CoRR, vol. abs/1409.1, 2015.
  • [30] C. Szegedy et al., “Going deeper with convolutions,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 07-12-June, pp. 1–9, 2015.
  • [31] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the Inception Architecture for Computer Vision,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 2016-December, pp. 2818–2826, 2016.
  • [32] F. Chollet, “Xception: Deep learning with depthwise separable convolutions,” Proc. - 30th IEEE Conf. Comput. Vis. Pattern Recognition, CVPR 2017, vol. 2017-January, pp. 1800–1807, 2017.
  • [33] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” 2016 IEEE Conf. Comput. Vis. Pattern Recognit., vol. abs/1512.0, pp. 770–778, 2016.
  • [34] K. He, X. Zhang, S. Ren, and J. Sun, “Identity mappings in deep residual networks,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 9908 LNCS, pp. 630–645, 2016.
  • [35] A. G. Howard et al., “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” 2017.
  • [36] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L. C. Chen, “MobileNetV2: Inverted Residuals and Linear Bottlenecks,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 4510–4520, 2018.
  • [37] M. Lin, Q. Chen, and S. Yan, “Network in network,” 2nd Int. Conf. Learn. Represent. ICLR 2014 - Conf. Track Proc., 2014.
  • [38] I. J. Goodfellow et al., “Challenges in representation learning: A report on three machine learning contests,” Neural Networks, vol. 64, pp. 59–63, 2015.
  • [39] S. Rifai, Y. Bengio, A. Courville, P. Vincent, and M. Mirza, “Disentangling factors of variation for facial expression recognition,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 7577 LNCS, no. PART 6, pp. 808–822, 2012.
  • [40] X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” J. Mach. Learn. Res., vol. 9, pp. 249–256, 2010.
  • [41] D. P. Kingma and J. L. Ba, “Adam: A method for stochastic optimization,” 3rd Int. Conf. Learn. Represent. ICLR 2015 - Conf. Track Proc., 2015.
There are 41 references in total.

Details

Primary Language: Turkish
Section: Articles
Authors

Ömer Faruk Söylemez 0000-0002-4076-5230

Burhan Ergen 0000-0003-3244-2615

Publication Date: 27 March 2020
Submission Date: 25 January 2020
Published Issue: Year 2020, Volume: 11, Issue: 1

How to Cite

IEEE: Ö. F. Söylemez and B. Ergen, “Farklı Evrişimsel Sinir Ağı Mimarilerinin Yüz İfade Analizi Alanındaki Başarımlarının İncelenmesi,” DÜMF MD, vol. 11, no. 1, pp. 123–133, 2020, doi: 10.24012/dumf.679793.
All articles published by DUJE are licensed under the Creative Commons Attribution 4.0 International License. This permits anyone to copy, redistribute, remix, transmit, and adapt the work, provided the original work and source are appropriately credited.