Research Article

Face Expression Recognition via transformer-based classification models

Year 2024, Volume: 12, Issue: 3, 214 - 223, 30.09.2024
https://doi.org/10.17694/bajece.1486140

Abstract

Facial Expression Recognition (FER) has been widely studied in the literature because it has many applications. The rapid development of deep learning computer vision algorithms, especially transformer-based classification models, makes it hard to select the most appropriate model. Using a complex model may improve accuracy but also increases inference time, which is crucial in near real-time applications. On the other hand, small models may not give the desired results. In this study, we aimed to examine the performance of five different, relatively small transformer-based image classification algorithms on FER tasks. We used vanilla ViT, PiT, Swin, DeiT, and CrossViT, considering their trainable parameter sizes and architectures. Each model has 20-30M trainable parameters, which is relatively small, and each has a different architecture. As an illustration, CrossViT processes the image using multi-scale patches, while PiT introduces convolution layers and pooling into the vanilla ViT architecture. We obtained all results on two widely used FER datasets: CK+ and KDEF. We observed that the PiT model achieves the best accuracy scores, 0.9513 and 0.9090, on the CK+ and KDEF datasets, respectively.
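The abstract does not name an implementation, but the comparison it describes can be sketched with PyTorch [31] and the timm model zoo. The snippet below is a minimal illustration, not the authors' code: the specific small model variants, the use of timm, and the seven-class head (both CK+ and KDEF annotate seven emotion categories) are assumptions chosen to match the stated 20-30M parameter budget.

    # Sketch: instantiate small variants of the five transformer families and
    # check that they fall in the 20-30M trainable-parameter range (assumed
    # timm model names; the paper does not specify which variants were used).
    import timm

    NUM_CLASSES = 7  # emotion categories annotated in CK+ and KDEF

    MODEL_NAMES = [
        "vit_small_patch16_224",         # vanilla ViT [13]
        "pit_s_224",                     # PiT: adds conv/pooling stages [18]
        "swin_tiny_patch4_window7_224",  # Swin: shifted-window attention [16]
        "deit_small_patch16_224",        # DeiT: data-efficient ViT [19]
        "crossvit_15_240",               # CrossViT: multi-scale patch branches [17]
    ]

    for name in MODEL_NAMES:
        # pretrained=True loads ImageNet weights; num_classes swaps the
        # 1000-class head for a 7-class FER head ready for fine-tuning.
        model = timm.create_model(name, pretrained=True, num_classes=NUM_CLASSES)
        n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
        print(f"{name}: {n_params / 1e6:.1f}M trainable parameters")

Fine-tuning would then proceed as ordinary image classification: resize CK+ and KDEF face images to each model's expected input resolution (224 or 240 pixels for these assumed variants) and train with cross-entropy loss.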

References

  • [1] P. Ekman, “Facial expression and emotion,” American Psychologist, vol. 48, no. 4, pp. 384–392, 1993. [Online]. Available: https://doi.apa.org/doi/10.1037/0003-066X.48.4.384
  • [2] L. E. Ishii, J. C. Nellis, K. D. Boahene, P. Byrne, and M. Ishii, “The importance and psychology of facial expression,” Otolaryngologic Clinics of North America, vol. 51, no. 6, pp. 1011–1017, 2018-12. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/S003066651830121X
  • [3] G. S. Shergill, A. Sarrafzadeh, O. Diegel, and A. Shekar, “Computerized sales assistants: the application of computer technology to measure consumer interest-a conceptual framework,” California State University, 2008.
  • [4] X.-Y. Tang, W.-Y. Peng, S.-R. Liu, and J.-W. Xiong, “Classroom teaching evaluation based on facial expression recognition,” in Proceedings of the 2020 9th International Conference on Educational and Information Technology, ser. ICEIT 2020. Association for Computing Machinery, 2020-04-23, pp. 62–67. [Online]. Available: https://doi.org/10.1145/3383923.3383949
  • [5] M. Sajjad, M. Nasir, F. U. M. Ullah, K. Muhammad, A. K. Sangaiah, and S. W. Baik, “Raspberry pi assisted facial expression recognition framework for smart security in law-enforcement services,” Information Sciences, vol. 479, pp. 416–431, 2019-04. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/S0020025518305425
  • [6] G. Fu, Y. Yu, J. Ye, Y. Zheng, W. Li, N. Cui, and Q. Wang, “A method for diagnosing depression: Facial expression mimicry is evaluated by facial expression recognition,” Journal of Affective Disorders, vol. 323, pp. 809–818, 2023-02. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/S016503272201388X
  • [7] P. Ekman and W. V. Friesen, “Constants across cultures in the face and emotion,” Journal of Personality and Social Psychology, vol. 17, pp. 124–129, 1971.
  • [8] N. A. Sheth and M. M. Goyani, “A comprehensive study of geometric and appearance based facial expression recognition methods,” Int J Sci Res Sci Eng Technol, vol. 4, no. 2, pp. 163–175, 2018-01-20. [Online]. Available: https://ijsrset.com/IJSRSET184229
  • [9] T. Gwyn, K. Roy, and M. Atay, “Face recognition using popular deep net architectures: A brief comparative study,” Future Internet, vol. 13, no. 7, p. 164, 2021.
  • [10] A. Saeed, A. Al-Hamadi, R. Niese, and M. Elzobi, “Frame-based facial expression recognition using geometrical features,” Adv. in Hum.-Comp. Int., vol. 2014, p. 4:4, 2014-01-01. [Online]. Available: https://doi.org/10.1155/2014/408953
  • [11] J.-H. Kim, B.-G. Kim, P. P. Roy, and D.-M. Jeong, “Efficient facial expression recognition algorithm based on hierarchical deep neural network structure,” IEEE Access, vol. 7, pp. 41273–41285, 2019.
  • [12] A. Barman and P. Dutta, “Facial expression recognition using distance and shape signature features,” Pattern Recognition Letters, vol. 145, pp. 254–261, 2021-05. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/S0167865517302246
  • [13] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly et al., “An image is worth 16x16 words: Transformers for image recognition at scale,” arXiv preprint arXiv:2010.11929, 2020.
  • [14] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” 2023-08-01. [Online]. Available: http://arxiv.org/abs/1706.03762
  • [15] P. K. A. Vasu, J. Gabriel, J. Zhu, O. Tuzel, and A. Ranjan, “FastViT: A fast hybrid vision transformer using structural reparameterization,” 2023-08-17. [Online]. Available: http://arxiv.org/abs/2303.14189
  • [16] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, “Swin transformer: Hierarchical vision transformer using shifted windows,” in Proceedings of the IEEE/CVF international conference on computer vision, 2021, pp. 10012–10022.
  • [17] C.-F. Chen, Q. Fan, and R. Panda, “CrossViT: Cross-attention multiscale vision transformer for image classification,” 2021-08-22. [Online]. Available: http://arxiv.org/abs/2103.14899
  • [18] B. Heo, S. Yun, D. Han, S. Chun, J. Choe, and S. J. Oh, “Rethinking spatial dimensions of vision transformers,” 2021-08-17. [Online]. Available: http://arxiv.org/abs/2103.16302
  • [19] H. Touvron, M. Cord, M. Douze, F. Massa, A. Sablayrolles, and H. Jégou, “Training data-efficient image transformers & distillation through attention,” 2021-01-15. [Online]. Available: http://arxiv.org/abs/2012.12877
  • [20] M. Rahul, N. Kohli, R. Agarwal, and S. Mishra, “Facial expression recognition using geometric features and modified hidden markov model,” International Journal of Grid and Utility Computing, vol. 10, no. 5, pp. 488–496, 2019-01. [Online]. Available: https://www.inderscienceonline.com/doi/abs/10.1504/IJGUC.2019.102018
  • [21] H. Chouhayebi, J. Riffi, M. A. Mahraz, A. Yahyaouy, H. Tairi, and N. Alioua, “Facial expression recognition based on geometric features,” in 2020 International Conference on Intelligent Systems and Computer Vision (ISCV), 2020-06, pp. 1–6.
  • [22] G. Sharma, L. Singh, and S. Gautam, “Automatic facial expression recognition using combined geometric features,” 3D Res, vol. 10, no. 2, p. 14, 2019-04-01. [Online]. Available: https://doi.org/10.1007/s13319-019-0224-0
  • [23] D. A. Ibrahim, D. A. Zebari, F. Y. H. Ahmed, and D. Q. Zeebaree, “Facial expression recognition using aggregated handcrafted descriptors based appearance method,” in 2021 IEEE 11th International Conference on System Engineering and Technology (ICSET), 2021-11, pp. 177–182, ISSN: 2470-640X.
  • [24] H. Kaya, F. Gürpınar, S. Afshar, and A. A. Salah, “Contrasting and combining least squares based learners for emotion recognition in the wild,” in Proceedings of the 2015 ACM on International Conference on Multimodal Interaction. ACM, 2015-11-09, pp. 459–466. [Online]. Available: https://dl.acm.org/doi/10.1145/2818346.2830588
  • [25] D. Liu, X. Ouyang, S. Xu, P. Zhou, K. He, and S. Wen, “SAANet: Siamese action-units attention network for improving dynamic facial expression recognition,” Neurocomputing, vol. 413, pp. 145–157, 2020-11-06. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S092523122031050X
  • [26] X. Pan, G. Ying, G. Chen, H. Li, and W. Li, “A deep spatial and temporal aggregation framework for video-based facial expression recognition,” IEEE Access, vol. 7, pp. 48807–48815, 2019. [Online]. Available: https://ieeexplore.ieee.org/document/8674456/
  • [27] M. Z. Uddin, W. Khaksar, and J. Torresen, “Facial expression recognition using salient features and convolutional neural network,” IEEE Access, vol. 5, pp. 26146–26161, 2017. [Online]. Available: http://ieeexplore.ieee.org/document/8119492/
  • [28] S. Minaee, M. Minaei, and A. Abdolrashidi, “Deep-emotion: Facial expression recognition using attentional convolutional network,” Sensors, vol. 21, no. 9, p. 3046, 2021-04-27. [Online]. Available: https://www.mdpi.com/1424-8220/21/9/3046
  • [29] M. G. Calvo and D. Lundqvist, “Facial expressions of emotion (KDEF): Identification under different display-duration conditions,” Behav Res, vol. 40, no. 1, pp. 109–115, 2008-02-01. [Online]. Available: https://doi.org/10.3758/BRM.40.1.109
  • [30] P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, “The extended cohn-kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops. IEEE, 2010-06, pp. 94–101. [Online]. Available: http://ieeexplore.ieee.org/document/5543262/
  • [31] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Köpf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, “PyTorch: An imperative style, high-performance deep learning library,” 2019-12-03. [Online]. Available: http://arxiv.org/abs/1912.01703
  • [32] L. Wang, Z. He, B. Meng, K. Liu, Q. Dou, and X. Yang, “Two-pathway attention network for real-time facial expression recognition,” Journal of Real-Time Image Processing, vol. 18, no. 4, pp. 1173–1182, 2021.
  • [33] S. Subudhiray, H. K. Palo, and N. Das, “Effective recognition of facial emotions using dual transfer learned feature vectors and support vector machine,” International Journal of Information Technology, vol. 15, no. 1, pp. 301–313, 2023.
  • [34] J. X. Yu, K. M. Lim, and C. P. Lee, “Move-cnns: Model averaging ensemble of convolutional neural networks for facial expression recognition.” IAENG International Journal of Computer Science, vol. 48, no. 3, 2021.
  • [35] Q. Hu, C. Wu, J. Chi, X. Yu, and H. Wang, “Multi-level feature fusion facial expression recognition network,” in 2020 Chinese Control And Decision Conference (CCDC). IEEE, 2020, pp. 5267–5272.
  • [36] K. Mohan, A. Seal, O. Krejcar, and A. Yazidi, “Fer-net: facial expression recognition using deep neural net,” Neural Computing and Applications, vol. 33, no. 15, pp. 9125–9136, 2021.
  • [37] N. Kumar HN, A. S. Kumar, G. Prasad MS, and M. A. Shah, “Automatic facial expression recognition combining texture and shape features from prominent facial regions,” IET Image Processing, vol. 17, no. 4, pp. 1111–1125, 2023.
  • [38] M. Kas, Y. Ruichek, R. Messoussi et al., “New framework for person-independent facial expression recognition combining textural and shape analysis through new feature extraction approach,” Information Sciences, vol. 549, pp. 200–220, 2021.
  • [39] S. Eng, H. Ali, A. Cheah, and Y. Chong, “Facial expression recognition in jaffe and kdef datasets using histogram of oriented gradients and support vector machine,” in IOP Conference series: materials science and engineering, vol. 705, no. 1. IOP Publishing, 2019, p. 012031.
  • [40] R. V. Puthanidam and T.-S. Moh, “A hybrid approach for facial expression recognition,” in Proceedings of the 12th International Conference on Ubiquitous Information Management and Communication, 2018, pp. 1–8.
  • [41] A. J. Obaid and H. K. Alrammahi, “An intelligent facial expression recognition system using a hybrid deep convolutional neural network for multimedia applications,” Applied Sciences, vol. 13, no. 21, p. 12049, 2023.
  • [42] Y. Yaddaden, M. Adda, and A. Bouzouane, “Facial expression recognition using locally linear embedding with lbp and hog descriptors,” in 2020 2nd International Workshop on Human-Centric Smart Environments for Health and Well-being (IHSH). IEEE, 2021, pp. 221–226.
  • [43] S. Barra, S. Hossain, C. Pero, and S. Umer, “A facial expression recognition approach for social iot frameworks,” Big Data Research, vol. 30, p. 100353, 2022.
  • [44] M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I 13. Springer, 2014, pp. 818–833.
There are 44 references in total.

Details

Primary Language: English
Subjects: Software Engineering (Other), Electrical Engineering (Other)
Section: Research Article
Authors

Muhammed Cihad Arslanoğlu 0009-0007-6158-5187

Hüseyin Acar 0000-0001-5127-4632

Abdülkadir Albayrak 0000-0002-0738-871X

Early View Date: October 24, 2024
Publication Date: September 30, 2024
Submission Date: May 18, 2024
Acceptance Date: August 20, 2024
Published Issue: Year 2024, Volume: 12, Issue: 3

How to Cite

APA Arslanoğlu, M. C., Acar, H., & Albayrak, A. (2024). Face Expression Recognition via transformer-based classification models. Balkan Journal of Electrical and Computer Engineering, 12(3), 214-223. https://doi.org/10.17694/bajece.1486140

All articles published by BAJECE are licensed under the Creative Commons Attribution 4.0 International License. This permits anyone to copy, redistribute, remix, transmit and adapt the work provided the original work and source are appropriately cited.