Research Article

Facial Expression Recognition Techniques and Comparative Analysis Using Classification Algorithms

Year 2023, Volume 12, Issue 3, 596–607, 28.09.2023
https://doi.org/10.17798/bitlisfen.1214468

Abstract

With advances in technology and hardware, it has become possible to use computer vision applications to analyze the changes that occur when emotional state is reflected in facial expression. Facial expression analysis systems are used in applications such as security systems, early diagnosis of certain diseases in medicine, human-computer interaction, and safe driving. Facial expression analysis systems developed on image data consist of three basic stages: extracting the face region from the input data, extracting feature vectors from the data, and classifying the feature vectors. In this study, the features of the dataset were obtained with the AlexNet model, one of the deep learning models that has achieved successful results in classification problems. In the comparative analysis presented here, accuracies of 89.7%, 87.8%, and 81.7% were obtained with machine learning techniques.
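The three-stage pipeline described in the abstract (face extraction, AlexNet feature extraction, and classification of the feature vectors with conventional machine learning) can be illustrated with a minimal sketch. The snippet below assumes PyTorch/torchvision and scikit-learn; the names face_crops and labels are hypothetical placeholders, and the SVM is shown only as one possible classifier, not necessarily the exact configuration evaluated in the paper.

import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Load a pretrained AlexNet and use it as a fixed feature extractor:
# drop the final 1000-class ImageNet layer so the network outputs a
# 4096-dimensional feature vector per image.
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.classifier = torch.nn.Sequential(*list(alexnet.classifier.children())[:-1])
alexnet.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),                     # AlexNet input size
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                std=[0.229, 0.224, 0.225]),
])

def extract_features(face_crops):
    # face_crops: list of PIL images of detected face regions (hypothetical input)
    batch = torch.stack([preprocess(img) for img in face_crops])
    with torch.no_grad():
        return alexnet(batch).numpy()         # shape: (n_faces, 4096)

# Classify the feature vectors with a conventional classifier (SVM shown here).
# X = extract_features(face_crops); y = labels
# X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
# clf = SVC(kernel="rbf").fit(X_tr, y_tr)
# print("test accuracy:", clf.score(X_te, y_te))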

References

  • [1] P. Ekman and W. Friesen, Facial Action Coding System. Consulting Psychologists Press, 1978.
  • [2] A. Mehrabian, “Communication Without Words,” in Communication Theory, Routledge, 2017, pp. 193–200. doi: 10.4324/9781315080918-15.
  • [3] G. Simcock et al., “Associations between Facial Emotion Recognition and Mental Health in Early Adolescence.,” Int J Environ Res Public Health, vol. 17, no. 1, p. 330, Jan. 2020, doi: 10.3390/ijerph17010330.
  • [4] A. Wabnegger et al., “Facial emotion recognition in Parkinson’s disease: An fMRI investigation,” PLoS One, vol. 10, no. 8, p. e0136110, Aug. 2015, doi: 10.1371/journal.pone.0136110.
  • [5] D. S. Kosson, Y. Suchy, A. R. Mayer, and J. Libby, “Facial affect recognition in criminal psychopaths.,” Emotion, vol. 2, no. 4, pp. 398–411, Dec. 2002, doi: 10.1037/1528-3542.2.4.398.
  • [6] V. V Ramalingam, A. Pandian, A. Jaiswal, and N. Bhatia, “Emotion detection from text,” J Phys Conf Ser, vol. 1000, no. 1, p. 012027, Apr. 2018, doi: 10.1088/1742-6596/1000/1/012027.
  • [7] C. S. Ooi, K. P. Seng, L. M. Ang, and L. W. Chew, “A new approach of audio emotion recognition,” Expert Syst Appl, vol. 41, no. 13, pp. 5858–5869, Oct. 2014, doi: 10.1016/J.ESWA.2014.03.026.
  • [8] Y. P. Lin et al., “EEG-based emotion recognition in music listening,” IEEE Trans Biomed Eng, vol. 57, no. 7, pp. 1798–1806, Jul. 2010, doi: 10.1109/TBME.2010.2048568.
  • [9] M. Suk and B. Prabhakaran, “Real-Time Mobile Facial Expression Recognition System -- A Case Study,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, IEEE, Jun. 2014, pp. 132–137. doi: 10.1109/CVPRW.2014.25.
  • [10] A. Nicolai and A. Choi, “Facial Emotion Recognition Using Fuzzy Systems,” in 2015 IEEE International Conference on Systems, Man, and Cybernetics, IEEE, Oct. 2015, pp. 2216–2221. doi: 10.1109/SMC.2015.387.
  • [11] F. Z. Salmam, A. Madani, and M. Kissi, “Facial Expression Recognition Using Decision Trees,” in 2016 13th International Conference on Computer Graphics, Imaging and Visualization (CGiV), IEEE, Mar. 2016, pp. 125–130. doi: 10.1109/CGiV.2016.33.
  • [12] J. Jia, Y. Xu, S. Zhang, and X. Xue, “The facial expression recognition method of random forest based on improved PCA extracting feature,” in 2016 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), IEEE, Aug. 2016, pp. 1–5. doi: 10.1109/ICSPCC.2016.7753643.
  • [13] S. M. González-Lozoya, J. de la Calleja, L. Pellegrin, H. J. Escalante, M. A. Medina, and A. Benitez-Ruiz, “Recognition of facial expressions based on CNN features,” Multimed Tools Appl, vol. 79, no. 19–20, pp. 13987–14007, May 2020, doi: 10.1007/s11042-020-08681-4.
  • [14] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Jun. 2016, pp. 770–778. doi: 10.1109/CVPR.2016.90.
  • [15] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, Sep. 2014, doi: 10.48550/arxiv.1409.1556.
  • [16] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun ACM, vol. 60, no. 6, pp. 84–90, May 2017, doi: 10.1145/3065386.
  • [17] S. Minaee, M. Minaei, and A. Abdolrashidi, “Deep-Emotion: Facial Expression Recognition Using Attentional Convolutional Network,” Sensors, vol. 21, no. 9, p. 3046, Apr. 2021, doi: 10.3390/s21093046.
  • [18] A. Barman and P. Dutta, “Facial expression recognition using distance and shape signature features,” Pattern Recognit Lett, vol. 145, pp. 254–261, May 2021, doi: 10.1016/j.patrec.2017.06.018.
  • [19] D. K. Jain, Z. Zhang, and K. Huang, “Multi angle optimal pattern-based deep learning for automatic facial expression recognition,” Pattern Recognit Lett, vol. 139, pp. 157–165, Nov. 2020, doi: 10.1016/j.patrec.2017.06.025.
  • [20] M. J. Lyons, “‘Excavating AI’ Re-excavated: Debunking a Fallacious Account of the JAFFE Dataset”, Accessed: Sep. 05, 2022. [Online]. Available: https://excavating.ai/
  • [21] M. Lyons, S. Akamatsu, M. Kamachi, and J. Gyoba, “Coding facial expressions with Gabor wavelets,” Proceedings - 3rd IEEE International Conference on Automatic Face and Gesture Recognition, FG 1998, pp. 200–205, 1998, doi: 10.1109/AFGR.1998.670949.
  • [22] O. Langner, R. Dotsch, G. Bijlstra, D. H. J. Wigboldus, S. T. Hawk, and A. van Knippenberg, “Presentation and validation of the Radboud Faces Database,” Cognition and Emotion, vol. 24, no. 8, pp. 1377–1388, Dec. 2010, doi: 10.1080/02699930903485076.
  • [23] P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, “The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression,” 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, CVPRW 2010, pp. 94–101, 2010, doi: 10.1109/CVPRW.2010.5543262.
  • [24] K. Zhang, Z. Zhang, Z. Li, and Y. Qiao, “Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks,” IEEE Signal Process Lett, vol. 23, no. 10, pp. 1499–1503, 2016, doi: 10.1109/LSP.2016.2603342.
  • [25] Z. Li, F. Liu, W. Yang, S. Peng, and J. Zhou, “A Survey of Convolutional Neural Networks: Analysis, Applications, and Prospects,” IEEE Trans. Neural Netw. Learn. Syst., vol. 33, no. 12, pp. 6999–7019, 2022, doi: 10.1109/TNNLS.2021.3084827.
  • [26] D. Kaya, “The mRMR-CNN based influential support decision system approach to classify EEG signals,” Measurement, vol. 156, p. 107602, 2020, doi: 10.1016/j.measurement.2020.107602.
  • [27] D. Kaya, “Automated gender-Parkinson's disease detection at the same time via a hybrid deep model using human voice,” Concurrency and Computation: Practice and Experience, vol. 34, no. 26, 2022.
  • [28] S. Balakrishnama and A. Ganapathiraju, “Linear Discriminant Analysis – A Brief Tutorial,” Institute for Signal and Information Processing, 1998.
  • [29] X. Haijun, P. Fang, W. Ling, and L. Hongwei, “Ad hoc-based feature selection and support vector machine classifier for intrusion detection,” Proceedings of 2007 IEEE International Conference on Grey Systems and Intelligent Services, GSIS 2007, pp. 1117–1121, 2007, doi: 10.1109/GSIS.2007.4443446.
  • [30] T. M. Mitchell, “Does machine learning really work?,” AI Magazine, vol. 18, no. 3, pp. 11–20, 1997.

Details

Primary Language English
Subjects Engineering
Journal Section Research Article
Authors

Gamze Ballıkaya 0000-0001-7380-1181

Duygu Kaya 0000-0002-6453-631X

Early Pub Date September 23, 2023
Publication Date September 28, 2023
Submission Date December 5, 2022
Acceptance Date May 9, 2023
Published in Issue Year 2023

Cite

IEEE G. Ballıkaya and D. Kaya, “Facial Expression Recognition Techniques and Comparative Analysis Using Classification Algorithms”, Bitlis Eren Üniversitesi Fen Bilimleri Dergisi, vol. 12, no. 3, pp. 596–607, 2023, doi: 10.17798/bitlisfen.1214468.


