Research Article

EduFERA: A Real-Time Student Facial Emotion Recognition Approach

Year 2021, Issue 32, 690-695, 31.12.2021
https://doi.org/10.31590/ejosat.1039184

Abstract

The use of video conferencing tools in education has increased dramatically in recent years. Especially after the COVID-19 outbreak, many classes moved to online platforms due to social distancing precautions. While this trend eliminates physical dependencies in education and provides a continuous educational environment, it also creates problems in the long term. Primarily, many instructors and students have reported a lack of emotional interaction between participants. During in-person education, the speaker receives immediate emotional feedback through the expressions of the audience. In online lectures, however, this valuable feedback cannot be fully utilized, since current tools can only display a limited number of faces on the screen at a time. To alleviate this problem and bring the online education experience one step closer to in-person education, this study presents EduFERA, which provides a real-time emotional assessment of students based on their facial expressions during video conferencing. Several state-of-the-art techniques for face recognition and facial emotion assessment were evaluated empirically. The resulting optimal model has been deployed as a Flask Web API with a user-friendly ReactJS frontend, which can be integrated as an extension into current online lecturing systems.
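The deployment described in the abstract (a Flask Web API serving emotion predictions to a ReactJS frontend) could be sketched roughly as below. This is a minimal illustration, not the authors' actual implementation: the `/predict` endpoint name, the emotion label set, and the stub classifier are all assumptions, and a real deployment would run the trained facial-emotion model on a detected face crop instead of the placeholder below.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Illustrative label set; the label set used in the paper may differ.
EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]

def classify_frame(image_bytes: bytes) -> dict:
    """Stub for the facial-emotion model.

    Returns a uniform probability distribution over the labels; a real
    deployment would detect the face in the frame and run a trained
    classifier on the crop.
    """
    p = 1.0 / len(EMOTIONS)
    return {label: p for label in EMOTIONS}

@app.route("/predict", methods=["POST"])
def predict():
    # The frontend would POST a captured webcam frame (e.g. JPEG bytes).
    frame = request.get_data()
    scores = classify_frame(frame)
    top = max(scores, key=scores.get)
    # Return both the top label and the full score distribution,
    # so the frontend can aggregate across students over time.
    return jsonify({"emotion": top, "scores": scores})
```

The frontend would poll this endpoint with frames sampled from each student's webcam stream and aggregate the returned scores into a classroom-level emotion summary.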

Supporting Institution

TUBITAK 2209-A and Eskişehir Technical University Scientific Research Projects Commission

Project Number

1919B012001659 and 21LTP030

Thanks

This study was supported by TUBITAK 2209-A under the grant no: 1919B012001659 and Eskişehir Technical University Scientific Research Projects Commission under the grant no: 21LTP030.

References

  • Aguilera-Hermida, A. P. (2020). College students' use and acceptance of emergency online learning due to COVID-19. International Journal of Educational Research Open, 1, 100011.
  • Albanie, S., & Vedaldi, A. (2016). Learning grimaces by watching TV. arXiv preprint arXiv:1610.02255.
  • Albanie, S., Nagrani, A., Vedaldi, A., & Zisserman, A. (2018). Emotion recognition in speech using cross-modal transfer in the wild. Proceedings of the 26th ACM International Conference on Multimedia (pp. 292-301).
  • Baber, H. (2020). Determinants of students' perceived learning outcome and satisfaction in online learning during the pandemic of COVID-19. Journal of Education and e-Learning Research, 7(3), 285-292.
  • Barsoum, E., Zhang, C., Ferrer, C. C., & Zhang, Z. (2016). Training deep networks for facial expression recognition with crowd-sourced label distribution. Proceedings of the 18th ACM International Conference on Multimodal Interaction (pp. 279-283).
  • Farrell, C. C., Markham, C., & Deegan, C. (2019). Real-time detection and analysis of facial features to measure student engagement with learning objects. IMVIP 2019: Irish Machine Vision & Image Processing.
  • Goodfellow, I. J., Erhan, D., Carrier, P. L., Courville, A., Mirza, M., Hamner, B., . . . Bengio, Y. (2013). Challenges in representation learning: A report on three machine learning contests. International Conference on Neural Information Processing (pp. 117-124).
  • Picard, R. W. (2000). Affective computing. MIT Press.
  • Russell, J. A. (2003). Core affect and the psychological construction of emotion. Psychological Review, 110(1), 145-172.
  • Schroff, F., Kalenichenko, D., & Philbin, J. (2015). FaceNet: A unified embedding for face recognition and clustering. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 815-823).
  • Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  • Sutskever, I., Martens, J., Dahl, G., & Hinton, G. (2013). On the importance of initialization and momentum in deep learning. International Conference on Machine Learning (pp. 1139-1147). PMLR.
  • Xie, X., Siau, K., & Nah, F. F.-H. (2020). COVID-19 pandemic - online education in the new normal and the next normal. Journal of Information Technology Case and Application Research, 22(3), 175-187.
  • Zeng, H., Shu, X., Wang, Y., Wang, Y., Zhang, L., Pong, T.-C., & Qu, H. (2020). EmotionCues: Emotion-oriented visual summarization of classroom videos. IEEE Transactions on Visualization and Computer Graphics, 27(7), 3168-3181.
  • Zhou, W., Cheng, J., Lei, X., Benes, B., & Adamo, N. (2020). Deep-learning-based emotion recognition from real-time videos. International Conference on Human-Computer Interaction (pp. 321-332). Cham: Springer.

There are 15 citations in total.

Details

Primary Language English
Subjects Engineering
Journal Section Articles
Authors

Kaouther Mouheb 0000-0002-8991-9405

Ali Yürekli 0000-0001-8690-7559

Nedzma Dervisbegovic 0000-0003-3739-5336

Ridwan Ali Mohammed 0000-0002-5029-2887

Burcu Yılmazel 0000-0001-8917-6499

Project Number 1919B012001659 and 21LTP030
Publication Date December 31, 2021
Published in Issue Year 2021

Cite

APA Mouheb, K., Yürekli, A., Dervisbegovic, N., Mohammed, R. A., & Yılmazel, B. (2021). EduFERA: A Real-Time Student Facial Emotion Recognition Approach. Avrupa Bilim Ve Teknoloji Dergisi, (32), 690-695. https://doi.org/10.31590/ejosat.1039184