Research Article

An Emotion Recognition Model Using Facial Expressions in Distance Learning

Year 2022, Volume 11, Issue 3, 770–778, 30.09.2022
https://doi.org/10.17798/bitlisfen.1079499

Abstract

The most important factors in a student's success are the student's readiness for the lesson, motivation, and cognitive and emotional state. In face-to-face education, the educator can watch the student throughout the lesson and observe their emotional state. One of the most important disadvantages of distance learning is that the student's emotional state cannot be monitored instantly. Moreover, because emotion detection must run in real time, its processing time should be short. In this study, a method for emotion recognition is proposed that uses distance and slope information between facial landmarks. In addition, the feature dimension was reduced by statistically identifying only those distance and slope features that are effective for emotion recognition. According to the results obtained, the proposed method and feature set achieved an accuracy of 86.11%. Furthermore, the processing time is low enough for use in distance learning and for real-time emotion detection.
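As a concrete illustration of the geometric features the abstract describes, the following Python sketch computes pairwise distances and slopes between facial landmarks and then filters them down to the most discriminative ones. It is a minimal sketch, not the authors' implementation: the helper names, the landmark format, and the use of an ANOVA F-test filter (scikit-learn's SelectKBest) standing in for the paper's statistical analysis are all assumptions made for illustration.

```python
# Minimal sketch, not the paper's implementation: geometric features
# (pairwise distance and slope) from 2-D facial landmarks, followed by a
# univariate filter standing in for the paper's statistical selection.
import itertools

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif  # assumed stand-in


def geometric_features(landmarks: np.ndarray) -> np.ndarray:
    """Distance and slope for every pair of (x, y) landmark coordinates."""
    feats = []
    for (x1, y1), (x2, y2) in itertools.combinations(landmarks, 2):
        feats.append(np.hypot(x2 - x1, y2 - y1))    # Euclidean distance
        feats.append(np.arctan2(y2 - y1, x2 - x1))  # slope, encoded as an angle
    return np.asarray(feats)


def select_features(X: np.ndarray, y: np.ndarray, k: int = 50):
    """Keep the k features most associated with the emotion labels."""
    selector = SelectKBest(f_classif, k=k).fit(X, y)
    return selector.transform(X), selector
```

Encoding the slope as an angle via arctan2 avoids division by zero for vertically aligned landmark pairs. With, say, 68 landmarks (a common convention; the study itself uses the Luxand FaceSDK landmark detector) there are 2278 pairs and thus 4556 raw features, which is why the reduction step described in the abstract matters for real-time use.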

References

  • N. Saberi and G. A. Montazer, ‘A new approach for learners’ modeling in e-learning environment using LMS logs analysis’, in 6th National and 3rd International Conference of e-Learning and e-Teaching, 2012, pp. 25–33.
  • M. Imani and G. A. Montazer, ‘A survey of emotion recognition methods with emphasis on E-Learning environments’, Journal of Network and Computer Applications, vol. 147, p. 102423, 2019.
  • C. Villiger et al., ‘Effectiveness of an extracurricular program for struggling readers: A comparative study with parent tutors and volunteer tutors’, Learning and Instruction, vol. 60, pp. 54–65, 2019.
  • A. A. Kardan and Y. Einavypour, ‘Multi-Criteria Learners Classification for Selecting an Appropriate Teaching Method’, in Proceedings of the World Congress on Engineering and Computer Science, 2008, pp. 22–24.
  • K. P. Truong, D. A. Van Leeuwen, and F. M. De Jong, ‘Speech-based recognition of self-reported and observed emotion in a dimensional space’, Speech Communication, vol. 54, no. 9, pp. 1049–1063, 2012.
  • C. Busso and S. S. Narayanan, ‘The expression and perception of emotions: Comparing assessments of self versus others’, presented at the Ninth Annual Conference of the International Speech Communication Association, 2008.
  • D. Morrison and L. C. De Silva, ‘Voting ensembles for spoken affect classification’, Journal of Network and Computer Applications, vol. 30, no. 4, pp. 1356–1365, 2007.
  • N. Sadoughi and C. Busso, ‘Speech-driven animation with meaningful behaviors’, Speech Communication, vol. 110, pp. 90–100, 2019.
  • E. Mendoza and G. Carballo, ‘Vocal tremor and psychological stress’, Journal of Voice, vol. 13, no. 1, pp. 105–112, 1999.
  • M. Pantic and L. J. Rothkrantz, ‘Toward an affect-sensitive multimodal human-computer interaction’, Proceedings of the IEEE, vol. 91, no. 9, pp. 1370–1390, 2003.
  • H. Cao, R. Verma, and A. Nenkova, ‘Speaker-sensitive emotion recognition via ranking: Studies on acted and spontaneous speech’, Computer Speech & Language, vol. 29, no. 1, pp. 186–202, 2015.
  • F. Chenchah and Z. Lachiri, ‘A bio-inspired emotion recognition system under real-life conditions’, Applied Acoustics, vol. 115, pp. 6–14, 2017.
  • W. Dai, D. Han, Y. Dai, and D. Xu, ‘Emotion recognition and affective computing on vocal social media’, Information & Management, vol. 52, no. 7, pp. 777–788, 2015.
  • C.-C. Lee, E. Mower, C. Busso, S. Lee, and S. Narayanan, ‘Emotion recognition using a hierarchical binary decision tree approach’, Speech Communication, vol. 53, no. 9–10, pp. 1162–1171, 2011.
  • K. Mannepalli, P. N. Sastry, and M. Suman, ‘A novel adaptive fractional deep belief networks for speaker emotion recognition’, Alexandria Engineering Journal, vol. 56, no. 4, pp. 485–497, 2017.
  • S. Mariooryad and C. Busso, ‘Compensating for speaker or lexical variabilities in speech for emotion recognition’, Speech Communication, vol. 57, pp. 1–12, 2014.
  • V. V. Nanavare and S. K. Jagtap, ‘Recognition of human emotions from speech processing’, Procedia Computer Science, vol. 49, pp. 24–32, 2015.
  • C. S. Ooi, K. P. Seng, L.-M. Ang, and L. W. Chew, ‘A new approach of audio emotion recognition’, Expert Systems with Applications, vol. 41, no. 13, pp. 5858–5869, 2014.
  • T. Özseven, ‘A novel feature selection method for speech emotion recognition’, Applied Acoustics, vol. 146, pp. 320–326, 2019.
  • T. M. Rajisha, A. P. Sunija, and K. S. Riyas, ‘Performance analysis of Malayalam language speech emotion recognition system using ANN/SVM’, Procedia Technology, vol. 24, pp. 1097–1104, 2016.
  • P. Vasuki and C. Aravindan, ‘Improving emotion recognition from speech using sensor fusion techniques’, in TENCON 2012 IEEE Region 10 Conference, 2012, pp. 1–6.
  • P. Boersma, ‘Praat, a system for doing phonetics by computer’, Glot. Int., vol. 5, no. 9, pp. 341–345, 2001.
  • F. Eyben, M. Wöllmer, and B. Schuller, ‘openSMILE: The Munich versatile and fast open-source audio feature extractor’, in Proceedings of the 18th ACM International Conference on Multimedia, 2010, pp. 1459–1462.
  • F. Eyben, M. Wöllmer, and B. Schuller, ‘openEAR—Introducing the Munich open-source emotion and affect recognition toolkit’, in 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, 2009, pp. 1–6.
  • T. Özseven and M. Düğenci, ‘SPeech ACoustic (SPAC): A novel tool for speech feature extraction and classification’, Applied Acoustics, vol. 136, pp. 1–8, 2018.
  • A. Jaimes and N. Sebe, ‘Multimodal human–computer interaction: A survey’, Computer Vision and Image Understanding, vol. 108, no. 1–2, pp. 116–134, 2007.
  • M. Iqbal Quraishi, J. Pal Choudhury, M. De, and P. Chakraborty, ‘A framework for the recognition of human emotion using soft computing models’, International Journal of Computer Applications, vol. 40, no. 17, pp. 50–55, 2012.
  • K. Shanmugarajah, S. Gaind, A. Clarke, and P. E. Butler, ‘The role of disgust emotions in the observer response to facial disfigurement’, Body Image, vol. 9, no. 4, pp. 455–461, 2012.
  • C. Darwin, ‘The expression of the emotions in man and animals (1872)’, The Portable Darwin, pp. 364–393, 1993.
  • P. Ekman and W. V. Friesen, ‘Constants across cultures in the face and emotion.’, Journal of personality and social psychology, vol. 17, no. 2, p. 124, 1971.
  • D. Ghimire and J. Lee, ‘Geometric feature-based facial expression recognition in image sequences using multi-class adaboost and support vector machines’, Sensors, vol. 13, no. 6, pp. 7714–7734, 2013.
  • F. Nasoz, C. L. Lisetti, K. Alvarez, and N. Finkelstein, ‘Emotion recognition from physiological signals for user modeling of affect’, presented at the 3rd Workshop on Affective and Attitude User Modelling, Pittsburgh, PA, USA, 2003.
  • S. Koelstra et al., ‘DEAP: A database for emotion analysis using physiological signals’, IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 18–31, 2011.
  • L. Li, L. Cheng, and K. Qian, ‘An e-learning system model based on affective computing’, in 2008 International Conference on Cyberworlds, 2008, pp. 45–50.
  • J. G. Daugman, ‘Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression’, IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 36, no. 7, pp. 1169–1179, 1988.
  • M. Turk and A. Pentland, ‘Eigenfaces for recognition’, Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.
  • K. Delac, M. Grgic, and S. Grgic, ‘Independent comparative study of PCA, ICA, and LDA on the FERET data set’, International Journal of Imaging Systems and Technology, vol. 15, no. 5, pp. 252–260, 2005.
  • S. D’Mello, R. W. Picard, and A. Graesser, ‘Toward an affect-sensitive AutoTutor’, IEEE Intelligent Systems, vol. 22, no. 4, pp. 53–61, 2007.
  • A. K. Oryina and A. O. Adedolapo, ‘Emotion recognition for user centred e-learning’, in 2016 IEEE 40th Annual Computer Software and Applications Conference (COMPSAC), 2016, vol. 2, pp. 509–514.
  • Y. Guo, D. Tao, J. Yu, H. Xiong, Y. Li, and D. Tao, ‘Deep neural networks with relativity learning for facial expression recognition’, in 2016 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), 2016, pp. 1–6.
  • K. Bahreini, R. Nadolski, and W. Westera, ‘Data fusion for real-time multimodal emotion recognition through webcams and microphones in e-learning’, International Journal of Human–Computer Interaction, vol. 32, no. 5, pp. 415–430, 2016.
  • U. Ayvaz, H. Gürüler, and M. O. Devrim, ‘Use of facial emotion recognition in e-learning systems’, Information Technologies and Learning Tools, vol. 60, no. 4, pp. 95–104, 2017.
  • F. L. Gambo, G. M. Wajiga, and E. J. Garba, ‘A Conceptual Framework for Detection of Learning Style from Facial Expressions using Convolutional Neural Network’, in 2019 2nd International Conference of the IEEE Nigeria Computer Chapter (NigeriaComputConf), 2019, pp. 1–5.
  • M. Megahed and A. Mohammed, ‘Modeling adaptive E-Learning environment using facial expressions and fuzzy logic’, Expert Systems with Applications, vol. 157, p. 113460, Nov. 2020, doi: 10.1016/j.eswa.2020.113460.
  • A. Pise, H. Vadapalli, and I. Sanders, ‘Facial emotion recognition using temporal relational network: an application to E-learning’, Multimedia Tools and Applications, pp. 1–21, 2020.
  • S.-Y. Lin, C.-M. Wu, S.-L. Chen, T.-L. Lin, and Y.-W. Tseng, ‘Continuous Facial Emotion Recognition Method Based on Deep Learning of Academic Emotions’, Sensors and Materials, vol. 32, no. 10, pp. 3243–3259, 2020.
  • I. H. Witten, E. Frank, L. E. Trigg, M. A. Hall, G. Holmes, and S. J. Cunningham, ‘Weka: Practical machine learning tools and techniques with Java implementations’, Working Paper 99/11, Department of Computer Science, University of Waikato, Hamilton, New Zealand, 1999.
  • D. Yang, A. Alsadoon, P. C. Prasad, A. K. Singh, and A. Elchouemi, ‘An emotion recognition model based on facial recognition in virtual learning environment’, Procedia Computer Science, vol. 125, pp. 2–10, 2018.
  • P. Ekman and W. V. Friesen, Facial action coding system: Investigator’s guide. Consulting Psychologists Press, 1978.
  • M. J. Den Uyl and H. Van Kuilenburg, ‘The FaceReader: Online facial expression recognition’, in Proceedings of Measuring Behavior, 2005, vol. 30, no. 2, pp. 589–590.
  • M. Soltani, H. Zarzour, M. C. Babahenini, M. Hammad, A.-S. Mohammad, and Y. Jararweh, ‘An emotional feedback based on facial action coding system for MOOCs with computer-based assessment’, in 2019 Sixth International Conference on Social Networks Analysis, Management and Security (SNAMS), 2019, pp. 286–290.
  • H. Hesham, M. Nabawy, O. Safwat, Y. Khalifa, H. Metawie, and A. Mohammed, ‘Detecting Education level using Facial Expressions in E-learning Systems’, in 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE), 2020, pp. 1–6.
  • N. C. Ebner, M. Riediger, and U. Lindenberger, ‘FACES—A database of facial expressions in young, middle-aged, and older women and men: Development and validation’, Behavior research methods, vol. 42, no. 1, pp. 351–362, 2010.
  • ‘Luxand - Detect and Recognize Faces and Facial Features with Luxand FaceSDK’. https://www.luxand.com/facesdk/ (accessed Jun. 16, 2021).


Details

Primary Language English
Subjects Engineering
Journal Section Research Article
Authors

Beyza Esin Özseven (ORCID: 0000-0003-4888-8259)

Naim Cagman (ORCID: 0000-0003-3037-1868)

Publication Date September 30, 2022
Submission Date February 26, 2022
Acceptance Date August 23, 2022
Published in Issue Year 2022

Cite

IEEE B. Esin Özseven and N. Cagman, “An Emotion Recognition Model Using Facial Expressions in Distance Learning”, Bitlis Eren Üniversitesi Fen Bilimleri Dergisi, vol. 11, no. 3, pp. 770–778, 2022, doi: 10.17798/bitlisfen.1079499.



Bitlis Eren Üniversitesi
Fen Bilimleri Dergisi Editorial Office

Bitlis Eren Üniversitesi Lisansüstü Eğitim Enstitüsü
Beş Minare Mah. Ahmet Eren Bulvarı, Merkez Kampüs, 13000 BİTLİS
E-mail: fbe@beu.edu.tr