Research Article

Automatic Classification of Basic Emotions Using Deep Learning Techniques

Year 2025, Volume: 5, Issue: 2, pp. 75–88, 23.12.2025

Abstract

This study aims to develop an advanced artificial intelligence system capable of automatically classifying seven basic emotions (anger, disgust, fear, happiness, neutrality, sadness, and surprise) from facial expressions. Utilizing Long Short-Term Memory (LSTM) neural networks, the system is designed to capture temporal variations in emotional expressions with high accuracy, robustness, and scalability. During model development, dataset diversity was ensured, data augmentation techniques such as rotation, cropping, and brightness adjustment were applied, and transfer learning was incorporated to improve learning efficiency. The study thoroughly examines the impact of data organization on model performance and analyzes how different data representation methods affect accuracy. Experimental results demonstrate that the LSTM-based architecture effectively captures the temporal dynamics of facial expressions, outperforming traditional methods in emotion recognition tasks. The system’s real-time processing capability makes it suitable for applications in healthcare, education, and security. Ethical considerations, including data privacy, informed consent, and bias mitigation, were prioritized to ensure fair and responsible AI deployment. The findings highlight the significant potential of emotion recognition technology for human-computer interaction and underscore the need for future research on multimodal emotion recognition, the integration of diverse data sources, and the establishment of ethical guidelines to prevent misuse.
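As a concrete illustration of the pipeline the abstract describes, the minimal PyTorch sketch below combines a pretrained CNN feature extractor (transfer learning), an LSTM over per-frame features to capture temporal dynamics, and a seven-way classification head, together with the augmentations named above (rotation, cropping, brightness). The ResNet18 backbone, frozen weights, layer sizes, clip length, and augmentation parameters are illustrative assumptions, not the configuration reported in the paper.

import torch
import torch.nn as nn
from torchvision import models, transforms

# Augmentations named in the abstract: rotation, cropping, and brightness adjustment.
# Parameter values are illustrative; applied per frame before stacking into a clip.
train_augment = transforms.Compose([
    transforms.RandomRotation(degrees=10),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ColorJitter(brightness=0.2),
    transforms.ToTensor(),
])

class EmotionLSTM(nn.Module):
    # A pretrained CNN extracts per-frame features; the LSTM models their temporal dynamics.
    EMOTIONS = ("anger", "disgust", "fear", "happiness", "neutrality", "sadness", "surprise")

    def __init__(self, hidden_size=128):
        super().__init__()
        # Transfer learning: ImageNet weights (torchvision >= 0.13; downloads on first use).
        backbone = models.resnet18(weights="IMAGENET1K_V1")
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # -> (B, 512, 1, 1)
        for p in self.features.parameters():
            p.requires_grad = False  # freeze the pretrained backbone
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, len(self.EMOTIONS))

    def forward(self, frames):
        # frames: (batch, time, 3, 224, 224) -- a short clip of face crops
        b, t = frames.shape[:2]
        feats = self.features(frames.flatten(0, 1)).flatten(1)  # (b*t, 512)
        _, (h_n, _) = self.lstm(feats.view(b, t, -1))  # final hidden state summarizes the clip
        return self.head(h_n[-1])  # (batch, 7) emotion logits

logits = EmotionLSTM()(torch.randn(2, 16, 3, 224, 224))  # smoke test on a dummy 16-frame clip
print(logits.shape)  # torch.Size([2, 7])

Training such a model would minimize cross-entropy over these logits; freezing the backbone and training only the LSTM and head is one common transfer-learning regime, with full fine-tuning as an alternative.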

References

  • O. Arriaga, P. G. Plöger, and M. Valdenegro-Toro, “Real-time convolutional neural networks for emotion and gender classification,” arXiv preprint, arXiv:1710.07557, 2017.
  • M. S. Bartlett, J. R. Movellan, G. Littlewort, B. Braathen, M. G. Frank, and T. J. Sejnowski, “Towards automatic recognition of spontaneous facial actions,” in What the Face Reveals, 2nd ed., P. Ekman, Ed. Oxford, UK: Oxford Univ. Press, 2002.
  • X. Cao, D. Wipf, F. Wen, and G. Duan, “A practical transfer learning algorithm for face verification,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 3208–3215, 2014, doi: 10.1109/CVPR.2014.410.
  • G. Chechik, V. Sharma, U. Shalit, and S. Bengio, “Large scale online learning of image similarity through ranking,” J. Mach. Learn. Res., vol. 11, pp. 1109–1135, 2010.
  • L. Chen, B. C. Ko, and D. Tao, “Anomaly detection by correspondence analysis,” IEEE Trans. Image Process., vol. 19, no. 7, pp. 2026–2039, 2010, doi: 10.1109/TIP.201.
  • S. Chopra, R. Hadsell, and Y. LeCun, “Learning a similarity metric discriminatively, with application to face verification,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2005.
  • J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon, “Information-theoretic metric learning,” in Proc. 24th Int. Conf. Mach. Learn. (ICML), ACM, 2007.
  • A. Dhall, R. Goecke, and J. Joshi, “Emotion recognition in the wild challenge: Baseline, data, and protocol,” in Proc. Eur. Conf. Comput. Vis. (ECCV) Workshop, 2014.
  • H. K. Ekenel, R. Stiefelhagen, and M. A. R. Ahad, “Face recognition across poses: A review,” in Handbook of Face Recognition, pp. 219–244, Springer, 2014.
  • P. Ekman, “Facial expressions of emotion: An old controversy and new findings,” Philos. Trans. Roy. Soc. B Biol. Sci., vol. 335, no. 1273, pp. 63–69, 1992.
  • P. Ekman, “Facial expressions of emotion: An old controversy and new findings,” Philos. Trans. Roy. Soc. B Biol. Sci., vol. 372, no. 1727, p. 20160352, 2017. [Online]. Available: http://rstb.royalsocietypublishing.org/
  • R. C. Gonzalez, R. E. Woods, and S. L. Eddins, Digital Image Processing Using MATLAB, 3rd ed. Gatesmark Publ., 2018.
  • I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, and Y. Bengio, “An empirical investigation of catastrophic forgetting in gradient-based neural networks,” arXiv preprint, arXiv:1312.6211v3, 2015.
  • R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker, “Multi-PIE,” Image Vis. Comput., vol. 28, no. 5, pp. 807–813, 2005, doi: 10.1016/j.imavis.2004.06.025.
  • R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. Cambridge Univ. Press, 2003.
  • S. Haziqa, “Diffusion models in AI – Everything you need to know,” AI Research Blog, Mar. 31, 2023. [Online]. Available: https://example.com
  • K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification,” in Proc. Int. Conf. Comput. Vis. (ICCV), 2015, doi: 10.1109/ICCV.2015.123.
  • S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Comput., vol. 9, no. 8, pp. 1735–1780, 1997, doi: 10.1162/neco.1997.9.8.1735.
  • E. Hoffer and N. Ailon, “Deep metric learning using triplet network,” in Proc. Int. Conf. Mach. Learn. (ICML), 2015.
  • H.-W. Ng, V. D. Nguyen, V. Vonikakis, and S. Winkler, “Deep learning for emotion recognition on small datasets using transfer learning,” in Proc. ACM Int. Conf. Multimodal Interaction (ICMI’15), Seattle, WA, USA, 2015, doi: 10.1145/2818346.2830593.
  • J. Hu, D. Zhang, and J. Ye, “Discriminative locality alignment: A family of new algorithms for unsupervised feature selection,” in Proc. Int. Conf. Mach. Learn. (ICML), 2010.
  • G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017.
  • A. K. Jain, Fundamentals of Digital Image Processing. Prentice-Hall, 1989.
  • P. Jain, B. Kulis, and K. Grauman, “Fast exact search in Hamming space with multi-index hashing,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 6, pp. 1070–1081, 2010.
  • P. Jain, B. Raj, and B. Kulis, “Online metric learning and fast similarity search,” in Proc. 27th Int. Conf. Mach. Learn. (ICML), 2010.
  • M. M. Kasar, D. Bhattacharyya, and T. H. Kim, “Face recognition using neural network: A review,” Int. J. Security and Its Applications, vol. 10, no. 3, pp. 81–100, 2016.
  • B. Kulis and K. Grauman, “Kernelized locality-sensitive hashing for scalable image search,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2009.
  • B. Kulis, P. Jain, K. Grauman, and T. Darrell, “Free bits and pieces in metric learning,” in Proc. 29th Int. Conf. Mach. Learn. (ICML), 2012.
  • S. Li and W. Deng, “Deep facial expression recognition: A survey,” IEEE Trans. Affective Comput., vol. 13, no. 3, pp. 1195–1215, 2022.
  • T.-H. S. Li, P.-H. Kuo, T.-N. Tsai, and P.-C. Luan, “CNN and LSTM based facial expression analysis model for a humanoid robot,” IEEE Access, vol. 7, pp. 93998–94011, 2019, doi: 10.1109/ACCESS.2019.2928364.
  • Y. Ling et al., “Diffusion models: A comprehensive survey of methods and applications,” arXiv preprint, arXiv:2209.00796, 2023.
  • J. Liu, C. Fang, and C. Wu, “A fusion face recognition approach based on 7-layer deep learning neural network,” J. Electr. Comput. Eng., Article ID 8637260, 2016, doi: 10.1155/2016/8637260.
  • J. Lu, J. Hu, and Y. P. Tan, “Discriminative multi-metric learning for face verification in the wild,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2015.
  • P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, and Z. Ambadar, “The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. Workshops (CVPRW), 2010, doi: 10.1109/CVPRW.2010.5543262.
  • R. N. V. J. Mohan, “Angle oriented based image analysis using L-axial semi-circular model,” Asian J. Math. Comput. Res., vol. 10, no. 4, pp. 320–331, 2016.
  • R. N. V. J. Mohan, “Cluster optimization using fuzzy rough images,” Int. J. Multimedia Image Process., vol. 10, no. 1, pp. 505–510, 2020, doi: 10.20533/ijmip.2042.4647.2020.0062.
  • A. Mollahosseini, D. Chan, and M. H. Mahoor, “Going deeper in facial expression recognition using deep neural networks,” in Proc. IEEE Winter Conf. Appl. Comput. Vis. (WACV), pp. 1–8, 2016, doi: 10.1109/WACV.2016.7477450.
  • A. Mollahosseini, B. Hasani, and M. H. Mahoor, “AffectNet: A database for facial expression, valence, and arousal computing in the wild,” IEEE Trans. Affective Comput., vol. 10, no. 1, pp. 18–31, 2019, doi: 10.1109/TAFFC.2017.2740923.
  • B. Ni, Y. Song, S. Yan, and I. S. Dhillon, “A scalable approach to personalized search in image collections,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2011.
  • Ö. Özer, “Enhancing law enforcement through pose-based facial recognition and image normalization techniques,” in Building Embodied AI Systems: The Agents, the Architecture Principles, Challenges, and Application Domains, P. Dutta, Ed. Springer, 2025, ch. 12, doi: 10.1007/978-3-031-68256-8_12.
  • O. M. Parkhi, A. Vedaldi, and A. Zisserman, “Deep face recognition,” in Proc. Brit. Mach. Vis. Conf. (BMVC), 2015.
  • R. Plutchik, “A general psychoevolutionary theory of emotion,” in Emotion: Theory, Research, and Experience, Volume 1: Theories of Emotion, R. Plutchik and H. Kellerman, Eds. Academic Press, 1980.
  • W. K. Pratt, Digital Image Processing: PIKS Scientific Inside, 4th ed. Wiley-Interscience, 2007.
  • S. J. Prince, Computer Vision: Models, Learning, and Inference. Cambridge Univ. Press, 2012.
  • J. A. Russell, “A circumplex model of affect,” J. Personality Social Psychol., vol. 39, no. 6, pp. 1161–1178, 1980.
  • L. Saranya and K. Umamaheswari, “Multiple face analysis and liveness detection using CNN,” EasyChair Preprint, no. 6547, 2021.
  • School College Listings, “50 machine learning lessons,” 2023. [Online]. Available: https://www.schoolandcollegelistings.com/TW/Unknown/364466140259355/Taipei.AI.
  • C. Shan, S. Gong, and P. W. McOwan, “Facial expression recognition based on local binary patterns: A comprehensive study,” Image Vis. Comput., vol. 27, no. 6, pp. 803–816, 2009, doi: 10.1016/j.imavis.2008.08.005.
  • M. Sonka, V. Hlavac, and R. Boyle, Image Processing, Analysis and Machine Vision, 4th ed. Cengage Learning, 2014.
  • L. Stark and J. Hoey, “The ethics of emotion in artificial intelligence systems,” in Proc. 2021 ACM Conf. Fairness, Accountability, and Transparency (FAccT ’21), 2021, doi: 10.1145/3442188.3445939.
  • R. Szeliski, Computer Vision: Algorithms and Applications. Springer, 2010.
  • Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, “DeepFace: Closing the gap to human-level performance in face verification,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2014.
  • M. Tan and Q. V. Le, “EfficientNet: Rethinking model scaling for convolutional neural networks,” in Proc. 36th Int. Conf. Mach. Learn. (ICML), Long Beach, CA, USA, PMLR, vol. 97, 2019.
  • D. Tang, B. Qin, T. Liu, and Z. Li, “Learning sentence representation for emotion classification on microblogs,” in Proc. Natural Lang. Process. Chinese Comput. Conf. (NLPCC), pp. 212–223, Springer-Verlag, 2013.
  • B. Thomas, A. Bhatt, and S. N. Singh, “Recognition of facial emotions using CNN architecture and FER2013,” 2024, doi: 10.1109/ICEECT61758.2024.10739309.
  • E. Trucco and A. Verri, Introductory Techniques for 3-D Computer Vision. Prentice-Hall, 1998.
  • M. Turk and A. Pentland, “Eigenfaces for recognition,” J. Cogn. Neurosci., vol. 3, no. 1, pp. 71–86, 1991, doi: 10.1162/jocn.1991.3.1.71.
  • A. Vidhya, “Tuning the hyperparameters and layers of neural network deep learning,” Analytics Vidhya, 2023. [Online]. Available: https://www.analyticsvidhya.com.
  • O. Vinyals, C. Blundell, T. Lillicrap, K. Kavukcuoglu, and D. Wierstra, “Matching networks for one-shot learning,” in Advances Neural Inf. Process. Syst. (NIPS), 2016.
  • J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong, “Locality-constrained linear coding for image classification,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2010.
  • K. Q. Weinberger and L. K. Saul, “Distance metric learning for large margin nearest neighbor classification,” J. Mach. Learn. Res., vol. 10, pp. 207–244, 2009.
  • E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell, “Distance metric learning with application to clustering with side-information,” in Advances Neural Inf. Process. Syst. (NIPS), 2003.
  • L. Yang, L. Zhang, J. Dong, T. Mei, and D. Zhang, “Neural aggregation network for video face recognition,” in Proc. Eur. Conf. Comput. Vis. (ECCV), Springer, 2018.
  • M. Yang, J. Yuan, and Y. Wu, “Local similarity preservation for person-independent facial expression recognition,” in Proc. Eur. Conf. Comput. Vis. (ECCV), Springer, 2012.
  • Z. Zhang, P. Luo, C. C. Loy, and X. Tang, “Facial landmark detection by deep multi-task learning,” in Proc. Eur. Conf. Comput. Vis. (ECCV), Springer, 2014.
  • L. Zhao, D. Tao, X. Li, X. Wu, and X. Shao, “Face recognition under varying lighting conditions using self-quotient image,” IEEE Trans. Image Process., vol. 22, no. 6, pp. 2483–2498, 2013.

There are 66 citations in total.

Details

Primary Language: English
Subjects: Deep Learning, Data Management and Data Science (Other), Artificial Intelligence (Other)
Journal Section: Research Article
Authors

Özen Özer (ORCID: 0000-0001-6476-0664)

Nadir Subaşı (ORCID: 0000-0002-5657-9002)

Submission Date: July 23, 2025
Acceptance Date: November 18, 2025
Publication Date: December 23, 2025
Published in Issue: Year 2025, Volume 5, Issue 2

Cite

IEEE Ö. Özer and N. Subaşı, “Automatic Classification of Basic Emotions Using Deep Learning Techniques”, Journal of Artificial Intelligence and Data Science, vol. 5, no. 2, pp. 75–88, 2025.
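
For reference managers, the IEEE citation above corresponds to a BibTeX entry along the following lines (the citation key and the accent escaping are illustrative; plain UTF-8 author names also work with biblatex/biber):

@article{ozer2025emotions,
  author  = {{\"O}zer, {\"O}zen and Suba{\c{s}}{\i}, Nadir},
  title   = {Automatic Classification of Basic Emotions Using Deep Learning Techniques},
  journal = {Journal of Artificial Intelligence and Data Science},
  volume  = {5},
  number  = {2},
  pages   = {75--88},
  year    = {2025}
}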

All articles published by JAIDA are licensed under a Creative Commons Attribution 4.0 International License.
