Research Article

Bilgisayar Kullanıcılarına Yönelik Duygusal İfade Tespiti (Emotion Detection for Computer Users)

Year 2017, Volume: 10 Issue: 2, 231 - 239, 28.04.2017
https://doi.org/10.17671/gazibtd.309307

Abstract

As computer use becomes ever more widespread, innovative work on human-computer interaction has accelerated. One such innovation is the machine learning of computer users' emotional states. Detecting the emotional states of individuals who work with computers in office environments can yield meaningful insight, particularly into the relationship between morale and job performance. Motivated by this idea, a prototype system was developed that performs real-time emotion detection from the computer user's facial expressions. The system carries out, in order: face detection, facial landmark detection, construction of a training data set of features derived from the facial landmarks, and real-time emotional state detection with a rule-based classifier. To assess the discriminative power of the features, which constitute the novelty of this study, the training data set was also classified offline with support vector machines. As a result, the system's accuracy was measured as 96.1% under 10-fold cross-validation.
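The pipeline described in the abstract (landmarks → geometric features → rule-based decision) can be sketched roughly as follows. This is a minimal illustration, not the authors' actual system: the named landmark points, the feature set, and every threshold below are assumptions chosen for the example, and a real implementation would obtain the landmarks from a detector such as Dlib's shape predictor [48, 49] on frames from a webcam.

```python
# Hedged sketch of landmark-based emotion classification. Landmark
# names, features, and thresholds are illustrative assumptions only.
import math

def dist(p, q):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def extract_features(lm):
    """Geometric features from named landmark points, normalized by
    the inter-ocular distance so they are scale-invariant."""
    iod = dist(lm["left_eye"], lm["right_eye"])  # normalization base
    return {
        "mouth_width": dist(lm["mouth_left"], lm["mouth_right"]) / iod,
        "mouth_open":  dist(lm["lip_top"], lm["lip_bottom"]) / iod,
        "brow_raise":  dist(lm["left_brow"], lm["left_eye"]) / iod,
    }

def classify(f):
    """Toy rule-based decision; all cutoffs are made-up examples."""
    if f["mouth_open"] > 0.25:
        return "surprised"
    if f["mouth_width"] > 0.95:
        return "happy"
    if f["brow_raise"] < 0.30:
        return "angry"
    return "neutral"

# One synthetic frame: pixel coordinates of a smiling face.
landmarks = {
    "left_eye": (120, 100), "right_eye": (180, 100),
    "mouth_left": (115, 170), "mouth_right": (185, 170),
    "lip_top": (150, 165), "lip_bottom": (150, 172),
    "left_brow": (120, 75),
}
print(classify(extract_features(landmarks)))  # → happy
```

The paper's offline evaluation would replace `classify` with an SVM trained on such feature vectors and score it with 10-fold cross-validation, e.g. via scikit-learn's `cross_val_score`.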

References

  • [1] I. A. Essa, A. P. Pentland, “Coding, analysis, interpretation and recognition of facial expressions”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 757–763, 1997.
  • [2] F. B. Mandal, S. Srivastava, “Real Time Facial Expression Recognition using a Novel Method”, The International Journal of Multimedia & Its Applications, 4(2), 2012.
  • [3] V. Conti et al., “Usability Analysis of a Novel Biometric Authentication Approach for Android-Based Mobile Devices”, Journal of Telecommunications and Information Technology, 4, 34-43, 2014.
  • [4] A.A. Mange et al., “Gaze and blinking base human machine interaction system”, IEEE International Conference on Computational Intelligence and Computing Research (ICCIC), 2015.
  • [5] C. O’Reilly et al., “Recent developments in the study of rapid human movements with the kinematic theory: Applications to handwriting and signature synthesis”, Pattern Recognition Letters, 35(1), 224-235, 2014.
  • [6] A. Salem et al., “Analysis of strong password using keystroke dynamics authentication in touch screen devices”, Cybersecurity and Cyberforensics Conference CCC 2016, 15-21, 2016.
  • [7] K. Sehairi et al., “Real-time implementation of human action recognition system based on motion analysis”, Studies in Computational Intelligence, 143-164, 2016.
  • [8] G. Castellano et al., “Recognising Human Emotions from Body Movement and Gesture Dynamics”, 2nd international conference on Affective Computing and Intelligent Interaction, 71-82, 2007.
  • [9] S. Borra and N.K. Jayant, “Attendance Management System Using Hybrid Face Recognition Techniques”, Conference on Advances in Signal Processing (CASP2016), 2016.
  • [10] M. Moon and J.P. Phillips, “Computational and performance aspects of PCA-based face recognition algorithm”, Perception, 30(3), 303-321, 2001.
  • [11] P. Ekman, D. Keltner, “Universal Facial Expression of Emotion: An Old Controversy and New Findings”, J Nonverbal Behav, 21(1), 3-21, 1997.
  • [12] P. Ekman, “Basic Emotions”, Handbook of Cognition and Emotion, Eds.: T. Dalgleish and M. J. Power, Wiley, New York, USA, 45-60, 1999.
  • [13] N. L. Etcoff, J. J. Magee, “Categorical perception of facial expressions.”, Cognition, 44, 227-240, 1992.
  • [14] D. Roberson, L. Damjanovic, M. Kikutani, “Show and tell: The role of language in categorizing facial expressions of emotion.”, Emotion Review, 2, 255-260, 2010.
  • [15] J. A. Russell, “A circumplex model of affect.” Journal of Personality and Social Psychology, 39, 1161-1178, 1980.
  • [16] J. A. Russell, “Core affect and the psychological construction of emotion.”, Psychological Review, 110(1), 145-172, 2003.
  • [17] M. Katsikitis, “The classification of facial expressions of emotion: A multidimensional-scaling approach.”, Perception, 26, 613-626, 1997.
  • [18] T. Takehara, N. Suzuki, “Differential processes of emotion space over time.”, North American Journal of Psychology, 3, 217-228, 2001.
  • [19] T. Fujimura, YT. Matsuda, “Categorical and dimensional perceptions in decoding emotional facial expressions”, Cognition and Emotion, 26(4), 587-601, 2012.
  • [20] K. Mikolajczyk, C. Schmid, A. Zisserman, “Human detection based on a probabilistic assembly of robust part detectors”, Proc. of ECCV, 2004.
  • [21] P. A. Viola, M. J. Jones, “Robust Real-Time Face Detection”, Int J Comput Vision, 57(2), 137–154, 2004.
  • [22] X. Shen, Z. Lin, J. Brandt, Y. Wu, “Detecting and aligning faces by image retrieval”, 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3460–3467, 2013.
  • [23] Z. Zhang et al., “A survey on face detection in the wild: Past, present and future”, Computer Vision and Image Understanding, 138, 1-24, 2015.
  • [24] D. Chen, S. Ren, Y. Wei, X. Cao, J. Sun, “Joint cascade face detection and alignment.”, European Conference on Computer Vision (ECCV) 2014, 2014.
  • [25] P. Ekman, W. V. Friesen, “Measuring facial movement”, Environmental psychology and nonverbal behavior, 1(1), 56-75, 1976.
  • [26] G. Donato, M. S. Bartlett, J. C. Hager, P. Ekman, T. J. Sejnowski, “Classifying facial actions”, IEEE Trans. Pattern Anal. Mach. Intell, 21(10), 974-989, 1999.
  • [27] I. A. Essa, A. P. Pentland, “Facial Expression Recognition using a Dynamic Model and Motion Energy”, International Conference on Computer Vision ’95, Cambridge, MA, 360-367, 1995.
  • [28] F. Tsalakanidou, S. Malassiotis, “Real-time 2D+ 3D facial action and expression recognition”, Pattern Recognition, 43(5), 1763-1775, 2010.
  • [29] L. Zhang et al., “Adaptive facial point detection and emotion recognition for a humanoid robot”, Computer Vision and Image Understanding, 140, 93-114, 2015.
  • [30] Chanthaphan et al., “Novel facial feature extraction technique for facial emotion recognition system by using depth sensor”, International Journal of Innovative Computing, 12(6), 2067-2087, 2016.
  • [31] C.L. Huang and Y.M. Huang, “Facial Expression Recognition Using Model-Based Feature Extraction and Action Parameters Classification,” J. Visual Comm. and Image Representation, 8(3), 278-290, 1997.
  • [32] M. Pantic and L. Rothkrantz, “Expert System for Automatic Analysis of Facial Expression”, Image and Vision Computing J., 18(11), 881-905, 2000.
  • [33] P.S. Subramanyam, S.A. Fegade, “Face and Facial Expression Recognition - A Comparative Study”, International Journal of Computer Science and Mobile Computing, 2(1), 2013.
  • [34] H. Kobayashi and F. Hara, “Facial Interaction between Animated 3D Face Robot and Human Beings”, Proc. International Conf. on Systems, Man, Cybernetics, 4, 3732-3737, 1997.
  • [35] H. Rowley, S. Baluja, T. Kanade, “Neural Network-Based Face Detection”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1), 23-38, 1998.
  • [36] Y. Freund, R. E. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting”, European Conference on Computational Learning Theory, Springer-Verlag, London, UK, 23-37, 1995.
  • [37] Internet: OpenCV Documentation, Cascade Classification: Haar Feature-based Cascade Classifier for Object Detection, http://docs.opencv.org/2.4/modules/objdetect/doc/cascade_classification.html, 14.03.2017.
  • [38] D. Hefenbrock, et al., “Accelerating Viola-Jones Face Detection to FPGA-Level using GPUs”, 18th IEEE Annual International Symposium on Field-Programmable Custom Computing Machines, 2010.
  • [39] H. Jia, Y. Zhang, et al., “Accelerating Viola-Jones Face Detection Algorithm On GPUs”, IEEE 14th International Conference on High Performance Computing and Communications, 2012.
  • [40] A.W.Y. Wai, S.M. Tahir, and Y.C. Chang, “GPU Acceleration of Real Time Viola-Jones Face Detection”, IEEE International Conference on Control System, Computing and Engineering, 2015.
  • [41] L. Sun, S. Zhang, et al. “Acceleration Algorithm for CUDA-based Face Detection”, IEEE International Conference on Signal Processing, Communication and Computing (ICSPCC 2013), 2013.
  • [42] C. Sagonas et al., “300 Faces in-the-Wild Challenge: The first facial landmark localization Challenge”, International Conference on Computer Vision, 2013.
  • [43] P. N. Belhumeur, D. W. Jacobs, D. J. Kriegman, and N. Kumar, “Localizing parts of faces using a consensus of exemplars.”, 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 545–552, 2011.
  • [44] V. Le, J. Brandt, Z. Lin, L. Bourdev, and T. S. Huang. “Interactive facial feature localization.”, Computer Vision–ECCV 2012, 679–692, 2012.
  • [45] X. Zhu and D. Ramanan, “Face detection, pose estimation, and landmark localization in the wild.”, 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2879–2886, 2012.
  • [46] M. Kostinger, P. Wohlhart, P. M. Roth, and H. Bischof, “Annotated facial landmarks in the wild: A large-scale, real-world database for facial landmark localization”, 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), 2144–2151, 2011.
  • [47] Internet: 300 Faces In-The-Wild Challenge (300-W), IMAVIS 2014, https://ibug.doc.ic.ac.uk/resources/300-W_IMAVIS/, 14.03.2017.
  • [48] D. E. King, “Dlib-ml: A Machine Learning Toolkit”, J Mach Learn Res., 10, 1755-1758, 2009.
  • [49] Internet: Dlib: Image Processing Shape Predictor, http://dlib.net/imaging.html#shape_predictor, 14.03.2017.
  • [50] N. Cristianini, J.S. Taylor, “An Introduction to Support Vector Machines and other kernel based learning methods”, Cambridge University Press, 2001.
  • [51] U. Ayvaz, M. Peker, H. Gürüler, “Uykuda periyodik bacak hareketleri bozukluğunun bir karar destek sistemi ile tespiti”, Uluslararası Mühendislik Teknolojileri ve Uygulamalı Bilimler Konferansı ICETAS 2016, Afyon Kocatepe Üniversitesi, Afyon, 1138-1141, 2016.
  • [52] K. P. Soman, R. Loganathan, V. Ajay, “Machine learning with SVM and other kernel methods”, PHI Learning Pvt. Ltd., New Delhi, India, 2009.
  • [53] Ü. Aydoğan, “Destek vektör makinelerinde kullanılan çekirdek fonksiyonların sınıflama performanslarının karşılaştırılması”, Yüksek Lisans Tezi, Hacettepe Üniversitesi, Sağlık Bilimleri Enstitüsü, 2010.
  • [54] Internet: M. Bernstein, The Radial Basis Function Kernel, http://pages.cs.wisc.edu/~matthewb/, 14.03.2017.
  • [55] Internet: Orange Data Mining Tool, http://orange.biolab.si, 14.03.2017.
  • [56] S. Datta, D. Sen, R. Balasubramanian, “Integrating Geometric and Textural Features for Facial Emotion Classification Using SVM Frameworks.”, Proceedings of International Conference on Computer Vision and Image Processing. Advances in Intelligent Systems and Computing, 459, 619-628, 2016.
  • [57] A. Basu, A. Routray, S. Shit, and A. K. Deb, “Human emotion recognition from facial thermal image based on fused statistical feature and multi-class SVM”, 2015 Annual IEEE India Conference (INDICON), 1–5, 2015.
  • [58] R. Niese et al., “Facial expression recognition based on geometric and optical flow features in colour image sequences”, IET Computer Vision, 6(2), 79-89, 2012.
  • [59] H. Candra et al., “Classification of facial-emotion expression in the application of psychotherapy using Viola-Jones and Edge-Histogram of Oriented Gradient”, 2016 IEEE 38th Annual International Conference of the Engineering in Medicine and Biology Society (EMBC), 423-426, 2016.
There are 59 citations in total.

Details

Journal Section Articles
Authors

Uğur Ayvaz

Hüseyin Gürüler

Publication Date April 28, 2017
Submission Date April 26, 2017
Published in Issue Year 2017 Volume: 10 Issue: 2

Cite

APA Ayvaz, U., & Gürüler, H. (2017). Bilgisayar Kullanıcılarına Yönelik Duygusal İfade Tespiti. Bilişim Teknolojileri Dergisi, 10(2), 231-239. https://doi.org/10.17671/gazibtd.309307