Case Report

Facial Landmark Patch-Based Action Unit Detection with Sparse Learning (Seyrek Öğrenme ile Yüz İşaret Yaması Bazlı Eylem Birimi Saptama)

Year 2016, Volume: 9 Issue: 2, 21 - 26, 06.07.2017

Abstract

The Facial Action Coding System (FACS) is the most widely used and broadly accepted standard for describing the facial expressions, such as emotional displays or pain, of a person or even an animal. In this system, facial muscle movements are described through hundreds of Action Unit (AU) combinations. This study proposes a sparse-learning-based method for detecting facial action units, using image patches extracted at facial landmark points. With a one-vs-all approach, the proposed method identifies the most discriminative facial landmark points separately for each action unit. In experiments on the CK+ dataset, the proposed method was observed to outperform most recent patch-based studies.
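The one-vs-all sparse selection idea described in the abstract can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the landmark count, patch feature dimension, the "active" landmark indices, and the ℓ1 solver (plain proximal gradient / ISTA) are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper):
# 68 landmarks, a small feature vector per landmark patch.
N_LANDMARKS, PATCH_DIM, N_SAMPLES = 68, 8, 300
D = N_LANDMARKS * PATCH_DIM

# Synthetic data: AU presence (y = 1) correlates with a few
# hypothetical "active" landmarks.
active = [20, 21, 48, 54]
X = rng.normal(size=(N_SAMPLES, D))
y = rng.integers(0, 2, N_SAMPLES)
for lm in active:
    X[y == 1, lm * PATCH_DIM:(lm + 1) * PATCH_DIM] += 1.0

def l1_logistic(X, y, lam=0.05, lr=0.1, iters=500):
    """L1-regularised logistic regression via proximal gradient (ISTA)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # sigmoid probabilities
        grad = X.T @ (p - y) / len(y)             # logistic-loss gradient
        w -= lr * grad                            # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

# One-vs-all: one sparse model per AU; a single AU shown here for brevity.
w = l1_logistic(X, y)

# Rank landmarks by the energy of their coefficient block; sparsity
# concentrates weight on the discriminative landmarks.
scores = np.linalg.norm(w.reshape(N_LANDMARKS, PATCH_DIM), axis=1)
selected = np.argsort(scores)[::-1][:4]
print("most discriminative landmarks:", sorted(selected.tolist()))
```

In the full one-vs-all setting, one such sparse classifier would be trained per action unit, each yielding its own subset of discriminative landmarks.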

References

  • Hjortsjö, Carl-Herman. Man's face and mimic language. Studentlitteratur, 1969.
  • Ekman, Paul, Wallace V. Friesen. Facial action coding system. 1977.
  • Hager, J. C., P. Ekman, and W. V. Friesen. Facial action coding system. Salt Lake City, UT: A Human Face. ISBN 0-931835-01-1, 2002.
  • Lawrence Ian Reed, Michael A Sayette, and Jeffrey F Cohn. Impact of depression on response to comedy: a dynamic facial coding analysis. Journal of abnormal psychology, 116(4):804, 2007.
  • Amanda C Lints-Martindale, Thomas Hadjistavropoulos, Bruce Barber, and Stephen J Gibson. A psychophysical investigation of the facial action coding system as an index of pain variability among older adults with and without alzheimer’s disease. Pain Medicine, 8(8):678–689, 2007.
  • Patrick Lucey, Jeffrey F Cohn, Iain Matthews, Simon Lucey, Sridha Sridharan, Jessica Howlett, and Kenneth M Prkachin. Automatically detecting pain in video through facial action units. Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, 41(3):664–674, 2011.
  • Marina Davila-Ross, Goncalo Jesus, Jade Osborne, and Kim A Bard. Chimpanzees (pan troglodytes) produce the same types of ‘laugh faces’ when they emit laughter and when they are silent. PloS one, 10(6):e0127337, 2015.
  • Takuya Hashimoto, Sachio Hiramatsu, Toshiaki Tsuji, and Hiroshi Kobayashi. Development of the face robot SAYA for rich facial expressions. In SICE-ICASE, 2006. International Joint Conference, pages 5423–5428. IEEE, 2006.
  • Gianluca Donato, Marian Stewart Bartlett, Joseph C Hager, Paul Ekman, and Terrence J Sejnowski. Classifying facial actions. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 21(10):974–989, 1999.
  • Caifeng Shan, Shaogang Gong, and Peter W McOwan. Facial expression recognition based on local binary patterns: A comprehensive study. Image and Vision Computing, 27(6):803–816, 2009.
  • Lin Zhong, Qingshan Liu, Peng Yang, Bo Liu, Junzhou Huang, and Dimitris N Metaxas. Learning active facial patches for expression analysis. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2562–2569. IEEE, 2012.
  • Li-Rong Zhong, Quanwei Liu, Ping Yang, Jie Huang, and Dimitris N Metaxas. Learning multiscale active facial patches for expression analysis. 2014.
  • Kaili Zhao, Wen-Sheng Chu, Fernando De la Torre, Jeffrey F Cohn, and Honggang Zhang. Joint patch and multi-label learning for facial action unit detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2207–2216, 2015.
  • Marian Stewart Bartlett, Joseph C Hager, Paul Ekman, and Terrence J Sejnowski. Measuring facial expressions by computer image analysis. Psychophysiology, 36(02):253–263, 1999.
  • Ping Liu, Joey Tianyi Zhou, Ivor Wai-Hung Tsang, Zibo Meng, Shizhong Han, and Yan Tong. Feature disentangling machine-a novel approach of feature selection and disentangling in facial expression analysis. In Computer Vision–ECCV 2014, pages 151–166. Springer, 2014.
  • Sima Taheri, Qiang Qiu, and Rama Chellappa. Structure-preserving sparse decomposition for facial expression analysis. Image Processing, IEEE Transactions on, 23(8):3590–3603, 2014.
  • Patrick Lucey, Jeffrey F Cohn, Takeo Kanade, Jason Saragih, Zara Ambadar, and Iain Matthews. The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference on, pages 94–101. IEEE, 2010.
  • Xiangxin Zhu and Deva Ramanan. Face detection, pose estimation, and landmark localization in the wild. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2879–2886. IEEE, 2012.
  • Guoying Zhao and Matti Pietikainen. Dynamic texture recognition using local binary patterns with an application to facial expressions. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 29(6):915–928, 2007.
  • Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine learning, 20(3):273–297, 1995.
  • Ziheng Wang, Yongqiang Li, Shangfei Wang, and Qiang Ji. Capturing global semantic relationships for facial action unit recognition. In Computer Vision (ICCV), 2013 IEEE International Conference on, pages 3304–3311. IEEE, 2013.
  • Andreas Damianou, Carl Ek, Michalis Titsias, and Neil Lawrence. Manifold relevance determination. arXiv preprint arXiv:1206.4610, 2012.
  • Raquel Urtasun, Ariadna Quattoni, Neil Lawrence, and Trevor Darrell. Transferring nonlinear representations using gaussian processes with a shared latent space. 2008.
  • Stefanos Eleftheriadis, Ognjen Rudovic, and Maja Pantic. Discriminative shared gaussian processes for multiview and view-invariant facial expression recognition. Image Processing, IEEE Transactions on, 24(1):189–204, 2015.
  • Xiao Zhang, Mohammad H Mahoor, S Mohammad Mavadati, and Jeffrey F Cohn. A l p-norm mtmkl framework for simultaneous detection of multiple facial action units. In Applications of Computer Vision (WACV), 2014 IEEE Winter Conference on, pages 1104–1111. IEEE, 2014.
There are 25 citations in total.

Details

Subjects Engineering
Journal Section Articles (Research)
Authors

Duygu Cakır 0000-0003-1600-3989

Nafiz Arıca

Publication Date July 6, 2017
Published in Issue Year 2016 Volume: 9 Issue: 2

Cite

APA Cakır, D., & Arıca, N. (2017). Seyrek Ögrenme ile Yüz İşaret Yaması Bazlı Eylem Birimi Saptama. Türkiye Bilişim Vakfı Bilgisayar Bilimleri Ve Mühendisliği Dergisi, 9(2), 21-26.
AMA Cakır D, Arıca N. Seyrek Ögrenme ile Yüz İşaret Yaması Bazlı Eylem Birimi Saptama. TBV-BBMD. July 2017;9(2):21-26.
Chicago Cakır, DUYGU, and Nafiz Arıca. “Seyrek Ögrenme Ile Yüz İşaret Yaması Bazlı Eylem Birimi Saptama”. Türkiye Bilişim Vakfı Bilgisayar Bilimleri Ve Mühendisliği Dergisi 9, no. 2 (July 2017): 21-26.
EndNote Cakır D, Arıca N (July 1, 2017) Seyrek Ögrenme ile Yüz İşaret Yaması Bazlı Eylem Birimi Saptama. Türkiye Bilişim Vakfı Bilgisayar Bilimleri ve Mühendisliği Dergisi 9 2 21–26.
IEEE D. Cakır and N. Arıca, “Seyrek Ögrenme ile Yüz İşaret Yaması Bazlı Eylem Birimi Saptama”, TBV-BBMD, vol. 9, no. 2, pp. 21–26, 2017.
ISNAD Cakır, DUYGU - Arıca, Nafiz. “Seyrek Ögrenme Ile Yüz İşaret Yaması Bazlı Eylem Birimi Saptama”. Türkiye Bilişim Vakfı Bilgisayar Bilimleri ve Mühendisliği Dergisi 9/2 (July 2017), 21-26.
JAMA Cakır D, Arıca N. Seyrek Ögrenme ile Yüz İşaret Yaması Bazlı Eylem Birimi Saptama. TBV-BBMD. 2017;9:21–26.
MLA Cakır, DUYGU and Nafiz Arıca. “Seyrek Ögrenme Ile Yüz İşaret Yaması Bazlı Eylem Birimi Saptama”. Türkiye Bilişim Vakfı Bilgisayar Bilimleri Ve Mühendisliği Dergisi, vol. 9, no. 2, 2017, pp. 21-26.
Vancouver Cakır D, Arıca N. Seyrek Ögrenme ile Yüz İşaret Yaması Bazlı Eylem Birimi Saptama. TBV-BBMD. 2017;9(2):21-6.

Article Acceptance

Articles are submitted online after user registration/login.

The acceptance process of the articles sent to the journal consists of the following stages:

1. Each submitted article is first sent to at least two referees.

2. Referee assignments are made by the journal editors. The journal's referee pool contains approximately 200 referees, classified by their areas of interest, and each referee is sent articles on the subjects they work in. Referee selection is carried out so as to avoid any conflict of interest.

3. The authors' names are withheld from the articles sent to the referees.

4. Referees are given instructions on how to evaluate an article and are asked to fill in an evaluation form.

5. Articles that receive positive opinions from both referees undergo a similarity check by the editors; the similarity score is expected to be below 25%.

6. A paper that has passed all stages is reviewed by the editor for language and presentation, and the necessary corrections and improvements are made. The authors are notified if needed.


This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.