Research Article

Derin Öğrenme ve Destek Vektör Makineleri İle Görüntüden Cinsiyet Tahmini (Gender Prediction from Images Using Deep Learning and Support Vector Machines)

Year 2020, Volume 8, Issue 3, 2208-2228, 31.07.2020
https://doi.org/10.29130/dubited.707316

Abstract

In this study, an application was developed based on deep learning algorithms, one of today's popular topics. The application first detects faces in images and then predicts the gender of the person in each image. A total of 62,328 images obtained from the Wiki image database were used. Images containing no face were removed, producing a new dataset. Features were extracted from the images in this dataset using Convolutional Neural Networks (CNN), one of the advanced deep learning techniques, and the extracted features were then classified with a Support Vector Machine (SVM). The classification results are presented as a confusion matrix, and a classification accuracy of 94.4% was achieved.
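The pipeline the abstract describes (CNN-extracted features fed to an SVM, evaluated with a confusion matrix) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the study used MatConvNet features from face crops of the Wiki dataset, whereas here random vectors with an injected class shift stand in for CNN features, and all names and parameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(0)
n, dim = 1000, 128                       # stand-in for 62,328 face images x CNN feature length
X = rng.normal(size=(n, dim))            # placeholder "CNN features"
y = rng.integers(0, 2, size=n)           # binary gender label: 0 / 1
X[y == 1] += 0.5                         # inject separability so the toy SVM can learn

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)  # SVM classifier on the feature vectors
pred = clf.predict(X_te)

cm = confusion_matrix(y_te, pred)        # 2x2 confusion matrix, as reported in the paper
acc = accuracy_score(y_te, pred)
print(cm)
print(f"accuracy: {acc:.3f}")
```

In a real reproduction, `X` would be replaced by activations from an intermediate CNN layer; the division of labor (network as feature extractor, SVM as classifier) stays the same.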

References

  • [1] Eidinger, E., Enbar, R., & Hassner, T. (2014). “Age and gender estimation of unfiltered faces.” IEEE Transactions on Information Forensics and Security, vol:9, no.12, pp. 2170-2179.
  • [2] Levi, G., & Hassner, T. (2015). “Age and gender classification using convolutional neural networks.” In Proceedings of the IEEE conference on computer vision and pattern recognition workshops pp. 34-42.
  • [3] Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). “You only look once: Unified, real-time object detection.” In Proceedings of the IEEE conference on computer vision and pattern recognition pp. 779-788.
  • [4] Khalid, S., Sajjad, S., Jabbar, S., & Chang, H. (2017). “Accurate and efficient shape matching approach using vocabularies of multi-feature space representations.” Journal of Real-Time Image Processing, 13(3), 449-465.
  • [5] Chen, Y., Jiang, H., Li, C., Jia, X., & Ghamisi, P. (2016). “Deep feature extraction and classification of hyperspectral images based on convolutional neural networks.” IEEE Transactions on Geoscience and Remote Sensing, vol: 54, no:10, pp 6232-6251.
  • [6] Gao, W., Cao, B., Shan, S., Chen, X., Zhou, D., Zhang, X., & Zhao, D. (2007). “The CAS-PEAL large-scale Chinese face database and baseline evaluations.” IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, vol:38, no: 1, pp 149-161.
  • [7] CAS-PEAL Face Database [Online]. Available: http://www.jdl.ac.cn/peal/index.html, Accessed: 05.01.2020.
  • [8] IMDB-WIKI – 500k+ face images with age and gender labels [Online]. Available: https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/, Accessed: 07.01.2020.
  • [9] COCO – Common Objects in Context [Online]. Available: http://cocodataset.org/#download, Accessed: 07.01.2020.
  • [10] Shan, C. (2012). “Learning local binary patterns for gender classification on real-world face images.” Pattern recognition letters, vol: 33, no:4, pp 431-437.
  • [11] Ren, H., & Li, Z. N. (2014, August). “Gender recognition using complexity-aware local features.” In 2014 22nd International Conference on Pattern Recognition (pp. 2389-2394). IEEE.
  • [12] Zhang, N., Paluri, M., Ranzato, M. A., Darrell, T., & Bourdev, L. (2014). “Panda: Pose aligned networks for deep attribute modeling.” In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1637-1644.
  • [13] Hu, S. Y. D., Jou, B., Jaech, A., & Savvides, M. (2011, October). “Fusion of region-based representations for gender identification.” In 2011 International Joint Conference on Biometrics (IJCB) (pp. 1-7). IEEE.
  • [14] Rai, P., & Khanna, P. (2014). “A gender classification system robust to occlusion using Gabor features based (2D) 2PCA.” Journal of Visual Communication and Image Representation, 25(5), 1118-1129.
  • [15] Leng, X., & Wang, Y. (2008, October). “Improving generalization for gender classification.” In 2008 15th IEEE International Conference on Image Processing (pp. 1656-1659). IEEE.
  • [16] Chen, D. Y., & Lin, K. Y. (2010). “Robust gender recognition for uncontrolled environment of real-life images.” IEEE Transactions on Consumer Electronics, 56(3), 1586-1592.
  • [17] Biswas, S., & Sil, J. (2014). “Gender recognition using fusion of spatial and temporal features.” In Advanced Computing, Networking and Informatics-Volume 1 (pp. 109-116). Springer, Cham.
  • [18] Mansanet, J., Albiol, A., & Paredes, R. (2016). “Local deep neural networks for gender recognition.” Pattern Recognition Letters, 70, 80-86.
  • [19] Zhang, K., Gao, C., Guo, L., Sun, M., Yuan, X., Han, T. X., ... & Li, B. (2017). “Age group and gender estimation in the wild with deep RoR architecture.” IEEE Access, 5, 22492-22503.
  • [20] Agbo-Ajala, O., & Viriri, S. (2019). “Face-Based Age and Gender Classification Using Deep Learning Model.” In Pacific-Rim Symposium on Image and Video Technology (pp. 125-137). Springer, Cham.
  • [21] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). “Imagenet classification with deep convolutional neural networks.” In Advances in neural information processing systems (pp. 1097-1105).
  • [22] LeCun, Y., Bengio, Y., & Hinton, G. (2015). “Deep learning.” Nature, 521(7553), 436-444.
  • [23] Goodfellow, I., Bengio, Y., & Courville, A. (2016). “Deep learning.” MIT press.
  • [24] Schmidhuber, J. (2015). “Deep learning in neural networks: An overview.” Neural networks, 61, 85-117.
  • [25] Guo, Y., Liu, Y., Oerlemans, A., Lao, S., Wu, S., & Lew, M. S. (2016). “Deep learning for visual understanding: A review.” Neurocomputing, 187, 27-48.
  • [26] Ciresan, D. C., Meier, U., Masci, J., Gambardella, L. M., & Schmidhuber, J. (2011, June). “Flexible, high performance convolutional neural networks for image classification.” In Twenty-Second International Joint Conference on Artificial Intelligence.
  • [27] Pang, Y., Sun, M., Jiang, X., & Li, X. (2017). “Convolution in convolution for network in network.” IEEE transactions on neural networks and learning systems, vol: 29, no:5, pp: 1587-1597.
  • [28] Arora, R., Basu, A., Mianjy, P., & Mukherjee, A. (2016). “Understanding deep neural networks with rectified linear units.” arXiv preprint arXiv:1611.01491.
  • [29] Metlek, S., & Özkan, T. (2019). “Analysis of Perceived Service Quality and Customer Satisfaction in the Aviation Sector with Artificial Neural Networks.” Techno-Science, 2nd International Conference on Technology and Science, pp. 853-864, 14-16 November, Burdur, Turkey.
  • [30] Hinton, G. E. (2012). “A practical guide to training restricted Boltzmann machines.” In Neural Networks: Tricks of the Trade (pp. 599-619). Springer.
  • [31] https://www.kdnuggets.com/2018/09/dropout-convolutional-networks.html
  • [32] Demirci, D. A. (2007). “Vektör makineleri ile karakter tanıma” (Master’s thesis, Department of Computer Engineering, Yıldız Teknik Üniversitesi, İstanbul, Turkey).
  • [33] Demir, H., Erdoğmuş, P., & Kekeçoğlu, M. “Destek Vektör Makineleri, YSA, K-Means ve KNN Kullanarak Arı Türlerinin Sınıflandırılması.” Düzce Üniversitesi Bilim ve Teknoloji Dergisi, vol. 6, no. 1, pp. 47-67.
  • [34] About MatConvNet [Online]. Available: http://www.wisdom.weizmann.ac.il/~vision/courses/2016_2/FCN/matconvnet-fcn-master/matconvnet/doc/site/docs/about.md, Accessed: 08.02.2020.
  • [35] Dr Andrea Vedaldi [Online]. Available: http://www.robots.ox.ac.uk/~vedaldi/code.html, Accessed: 07.02.2020.
  • [36] Verma, G. K., Lather, J. S., & Kaushal, A. “MatConvNet-Based Fast Method for Cervical MR Images Classification.” In Computational Intelligence: Theories, Applications and Future Directions, vol. 2, pp. 669-679, 2019, Springer, Singapore.
  • [37] Hossain, M. A., & Ali, M. M. (2019). “Recognition of handwritten digit using convolutional neural network (CNN).” Global Journal of Computer Science and Technology.
  • [38] Suthakaran, A., & Premaratne, S. (2020). “Detection of the affected area and classification of pests using convolutional neural networks from the leaf images”. International Journal of Computer Science Engineering (IJCSE).
There are 38 citations in total.

Details

Primary Language Turkish
Subjects Engineering
Journal Section Articles
Authors

Sedat Metlek 0000-0002-0393-9908

Kiyas Kayaalp 0000-0002-6483-1124

Publication Date July 31, 2020
Published in Issue Year 2020

Cite

APA Metlek, S., & Kayaalp, K. (2020). Derin Öğrenme ve Destek Vektör Makineleri İle Görüntüden Cinsiyet Tahmini. Duzce University Journal of Science and Technology, 8(3), 2208-2228. https://doi.org/10.29130/dubited.707316
AMA Metlek S, Kayaalp K. Derin Öğrenme ve Destek Vektör Makineleri İle Görüntüden Cinsiyet Tahmini. DÜBİTED. July 2020;8(3):2208-2228. doi:10.29130/dubited.707316
Chicago Metlek, Sedat, and Kiyas Kayaalp. “Derin Öğrenme Ve Destek Vektör Makineleri İle Görüntüden Cinsiyet Tahmini”. Duzce University Journal of Science and Technology 8, no. 3 (July 2020): 2208-28. https://doi.org/10.29130/dubited.707316.
EndNote Metlek S, Kayaalp K (July 1, 2020) Derin Öğrenme ve Destek Vektör Makineleri İle Görüntüden Cinsiyet Tahmini. Duzce University Journal of Science and Technology 8 3 2208–2228.
IEEE S. Metlek and K. Kayaalp, “Derin Öğrenme ve Destek Vektör Makineleri İle Görüntüden Cinsiyet Tahmini”, DÜBİTED, vol. 8, no. 3, pp. 2208–2228, 2020, doi: 10.29130/dubited.707316.
ISNAD Metlek, Sedat - Kayaalp, Kiyas. “Derin Öğrenme Ve Destek Vektör Makineleri İle Görüntüden Cinsiyet Tahmini”. Duzce University Journal of Science and Technology 8/3 (July 2020), 2208-2228. https://doi.org/10.29130/dubited.707316.
JAMA Metlek S, Kayaalp K. Derin Öğrenme ve Destek Vektör Makineleri İle Görüntüden Cinsiyet Tahmini. DÜBİTED. 2020;8:2208–2228.
MLA Metlek, Sedat and Kiyas Kayaalp. “Derin Öğrenme Ve Destek Vektör Makineleri İle Görüntüden Cinsiyet Tahmini”. Duzce University Journal of Science and Technology, vol. 8, no. 3, 2020, pp. 2208-28, doi:10.29130/dubited.707316.
Vancouver Metlek S, Kayaalp K. Derin Öğrenme ve Destek Vektör Makineleri İle Görüntüden Cinsiyet Tahmini. DÜBİTED. 2020;8(3):2208-28.

Cited By

DERİN ÖĞRENME İLE BALIK TÜRLERİNİN TESPİTİ (Detection of Fish Species with Deep Learning)
International Journal of 3D Printing Technologies and Digital Industry
https://doi.org/10.46519/ij3dptdi.956221