Research Article
A Multi-task Deep Learning System for Face Detection and Age Group Classification for Masked Faces

Year 2021, Volume 25, Issue 6, 1394-1407, 31.12.2021
https://doi.org/10.16984/saufenbilder.981927

Abstract

COVID-19 is an ongoing pandemic, and according to experts, wearing a face mask can reduce the spread of the disease. However, masks occlude a large part of the face, which complicates security-related tasks such as face recognition and age estimation. To limit the spread of COVID-19, some countries have imposed restrictions by age group, and in many countries certain age groups are also subject to legal restrictions on activities such as driving and consuming alcohol. These rules are difficult to enforce when faces are occluded, and automated systems can help monitor them. In this study, a deep learning-based multi-task face detection and age group classification system is proposed for masked faces. The system first detects masked and unmasked faces, then classifies them into age groups. It handles multiple people in both indoor and outdoor environments. The system achieved a 79.0% precision score for masked face detection using Faster R-CNN with a ResNet-50 backbone, 83.87% accuracy for age group classification of masked faces, and 84.48% accuracy for unmasked faces using a DenseNet-201 network. These results improve on those reported in the literature and show that reliable age group classification for masked faces is possible.
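The abstract outlines a two-stage pipeline: a Faster R-CNN detector with a ResNet-50 backbone locates masked and unmasked faces, and a DenseNet-201 classifier assigns each detected face to an age group. The sketch below shows one way such a pipeline could be wired together. It is a minimal illustration in PyTorch/torchvision, not the authors' implementation; the age-group names, class counts, label mapping, and score threshold are assumptions introduced here.

```python
import torch
import torchvision
from torchvision import transforms
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Hypothetical age groups; the abstract does not specify the exact grouping.
AGE_GROUPS = ["child", "young", "middle-aged", "elderly"]

def build_face_mask_detector(num_classes=3):
    """Faster R-CNN with a ResNet-50 FPN backbone; classes: background, masked, unmasked."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

def build_age_classifier(num_groups=len(AGE_GROUPS)):
    """DenseNet-201 with its final layer replaced for age-group classification."""
    model = torchvision.models.densenet201(weights="DEFAULT")
    model.classifier = torch.nn.Linear(model.classifier.in_features, num_groups)
    return model

# Standard ImageNet-style preprocessing for the classifier branch.
classifier_transform = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def detect_and_classify(image, detector, age_model, score_thresh=0.5):
    """image: float tensor of shape (3, H, W) with values in [0, 1]."""
    detector.eval()
    age_model.eval()
    detections = detector([image])[0]  # dict with 'boxes', 'labels', 'scores'
    results = []
    for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
        if score < score_thresh:
            continue
        x1, y1, x2, y2 = box.round().int().tolist()
        face_crop = image[:, y1:y2, x1:x2]              # crop the detected face region
        face_input = classifier_transform(face_crop).unsqueeze(0)
        age_idx = age_model(face_input).argmax(dim=1).item()
        results.append({
            "box": (x1, y1, x2, y2),
            "masked": label.item() == 1,                # assumed label mapping: 1 = masked, 2 = unmasked
            "age_group": AGE_GROUPS[age_idx],
        })
    return results
```

Both heads would need to be fine-tuned on masked-face detection data and age-group labels before the thresholded inference above produces meaningful output; the structure simply mirrors the detect-then-classify flow described in the abstract.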


Details

Primary Language English
Subjects Artificial Intelligence
Journal Section Research Articles
Authors

Gozde Yolcu (ORCID: 0000-0002-7841-2131)

İsmail Öztel (ORCID: 0000-0001-5157-7035)

Publication Date December 31, 2021
Submission Date August 12, 2021
Acceptance Date November 3, 2021
Published in Issue Year 2021

Cite

APA Yolcu, G., & Öztel, İ. (2021). A Multi-task Deep Learning System for Face Detection and Age Group Classification for Masked Faces. Sakarya University Journal of Science, 25(6), 1394-1407. https://doi.org/10.16984/saufenbilder.981927
AMA Yolcu G, Öztel İ. A Multi-task Deep Learning System for Face Detection and Age Group Classification for Masked Faces. SAUJS. December 2021;25(6):1394-1407. doi:10.16984/saufenbilder.981927
Chicago Yolcu, Gozde, and İsmail Öztel. “A Multi-Task Deep Learning System for Face Detection and Age Group Classification for Masked Faces”. Sakarya University Journal of Science 25, no. 6 (December 2021): 1394-1407. https://doi.org/10.16984/saufenbilder.981927.
EndNote Yolcu G, Öztel İ (December 1, 2021) A Multi-task Deep Learning System for Face Detection and Age Group Classification for Masked Faces. Sakarya University Journal of Science 25 6 1394–1407.
IEEE G. Yolcu and İ. Öztel, “A Multi-task Deep Learning System for Face Detection and Age Group Classification for Masked Faces”, SAUJS, vol. 25, no. 6, pp. 1394–1407, 2021, doi: 10.16984/saufenbilder.981927.
ISNAD Yolcu, Gozde - Öztel, İsmail. “A Multi-Task Deep Learning System for Face Detection and Age Group Classification for Masked Faces”. Sakarya University Journal of Science 25/6 (December 2021), 1394-1407. https://doi.org/10.16984/saufenbilder.981927.
JAMA Yolcu G, Öztel İ. A Multi-task Deep Learning System for Face Detection and Age Group Classification for Masked Faces. SAUJS. 2021;25:1394–1407.
MLA Yolcu, Gozde and İsmail Öztel. “A Multi-Task Deep Learning System for Face Detection and Age Group Classification for Masked Faces”. Sakarya University Journal of Science, vol. 25, no. 6, 2021, pp. 1394-07, doi:10.16984/saufenbilder.981927.
Vancouver Yolcu G, Öztel İ. A Multi-task Deep Learning System for Face Detection and Age Group Classification for Masked Faces. SAUJS. 2021;25(6):1394-407.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.