Research Article

A New Approach for Standardized Zero-Shot Face Verification Using Siamese Neural Networks with Dynamic Thresholding

Year 2026, Volume: 15, Issue: 2, 145-157, 29.01.2026

Abstract

Face recognition and verification systems play a crucial role in many critical areas such as biometric security, access control, and user authentication. This study presents a training-free (zero-shot) face verification protocol and comprehensively compares the performance of different pre-trained deep learning models (Facenet-IRv1, ArcFace, ResNet-18, VGG16, AlexNet, and OpenFace) on the Labeled Faces in the Wild (LFW) dataset. In the proposed approach, two input images are passed through the same network in a Siamese-like inference process, and the resulting embeddings are L2-normalized and compared using cosine similarity. To classify the resulting similarity scores, a dynamic threshold is calibrated for each model by maximizing Youden's J statistic, and this threshold (τ) is transferred to the test set without any additional optimization. In addition, multiple metrics, including ROC-AUC, accuracy, precision, recall, F1-score, average inference time, and FPS, were computed to evaluate model performance beyond the threshold-based decision alone. The findings indicate that ArcFace and Facenet-IRv1 surpass the other models in accuracy and reliability, while lighter architectures such as ResNet-18 and VGG16 offer speed advantages that make them suitable alternatives for real-time applications. These results demonstrate that approaches requiring no training from scratch provide a cost- and time-efficient solution for face verification systems. In this respect, the study introduces a standardized framework that enables a multidimensional evaluation of different architectures without additional training and offers quantitative insight into the accuracy-speed trade-off in face verification.
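The verification protocol summarized above can be expressed in a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes PyTorch with the facenet-pytorch package providing a Facenet-IRv1-style backbone, scikit-learn for the ROC computation, and input pairs that are already detected and aligned as 160x160 face crops.

```python
# Minimal sketch of the zero-shot verification protocol: shared (Siamese-like)
# forward pass, L2-normalized embeddings, cosine similarity, and a threshold
# calibrated by maximizing Youden's J, then transferred unchanged to the test set.
# Assumptions: facenet-pytorch / scikit-learn available; faces already aligned.
import numpy as np
import torch
from facenet_pytorch import InceptionResnetV1
from sklearn.metrics import roc_curve

model = InceptionResnetV1(pretrained="vggface2").eval()  # Facenet-IRv1-style backbone


@torch.no_grad()
def embed(faces: torch.Tensor) -> torch.Tensor:
    """Shared forward pass followed by L2 normalization (shape: N x 512)."""
    z = model(faces)
    return torch.nn.functional.normalize(z, p=2, dim=1)


def cosine_scores(faces_a: torch.Tensor, faces_b: torch.Tensor) -> np.ndarray:
    """Cosine similarity of L2-normalized embeddings reduces to a dot product."""
    za, zb = embed(faces_a), embed(faces_b)
    return (za * zb).sum(dim=1).cpu().numpy()


def calibrate_tau(scores: np.ndarray, labels: np.ndarray) -> float:
    """Choose the threshold maximizing Youden's J = TPR - FPR on the calibration
    split; tau is then frozen and reused on the test split without re-tuning."""
    fpr, tpr, thresholds = roc_curve(labels, scores)
    return float(thresholds[np.argmax(tpr - fpr)])


# Usage sketch: calibrate on one split, verify pairs on another.
# tau = calibrate_tau(cosine_scores(cal_a, cal_b), cal_labels)
# decisions = cosine_scores(test_a, test_b) >= tau   # True -> "same identity"
```

Swapping in ArcFace, ResNet-18, VGG16, AlexNet, or OpenFace would only change the backbone inside embed; the normalization, cosine comparison, and threshold transfer remain identical, which is what makes the comparison across architectures standardized.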

Supporting Institution

TÜBİTAK

Project Number

5249902

Acknowledgments

This work is supported by The Scientific and Technological Research Council of Türkiye (TÜBİTAK) 1515 Frontier R&D Laboratories Support Program for Turk Telekom neXt Generation Technologies Lab (XGeNTT) under project number 5249902.

References

  • [1] L. Li, X. Mu, S. Li, and H. Peng, “A review of face recognition technology,” IEEE Access, vol. 8, pp. 139110–139120, 2020.
  • [2] Y. Kortli, M. Jridi, A. Al Falou, and M. Atri, “Face recognition systems: A survey,” Sensors, vol. 20, no. 2, p. 342, 2020.
  • [3] M. Rakhra, D. Singh, A. Singh, K. D. Garg, and D. Gupta, “Face recognition with smart security system,” in Proc. 2022 10th Int. Conf. Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), 2022, pp. 1–6.
  • [4] M. F. Siddiqui, W. A. Siddique, M. Ahmedh, and A. K. Jumani, “Face detection and recognition system for enhancing security measures using artificial intelligence system,” Indian J. Sci. Technol., vol. 13, no. 9, pp. 1057–1064, 2020.
  • [5] A. Anshari, S. A. Hirtranusi, D. I. Sensuse, and R. R. Suryono, “Face recognition for identification and verification in attendance system: A systematic review,” in Proc. 2021 IEEE Int. Conf. Communication, Networks and Satellite (COMNETSAT), 2021, pp. 316–323.
  • [6] A. Syafeeza, M. M. F. Alif, Y. N. Athirah, A. Jaafar, A. Norihan, and M. Saleha, “IoT based facial recognition door access control home security system using Raspberry Pi,” Int. J. Power Electron. Drive Syst., vol. 11, no. 1, pp. 417–424, 2020.
  • [7] M. Baytamouny, R. Kolandaisamy, and G. S. ALDharhani, “AI-based home security system with face recognition,” in Proc. 2022 6th Int. Conf. Trends in Electronics and Informatics (ICOEI), 2022, pp. 1038–1042.
  • [8] B. Ríos-Sánchez, D. C.-d. Silva, N. Martín-Yuste, and C. Sánchez-Ávila, “Deep learning for face recognition on mobile devices,” IET Biometrics, vol. 9, no. 3, pp. 109–117, 2020.
  • [9] C. Wang, Y. Xiao, X. Gao, L. Li, and J. Wang, “A framework for behavioral biometric authentication using deep metric learning on mobile devices,” IEEE Trans. Mobile Comput., vol. 22, no. 1, pp. 19–36, 2021.
  • [10] S. Kokal, M. Vanamala, and R. Dave, “Deep learning and machine learning, better together than apart: A review on biometrics mobile authentication,” J. Cybersecurity Privacy, vol. 3, no. 2, pp. 227–258, 2023.
  • [11] H. U. Khan, M. Z. Malik, S. Nazir, and F. Khan, “Utilizing biometric system for enhancing cyber security in banking sector: A systematic analysis,” IEEE Access, vol. 11, pp. 80181–80198, 2023.
  • [12] S. Ramya, R. Sheeba, P. Aravind, S. Gnanaprakasam, M. Gokul, and S. Santhish, “Face biometric authentication system for ATM using deep learning,” in Proc. 2022 6th Int. Conf. Intelligent Computing and Control Systems (ICICCS), 2022, pp. 1446–1451.
  • [13] J. S. Oliveira, G. B. Souza, A. R. Rocha, F. E. Deus, and A. N. Marana, “Cross-domain deep face matching for real banking security systems,” in Proc. 2020 Seventh Int. Conf. eDemocracy & eGovernment (ICEDEG), 2020, pp. 21–28.
  • [14] T. Zhu and L. Wang, “Feasibility study of a new security verification process based on face recognition technology at airport,” J. Phys. Conf. Ser., vol. 1510, no. 1, p. 012025, 2020.
  • [15] N. Khan and M. Efthymiou, “The use of biometric technology at airports: The case of customs and border protection (CBP),” Int. J. Inf. Manage. Data Insights, vol. 1, no. 2, p. 100049, 2021.
  • [16] A. G. Toprak, Ö. Berfin Mercan and M. S. Osmanca, "ML-Based Telecom Customer Churn Analysis," 2025 33rd Signal Processing and Communications Applications Conference (SIU), Sile, Istanbul, Turkiye, 2025, pp. 1-4, doi: 10.1109/SIU66497.2025.11111814.
  • [17] M. E. Baydilli et al., "IPTV Device Anomaly Detection: ML Approaches," 2025 33rd Signal Processing and Communications Applications Conference (SIU), Sile, Istanbul, Turkiye, 2025, pp. 1-4, doi: 10.1109/SIU66497.2025.11112107.
  • [18] K.-H. Lin, K.-H. Chung, K.-S. Lin, and J.-S. Chen, “Face recognition-aided IPTV group recommender with consideration of serendipity,” Int. J. Future Comput. Commun., vol. 3, no. 2, p. 141, 2014.
  • [19] H. Wang, F. Xu, X. Yan, and H. Li, “Research on user’s facial expression analysis when watching TV,” in Proc. 3rd Int. Conf. Comput. Vision and Pattern Analysis (ICCPA), vol. 12754. SPIE, 2023, pp. 521–528.
  • [20] M. D. Rahmatya and M. F. Wicaksono, “Online attendance with Python face recognition and Django framework,” SISTEMASI, vol. 12, no. 3, pp. 703–714, 2023.
  • [21] F. Ozdamli, A. Aljarrah, D. Karagozlu, and M. Ababneh, “Facial recognition system to detect student emotions and cheating in distance learning,” Sustainability, vol. 14, no. 20, p. 13230, 2022.
  • [22] A. V. Savchenko, L. V. Savchenko, and I. Makarov, “Classifying emotions and engagement in online learning based on a single facial expression recognition neural network,” IEEE Trans. Affective Comput., vol. 13, no. 4, pp. 2132–2143, 2022.
  • [23] W. Ali, W. Tian, S. U. Din, D. Iradukunda, and A. A. Khan, “Classical and modern face recognition approaches: A complete review,” Multimedia Tools Appl., vol. 80, pp. 4825–4880, 2021.
  • [24] A. G. Toprak, D. Ünay, T. Cerit, and K. Üğüdücü, “Cancerous lesion segmentation for early detection of breast cancer by using CNN,” in Proc. Int. Grad. Res. Symp. (IGRS’22), 2022, p. 190.
  • [25] B. N. E. Nyarko, W. Bin, J. Zhou, G. K. Agordzo, J. Odoom, and E. Koukoyi, “Comparative analysis of AlexNet, ResNet-50, and Inception-V3 models on masked face recognition,” in Proc. 2022 IEEE World AI IoT Congr. (AIIoT), 2022, pp. 337–343.
  • [26] J. C. Tan, K. M. Lim, and C. P. Lee, “Enhanced AlexNet with super-resolution for low-resolution face recognition,” in Proc. 2021 9th Int. Conf. Information and Communication Technology (ICoICT), 2021, pp. 302–306.
  • [27] S. Mahesh and G. Ramkumar, “Smart face detection and recognition in illumination invariant images using AlexNet CNN compare accuracy with SVM,” in Proc. 2022 3rd Int. Conf. Intelligent Engineering and Management (ICIEM), 2022, pp. 572–575.
  • [28] R. S. Chhillar, “Innovative integration of convolutional neural networks for enhanced face recognition,” J. Electr. Syst., vol. 20, no. 3s, pp. 1941–1950, 2024.
  • [29] A. Choubey, S. B. Choubey, and S. Kumar, “VGG network-based deep convoluted facial recognition,” in Int. Conf. Mechanical and Energy Technologies. Springer, 2023, pp. 347–351.
  • [30] K. Yesugade and R. Jadhav, “Implementation of deep learning techniques for deepfake classification: A comparative study using ResNet-50 and VGG16,” in Proc. 2024 IEEE Pune Section Int. Conf. (PuneCon), 2024, pp. 1–5.
  • [31] F. Schroff, D. Kalenichenko, and J. Philbin, “FaceNet: A unified embedding for face recognition and clustering,” CoRR, vol. abs/1503.03832, 2015. [Online]. Available: http://arxiv.org/abs/1503.03832
  • [32] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, “DeepFace: Closing the gap to human-level performance in face verification,” in Proc. 2014 IEEE Conf. Comput. Vision Pattern Recognit. (CVPR), 2014, pp. 1701–1708.
  • [33] O. M. Parkhi, A. Vedaldi, and A. Zisserman, “Deep face recognition,” in Proc. BMVC 2015 - Brit. Mach. Vision Conf., 2015, pp. 1–12.
  • [34] J. Deng, J. Guo, N. Xue, and S. Zafeiriou, “ArcFace: Additive angular margin loss for deep face recognition,” in Proc. IEEE/CVF Conf. Comput. Vision Pattern Recognit. (CVPR), Jun. 2019, pp. 4690–4699.
  • [35] T. Baltrušaitis, P. Robinson, and L.-P. Morency, “OpenFace: An open source facial behavior analysis toolkit,” in Proc. 2016 IEEE Winter Conf. Applications of Computer Vision (WACV), 2016, pp. 1–10.
  • [36] P. B. Sri, K. Navya, K. Saiteja, P. Keerthi, S. Hariharan, and V. Kukreja, “Harnessing security improvements using FaceNet approach for face recognition system,” in Proc. 2024 IEEE 13th Int. Conf. Communication Systems and Network Technologies (CSNT), 2024, pp. 306–312.
  • [37] J. Ferdinand, C. Wijaya, A. N. Ronal, I. S. Edbert, and D. Suhartono, “ATM security system modeling using face recognition with FaceNet and Haar cascade,” in Proc. 2022 6th Int. Conf. Informatics and Computational Sciences (ICICoS), 2022, pp. 111–116.
  • [38] C. Wu and Y. Zhang, “MTCNN and FaceNet based access control system for face detection and recognition,” Automat. Control Comput. Sci., vol. 55, pp. 102–112, 2021.
  • [39] M. Ibrahem and M. Abdulameer, “Age face invariant recognition model based on VGG Face-based DNN and support vector classifier,” Int. J. Tech. Phys. Eng. (IJTPE), no. 54, pp. 232–240, 2023.
  • [40] T.-V. Dang, “Smart home management system with face recognition based on ArcFace model in deep convolutional neural network,” J. Robot. Control (JRC), vol. 3, no. 6, pp. 754–761, 2022.
  • [41] D. Bansal, B. Gupta, S. Gupta, A. Anand, Sumit, and A. Sagar, “Facial recognition advancements with Siamese networks: A comprehensive survey,” in Proc. Int. Conf. Computation of Artificial Intelligence & Machine Learning, Springer, 2024, pp. 29–41.
  • [42] Y. Niu and Z. Wang, “Face recognition with Siamese networks,” J. Phys. Conf. Ser., vol. 2872, no. 1, p. 012008, 2024.
  • [43] M. Heidari and K. Fouladi-Ghaleh, “Using Siamese networks with transfer learning for face recognition on small-samples datasets,” in Proc. 2020 Int. Conf. Machine Vision and Image Processing (MVIP), 2020, pp. 1–4.
  • [44] D. Alashammari and D. Akgün, “A comparison of transfer learning models for face recognition,” Sakarya Univ. J. Comput. Inf. Sci., vol. 7, no. 3, pp. 427–438, 2024.
  • [45] S.-C. Lai and K.-M. Lam, “Deep Siamese network for low-resolution face recognition,” in Proc. 2021 APSIPA Annu. Summit and Conf. (APSIPA ASC), 2021, pp. 1444–1449.
  • [46] S. Soleymani, B. Chaudhary, A. Dabouei, J. Dawson, and N. M. Nasrabadi, “Differential morphed face detection using deep Siamese networks,” in Proc. Int. Conf. Pattern Recognit., Springer, 2021, pp. 560–572.
  • [47] M. Khan, M. Saeed, A. El Saddik, and W. Gueaieb, “ArtriVit: Automatic face recognition system using ViT-based Siamese neural networks with a triplet loss,” in Proc. 2023 IEEE 32nd Int. Symp. Industrial Electronics (ISIE), 2023, pp. 1–6.
  • [48] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, et al., “An image is worth 16×16 words: Transformers for image recognition at scale,” in Proc. Int. Conf. Learn. Represent., 2021, pp. 1–21.
  • [49] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, et al., “Swin transformer: Hierarchical vision transformer using shifted windows,” in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2021, pp. 10012–10022.
  • [50] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, “SimCLR: A simple framework for contrastive learning of visual representations,” in Proc. Int. Conf. Learn. Represent., 2020, pp. 1–10.
  • [51] X. Chen, H. Fan, R. Girshick, and K. He, “Improved baselines with momentum contrastive learning,” arXiv preprint arXiv:2003.04297, 2020.
  • [52] R. Daş, B. Polat, and G. Tuna, “Derin öğrenme ile resim ve videolarda nesnelerin tanınması ve takibi,” Fırat Univ. J. Eng. Sci., vol. 31, no. 2, pp. 571–581, 2019.
  • [53] T. Meinhardt, A. Kirillov, L. Leal-Taixé, and C. Feichtenhofer, “TrackFormer: Multi-object tracking with transformers,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2022, pp. 8844–8854.
  • [54] P. Sun, J. Cao, Y. Jiang, R. Zhang, E. Xie, Z. Yuan, et al., “TransTrack: Multiple object tracking with transformer,” arXiv preprint arXiv:2012.15460, 2020.
  • G. B. Huang, M. Mattar, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: A database for studying face recognition in unconstrained environments,” in Workshop on Faces in Real-Life Images: Detection, Alignment, and Recognition, 2008.
  • [55] Honey and S. S. Oberoi, “Revolutionizing face detection: Exploring the potential of MTCNN algorithm for human face recognition,” in Proc. 2023 7th Int. Conf. Image Information Processing (ICIIP), 2023, pp. 300–306.
  • [56] D. Chicco, “Siamese neural networks: An overview,” Artif. Neural Netw., pp. 73–94, 2021.
  • [57] Y. Li, C. P. Chen, and T. Zhang, “A survey on Siamese network: Methodologies, applications, and opportunities,” IEEE Trans. Artif. Intell., vol. 3, no. 6, pp. 994–1014, 2022.
  • [58] A. L. Marshal and A. N. F. A. N. Fajar, “Image classification of mangoes using CNN VGG16 and AlexNet,” J. Soc. Sci. (JoSS), vol. 2, no. 8, pp. 694–703, 2023.
  • [59] A. Ullah, H. Elahi, Z. Sun, A. Khatoon, and I. Ahmad, “Comparative analysis of AlexNet, ResNet18 and SqueezeNet with diverse modification and arduous implementation,” Arab. J. Sci. Eng., vol. 47, no. 2, pp. 2397–2417, 2022.
  • [60] Z. Chen, Y. Jiang, X. Zhang, R. Zheng, R. Qiu, Y. Sun, C. Zhao, and H. Shang, “ResNet18DNN: Prediction approach of drug-induced liver injury by deep neural network with ResNet18,” Brief. Bioinform., vol. 23, no. 1, p. bbab503, 2022.
  • [61] K. S. Rao and P. Chatterjee, “Reducing the number of trainable parameters does not affect the accuracy of ResNet-18 on cervical cancer images,” in Impending Inquisitions in Humanities and Sciences, CRC Press, 2024, pp. 536–541.
  • [62] A. G. Toprak, D. Ünay, T. Cerit, and K. Üğüdücü, “Cancerous lesion segmentation for early detection of breast cancer by using CNN,” in Proc. Int. Graduate Research Symp. (IGRS), 2022, p. 190.
  • [63] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
  • [64] F. Schroff, D. Kalenichenko, and J. Philbin, “FaceNet: A unified embedding for face recognition and clustering,” in Proc. IEEE Conf. Comput. Vision Pattern Recognit. (CVPR), Jun. 2015, pp. 815–823.
  • [65] B. Alharbi and H. S. Alshanbari, “Face-voice based multimodal biometric authentication system via FaceNet and GMM,” PeerJ Comput. Sci., vol. 9, p. e1468, 2023.
  • [66] P. P. Jena, K. N. Kattigenahally, S. Nikitha, S. Sarda, and H. Y., “Multimodal biometric authentication: Deep learning approach,” in Proc. 2021 Int. Conf. Circuits, Controls and Communications (CCUBE), 2021, pp. 1– 5.
  • [67] P. S. and S. S. V., “Security monitoring system using FaceNet for wireless sensor network,” arXiv preprint arXiv:2112.01305, 2021.
  • [68] A. F. S. Moura, S. S. L. Pereira, M. W. L. Moreira, and J. J. P. C. Rodrigues, “Video monitoring system using facial recognition: A FaceNet-based approach,” in Proc. GLOBECOM 2020 - IEEE Global Commun. Conf., 2020, pp. 1–6.
  • [69] P. B. Sri, K. Navya, K. Saiteja, P. Keerthi, S. Hariharan, and V. Kukreja, “Harnessing security improvements using FaceNet approach for face recognition system,” in Proc. 2024 IEEE 13th Int. Conf. Communication Systems and Network Technologies (CSNT), 2024, pp. 306–312.
  • [70] A. Chinapas, P. Polpinit, N. Intiruk, and K. R. Saikaew, “Personal verification system using ID card and face photo,” Int. J. Mach. Learn. Comput., vol. 9, no. 4, pp. 407–412, 2019.
  • [71] R. A. Asmara, B. Sayudha, M. Mentari, R. P. P. Budiman, A. N. Handayani, M. Ridwan, and P. P. Arhandi, “Face recognition using ArcFace and FaceNet in Google Cloud Platform for attendance system mobile application,” in Proc. 2022 Annu. Technol., Appl. Sci. and Eng. Conf. (ATASEC), 2022, pp. 134–144.
  • [72] M. Abdollahzadeh, T. Malekzadeh, C. T. H. Teo, K. Chandrasegaran, G. Liu, and N.-M. Cheung, “A survey on generative modeling with limited data, few shots, and zero shot,” arXiv preprint arXiv:2307.14397, 2023.
  • [73] C. Patrício and J. C. Neves, “Zero-shot face recognition: Improving the discriminability of visual face features using a semantic-guided attention model,” Expert Syst. Appl., vol. 211, p. 118635, 2023. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0957417422016803


Details

Primary Language: English
Subjects: Computer Software
Section: Research Article
Authors

Mehmet Özdem 0000-0002-2901-2342

Project Number: 5249902
Submission Date: 20 September 2025
Acceptance Date: 13 October 2025
Publication Date: 29 January 2026
Published Issue: Year 2026, Volume: 15, Issue: 2

How to Cite

APA Özdem, M. (2026). A New Approach for Standardized Zero-Shot Face Verification Using Siamese Neural Networks with Dynamic Thresholding. European Journal of Technique (EJT), 15(2), 145-157. https://doi.org/10.36222/ejt.1788087