Research Article

DeepFake Detection Using Fine-Tuned CNN Architectures

Year 2025, Volume: 21 Issue: 1, 121 - 128, 26.03.2025
https://doi.org/10.18466/cbayarfbe.1530209

Abstract

Synthetic images have become widespread, with generators producing visuals of such high quality that they are difficult to distinguish from real photographs. As artificial intelligence models advance, computer-generated images grow increasingly realistic and misleading, and their easy dissemination online has raised concerns about potential misuse. Automated detection systems are therefore essential for safeguarding personal privacy, preventing manipulation, maintaining social order, and preserving the authenticity of images. This study compares lightweight and dense convolutional models on a real-versus-fake classification task. The first phase analyzes the performance of lightweight models on the dataset; the second assesses dense models. Combining the best-performing lightweight model, EfficientNetV2B0, with the top dense model, DenseNet201, in a hybrid yields an accuracy of 88%, while a hybrid of the two most effective dense models, DenseNet121 and DenseNet201, achieves 89% on the test dataset. The experimental results indicate that DenseNet architectures, which excel at capturing fine details, are better suited to synthetic-image detection.
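The abstract describes hybridizing two fine-tuned CNN backbones (e.g. DenseNet121 and DenseNet201) for binary real/fake classification, but does not specify the fusion mechanism here. As an illustration only, the sketch below assumes a simple late-fusion ensemble that averages the two models' class probabilities; the function names and mock logits are hypothetical, not taken from the paper.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Row-wise softmax over class logits (numerically stabilized)."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def hybrid_predict(logits_a: np.ndarray, logits_b: np.ndarray) -> np.ndarray:
    """Late-fusion ensemble: average the two backbones' class
    probabilities, then take the argmax (0 = real, 1 = fake)."""
    probs = 0.5 * (softmax(logits_a) + softmax(logits_b))
    return probs.argmax(axis=1)

# Mock logits for 3 images from two backbones (real-vs-fake, 2 classes).
logits_a = np.array([[2.0, 0.1], [0.2, 1.5], [1.0, 1.2]])
logits_b = np.array([[1.8, 0.3], [0.1, 2.0], [1.4, 0.9]])
print(hybrid_predict(logits_a, logits_b))  # → [0 1 0]
```

Averaging probabilities (rather than raw logits) keeps each backbone's contribution on a common scale; other fusion strategies, such as concatenating feature vectors before a shared classification head, are equally plausible readings of "hybrid".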

References

  • [1]. Dang, L.M., Min, K., Wang, H., Piran, M.J., Lee, C.H. and Moon, H., 2020. Sensor-based and vision-based human activity recognition: A comprehensive survey. Pattern Recognition, 108, p.107561.
  • [2]. Corvi, R., Cozzolino, D., Zingarini, G., Poggi, G., Nagano, K. and Verdoliva, L., 2023, June. On the detection of synthetic images generated by diffusion models. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1-5). IEEE.
  • [3]. Bonettini, N., Bestagini, P., Milani, S. and Tubaro, S., 2021, January. On the use of Benford's law to detect GAN-generated images. In 2020 25th international conference on pattern recognition (ICPR) (pp. 5495-5502). IEEE.
  • [4]. Bird, J.J., Naser, A. and Lotfi, A., 2023. Writer-independent signature verification; Evaluation of robotic and generative adversarial attacks. Information Sciences, 633, pp.170-181.
  • [5]. Bayar, B. and Stamm, M.C., 2016, June. A deep learning approach to universal image manipulation detection using a new convolutional layer. In Proceedings of the 4th ACM workshop on information hiding and multimedia security (pp. 5-10).
  • [6]. Verdoliva, L., 2020. Media forensics and deepfakes: an overview. IEEE journal of selected topics in signal processing, 14(5), pp.910-932.
  • [7]. Fridrich, J. and Kodovsky, J., 2012. Rich models for steganalysis of digital images. IEEE Transactions on information Forensics and Security, 7(3), pp.868-882.
  • [8]. Zhang, X., Karaman, S. and Chang, S.F., 2019, December. Detecting and simulating artifacts in gan fake images. In 2019 IEEE international workshop on information forensics and security (WIFS) (pp. 1-6). IEEE.
  • [9]. Frank, J., Eisenhofer, T., Schönherr, L., Fischer, A., Kolossa, D. and Holz, T., 2020, November. Leveraging frequency analysis for deep fake image recognition. In International conference on machine learning (pp. 3247-3258). PMLR.
  • [10]. Wang, S.Y., Wang, O., Zhang, R., Owens, A. and Efros, A.A., 2020. CNN-generated images are surprisingly easy to spot... for now. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 8695-8704).
  • [11]. Guarnera, L., Giudice, O., & Battiato, S., 2020. Deepfake detection by analyzing convolutional traces. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops (pp. 666-667).
  • [12]. Somepalli, G., Singla, V., Goldblum, M., Geiping, J. and Goldstein, T., 2023. Diffusion art or digital forgery? investigating data replication in diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6048-6058).
  • [13]. Malolan, B., Parekh, A., & Kazi, F., 2020. Explainable deep-fake detection using visual interpretability methods. In 2020 3rd International conference on Information and Computer Technologies (ICICT) (pp. 289-293). IEEE.
  • [14]. Rössler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., & Nießner, M., 2018. Faceforensics: A large-scale video dataset for forgery detection in human faces. arXiv preprint arXiv:1803.09179.
  • [15]. Ranjan, P., Patil, S., & Kazi, F., 2020. Improved generalizability of deep-fakes detection using transfer learning based CNN framework. In 2020 3rd international conference on information and computer technologies (ICICT) (pp. 86-90). IEEE.
  • [16]. Dufour, N., & Gully, A. 2024. Contributing data to deepfake detection research. https://research.google/blog/contributing-data-to-deepfake-detection-research/
  • [17]. Li, Y., Yang, X., Sun, P., Qi, H. and Lyu, S., 2020. Celeb-df: A large-scale challenging dataset for deepfake forensics. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 3207-3216).
  • [18]. Dolhansky, B., 2019. The deepfake detection challenge (DFDC) preview dataset. arXiv preprint arXiv:1910.08854.
  • [19]. Nida, N., Irtaza, A., & Ilyas, N. 2021. Forged face detection using ELA and deep learning techniques. In 2021 International Bhurban Conference on Applied Sciences and Technologies (IBCAST) (pp. 271-275). IEEE.
  • [20]. Real and fake face detection, https://www.kaggle.com/ciplab/real-and-fake-face-detection (2024).
  • [21]. Bird, J.J. and Lotfi, A., 2024. CIFAKE: Image classification and explainable identification of AI-generated synthetic images. IEEE Access.
  • [22]. The CIFAR-10 dataset, https://www.cs.toronto.edu/~kriz/cifar.html (accessed at 13.03.2024).
  • [23]. Howard, A.G., 2017. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
  • [24]. Huang, G., Liu, Z., Van Der Maaten, L. and Weinberger, K.Q., 2017. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4700-4708).
  • [25]. Tan, M. and Le, Q., 2021, July. Efficientnetv2: Smaller models and faster training. In International conference on machine learning (pp. 10096-10106). PMLR.
  • [26]. Wang, W., Li, Y., Zou, T., Wang, X., You, J., & Luo, Y. (2020). A novel image classification approach via dense‐MobileNet models. Mobile Information Systems, 2020(1), 7602384.
  • [27]. Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L. C. (2018). Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4510-4520).
  • [28]. Howard, A., Sandler, M., Chu, G., Chen, L. C., Chen, B., Tan, M., & Adam, H. (2019). Searching for mobilenetv3. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 1314-1324).
  • [29]. Bozkurt, F. (2021). Classification of blood cells from blood cell images using dense convolutional network. Journal of Science, Technology and Engineering Research, 2(2), 81-88.
  • [30]. Zheng, T., Yang, X., Lv, J., Li, M., Wang, S. and Li, W., 2023. An efficient mobile model for insect image classification in the field pest management. Engineering Science and Technology, an International Journal, 39, p.101335.
  • [31]. Sandler, M., Howard, A., Zhu, M., Zhmoginov, A. and Chen, L.C., 2018. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4510-4520).
  • [32]. Gupta, S. and Akin, B., 2020. Accelerator-aware neural network design using automl. arXiv preprint arXiv:2003.02838.
There are 32 citations in total.

Details

Primary Language English
Subjects Computer Software, Software Engineering (Other)
Journal Section Articles
Authors

Dilber Çetintaş 0000-0003-0710-2280

Zehra Yücel 0000-0002-2863-9119

Publication Date March 26, 2025
Submission Date August 9, 2024
Acceptance Date November 28, 2024
Published in Issue Year 2025 Volume: 21 Issue: 1

Cite

APA Çetintaş, D., & Yücel, Z. (2025). DeepFake Detection Using Fine-Tuned CNN Architectures. Celal Bayar University Journal of Science, 21(1), 121-128. https://doi.org/10.18466/cbayarfbe.1530209
AMA Çetintaş D, Yücel Z. DeepFake Detection Using Fine-Tuned CNN Architectures. CBUJOS. March 2025;21(1):121-128. doi:10.18466/cbayarfbe.1530209
Chicago Çetintaş, Dilber, and Zehra Yücel. “DeepFake Detection Using Fine-Tuned CNN Architectures”. Celal Bayar University Journal of Science 21, no. 1 (March 2025): 121-28. https://doi.org/10.18466/cbayarfbe.1530209.
EndNote Çetintaş D, Yücel Z (March 1, 2025) DeepFake Detection Using Fine-Tuned CNN Architectures. Celal Bayar University Journal of Science 21 1 121–128.
IEEE D. Çetintaş and Z. Yücel, “DeepFake Detection Using Fine-Tuned CNN Architectures”, CBUJOS, vol. 21, no. 1, pp. 121–128, 2025, doi: 10.18466/cbayarfbe.1530209.
ISNAD Çetintaş, Dilber - Yücel, Zehra. “DeepFake Detection Using Fine-Tuned CNN Architectures”. Celal Bayar University Journal of Science 21/1 (March 2025), 121-128. https://doi.org/10.18466/cbayarfbe.1530209.
JAMA Çetintaş D, Yücel Z. DeepFake Detection Using Fine-Tuned CNN Architectures. CBUJOS. 2025;21:121–128.
MLA Çetintaş, Dilber and Zehra Yücel. “DeepFake Detection Using Fine-Tuned CNN Architectures”. Celal Bayar University Journal of Science, vol. 21, no. 1, 2025, pp. 121-8, doi:10.18466/cbayarfbe.1530209.
Vancouver Çetintaş D, Yücel Z. DeepFake Detection Using Fine-Tuned CNN Architectures. CBUJOS. 2025;21(1):121-8.