Synthetic image generation has gained significant popularity, producing high-quality visuals that are difficult to distinguish from real photographs. As artificial intelligence models advance, computer-generated images have become increasingly realistic and, consequently, misleading. The ease with which synthetic images spread online has raised concerns about their potential misuse, making automated detection systems essential for safeguarding personal privacy, preventing manipulation, maintaining social order, and preserving the authenticity of images. This study compares lightweight and dense convolutional models on a real-versus-fake image classification task. In the first phase, the performance of lightweight models on the dataset is analyzed; in the second phase, dense models are assessed. A hybrid of the best-performing lightweight model, EfficientNetV2B0, and the top dense model, DenseNet201, reaches 88% accuracy. Moreover, a hybrid of the two most effective dense models, DenseNet121 and DenseNet201, achieves 89% accuracy on the test set. The experimental results indicate that DenseNet architectures, which excel at capturing fine-grained detail, are better suited to synthetic-image detection.
| Primary Language | English |
| --- | --- |
| Subjects | Computer Software, Software Engineering (Other) |
| Journal Section | Articles |
| Authors | |
| Publication Date | March 26, 2025 |
| Submission Date | August 9, 2024 |
| Acceptance Date | November 28, 2024 |
| Published in Issue | Year 2025, Volume 21, Issue 1 |