The energy consumption of deep learning models during training and inference has become an important performance indicator, especially for applications running on resource-constrained devices. Although computational costs differ significantly between architectures, studies that comprehensively compare the energy efficiency of models remain limited. In this study, six widely used models (MobileNetV2, EfficientNet-B0, ResNet50, DenseNet121, Xception, and VGG19) were trained and evaluated on the same dataset under identical experimental settings. Real-time power measurements were performed on an RTX 2070 GPU to calculate each model's total energy consumption during training and inference, average power draw, frames per second (FPS), and energy cost per image (J/image). The findings show that lightweight architectures are significantly more efficient: MobileNetV2 achieved the lowest inference energy consumption at 0.2289 J/image, while EfficientNet-B0 offered balanced performance in terms of accuracy and energy usage. In contrast, VGG19 stood out as the least efficient model due to its high power requirements. The results reveal that model architecture has a direct impact on energy consumption and that model selection plays a critical role in the design of sustainable artificial intelligence systems.
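The J/image metric described in the abstract can be sketched as follows: GPU power is sampled at fixed intervals during a run (e.g., via `nvidia-smi` or the NVML API), the samples are integrated over time to obtain total energy in joules, and the result is divided by the number of images processed. The function below is a minimal illustration of that calculation only; the sample values, interval, and image count are hypothetical, not measurements from the study.

```python
def energy_per_image(power_samples_w, interval_s, n_images):
    """Integrate power samples (watts) over time via the trapezoidal
    rule to get total energy in joules, then divide by image count."""
    if n_images <= 0 or len(power_samples_w) < 2:
        raise ValueError("need >= 2 samples and a positive image count")
    total_j = sum((a + b) / 2 * interval_s
                  for a, b in zip(power_samples_w, power_samples_w[1:]))
    return total_j / n_images

# Hypothetical power readings taken every 0.1 s while 100 images
# are run through a model (values in watts).
samples = [95.0, 110.0, 120.0, 118.0, 105.0]
print(round(energy_per_image(samples, 0.1, 100), 4))  # J/image
```

In practice the power trace would come from polling the GPU (for example `nvidia-smi --query-gpu=power.draw`), and the integration window would be aligned with the start and end of the training or inference loop.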
Energy consumption, Deep learning, Power profiling, Green AI, Model efficiency
Ethics committee approval was not required for this study because it did not involve human participants or animal subjects.
The author would like to thank the open-source contributors of the Imagenette dataset and the developers of TensorFlow and the associated deep learning libraries used in this study. No additional administrative, technical, or material support was received.
| Primary Language | English |
|---|---|
| Subjects | Sustainable Development and Information Systems for the Public Good |
| Section | Research Article |
| Authors | |
| Submission Date | January 7, 2026 |
| Acceptance Date | February 5, 2026 |
| Publication Date | March 15, 2026 |
| DOI | https://doi.org/10.34248/bsengineering.1858749 |
| IZ | https://izlik.org/JA66ND98FX |
| Published Issue | Year 2026 Volume: 9 Issue: 2 |