The energy consumption of deep learning models during training and inference has become an important performance indicator, especially for applications running on resource-constrained devices. Although computational costs differ significantly between architectures, studies that comprehensively compare the energy efficiency of models remain limited. In this study, six widely used models (MobileNetV2, EfficientNet-B0, ResNet50, DenseNet121, Xception, and VGG19) were trained and evaluated on the same dataset under identical experimental settings. Real-time power measurements were performed on an RTX 2070 GPU to calculate each model's total energy consumption during training and inference, average power draw, frames per second (FPS), and energy cost per image (J/image). The findings show that lightweight architectures are significantly more efficient: MobileNetV2 achieved the lowest inference energy consumption at 0.2289 J/image, while EfficientNet-B0 offered the most balanced trade-off between accuracy and energy usage. In contrast, VGG19 stood out as the least efficient model due to its high power requirements. The results reveal that model architecture has a direct impact on energy consumption and that model selection plays a critical role in the design of sustainable artificial intelligence systems.
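The J/image metric described in the abstract can be derived by integrating sampled GPU power over the measurement window and dividing by the number of processed images. The sketch below is an illustrative reconstruction, not the authors' actual code: it assumes power samples are collected at a fixed interval (in practice via GPU telemetry such as NVML/`nvidia-smi`) and integrates them with the trapezoidal rule.

```python
# Sketch: total energy and J/image from GPU power samples (assumed setup).
# In a real measurement, power_watts would come from GPU telemetry
# (e.g., NVML via pynvml); here synthetic samples stand in.

def energy_joules(power_watts, interval_s):
    """Integrate power samples (W) over time with the trapezoidal rule."""
    total = 0.0
    for p0, p1 in zip(power_watts, power_watts[1:]):
        total += 0.5 * (p0 + p1) * interval_s  # area of one trapezoid
    return total

def joules_per_image(power_watts, interval_s, n_images):
    """Energy cost per image over the measured batch."""
    return energy_joules(power_watts, interval_s) / n_images

# Hypothetical example: 5 samples at 0.1 s intervals while 100 images
# are classified.
samples = [60.0, 62.0, 61.0, 63.0, 60.0]
print(round(energy_joules(samples, 0.1), 2))        # total energy in J
print(round(joules_per_image(samples, 0.1, 100), 4))  # J/image
```

A shorter sampling interval reduces integration error when power fluctuates rapidly, which matters for short inference runs on lightweight models.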
Ethics committee approval was not required for this study because it did not involve human participants or animal subjects.
The author would like to thank the open-source contributors of the Imagenette dataset and the developers of TensorFlow and the associated deep learning libraries used in this study. No additional administrative, technical, or material support was received.
| Primary Language | English |
|---|---|
| Subjects | Information Systems For Sustainable Development and The Public Good |
| Journal Section | Research Article |
| Authors | |
| Submission Date | January 7, 2026 |
| Acceptance Date | February 5, 2026 |
| Publication Date | March 15, 2026 |
| DOI | https://doi.org/10.34248/bsengineering.1858749 |
| IZ | https://izlik.org/JA66ND98FX |
| Published in Issue | Year 2026 Volume: 9 Issue: 2 |