Research Article

Comparative Analysis of Open-Source Deep Learning Models in Terms of Energy Consumption, Computational Load, and Performance

Year 2026, Volume: 9, Issue: 2, 616-623, 15.03.2026
https://doi.org/10.34248/bsengineering.1858749
https://izlik.org/JA66ND98FX

Abstract

The energy consumption of deep learning models during training and inference has become an important performance indicator, especially for applications running on resource-constrained devices. Although computational costs differ significantly between architectures, studies that comprehensively compare the energy efficiency of models remain limited. In this study, six widely used models (MobileNetV2, EfficientNet-B0, ResNet50, DenseNet121, Xception, and VGG19) were trained and evaluated on the same dataset under identical experimental settings. Real-time power measurements were performed on an RTX 2070 GPU to calculate each model's total energy consumption during training and inference, average power draw, frames per second (FPS), and energy cost per image (J/image). The findings show that lightweight architectures are significantly more efficient: MobileNetV2 achieved the lowest inference energy consumption at 0.2289 J/image, while EfficientNet-B0 offered a balanced trade-off between accuracy and energy usage. In contrast, VGG19 stood out as the least efficient model due to its high power requirements. The results reveal that model architecture has a direct impact on energy consumption and that model selection plays a critical role in the design of sustainable artificial intelligence systems.
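The abstract does not specify how the per-image figures were computed. A minimal sketch of one common approach is shown below: poll instantaneous GPU power during a timed inference run (for NVIDIA GPUs, `pynvml.nvmlDeviceGetPowerUsage` reports board power in milliwatts) and integrate the samples over time. The function names, sampling scheme, and numbers are illustrative assumptions, not the author's actual tooling.

```python
def energy_from_samples(samples):
    """Trapezoidal integration of (timestamp_s, power_w) samples -> joules.

    In practice the samples could come from NVML: pynvml's
    nvmlDeviceGetPowerUsage(handle) returns milliwatts, so divide by 1000
    before appending (time.monotonic(), watts) pairs to the list.
    """
    joules = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        joules += 0.5 * (p0 + p1) * (t1 - t0)
    return joules


def efficiency_metrics(samples, n_images):
    """Derive the four reported quantities from one timed inference run."""
    total_j = energy_from_samples(samples)
    duration_s = samples[-1][0] - samples[0][0]
    return {
        "total_energy_j": total_j,   # energy over the whole run
        "avg_power_w": total_j / duration_s,
        "fps": n_images / duration_s,
        "j_per_image": total_j / n_images,
    }


# Illustrative run: a steady 100 W draw over 10 s while classifying 1000 images.
metrics = efficiency_metrics([(0.0, 100.0), (5.0, 100.0), (10.0, 100.0)], 1000)
print(metrics)  # j_per_image = 1.0, fps = 100.0, avg_power_w = 100.0
```

Trapezoidal integration is a reasonable choice here because NVML polling intervals are irregular in practice; averaging the raw watt readings and multiplying by the duration would bias the result whenever sampling is uneven.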

Ethics Statement

Ethics committee approval was not required for this study because it did not involve human participants or animal subjects.

Acknowledgments

The author would like to thank the open-source contributors of the Imagenette dataset and the developers of TensorFlow and the associated deep learning libraries used in this study. No additional administrative, technical, or material support was received.

References

  • Aquino-Brítez, S., García-Sánchez, P., Ortiz, A., & Aquino-Brítez, D. (2025). Towards an energy consumption index for deep learning models: A comparative analysis of architectures, GPUs, and measurement tools. Sensors, 25(3), 846.
  • Bouza, L., Bugeau, A., & Lannelongue, L. (2023). How to estimate carbon footprint when training deep learning models? A guide and review. Environmental Research Communications, 5(11), 115014.
  • Bozkurt, A. (2024). Stanford HAI yapay zekâ raporu incelemesi. Bilgi Yönetimi, 7(2), 445–457.
  • del Rey, S., Martínez-Fernández, S., Cruz, L., & Franch, X. (2023). Do DL models and training environments have an impact on energy consumption? 2023 49th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), 150–158.
  • Dey, S., Singh, A. K., Prasad, D. K., & McDonald-Maier, K. (2020). Temporal motionless analysis of video using CNN in MPSoC. 2020 IEEE 31st International Conference on Application-Specific Systems, Architectures and Processors (ASAP), 73–76.
  • Getzner, J., Charpentier, B., & Günnemann, S. (2023). Accuracy is not the only metric that matters: Estimating the energy consumption of deep learning models. arXiv. https://doi.org/10.48550/arXiv.2304.00897
  • Gowda, S. N., Hao, X., Li, G., Gowda, S. N., Jin, X., & Sevilla-Lara, L. (2024). Watt for what: Rethinking deep learning’s energy-performance relationship. European Conference on Computer Vision, 388–405.
  • Howard, J., & Gugger, S. (2020). Fastai: A layered API for deep learning. Information, 11(2), 108.
  • Ji, Z., & Jiang, M. (2026). A systematic review of electricity demand for large language models: Evaluations, challenges, and solutions. Renewable and Sustainable Energy Reviews, 225, 116159.
  • Karamchandani, A., Mozo, A., Gómez-Canaval, S., & Pastor, A. (2024). A methodological framework for optimizing the energy consumption of deep neural networks: A case study of a cyber threat detector. Neural Computing and Applications, 36(17), 10297–10338.
  • Latif, I., Newkirk, A. C., Carbone, M. R., Munir, A., Lin, Y., Koomey, J., Yu, X., & Dong, Z. (2024). Empirical measurements of AI training power demand on a GPU-accelerated node. arXiv. https://doi.org/10.48550/arXiv.2412.08602
  • Li, C., Tsourdos, A., & Guo, W. (2022). A transistor operations model for deep learning energy consumption scaling law. IEEE Transactions on Artificial Intelligence, 5(1), 192–204.
  • Mehlin, V., Schacht, S., & Lanquillon, C. (2023). Towards energy-efficient deep learning: An overview of energy-efficient approaches along the deep learning lifecycle. arXiv. https://doi.org/10.48550/arXiv.2303.01980
  • Qi, Y., Zhang, S., & Taha, T. M. (2022). TRIM: A design space exploration model for deep neural networks inference and training accelerators. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 42(5), 1648–1661.
  • Rodriguez, C., Degioanni, L., Kameni, L., Vidal, R., & Neglia, G. (2024). Evaluating the energy consumption of machine learning: Systematic literature review and experiments. arXiv. https://doi.org/10.48550/arXiv.2408.15128
  • Söyler, H. (2025). Gerze Meslek Yüksekokulu’nun karbon ayak izi hesaplaması, Monte Carlo analizi ve sürdürülebilirlik stratejileri. Journal of Anatolian Environmental and Animal Sciences, 10(4), 328–337.
  • Tang, Z., Wang, Y., Wang, Q., & Chu, X. (2019). The impact of GPU DVFS on the energy and performance of deep learning: An empirical study. Proceedings of the Tenth ACM International Conference on Future Energy Systems, 315–325.
  • Tripp, C. E., Perr-Sauer, J., Gafur, J., Nag, A., Purkayastha, A., Zisman, S., & Bensen, E. A. (2024). Measuring the energy consumption and efficiency of deep neural networks: An empirical analysis and design recommendations. arXiv. https://doi.org/10.48550/arXiv.2403.08151
  • Tu, X., Mallik, A., Chen, D., Han, K., Altintas, O., Wang, H., & Xie, J. (2023). Unveiling energy efficiency in deep learning: Measurement, prediction, and scoring across edge devices. Proceedings of the Eighth ACM/IEEE Symposium on Edge Computing, 80–93.
  • Tuğaç, Ç. (2023). Birleşmiş Milletler sürdürülebilir kalkınma amaçlarının gerçekleştirilmesinde yapay zekâ uygulamalarının rolü. Sayıştay Dergisi, 34(128), 1–25.
  • Wang, F., Zhang, W., Lai, S., Hao, M., & Wang, Z. (2021). Dynamic GPU energy optimization for machine learning training workloads. IEEE Transactions on Parallel and Distributed Systems, 33(11), 2943–2954.
  • Wang, Y., Hao, M., He, H., Zhang, W., Tang, Q., Sun, X., & Wang, Z. (2024). Drlcap: Runtime GPU frequency capping with deep reinforcement learning. IEEE Transactions on Sustainable Computing, 9(5), 712–726.
  • Xu, Y., Martínez-Fernández, S., Martinez, M., & Franch, X. (2023). Energy efficiency of training neural network architectures: An empirical study. arXiv. https://doi.org/10.48550/arXiv.2302.00967


Details

Primary Language: English
Subjects: Sustainable Development and Information Systems for the Public Good
Section: Research Article
Authors

Yasin Sancar 0000-0002-4200-1293

Submission Date: January 7, 2026
Acceptance Date: February 5, 2026
Publication Date: March 15, 2026
DOI: https://doi.org/10.34248/bsengineering.1858749
IZ: https://izlik.org/JA66ND98FX
Published in Issue: Year 2026, Volume: 9, Issue: 2

How to Cite

APA Sancar, Y. (2026). Comparative Analysis of Open-Source Deep Learning Models in Terms of Energy Consumption, Computational Load, and Performance. Black Sea Journal of Engineering and Science, 9(2), 616-623. https://doi.org/10.34248/bsengineering.1858749
AMA 1.Sancar Y. Comparative Analysis of Open-Source Deep Learning Models in Terms of Energy Consumption, Computational Load, and Performance. BSJ Eng. Sci. 2026;9(2):616-623. doi:10.34248/bsengineering.1858749
Chicago Sancar, Yasin. 2026. “Comparative Analysis of Open-Source Deep Learning Models in Terms of Energy Consumption, Computational Load, and Performance”. Black Sea Journal of Engineering and Science 9 (2): 616-23. https://doi.org/10.34248/bsengineering.1858749.
EndNote Sancar Y (March 1, 2026) Comparative Analysis of Open-Source Deep Learning Models in Terms of Energy Consumption, Computational Load, and Performance. Black Sea Journal of Engineering and Science 9 2 616–623.
IEEE [1]Y. Sancar, “Comparative Analysis of Open-Source Deep Learning Models in Terms of Energy Consumption, Computational Load, and Performance”, BSJ Eng. Sci., vol. 9, no. 2, pp. 616–623, Mar. 2026, doi: 10.34248/bsengineering.1858749.
ISNAD Sancar, Yasin. “Comparative Analysis of Open-Source Deep Learning Models in Terms of Energy Consumption, Computational Load, and Performance”. Black Sea Journal of Engineering and Science 9/2 (March 1, 2026): 616-623. https://doi.org/10.34248/bsengineering.1858749.
JAMA 1.Sancar Y. Comparative Analysis of Open-Source Deep Learning Models in Terms of Energy Consumption, Computational Load, and Performance. BSJ Eng. Sci. 2026;9:616–623.
MLA Sancar, Yasin. “Comparative Analysis of Open-Source Deep Learning Models in Terms of Energy Consumption, Computational Load, and Performance”. Black Sea Journal of Engineering and Science, vol. 9, no. 2, Mar. 2026, pp. 616-23, doi:10.34248/bsengineering.1858749.
Vancouver 1.Yasin Sancar. Comparative Analysis of Open-Source Deep Learning Models in Terms of Energy Consumption, Computational Load, and Performance. BSJ Eng. Sci. March 1, 2026;9(2):616-23. doi:10.34248/bsengineering.1858749
