This study delves into energy-efficient training strategies, emphasizing their alignment with green computing principles. In particular, it highlights the utility of early stopping mechanisms in optimizing the training of deep learning models. Early stopping works by monitoring performance metrics such as validation accuracy or loss and halting training once these metrics stabilize or show no improvement over a predefined number of epochs. This approach eliminates redundant computation, yielding significant reductions in energy consumption and computational cost while preserving model accuracy. The research centers on transfer learning models, specifically MobileNetV2, InceptionV3, ResNet50V2, and Xception, which are well regarded for their versatility and performance in image classification tasks. By systematically varying patience values (3, 5, 7, 10, and 15), the study explores their impact on training duration, model accuracy, and computational efficiency. Each patience value determines how many epochs training continues without improvement before stopping, allowing a nuanced examination of its effects across different architectures. The findings reveal that early stopping not only streamlines the training process but also aligns with the broader goals of sustainable artificial intelligence development. By balancing computational efficiency with performance optimization, this strategy exemplifies how environmentally responsible practices can be integrated into AI workflows. The study contributes insights into how such techniques can mitigate the environmental impact of AI model training, underscoring their importance for advancing green computing initiatives.
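To make the patience mechanism concrete, the following is a minimal Keras sketch of early stopping in a transfer learning setup. It is illustrative only: the random stand-in data, the class count, and the patience value of 5 are assumptions, not the study's actual configuration.

```python
import tensorflow as tf

NUM_CLASSES = 10  # placeholder class count, not from the study

# Pre-trained MobileNetV2 backbone, frozen for feature extraction.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-in data so the sketch runs end to end; replace with a real
# image classification dataset in practice.
x = tf.random.uniform((32, 224, 224, 3))
y = tf.random.uniform((32,), maxval=NUM_CLASSES, dtype=tf.int32)
train_ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(8)
val_ds = train_ds.take(1)

# Early stopping: halt once val_loss fails to improve for `patience`
# consecutive epochs, then roll back to the best weights seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)

model.fit(train_ds, validation_data=val_ds,
          epochs=100, callbacks=[early_stop])
```

With `restore_best_weights=True`, the epochs spent after the best checkpoint cost only the patience window, and the weights that are kept are the best observed, which is how early stopping trims wasted computation without sacrificing accuracy.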
| Field | Value |
|---|---|
| Primary Language | English |
| Subjects | Information Systems For Sustainable Development and The Public Good, Artificial Intelligence (Other) |
| Journal Section | Research Article |
| Authors | |
| Early Pub Date | January 11, 2025 |
| Publication Date | January 17, 2025 |
| Submission Date | December 1, 2024 |
| Acceptance Date | December 25, 2024 |
| Published in Issue | Year 2024 Volume: 2 Issue: 2 |