Convolutional Neural Networks (CNNs) used for image classification often have complex architectures involving large images, time-consuming training, and a large number of layers and hyperparameters. Improving the accuracy of a CNN is therefore a challenging process that requires time, resources, and specialized knowledge. In this study, to improve the performance of CNN models, experiments were conducted on the MNIST, EMNIST, and Fashion-MNIST datasets using different optimization algorithms and a loss function (Si-CL) from the literature. The findings reveal in detail the effects of loss functions and optimization algorithms on model performance. Among the SGDM, Adam, RMSProp, AdaMax, AdaDelta, and AdaGrad optimization algorithms examined, Adam performed best in terms of both training and test accuracy. The SGDM algorithm was particularly effective at larger batch sizes and low learning rates, but required longer training times than Adam. The Si-CL loss function outperformed the traditional cross-entropy loss: the model trained with it achieved higher training and test accuracy, a shorter training time, and a lower loss value, allowing it to learn faster and more efficiently.
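As a minimal sketch of the optimizers compared above, the following stdlib-only Python code applies the standard single-parameter update rules of SGDM, AdaGrad, RMSProp, AdaDelta, AdaMax, and Adam to the toy objective f(w) = w², whose gradient is 2w. The hyperparameters are common illustrative defaults, not the settings used in the study, and the Si-CL loss is not reproduced since its definition is not given here.

```python
from math import sqrt

def run(update, steps=200, w0=5.0):
    """Minimize f(w) = w**2 from w0 using the given update rule."""
    w, state = w0, {}
    for t in range(1, steps + 1):
        g = 2.0 * w                 # gradient of f(w) = w**2
        w = update(w, g, state, t)
    return w

def sgdm(w, g, s, t, lr=0.1, mu=0.9):
    # SGD with momentum: accumulate a velocity, step along it.
    s['v'] = mu * s.get('v', 0.0) + g
    return w - lr * s['v']

def adagrad(w, g, s, t, lr=0.5, eps=1e-8):
    # Per-parameter step shrinks with the running sum of squared gradients.
    s['G'] = s.get('G', 0.0) + g * g
    return w - lr * g / (sqrt(s['G']) + eps)

def rmsprop(w, g, s, t, lr=0.1, rho=0.9, eps=1e-8):
    # Like AdaGrad, but with an exponential moving average of g**2.
    s['E'] = rho * s.get('E', 0.0) + (1 - rho) * g * g
    return w - lr * g / (sqrt(s['E']) + eps)

def adadelta(w, g, s, t, rho=0.95, eps=1e-6):
    # No global learning rate: step size adapts from past updates.
    s['E'] = rho * s.get('E', 0.0) + (1 - rho) * g * g
    dw = -sqrt(s.get('D', 0.0) + eps) / sqrt(s['E'] + eps) * g
    s['D'] = rho * s.get('D', 0.0) + (1 - rho) * dw * dw
    return w + dw

def adamax(w, g, s, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # Adam variant using the infinity norm of past gradients.
    s['m'] = b1 * s.get('m', 0.0) + (1 - b1) * g
    s['u'] = max(b2 * s.get('u', 0.0), abs(g))
    return w - (lr / (1 - b1 ** t)) * s['m'] / (s['u'] + eps)

def adam(w, g, s, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # First and second moment estimates with bias correction.
    s['m'] = b1 * s.get('m', 0.0) + (1 - b1) * g
    s['v'] = b2 * s.get('v', 0.0) + (1 - b2) * g * g
    m_hat = s['m'] / (1 - b1 ** t)
    v_hat = s['v'] / (1 - b2 ** t)
    return w - lr * m_hat / (sqrt(v_hat) + eps)

for name, upd in [('SGDM', sgdm), ('AdaGrad', adagrad),
                  ('RMSProp', rmsprop), ('AdaDelta', adadelta),
                  ('AdaMax', adamax), ('Adam', adam)]:
    print(f"{name:8s} final w = {run(upd):+.6f}")
```

On this convex toy problem all six rules drive w toward the minimum at 0; of course, their relative rankings on MNIST-scale CNNs depend on the loss surface, batch size, and learning rate, which is what the study's experiments measure.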
| Primary Language | English |
|---|---|
| Subjects | Deep Learning, Neural Networks |
| Journal Section | Research Article |
| Authors | |
| Submission Date | December 12, 2025 |
| Acceptance Date | March 2, 2026 |
| Publication Date | March 31, 2026 |
| DOI | https://doi.org/10.54287/gujsa.1840916 |
| IZ | https://izlik.org/JA49WR65MM |
| Published in Issue | Year 2026 Volume: 13 Issue: 1 |