Research Article

Auxiliary Learning of Non-Monotonic Hyperparameter Scheduling System Via Grid Search

Year 2022, Volume: 5, Issue: 2, 168-177, 21.09.2022
https://doi.org/10.38016/jista.1153108

Abstract

Recent advances in neural networks have given rise to new adaptive learning strategies. Conventional learning strategies suffer from several issues, such as slow convergence and a lack of robustness, and these issues must be resolved before the full potential of neural networks can be exploited. Both issues are related to the step size and the momentum term, which are generally fixed and remain uniform for all weights in each network layer. In this study, the recently published Back-Propagation Algorithm with Variable Adaptive Momentum (BPVAM) is applied to overcome these issues and improve classification performance. Candidate hyperparameter values were explored with a grid search, and the models were then trained with the optimal values found. Six cases with different hyperparameter values were considered to evaluate their impact on the trained models. The experiments show empirically that the convergence behavior of the model improves in terms of the mean and standard deviation of both accuracy and the sum of squared errors (SSE). A comprehensive set of experiments indicates that BPVAM is a robust and highly efficient algorithm.
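
The grid-search procedure described above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example rather than the authors' implementation: it sweeps a small grid of step-size and momentum values, trains a one-hidden-layer back-propagation network for each combination, and reports the SSE and accuracy used to select the best setting. The toy dataset, the grid values, and the plain fixed-momentum update (standing in for BPVAM's variable adaptive momentum) are all assumptions made for illustration.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class dataset (a stand-in for the benchmark datasets used in the paper).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

def train_mlp(lr, momentum, epochs=200, hidden=8):
    """Train a one-hidden-layer network with back-propagation plus a momentum term;
    return the final sum of squared errors (SSE) and training accuracy."""
    W1 = rng.normal(scale=0.1, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.1, size=(hidden, 1))
    V1, V2 = np.zeros_like(W1), np.zeros_like(W2)   # velocity terms for momentum
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sigmoid(X @ W1)                          # forward pass
        out = sigmoid(H @ W2)
        err = out - y                                # output error
        d_out = err * out * (1.0 - out)              # back-propagated deltas
        g2 = H.T @ d_out
        g1 = X.T @ ((d_out @ W2.T) * H * (1.0 - H))
        # Fixed-momentum update; BPVAM instead adapts the momentum term per weight.
        V2 = momentum * V2 - lr * g2
        V1 = momentum * V1 - lr * g1
        W2 += V2
        W1 += V1
    sse = float(np.sum((out - y) ** 2))
    acc = float(np.mean((out > 0.5) == y))
    return sse, acc

# Candidate hyperparameter values (assumed grid, for illustration only).
grid = {"lr": [0.01, 0.05, 0.1], "momentum": [0.5, 0.7, 0.9]}

best = None
for lr, mom in itertools.product(grid["lr"], grid["momentum"]):
    sse, acc = train_mlp(lr, mom)
    print(f"lr={lr:<5} momentum={mom:<4} SSE={sse:8.3f} accuracy={acc:.3f}")
    if best is None or sse < best[0]:
        best = (sse, acc, lr, mom)

print("Best setting by SSE (SSE, accuracy, lr, momentum):", best)
```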

References

  • Asuncion, A., Newman, D.J., 2007. UCI Machine Learning Repository, Department of Information and Computer Sciences, University of California, Irvine. Available at www.ics.uci.edu/~mlearn/MLRepository.html.
  • Bengio, Y., Simard, P., Frasconi, P., 1994. Learning Long-Term Dependencies with Gradient Descent is Difficult. IEEE Transactions on Neural Networks 5, 157–166. https://doi.org/10.1109/72.279181
  • Demircan Keskin, F., Çiçekli, U., İçli, D., 2022. Prediction of Failure Categories in Plastic Extrusion Process with Deep Learning. Journal of Intelligent Systems: Theory and Applications, 5(1), 27–34. https://doi.org/10.38016/jista.878854
  • Duchi, J., Hazan, E., Singer, Y., 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of machine learning research, 12(7).
  • Erol, B.A., Majumdar, A., Lwowski, J., Benavidez, P., Rad, P., Jamshidi, M., 2018. Improved deep neural network object tracking system for applications in home robotics, in: Studies in Computational Intelligence. Springer Verlag, pp. 369–395. https://doi.org/10.1007/978-3-319-89629-8_14
  • Gemirter, C. B., Goularas, D., 2021. A Turkish Question Answering System Based on Deep Learning Neural Networks. Journal of Intelligent Systems: Theory and Applications, 4(2), 65–75. https://doi.org/10.38016/jista.815823
  • Glorot, X., Bengio, Y., 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249-256.
  • Güney, E., Çakmak, O., Kocaman, Ç., 2022. Classification of Stockwell Transform Based Power Quality Disturbance with Support Vector Machine and Artificial Neural Networks. Journal of Intelligent Systems: Theory and Applications, 5(1), 75–84. https://doi.org/10.38016/jista.996541
  • Hameed, A.A., Karlik, B., Salman, M.S., 2016. Back-propagation algorithm with variable adaptive momentum. Knowledge-Based Systems 114, 79–87. https://doi.org/10.1016/j.knosys.2016.10.001
  • He, K., Zhang, X., Ren, S., Sun, J., 2015. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026-1034.
  • Hertel, L., Collado, J., Sadowski, P., Ott, J., Baldi, P., 2020. Sherpa: Robust hyperparameter optimization for machine learning. SoftwareX, 12, 100591.
  • Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., Salakhutdinov, R. R., 2012. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580.
  • Houssein, E.H., Emam, M.M., Ali, A.A., Suganthan, P.N., 2021. Deep and machine learning techniques for medical imaging-based breast cancer: A comprehensive review. Expert Systems with Applications, 167. https://doi.org/10.1016/j.eswa.2020.114161
  • Ioffe, S., Szegedy, C., 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning, pp. 448-456.
  • Jagtap, A.D., Kawaguchi, K., Karniadakis, G.E., 2020. Adaptive activation functions accelerate convergence in deep and physics-informed neural networks. Journal of Computational Physics 404. https://doi.org/10.1016/j.jcp.2019.109136
  • Jain, D.K., Shamsolmoali, P., Sehdev, P., 2019. Extended deep neural network for facial emotion recognition. Pattern Recognition Letters 120, 69–74. https://doi.org/10.1016/j.patrec.2019.01.008
  • Jain, N., Kumar, S., Kumar, A., Shamsolmoali, P., Zareapoor, M., 2018. Hybrid deep neural networks for face emotion recognition. Pattern Recognition Letters 115, 101–106. https://doi.org/10.1016/j.patrec.2018.04.010
  • Jin, J., Zhu, J., Gong, J., Chen, W., 2022. Novel activation functions-based ZNN models for fixed-time solving dynamic Sylvester equation. Neural Computing and Applications, 1-19. https://doi.org/10.1007/s00521-022-06905-2
  • Klambauer, G., Unterthiner, T., Mayr, A., Hochreiter, S., 2017. Self-Normalizing Neural Networks, Advances in neural information processing systems, 30.
  • Krizhevsky, A., Sutskever, I., Hinton, G.E., 2012. ImageNet Classification with Deep Convolutional Neural Networks. Advances in neural information processing systems, 25.
  • Li, Z., Arora, S., 2019. An Exponential Learning Rate Schedule for Deep Learning. arXiv preprint arXiv:1910.07454.
  • Liu, M., Chen, L., Du, X., Jin, L., Shang, M., 2021. Activated Gradients for Deep Neural Networks. IEEE Transactions on Neural Networks and Learning Systems 1–13. https://doi.org/10.1109/tnnls.2021.3106044
  • Mestres, A., Rodriguez-Natal, A., Carner, J., Barlet-Ros, P., Alarcón, E., Solé, M., Muntés-Mulero, V., Meyer, D., Barkai, S., Hibbett, M.J., Estrada, G., Ma’ruf, K., Coras, F., Ermagan, V., Latapie, H., Cassar, C., Evans, J., Maino, F., Walrand, J., Cabellos, A., 2017. Knowledge-defined networking. Computer Communication Review 47, 1–10. https://doi.org/10.1145/3138808.3138810
  • Nair, V., Hinton, G. E., 2010. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML).
  • Park, J., Yi, D., Ji, S., 2020. A novel learning rate schedule in optimization for neural networks and its convergence. Symmetry (Basel) 12. https://doi.org/10.3390/SYM12040660
  • Patel, K., Rambach, K., Visentin, T., Rusev, D., Pfeiffer, M., Yang, B., 2019. Deep learning-based object classification on automotive radar spectra, in: 2019 IEEE Radar Conference, RadarConf 2019. Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/RADAR.2019.8835775
  • Rahman, M.M., Tan, Y., Xue, J., Lu, K., 2020. Notice of Removal: Recent Advances in 3D Object Detection in the Era of Deep Neural Networks: A Survey. IEEE Transactions on Image Processing. https://doi.org/10.1109/TIP.2019.2955239
  • Reddi, S.J., Kale, S., Kumar, S., 2019. On the Convergence of Adam and Beyond. arXiv preprint arXiv:1904.09237.
  • Rumelhart, D. E., Hinton, G. E., Williams, R. J., 1986. Learning representations by back-propagating errors. Nature, 323(6088), 533-536.
  • Sandha, S. S., Aggarwal, M., Fedorov, I., Srivastava, M. 2020. Mango: A python library for parallel hyperparameter tuning. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3987-3991.
  • Sarvamangala, D.R., Kulkarni, R.V., 2022. Convolutional neural networks in medical image understanding: a survey. Evolutionary Intelligence, 1-22. https://doi.org/10.1007/s12065-020-00540-3
  • Seong, S., Lee, Y., Kee, Y., Han, D., Kim, J., 2018. Towards Flatter Loss Surface via Nonmonotonic Learning Rate Scheduling, In UAI.
  • Sharma, N., Jain, V., Mishra, A., 2018. An Analysis of Convolutional Neural Networks for Image Classification, in: Procedia Computer Science. Elsevier B.V., pp. 377–384. https://doi.org/10.1016/j.procs.2018.05.198
  • Smith, L.N., Topin, N., 2019. Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates. In Artificial intelligence and machine learning for multi-domain operations applications, Vol. 11006, pp. 369-386.
  • Sohl-Dickstein, J., Poole, B., Ganguli, S., 2014. Fast large-scale optimization by unifying stochastic gradient and quasi-Newton methods. In International Conference on Machine Learning, pp. 604-612.
  • Srivastava, N., Hinton, G., Krizhevsky, A., Salakhutdinov, R., 2014. Dropout: A Simple Way to Prevent Neural Networks from Overfitting, Journal of Machine Learning Research. 15(1), 1929-1958.
  • Sun, J., Yang, Y., Xun, G., Zhang, A., 2022. Scheduling Hyperparameters to Improve Generalization: From Centralized SGD to Asynchronous SGD. ACM Transactions on Knowledge Discovery from Data (TKDD). https://dl.acm.org/doi/pdf/10.1145/3544782.
  • Sutskever, I., Martens, J., Dahl, G., Hinton, G., 2013. On the importance of initialization and momentum in deep learning. In International conference on machine learning, pp. 1139-1147.
  • Tong, Q., Liang, G., Bi, J., 2022. Calibrating the adaptive learning rate to improve convergence of ADAM. Neurocomputing 481, 333–356. https://doi.org/10.1016/j.neucom.2022.01.014
  • Xue, Y., Tong, Y., Neri, F., 2022. An ensemble of differential evolution and Adam for training feed-forward neural networks. Information Sciences, 608, 453-471. https://doi.org/10.1016/j.ins.2022.06.036
  • Yan, Z., Chen, J., Hu, R., Huang, T., Chen, Y., Wen, S., 2020. Training memristor-based multilayer neuromorphic networks with SGD, momentum and adaptive learning rates. Neural Networks 128, 142–149. https://doi.org/10.1016/j.neunet.2020.04.025
  • Yang, X., 2021. Kalman optimizer for consistent gradient descent, in: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Institute of Electrical and Electronics Engineers Inc., pp. 3900–3904. https://doi.org/10.1109/ICASSP39728.2021.9414588
  • Yi, D., Ahn, J., Ji, S., 2020. An effective optimization method for machine learning based on ADAM. Applied Sciences (Switzerland) 10. https://doi.org/10.3390/app10031073
  • Yu, H., Yang, L.T., Zhang, Q., Armstrong, D., Deen, M.J., 2021. Convolutional neural networks for medical image analysis: State-of-the-art, comparisons, improvement and perspectives. Neurocomputing 444, 92–110. https://doi.org/10.1016/j.neucom.2020.04.157


Details

Primary Language: English
Subjects: Artificial Intelligence
Section: Research Article
Authors

Alaa Ali Hameed 0000-0002-8514-9255

Early View Date: June 14, 2022
Publication Date: September 21, 2022
Submission Date: August 3, 2022
Published Issue: Year 2022, Volume: 5, Issue: 2

Cite

APA Hameed, A. A. (2022). Auxiliary Learning of Non-Monotonic Hyperparameter Scheduling System Via Grid Search. Journal of Intelligent Systems: Theory and Applications, 5(2), 168-177. https://doi.org/10.38016/jista.1153108
AMA Hameed AA. Auxiliary Learning of Non-Monotonic Hyperparameter Scheduling System Via Grid Search. jista. September 2022;5(2):168-177. doi:10.38016/jista.1153108
Chicago Hameed, Alaa Ali. “Auxiliary Learning of Non-Monotonic Hyperparameter Scheduling System Via Grid Search”. Journal of Intelligent Systems: Theory and Applications 5, no. 2 (September 2022): 168-77. https://doi.org/10.38016/jista.1153108.
EndNote Hameed AA (September 1, 2022) Auxiliary Learning of Non-Monotonic Hyperparameter Scheduling System Via Grid Search. Journal of Intelligent Systems: Theory and Applications 5 2 168–177.
IEEE A. A. Hameed, “Auxiliary Learning of Non-Monotonic Hyperparameter Scheduling System Via Grid Search”, jista, vol. 5, no. 2, pp. 168–177, 2022, doi: 10.38016/jista.1153108.
ISNAD Hameed, Alaa Ali. “Auxiliary Learning of Non-Monotonic Hyperparameter Scheduling System Via Grid Search”. Journal of Intelligent Systems: Theory and Applications 5/2 (September 2022), 168-177. https://doi.org/10.38016/jista.1153108.
JAMA Hameed AA. Auxiliary Learning of Non-Monotonic Hyperparameter Scheduling System Via Grid Search. jista. 2022;5:168–177.
MLA Hameed, Alaa Ali. “Auxiliary Learning of Non-Monotonic Hyperparameter Scheduling System Via Grid Search”. Journal of Intelligent Systems: Theory and Applications, vol. 5, no. 2, 2022, pp. 168-77, doi:10.38016/jista.1153108.
Vancouver Hameed AA. Auxiliary Learning of Non-Monotonic Hyperparameter Scheduling System Via Grid Search. jista. 2022;5(2):168-77.

Journal of Intelligent Systems: Theory and Applications