Research Article
Year 2019, Volume: 2 Issue: 2, 71 - 79, 24.02.2020


A Preconditioned Unconstrained Optimization Method for Training Multilayer Feed-forward Neural Network


Abstract

Non-linear unconstrained optimization methods are excellent neural network training methods, characterized by their simplicity and efficiency. In this paper, we propose a new preconditioned conjugate gradient neural-network training algorithm which guarantees the descent property under the standard Wolfe conditions. Encouraging numerical experiments verify that the proposed algorithm provides fast and stable convergence.
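The two ingredients named in the abstract, a preconditioned conjugate gradient direction and a Wolfe line search, can be sketched generically. The code below is an illustrative stand-in, not the authors' algorithm: the tiny network, the running squared-gradient diagonal preconditioner, and the Fletcher-Reeves-style beta are all assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 1-hidden-layer network: 2 inputs -> 3 hidden (tanh) -> 1 output.
X = rng.normal(size=(50, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]          # smooth synthetic target

def unpack(w):
    W1 = w[:6].reshape(2, 3); b1 = w[6:9]
    W2 = w[9:12].reshape(3, 1); b2 = w[12:13]
    return W1, b1, W2, b2

def loss_and_grad(w):
    """Mean-squared training error and its gradient via backpropagation."""
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    r = (h @ W2 + b2).ravel() - y            # residuals
    loss = 0.5 * np.mean(r ** 2)
    g_out = (r / len(y))[:, None]            # dLoss/d(output)
    gW2 = h.T @ g_out; gb2 = g_out.sum(0)
    g_h = (g_out @ W2.T) * (1 - h ** 2)      # backprop through tanh
    gW1 = X.T @ g_h; gb1 = g_h.sum(0)
    return loss, np.concatenate([gW1.ravel(), gb1, gW2.ravel(), gb2])

def wolfe_step(w, d, f0, g0, c1=1e-4, c2=0.9):
    """Backtracking search for a step satisfying the weak Wolfe conditions."""
    alpha, slope0 = 1.0, g0 @ d
    for _ in range(30):
        f1, g1 = loss_and_grad(w + alpha * d)
        if f1 <= f0 + c1 * alpha * slope0 and g1 @ d >= c2 * slope0:
            return alpha
        alpha *= 0.5
    return alpha

w = rng.normal(scale=0.5, size=13)
f, g = loss_and_grad(w)
f_init = f
M = np.ones_like(w)                          # diagonal preconditioner estimate
d = -g / M
for it in range(200):
    alpha = wolfe_step(w, d, f, g)
    f_new, g_new = loss_and_grad(w + alpha * d)
    if not f_new < f:                        # line search made no progress; stop
        break
    w = w + alpha * d
    if np.linalg.norm(g_new) < 1e-10:        # converged
        f, g = f_new, g_new
        break
    M = 0.9 * M + 0.1 * (g_new ** 2 + 1e-8)  # running squared-gradient scale
    z_new = g_new / M
    beta = max(0.0, (g_new @ z_new) / (g @ (g / M)))  # preconditioned FR-style
    d = -z_new + beta * d
    if g_new @ d >= 0.0:                     # safeguard: restart on non-descent
        d = -z_new
    f, g = f_new, g_new

print(f"loss: {f_init:.4f} -> {f:.4f}")
```

The safeguard restart and the strict-decrease check keep the sketch monotone even when the backtracking search fails to satisfy the curvature condition; a full implementation of a descent-guaranteeing method would instead enforce the conditions analytically, as the paper does.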

References

  • L. E. Achenie, Computational experience with a quasi-Newton method based training of feed-forward neural networks, Proc. World Congress on Neural Networks, San Diego, 1994, III607-III612.
  • A. Antoniou and W.-S. Lu, Practical optimization: algorithms and engineering applications, Springer US, 2007, 1-26.
  • R. Battiti, First- and second-order methods for learning: between steepest descent and Newton's method, Neural Comput., 4(2) 1992, 141-166.
  • R. Battiti and F. Masulli, BFGS optimization for faster and automated supervised learning, In International neural network conference, Springer, Dordrecht, 1990.
  • C. M. Bishop, Neural networks for pattern recognition, Oxford University Press, 1995.
  • E. Birgin and J. Martinez, A spectral conjugate gradient method for unconstrained optimization, Appl. Math. Opt., 43(2) 2001, 117-128.
  • C. Charalambous, Conjugate gradient algorithm for efficient training of artificial neural networks, IEE Proceedings G (Circuits, Devices and Systems), 139(3) 1992, 301-310.
  • R. Fletcher and C.M. Reeves, Function minimization by conjugate gradients, Comput. J., 7(2) 1964, 149-154.
  • L. Gong, C. Liu, Y. Li and Y. Fuqing, Training feed-forward neural networks using the gradient descent method with the optimal stepsize, Journal of Computational Information Systems, 8(4) 2012, 1359-1371.
  • S. Haykin, Neural networks: a comprehensive foundation, Prentice Hall PTR, 1994.
  • J. Hertz, A. Krogh and R.G. Palmer, Introduction to the theory of neural computation, Addison-Wesley Longman, 1991.
  • M.R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, Journal of Research of the National Bureau of Standards, 49(6) 1952, 409-436.
  • R.A. Jacobs, Increased rates of convergence through learning rate adaptation, Neural networks, 1(4) 1988, 295-307.
  • K. Abbo and M. Hind, Improving the learning rate of the back-propagation algorithm by Aitken process, Iraqi J. of Statistical Sci., 2012.
  • A. E. Kostopoulos, D. G. Sotiropoulos and T. N. Grapsa, A new efficient variable learning rate for Perry’s spectral conjugate gradient training method, 2004.
  • I. E. Livieris and P. Pintelas, An advanced conjugate gradient training algorithm based on a modified secant equation, ISRN Artificial Intelligence, 2011.
There are 16 references in total.

Details

Primary Language English
Subjects Mathematics
Section Articles
Authors

Khalil Abbo

Zahra Zahra Abdlkareem

Publication Date February 24, 2020
Published in Issue Year 2019 Volume: 2 Issue: 2

Cite

APA Abbo, K., & Zahra Abdlkareem, Z. (2020). A Preconditioned Unconstrained Optimization Method for Training Multilayer Feed-forward Neural Network. Journal of Multidisciplinary Modeling and Optimization, 2(2), 71-79.
AMA Abbo K, Zahra Abdlkareem Z. A Preconditioned Unconstrained Optimization Method for Training Multilayer Feed-forward Neural Network. jmmo. February 2020;2(2):71-79.
Chicago Abbo, Khalil, and Zahra Zahra Abdlkareem. “A Preconditioned Unconstrained Optimization Method for Training Multilayer Feed-Forward Neural Network”. Journal of Multidisciplinary Modeling and Optimization 2, no. 2 (February 2020): 71-79.
EndNote Abbo K, Zahra Abdlkareem Z (01 February 2020) A Preconditioned Unconstrained Optimization Method for Training Multilayer Feed-forward Neural Network. Journal of Multidisciplinary Modeling and Optimization 2 2 71–79.
IEEE K. Abbo and Z. Zahra Abdlkareem, “A Preconditioned Unconstrained Optimization Method for Training Multilayer Feed-forward Neural Network”, jmmo, vol. 2, no. 2, pp. 71–79, 2020.
ISNAD Abbo, Khalil - Zahra Abdlkareem, Zahra. “A Preconditioned Unconstrained Optimization Method for Training Multilayer Feed-Forward Neural Network”. Journal of Multidisciplinary Modeling and Optimization 2/2 (February 2020), 71-79.
JAMA Abbo K, Zahra Abdlkareem Z. A Preconditioned Unconstrained Optimization Method for Training Multilayer Feed-forward Neural Network. jmmo. 2020;2:71–79.
MLA Abbo, Khalil and Zahra Zahra Abdlkareem. “A Preconditioned Unconstrained Optimization Method for Training Multilayer Feed-Forward Neural Network”. Journal of Multidisciplinary Modeling and Optimization, vol. 2, no. 2, 2020, pp. 71-79.
Vancouver Abbo K, Zahra Abdlkareem Z. A Preconditioned Unconstrained Optimization Method for Training Multilayer Feed-forward Neural Network. jmmo. 2020;2(2):71-9.