A Preconditioned Unconstrained Optimization Method for Training Multilayer Feed-forward Neural Network
Year 2019, Volume: 2, Issue: 2, 71-79, 24.02.2020
Khalil Abbo
Zahra Zahra Abdlkareem
Abstract
Non-linear unconstrained optimization methods provide excellent neural-network training algorithms, characterized by their simplicity and efficiency. In this paper, we propose a new preconditioned conjugate gradient neural-network training algorithm that guarantees the descent property under the standard Wolfe conditions. Encouraging numerical experiments verify that the proposed algorithm provides fast and stable convergence.
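To make the abstract's ingredients concrete, the snippet below is a minimal, generic sketch of a preconditioned conjugate gradient training loop with a standard Wolfe line search, not the authors' algorithm: the XOR data set, the identity diagonal preconditioner, the finite-difference gradient, and the Polak-Ribière+ beta formula are placeholder assumptions introduced only for illustration.

```python
# Generic sketch: preconditioned nonlinear conjugate gradient with a
# standard Wolfe line search, training a tiny feed-forward network on XOR.
# The preconditioner and beta rule here are illustrative, not the paper's.
import numpy as np
from scipy.optimize import line_search, approx_fprime

# Toy data: the XOR problem.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

N_HIDDEN = 3
N_PARAMS = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1  # W1, b1, W2, b2

def unpack(w):
    """Split the flat parameter vector into layer weights and biases."""
    i = 0
    W1 = w[i:i + 2 * N_HIDDEN].reshape(2, N_HIDDEN); i += 2 * N_HIDDEN
    b1 = w[i:i + N_HIDDEN]; i += N_HIDDEN
    W2 = w[i:i + N_HIDDEN]; i += N_HIDDEN
    b2 = w[i]
    return W1, b1, W2, b2

def loss(w):
    """Mean squared error of a 2-N_HIDDEN-1 tanh network."""
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    return 0.5 * np.mean((out - y) ** 2)

def grad(w):
    # Finite-difference gradient keeps the sketch short; a real trainer
    # would use analytic back-propagation instead.
    return approx_fprime(w, loss, 1e-7)

def train(w, max_iter=200, tol=1e-6):
    g = grad(w)
    M_inv = np.ones_like(w)      # placeholder diagonal preconditioner (identity)
    z = M_inv * g
    d = -z                       # initial preconditioned steepest-descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Standard Wolfe line search (SciPy's default c1, c2).
        alpha = line_search(loss, grad, w, d, gfk=g)[0]
        if alpha is None:        # line search failed: restart along -z
            d = -z
            alpha = line_search(loss, grad, w, d, gfk=g)[0] or 1e-3
        w = w + alpha * d
        g_new = grad(w)
        z_new = M_inv * g_new
        # Preconditioned Polak-Ribiere+ update (an illustrative choice).
        beta = max(0.0, g_new @ (z_new - z) / (g @ z))
        d = -z_new + beta * d
        if g_new @ d >= 0:       # enforce the descent property by restarting
            d = -z_new
        g, z = g_new, z_new
    return w

rng = np.random.default_rng(0)
w_opt = train(0.5 * rng.standard_normal(N_PARAMS))
print("final loss:", loss(w_opt))
```

In a practical implementation the finite-difference gradient would be replaced by back-propagation, and the identity preconditioner by whatever curvature estimate the training method prescribes.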
References
- L. E. Achenie, Computational experience with a quasi-Newton method based training of feed-forward neural networks, Proc. World Congress on Neural Networks, San Diego, 1994, III607-III612.
- A. Andreas and S. Wu, Practical optimization algorithms and engineering applications, Springer US, 2007, 1-26.
- R. Battiti, First- and second-order methods for learning: between steepest descent and Newton's method, Neural Comput., 4(2) 1992, 141-166.
- R. Battiti and F. Masulli, BFGS optimization for faster and automated supervised learning, in International Neural Network Conference, Springer, Dordrecht, 1990.
- C. M. Bishop, Neural networks for pattern recognition, Oxford University Press, 1995.
- E. Birgin and J. Martinez, A spectral conjugate gradient method for unconstrained optimization, Appl. Math. Opt., 43(2) 2001, 117-128.
- C. Charalambous, Conjugate gradient algorithm for efficient training of artificial neural networks, IEE Proceedings G (Circuits, Devices and Systems), 139(3) 1992, 301-310.
- R. Fletcher and C.M. Reeves, Function minimization by conjugate gradients, Comput. J., 7(2) 1964, 149-154.
- L. Gong, C. Liu, Y. Li and Y. Fuqing, Training feed-forward neural networks using the gradient descent method with the optimal stepsize, Journal of Computational Information Systems, 8(4) 2012, 1359-1371.
- S. Haykin, Neural networks: a comprehensive foundation, Prentice Hall PTR, 1994.
- J. Hertz, A. Krogh and R. G. Palmer, Introduction to the theory of neural computation, Addison-Wesley, Longman, 1991.
- M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, Journal of Research of the National Bureau of Standards, 49(6) 1952, 409-436.
- R.A. Jacobs, Increased rates of convergence through learning rate adaptation, Neural networks, 1(4) 1988, 295-307.
- K. Abbo and M. Hind, Improving the learning rate of the back-propagation algorithm using the Aitken process, Iraqi J. of Statistical Sci., 2012.
- A. E. Kostopoulos, D. G. Sotiropoulos and T. N. Grapsa, A new efficient variable learning rate for Perry’s spectral conjugate gradient training method, 2004.
- I. E. Livieris and P. Pintelas, An advanced conjugate gradient training algorithm based on a modified secant equation, ISRN Artificial Intelligence, 2011.