Research Article

A Preconditioned Unconstrained Optimization Method for Training Multilayer Feed-forward Neural Network

Year 2019, Volume: 2, Issue: 2, 71-79, 24.02.2020

Abstract

Non-linear unconstrained optimization methods constitute excellent neural network training methods, characterized by their simplicity and efficiency. In this paper, we propose a new preconditioned conjugate gradient training algorithm for neural networks that guarantees the descent property under the standard Wolfe conditions. Encouraging numerical experiments verify that the proposed algorithm provides fast and stable convergence.
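
The paper's update formulas are not reproduced on this page, so the following is only a minimal sketch of what a generic preconditioned nonlinear conjugate gradient loop with a Wolfe line search looks like, written in Python with SciPy. The Jacobi-style preconditioner `M_inv` and the Polak-Ribière coefficient are illustrative assumptions, not the authors' method.

```python
# Sketch of a generic preconditioned nonlinear conjugate gradient (PCG) loop.
# NOT the algorithm of this paper: the preconditioner M_inv and the
# Polak-Ribiere beta below are placeholder assumptions for illustration.
import numpy as np
from scipy.optimize import line_search  # enforces the standard Wolfe conditions

def pcg_train(f, grad, w0, M_inv=None, max_iter=200, tol=1e-6):
    """Minimize f from w0; M_inv applies the inverse preconditioner to a vector."""
    if M_inv is None:
        M_inv = lambda v: v                 # identity preconditioner
    w = np.asarray(w0, dtype=float)
    g = grad(w)
    z = M_inv(g)                            # preconditioned gradient
    d = -z                                  # initial search direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Wolfe line search gives sufficient decrease and the curvature condition
        alpha = line_search(f, grad, w, d, gfk=g)[0]
        if alpha is None:                   # line search failed: restart along -z
            d, alpha = -z, 1e-3
        w_new = w + alpha * d
        g_new = grad(w_new)
        z_new = M_inv(g_new)
        # preconditioned Polak-Ribiere coefficient, clipped at 0 (restart)
        beta = max(0.0, g_new @ (z_new - z) / (g @ z))
        d = -z_new + beta * d
        w, g, z = w_new, g_new, z_new
    return w

# Toy usage: a convex quadratic stands in for a training loss, with a
# Jacobi (diagonal) preconditioner.
A = np.diag([1.0, 10.0, 100.0])
f = lambda w: 0.5 * w @ A @ w
grad = lambda w: A @ w
w_star = pcg_train(f, grad, np.ones(3), M_inv=lambda v: v / np.diag(A))
```

In an actual training setting, `f` and `grad` would be the network's empirical loss and its back-propagated gradient with respect to the flattened weight vector.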

References

  • L. E. Achenie, Computational experience with a quasi-Newton method based training of feed-forward neural networks, Proc. World Congress on Neural Networks, San Diego, 1994, III-607–III-612.
  • A. Antoniou and W.-S. Lu, Practical Optimization: Algorithms and Engineering Applications, Springer US, 2007, 1-26.
  • R. Battiti, First- and second-order methods for learning: between steepest descent and Newton's method, Neural Comput., 4(2) 1992, 141-166.
  • R. Battiti and F. Masulli, BFGS optimization for faster and automated supervised learning, International Neural Network Conference, Springer, Dordrecht, 1990.
  • C. M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, 1995.
  • E. Birgin and J. Martinez, A spectral conjugate gradient method for unconstrained optimization, Appl. Math. Optim., 43(2) 2001, 117-128.
  • C. Charalambous, Conjugate gradient algorithm for efficient training of artificial neural networks, IEE Proceedings G (Circuits, Devices and Systems), 139(3) 1992, 301-310.
  • R. Fletcher and C. M. Reeves, Function minimization by conjugate gradients, Comput. J., 7(2) 1964, 149-154.
  • L. Gong, C. Liu, Y. Li and Y. Fuqing, Training feed-forward neural networks using the gradient descent method with the optimal stepsize, Journal of Computational Information Systems, 8(4) 2012, 1359-1371.
  • S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall PTR, 1994.
  • J. Hertz, A. Krogh and R. G. Palmer, Introduction to the Theory of Neural Computation, Addison-Wesley Longman, 1991.
  • M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Natl. Bur. Stand., 49(6) 1952, 409-436.
  • R. A. Jacobs, Increased rates of convergence through learning rate adaptation, Neural Networks, 1(4) 1988, 295-307.
  • K. Abbo and M. Hind, Improving the learning rate of the back-propagation algorithm by the Aitken process, Iraqi J. of Statistical Sci., 2012.
  • A. E. Kostopoulos, D. G. Sotiropoulos and T. N. Grapsa, A new efficient variable learning rate for Perry's spectral conjugate gradient training method, 2004.
  • I. E. Livieris and P. Pintelas, An advanced conjugate gradient training algorithm based on a modified secant equation, ISRN Artificial Intelligence, 2011.

Details

Primary Language English
Subjects Mathematical Sciences
Journal Section Articles
Authors

Khalil Abbo

Zahra Zahra Abdlkareem

Publication Date February 24, 2020
Published in Issue Year 2019 Volume: 2 Issue: 2

Cite

APA Abbo, K., & Zahra Abdlkareem, Z. (2020). A Preconditioned Unconstrained Optimization Method for Training Multilayer Feed-forward Neural Network. Journal of Multidisciplinary Modeling and Optimization, 2(2), 71-79.
AMA Abbo K, Zahra Abdlkareem Z. A Preconditioned Unconstrained Optimization Method for Training Multilayer Feed-forward Neural Network. jmmo. February 2020;2(2):71-79.
Chicago Abbo, Khalil, and Zahra Zahra Abdlkareem. “A Preconditioned Unconstrained Optimization Method for Training Multilayer Feed-Forward Neural Network”. Journal of Multidisciplinary Modeling and Optimization 2, no. 2 (February 2020): 71-79.
EndNote Abbo K, Zahra Abdlkareem Z (February 1, 2020) A Preconditioned Unconstrained Optimization Method for Training Multilayer Feed-forward Neural Network. Journal of Multidisciplinary Modeling and Optimization 2 2 71–79.
IEEE K. Abbo and Z. Zahra Abdlkareem, “A Preconditioned Unconstrained Optimization Method for Training Multilayer Feed-forward Neural Network”, jmmo, vol. 2, no. 2, pp. 71–79, 2020.
ISNAD Abbo, Khalil - Zahra Abdlkareem, Zahra. “A Preconditioned Unconstrained Optimization Method for Training Multilayer Feed-Forward Neural Network”. Journal of Multidisciplinary Modeling and Optimization 2/2 (February 2020), 71-79.
JAMA Abbo K, Zahra Abdlkareem Z. A Preconditioned Unconstrained Optimization Method for Training Multilayer Feed-forward Neural Network. jmmo. 2020;2:71–79.
MLA Abbo, Khalil and Zahra Zahra Abdlkareem. “A Preconditioned Unconstrained Optimization Method for Training Multilayer Feed-Forward Neural Network”. Journal of Multidisciplinary Modeling and Optimization, vol. 2, no. 2, 2020, pp. 71-79.
Vancouver Abbo K, Zahra Abdlkareem Z. A Preconditioned Unconstrained Optimization Method for Training Multilayer Feed-forward Neural Network. jmmo. 2020;2(2):71-9.