Training feed forward neural network with modified Fletcher-Reeves method
Year 2018, Volume: 1, Issue: 1, pp. 14–22, 01.08.2018
Yoksal A. Laylani, Khalil K. Abbo, Hisham M. Khudhur
Abstract
In this research, a modified Fletcher–Reeves (FR) conjugate gradient algorithm for training large-scale feed-forward neural networks (FFNNs) is presented. Under mild conditions, we establish that the proposed method satisfies the sufficient descent condition and is globally convergent under the Wolfe line search conditions. Experimental results show that the proposed method is preferable and superior to the classical methods.
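The abstract does not give the paper's specific modification of the FR coefficient, so the following is only a minimal sketch of the classic Fletcher–Reeves conjugate gradient update it builds on, applied to a toy strongly convex quadratic standing in for a training objective. The function name `train_fr`, the Armijo backtracking line search (a simplification of the Wolfe conditions the paper assumes), and the descent-restart safeguard are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def train_fr(f, grad, w0, max_iter=200, tol=1e-8, c1=1e-4):
    """Classic Fletcher-Reeves conjugate gradient with Armijo backtracking.

    f, grad : objective (e.g. an FFNN training error) and its gradient
    w0      : initial weight vector
    """
    w = np.asarray(w0, dtype=float)
    g = grad(w)
    d = -g                                  # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking line search enforcing the Armijo (sufficient decrease)
        # condition; a full Wolfe search would also bound the new slope.
        alpha = 1.0
        while f(w + alpha * d) > f(w) + c1 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        w_new = w + alpha * d
        g_new = grad(w_new)
        beta = (g_new @ g_new) / (g @ g)    # FR coefficient ||g_{k+1}||^2 / ||g_k||^2
        d = -g_new + beta * d
        if g_new @ d >= 0:                  # safeguard: restart if not a descent direction
            d = -g_new
        w, g = w_new, g_new
    return w

# Toy quadratic objective: f(w) = 0.5 w'Aw - b'w, minimizer A^{-1} b
A = np.diag([1.0, 10.0])
b = np.array([1.0, 2.0])
f = lambda w: 0.5 * w @ A @ w - b @ w
grad = lambda w: A @ w - b
w_star = train_fr(f, grad, np.zeros(2))
```

In an actual FFNN setting, `f` would be the network's training error over the weight vector and `grad` its backpropagated gradient; the CG recursion itself is unchanged.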
References
- [1] Bishop C. (1995). Neural Networks for Pattern Recognition. Oxford University Press.
- [2] Haykin S. (1994). Neural Networks: A Comprehensive Foundation. Macmillan College Publishing Company, New York.
- [3] Hmich A., Badri A. and Sahel A. (2011). Automatic speaker identification by using the neural network. In IEEE 2011 International Conference on Multimedia Computing and Systems (ICMCS).
- [4] Rumelhart D. and McClelland J. (1986). Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Bradford Books, MIT Press, Cambridge.
- [5] Livieris I. and Pintelas P. (2009). Performance evaluation of descent CG methods for neural network training. In E.A. Lipitakis (editor), 9th Hellenic European Research on Computer Mathematics and its Applications Conference (HERCMA 09).
- [6] Rumelhart D., Hinton G. and Williams R. (1986). Learning internal representations by error propagation. In Rumelhart and McClelland (editors), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Cambridge.
- [7] Battiti R. (1992). First- and second-order methods for learning: between steepest descent and Newton's method. Neural Computation, 4(2).
- [8] Abbo K. and Mohammed H. (2014). Conjugate gradient algorithm based on Aitken's process for training neural networks. Raf. J. of Comp. and Math., Vol. 11, No. 1.
- [9] Livieris I. and Pintelas P. (2008). A survey on algorithms for training artificial neural networks. Technical Report No. TR08-01, Department of Mathematics, University of Patras.
- [10] Livieris I. and Pintelas P. (2011). An advanced conjugate gradient training algorithm based on a modified secant equation. Technical Report No. TR11-03.
- [11] Livieris I., Sotiropoulos D. and Pintelas P. (2009). On descent spectral CG algorithms for training recurrent neural networks. In 13th Panhellenic Conference on Informatics, IEEE Computer Society, pages 65–69.
- [12] Fletcher R. and Reeves C. (1964). Function minimization by conjugate gradients. Computer Journal, Vol. 7.
- [13] Gilbert J.C. and Nocedal J. (1992). Global convergence properties of conjugate gradient methods for optimization. SIAM Journal on Optimization.
- [14] Powell M.J.D. (1977). Restart procedures for the conjugate gradient method. Mathematical Programming, 12.
- [15] Touati-Ahmed D. and Storey C. (1990). Efficient hybrid conjugate gradient techniques. Journal of Optimization Theory and Applications, 64.