Training Feedforward Neural Networks to Predict the Size of the Population by Using a New Hybrid Method Hestenes-Stiefel (HS) and Dai-Yuan (DY)
Year 2021, Volume 2, Issue 2, pp. 69–78, 15.12.2021
Hisham Mohammed, Khalil K. Abbo, Aydin Khudhur
Abstract
We propose a new hybrid conjugate gradient approach, constructed by merging the Hestenes-Stiefel and Dai-Yuan algorithms through a spectral conjugate direction scheme, and we prove its global convergence under some assumptions and show that it satisfies the descent property. Numerical results demonstrate the efficacy of the resulting feedforward neural network training approach for estimating population size with the Thomas Malthus population model: the trained network's predictions were very close to those of the Malthus model, and the method can be applied to other prediction problems through artificial neural networks (ANNs).
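To make the moving parts of the abstract concrete, the sketch below is a minimal illustration, not the authors' implementation: it trains a tiny feedforward network on synthetic data from the Malthus model P(t) = P0·e^(rt) using a conjugate gradient loop. The hybrid rule beta = max(0, min(beta_HS, beta_DY)) is a standard HS/DY hybridization used only as a stand-in, since the paper's exact merging formula and spectral scaling are not reproduced in the abstract; the 1-5-1 network, input scaling, and Armijo backtracking line search are likewise illustrative assumptions.

```python
# A minimal sketch, NOT the authors' implementation. Assumptions:
# the hybrid rule beta = max(0, min(beta_HS, beta_DY)) stands in for
# the paper's exact HS/DY merging formula, and the 1-5-1 network,
# input scaling, and Armijo backtracking are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic population data from the Malthus model P(t) = P0 * e^(r t)
P0, r = 1.0, 0.03
t = np.linspace(0.0, 50.0, 51)
P = P0 * np.exp(r * t)

def unpack(w):
    """Split the flat parameter vector into a 1-5-1 network's weights."""
    W1 = w[:5].reshape(5, 1); b1 = w[5:10]
    W2 = w[10:15].reshape(1, 5); b2 = w[15]
    return W1, b1, W2, b2

def loss_grad(w):
    """Half mean squared error and its gradient via manual backprop."""
    W1, b1, W2, b2 = unpack(w)
    x = t.reshape(1, -1) / 50.0            # scale inputs to [0, 1]
    a1 = np.tanh(W1 @ x + b1[:, None])     # hidden layer activations
    e = (W2 @ a1 + b2).ravel() - P         # residuals
    de = e / len(e)                        # d(loss)/d(output)
    dz1 = (W2.T @ de[None, :]) * (1.0 - a1 ** 2)   # backprop through tanh
    g = np.concatenate([(dz1 @ x.T).ravel(), dz1.sum(axis=1),
                        (de[None, :] @ a1.T).ravel(), [de.sum()]])
    return 0.5 * np.mean(e ** 2), g

# Hybrid HS/DY conjugate gradient with Armijo backtracking line search
w = 0.1 * rng.standard_normal(16)
f, g = loss_grad(w)
d = -g
for k in range(500):
    alpha, gd = 1.0, g @ d
    while True:                            # backtrack until Armijo holds
        f_new, g_new = loss_grad(w + alpha * d)
        if f_new <= f + 1e-4 * alpha * gd or alpha < 1e-12:
            break
        alpha *= 0.5
    w += alpha * d
    y = g_new - g                          # gradient difference y_k
    dy = d @ y                             # denominator shared by HS and DY
    if abs(dy) > 1e-12:
        beta_hs = (g_new @ y) / dy         # Hestenes-Stiefel parameter
        beta_dy = (g_new @ g_new) / dy     # Dai-Yuan parameter
        beta = max(0.0, min(beta_hs, beta_dy))   # hybrid rule (assumption)
    else:
        beta = 0.0
    d = -g_new + beta * d
    if g_new @ d >= 0:                     # safeguard: restart if not descent
        d = -g_new
    f, g = f_new, g_new
    if np.linalg.norm(g) < 1e-6:
        break

print(f"final loss after {k + 1} iterations: {f:.3e}")
```

A spectral variant of the method would additionally scale the -g_new term of the direction update by a spectral parameter; since the paper's exact spectral formula is not reproduced here, the sketch keeps the plain conjugate direction.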
References
- D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning Internal Representations by Error Propagation,” in Readings in Cognitive Science: A Perspective from Psychology and Artificial Intelligence, 2013.
- K. K. Abbo and H. M. Khudhur, “A new hybrid Hestenes-Stiefel and Dai-Yuan conjugate gradient algorithms for unconstrained optimization,” Tikrit J. Pure Sci., vol. 21, no. 1, pp. 118–123, 2016.
- K. Abbo and H. Mohammed, “Conjugate Gradient Algorithm Based on Aitken’s Process for Training Neural Networks,” AL-Rafidain J. Comput. Sci. Math., vol. 11, no. 1, 2014, doi: 10.33899/csmj.2014.163730.
- K. Abbo and M. Hind, “Improving the learning rate of the backpropagation algorithm by Aitken’s process,” Iraqi J. Stat. Sci., accepted (to appear), 2012.
- D. Svozil, V. Kvasnička, and J. Pospíchal, “Introduction to multi-layer feed-forward neural networks,” in Chemometrics and Intelligent Laboratory Systems, 1997, vol. 39, no. 1, doi: 10.1016/S0169-7439(97)00061-0.
- N. Lange, C. M. Bishop, and B. D. Ripley, “Neural Networks for Pattern Recognition,” J. Am. Stat. Assoc., vol. 92, no. 440, 1997, doi: 10.2307/2965437.
- A. Hmich, A. Badri, and A. Sahel, “Automatic speaker identification by using the neural network,” 2011, doi: 10.1109/ICMCS.2011.5945601.
- S. Walczak and N. Cerpa, “Heuristic principles for the design of artificial neural networks,” Inf. Softw. Technol., vol. 41, no. 2, 1999, doi: 10.1016/S0950-5849(98)00116-5.
- D. Nguyen and B. Widrow, “Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights,” in 1990 IJCNN International Joint Conference on Neural Networks, 1990, pp. 21–26.
- H. M. Khudhur, “Numerical and analytical study of some descent algorithms to solve unconstrained optimization problems,” University of Mosul, 2015.
- I. E. Livieris and P. Pintelas, “An Advanced Conjugate Gradient Training Algorithm Based on a Modified Secant Equation,” ISRN Artif. Intell., vol. 2012, 2012, doi: 10.5402/2012/486361.
- I. E. Livieris, D. G. Sotiropoulos, and P. Pintelas, “On descent spectral CG algorithms for training recurrent neural networks,” 2009, doi: 10.1109/PCI.2009.33.
- K. K. Abbo and H. M. Khudhur, “A new hybrid conjugate gradient Fletcher-Reeves and Polak-Ribiere algorithm for unconstrained optimization,” Tikrit J. Pure Sci., vol. 21, no. 1, pp. 124–129, 2016.
- K. K. Abbo, Y. A. Laylani, and H. M. Khudhur, “Proposed new Scaled conjugate gradient algorithm for Unconstrained Optimization,” Int. J. Enhanc. Res. Sci. Technol. Eng., vol. 5, no. 7, 2016.
- H. M. Khudhur and K. K. Abbo, “A New Type of Conjugate Gradient Technique for Solving Fuzzy Nonlinear Algebraic Equations,” J. Phys. Conf. Ser., vol. 1879, no. 2, p. 22111, 2021, doi: 10.1088/1742-6596/1879/2/022111.
- R. Fletcher and C. M. Reeves, “Function minimization by conjugate gradients,” Comput. J., vol. 7, no. 2, pp. 149–154, 1964, doi: 10.1093/comjnl/7.2.149.
- E. Polak and G. Ribiere, “Note sur la convergence de méthodes de directions conjuguées,” ESAIM Math. Model. Numer. Anal. Mathématique Anal. Numérique, vol. 3, no. R1, pp. 35–43, 1969.
- M. R. Hestenes and E. Stiefel, “Methods of conjugate gradients for solving linear systems,” J. Res. Natl. Bur. Stand., vol. 49, no. 6, pp. 409–436, 1952.
- M. Al-Baali, “Descent property and global convergence of the Fletcher—Reeves method with inexact line search,” IMA J. Numer. Anal., vol. 5, no. 1, pp. 121–124, 1985.