
DETERMINATION OF THE LEARNING PARAMETERS IN BACKPROPAGATION LEARNING ALGORITHM BY GENETIC ALGORITHM

Year 2009, Volume: 1, Issue: 2, 61-73, 01.06.2009

Abstract

In this study, the learning parameters of the backpropagation algorithm used to train a feedforward neural network are determined by means of genetic algorithms. These learning parameters are the learning rate and the momentum coefficient. They govern properties such as how quickly the network learns, how well oscillations that may arise during training are damped, and how easily local minima are escaped. Choosing these parameters appropriately is therefore essential for training the network efficiently. To determine the learning parameters with a genetic algorithm, a four-layer feedforward network is designed, and the three learning rates and three momentum coefficients of this network are encoded in a single genetic chromosome. The aim of the study is to select the fittest chromosome. The proposed method is tested on specially defined two-dimensional regression problems, and the test results show that it is more effective than the conventional learning algorithm with fixed parameters.
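As a rough illustration of the procedure described above (this is not the authors' implementation; the network sizes, parameter ranges, GA operators, and the toy regression function are all assumptions), the following Python sketch encodes three learning rates and three momentum coefficients as a six-gene chromosome, uses the training error of a four-layer feedforward network after a short backpropagation run as the fitness, and evolves the population toward the fittest chromosome:

```python
# A minimal sketch (not the authors' code): a six-gene chromosome
# [lr1, lr2, lr3, m1, m2, m3] carries one learning rate and one momentum
# coefficient per weight layer of a 2-8-8-1 feedforward network; the GA
# minimizes the training MSE reached after a short backpropagation run.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-dimensional regression problem (assumed; the paper defines its own).
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.sin(np.pi * X[:, :1]) * np.cos(np.pi * X[:, 1:2])

LAYERS = [2, 8, 8, 1]  # four layers -> three weight matrices


def init_weights():
    # +1 row per matrix for the bias input
    return [rng.normal(0.0, 0.5, size=(LAYERS[i] + 1, LAYERS[i + 1])) for i in range(3)]


def forward(W, x):
    """Return the activations of every layer (tanh hidden units, linear output)."""
    acts = [x]
    for i, w in enumerate(W):
        z = np.hstack([acts[-1], np.ones((len(acts[-1]), 1))]) @ w
        acts.append(z if i == len(W) - 1 else np.tanh(z))
    return acts


def fitness(chromosome, epochs=50):
    """Train with per-layer learning/momentum rates from the chromosome; return MSE."""
    lrs, moms = chromosome[:3], chromosome[3:]
    W = init_weights()
    dW_prev = [np.zeros_like(w) for w in W]
    for _ in range(epochs):
        acts = forward(W, X)
        delta = acts[-1] - y  # gradient of 0.5*MSE at the linear output
        for i in reversed(range(3)):
            a_in = np.hstack([acts[i], np.ones((len(X), 1))])
            grad = a_in.T @ delta / len(X)
            if i > 0:  # backpropagate through the tanh layers
                delta = (delta @ W[i][:-1].T) * (1.0 - acts[i] ** 2)
            dW = -lrs[i] * grad + moms[i] * dW_prev[i]  # gradient step + momentum
            W[i] += dW
            dW_prev[i] = dW
    return float(np.mean((forward(W, X)[-1] - y) ** 2))


# Real-coded GA: tournament selection, blend crossover, Gaussian mutation, elitism.
POP, GENS = 20, 15
LOW = np.array([0.01] * 3 + [0.0] * 3)   # assumed search ranges for rates/momenta
HIGH = np.array([1.0] * 3 + [0.9] * 3)

pop = rng.uniform(LOW, HIGH, size=(POP, 6))
for g in range(GENS):
    # Evaluate the population; guard against runs that diverged to NaN.
    fit = np.nan_to_num(np.array([fitness(c) for c in pop]), nan=1e9)
    print(f"generation {g:2d}: best training MSE = {fit.min():.5f}")
    new_pop = [pop[np.argmin(fit)].copy()]  # keep the best chromosome (elitism)
    while len(new_pop) < POP:
        i, j = rng.integers(0, POP, 2), rng.integers(0, POP, 2)
        p1, p2 = pop[i[np.argmin(fit[i])]], pop[j[np.argmin(fit[j])]]  # tournaments
        alpha = rng.uniform(0.0, 1.0, 6)
        child = alpha * p1 + (1.0 - alpha) * p2 + rng.normal(0.0, 0.02, 6)
        new_pop.append(np.clip(child, LOW, HIGH))
    pop = np.array(new_pop)

print("fittest chromosome [lr1, lr2, lr3, m1, m2, m3]:", np.round(pop[0], 3))
```

In this sketch the fitness is the final training MSE, so lower is better; a real-coded chromosome is used so that the rates can vary continuously, which is one common choice for this kind of parameter search.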

References

  • Angeline, P. J., Saunders, G. M., Pollack, J. B. (1994). An evolutionary algorithm that constructs recurrent neural networks. IEEE Transactions on Neural Networks, 5(1), 54-65.
  • Arifovic, J., Gencay, R. (2001). Using genetic algorithms to select architecture of a feedforward artificial neural network. Physica A, 289(3-4), 574-594.
  • Blanco, A., Delgado, M., Pegalajar, M. C. (2001). A real-coded genetic algorithm for training recurrent neural networks. Neural Networks, 14(1), 93-105.
  • Hwang, J. N., Lay, S. R., Maechler, M., Martin, R. D., Schimert, J. (1994). Regression modeling in back-propagation and projection pursuit learning. IEEE Transactions on Neural Networks, 5(3), 342-353.
  • Kamiyama, N., Iijima, N., Taguchi, A., Mitsui, H., Yoshida, Y., Sone, M. (1992). Tuning of learning rate and momentum on back-propagation. Singapore ICCS/ISITA '92, Communications on the Move, 16-20 November 1992, Westin Stamford, Singapore, 528-532.
  • Kandil, N., Khorasani, K., Patel, R. V., Sood, V. K. (1993). Optimum learning rate for backpropagation neural networks. Canadian Conference on Electrical and Computer Engineering, Vol. 1, 465-468.
  • Kwok, T. Y., Yeung, D. Y. (1997). Objective functions for training new hidden units in constructive neural networks. IEEE Transactions on Neural Networks, 8(5), 1131-1148.
  • Leung, F. H., Lam, F. H., Ling, K. S. H., Tam, P. K. S. (2003). Tuning of the structure and parameters of a neural network using an improved genetic algorithm. IEEE Transactions on Neural Networks, 14(1), 79-88.
  • Marwala, T. (2007). Bayesian training of neural networks using genetic programming. Pattern Recognition Letters, 28(12), 1452-1458.
  • Pedrajas, N. G., Boyer, D. O., Martinez, C. H. (2006). An alternative approach for neural network evolution with a genetic algorithm: Crossover by combinatorial optimization. Neural Networks, 19(4), 514-528.
  • Sexton, R. S., Gupta, J. N. D. (2000). Comparative evaluation of genetic algorithm and backpropagation for training neural networks. Information Sciences, 129(1-4), 45-59.
  • Sheel, S., Varshney, T., Varshney, R. (2007). Accelerated learning in MLP using adaptive learning rate with momentum coefficient. Second International Conference on Industrial and Information Systems (ICIIS 2007), 8-11 August 2007, Sri Lanka, 307-310.
  • Ustun, O. (2009a). Genetik algoritma kullanılarak ileri beslemeli bir sinir ağında etkinlik fonksiyonlarının belirlenmesi. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi (in press).
  • Ustun, O. (2009b). Measurement and real-time modeling of inductance and flux linkage in switched reluctance motors. IEEE Transactions on Magnetics (accepted paper).
  • Ustun, O. (2009c). A nonlinear full model of switched reluctance motor with artificial neural network. Energy Conversion and Management, 50, -2421.
  • Yao, X. (1999). Evolving artificial neural networks. Proceedings of the IEEE, 87(9), 1423-1447.
  • Yu, C. C., Liu, B. D. (2002). A back propagation algorithm with adaptive learning rate and momentum coefficient. Proceedings of the 2002 International Joint Conference on Neural Networks, Vol. 2, 1218-1223.
  • Yu, X. H., Chen, G. A., Cheng, S. X. (1993). Acceleration of backpropagation learning using optimised learning rate and momentum. Electronics Letters, 29(14), 1288-1290.
  • Yu, X. H., Chen, G. A. (1997). Efficient backpropagation learning using optimal learning rate and momentum. Neural Networks, 10(3), 517-527.
  • Zaghwl, A., Dong, W. M. (1994). An automated approach for selecting the learning rate and momentum in back-propagation networks. IEEE International Conference on Neural Networks, 27 June-2 July 1994, Orlando, FL, USA, Vol. 1, 464-469.


Details

Other ID: JA94KB56ZT
Section: Research Article
Authors

Oğuz Üstün

İlker Yıldız

Publication Date: 1 June 2009
Published Issue: Year 2009, Volume: 1, Issue: 2

How to Cite

IEEE O. Üstün ve İ. Yıldız, “GERİ-YAYILMALI ÖĞRENME ALGORİTMASINDAKİ ÖĞRENME PARAMETRELERİNİN GENETİK ALGORİTMA İLE BELİRLENMESİ”, UTBD, c. 1, sy. 2, ss. 61–73, 2009.

The Turkish abbreviation of the journal name is "UTBD" and its English abbreviation is "IJTS".

All scientific responsibility for the articles published in our journal belongs to the author(s). The editor, assistant editor, and publisher accept no responsibility for the articles published in the journal.