Research Article

A Reinforcement Learning Algorithm Using Multi-Layer Artificial Neural Networks for Semi-Markov Decision Problems

Year 2013, Volume: 17 Issue: 3, 307 - 307, 01.06.2013

Abstract

Real-life problems are generally large-scale and difficult to model; consequently, most of them cannot be solved by classical optimization methods. This paper presents a reinforcement learning algorithm that uses a multi-layer artificial neural network to find an approximate solution for large-scale semi-Markov decision problems. The performance of the developed algorithm is measured and compared to that of the classical reinforcement learning algorithm on a small-scale numerical example. According to the results of the numerical examples, the number of hidden layers is the key success factor, and the average cost of the solution generated by the developed algorithm is approximately equal to that generated by the classical reinforcement learning algorithm.
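The abstract does not give the algorithm's details. As a purely illustrative sketch of the general technique it describes (average-cost semi-Markov Q-learning with a small multi-layer neural network as the value approximator), the following toy example may help; the environment, network size, learning rates, and all names are hypothetical and are not taken from the paper.

```python
# Illustrative sketch only: average-cost SMDP Q-learning with a tiny
# one-hidden-layer neural network as the Q-value approximator.
# All parameters below are hypothetical, not from the paper.
import math
import random

random.seed(0)

# --- Toy SMDP: 2 states, 2 actions, random sojourn times -------------
def step(state, action):
    """Return (next_state, cost, sojourn_time) for a toy 2-state SMDP."""
    tau = random.uniform(0.5, 1.5)              # random sojourn time
    cost = (1.0 if action == 0 else 2.0) * tau  # cost accrues over time
    next_state = (state + action) % 2
    return next_state, cost, tau

# --- Tiny MLP: input = one-hot(state) ++ one-hot(action) -------------
H = 8                                            # hidden units
W1 = [[random.gauss(0, 0.3) for _ in range(4)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.gauss(0, 0.3) for _ in range(H)]
b2 = 0.0

def features(s, a):
    x = [0.0] * 4
    x[s] = 1.0
    x[2 + a] = 1.0
    return x

def forward(s, a):
    """Q(s, a) from the MLP, plus cached activations for backprop."""
    x = features(s, a)
    h = [math.tanh(sum(W1[i][j] * x[j] for j in range(4)) + b1[i])
         for i in range(H)]
    return sum(W2[i] * h[i] for i in range(H)) + b2, (x, h)

def sgd_update(s, a, target, lr=0.05):
    """One SGD step on 0.5 * (Q(s, a) - target)^2."""
    global b2
    q, (x, h) = forward(s, a)
    err = q - target                            # dLoss/dQ
    for i in range(H):
        grad_h = err * W2[i] * (1 - h[i] ** 2)  # backprop through tanh
        W2[i] -= lr * err * h[i]
        b1[i] -= lr * grad_h
        for j in range(4):
            W1[i][j] -= lr * grad_h * x[j]
    b2 -= lr * err

# --- Average-cost SMDP Q-learning loop -------------------------------
rho, total_cost, total_time = 0.0, 0.0, 0.0     # rho = average cost rate
state = 0
for it in range(5000):
    if random.random() < 0.2:                   # epsilon-greedy exploration
        action = random.randrange(2)
    else:
        action = min((0, 1), key=lambda a: forward(state, a)[0])
    nxt, cost, tau = step(state, action)
    best_next = min(forward(nxt, a)[0] for a in (0, 1))
    # SMDP relative-value target: cost minus average-rate * sojourn time
    target = (cost - rho * tau) + best_next
    sgd_update(state, action, target)
    total_cost += cost
    total_time += tau
    rho = total_cost / total_time               # running average cost rate
    state = nxt

print(round(rho, 2))  # learned average cost rate of the toy SMDP
```

The key SMDP-specific ingredient is the `cost - rho * tau` term, which charges each transition for its random sojourn time against the running average cost rate; the neural network simply replaces the lookup table that a classical (tabular) reinforcement learning algorithm would use for Q-values.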

References

  • Ocaktan, M.A.B. (2012) İkame ürün dağıtım ağlarında stok optimizasyonu ve optimal dağıtım politikaları, PhD thesis, Sakarya University, Department of Industrial Engineering.
  • Gosavi, A. (2004) ‘A reinforcement learning algorithm based on policy iteration for average reward: empirical results with yield management and convergence analysis’, Machine Learning, vol. 55, pp. 5-29.
  • Tadepalli, P. and Ok, D. (1998) ‘Model-based average reward reinforcement learning algorithms’, Artificial Intelligence, vol. 100, pp. 177-2
  • Shioyama, T. (1991) ‘Optimal control of a queuing network system with two types of customers’, European Journal of Operational Research, vol. 52, pp. 361-372.
  • Bertsekas, D.P. and Tsitsiklis, J.N. (1996) Neuro-dynamic programming, Athena Scientific.
  • Sutton, R.S. and Barto, A.G. (1998) Reinforcement learning: an introduction, Cambridge: The MIT Press.
  • Gosavi, A. (2003) Simulation-based optimization, Kluwer Academic Publishers.
  • Buşoniu, L., Babuska, R., Schutter, B.D. and Ernst, D. (2010) Reinforcement learning and dynamic programming using function approximators, CRC Press.
  • Puterman, M.L. (1994) Markov decision processes: discrete stochastic dynamic programming, John Wiley & Sons.
  • Das, T.K., Gosavi, A., Mahadevan, S. and Marchalleck, N. (1999) ‘Solving semi-Markov decision problems using average reward reinforcement learning’, Management Science, vol. 45, no. 4, pp. 560-574.
  • Gosavi, A. (2004) ‘Reinforcement learning for long-run average cost’, European Journal of Operational Research, vol. 155, no. 3, pp. 6546
  • Bellman, R. (1954) ‘The theory of dynamic programming’, Bulletin of the American Mathematical Society, vol. 60, pp. 503-516.

There are 12 citations in total.

Yarı Markov Karar Süreci Problemlerinin Çözümünde Çok Katmanlı Yapay Sinir Ağlarıyla Fonksiyon Yaklaşımlı Ödüllü Öğrenme Algoritması (Turkish title)

Details

Primary Language Turkish
Subjects Engineering
Journal Section Research Articles
Authors

Mustafa Ahmet Beyazıt Ocaktan

Ufuk Kula

Publication Date June 1, 2013
Submission Date January 24, 2013
Acceptance Date April 3, 2013
Published in Issue Year 2013 Volume: 17 Issue: 3

Cite

APA Ocaktan, M. A. B., & Kula, U. (2013). Yarı Markov Karar Süreci Problemlerinin Çözümünde Çok Katmanlı Yapay Sinir Ağlarıyla Fonksiyon Yaklaşımlı Ödüllü Öğrenme Algoritması. Sakarya University Journal of Science, 17(3), 307-307. https://doi.org/10.16984/saufbed.75737
AMA Ocaktan MAB, Kula U. Yarı Markov Karar Süreci Problemlerinin Çözümünde Çok Katmanlı Yapay Sinir Ağlarıyla Fonksiyon Yaklaşımlı Ödüllü Öğrenme Algoritması. SAUJS. December 2013;17(3):307-307. doi:10.16984/saufbed.75737
Chicago Ocaktan, Mustafa Ahmet Beyazıt, and Ufuk Kula. “Yarı Markov Karar Süreci Problemlerinin Çözümünde Çok Katmanlı Yapay Sinir Ağlarıyla Fonksiyon Yaklaşımlı Ödüllü Öğrenme Algoritması”. Sakarya University Journal of Science 17, no. 3 (December 2013): 307-7. https://doi.org/10.16984/saufbed.75737.
EndNote Ocaktan MAB, Kula U (December 1, 2013) Yarı Markov Karar Süreci Problemlerinin Çözümünde Çok Katmanlı Yapay Sinir Ağlarıyla Fonksiyon Yaklaşımlı Ödüllü Öğrenme Algoritması. Sakarya University Journal of Science 17 3 307–307.
IEEE M. A. B. Ocaktan and U. Kula, “Yarı Markov Karar Süreci Problemlerinin Çözümünde Çok Katmanlı Yapay Sinir Ağlarıyla Fonksiyon Yaklaşımlı Ödüllü Öğrenme Algoritması”, SAUJS, vol. 17, no. 3, pp. 307–307, 2013, doi: 10.16984/saufbed.75737.
ISNAD Ocaktan, Mustafa Ahmet Beyazıt - Kula, Ufuk. “Yarı Markov Karar Süreci Problemlerinin Çözümünde Çok Katmanlı Yapay Sinir Ağlarıyla Fonksiyon Yaklaşımlı Ödüllü Öğrenme Algoritması”. Sakarya University Journal of Science 17/3 (December 2013), 307-307. https://doi.org/10.16984/saufbed.75737.
JAMA Ocaktan MAB, Kula U. Yarı Markov Karar Süreci Problemlerinin Çözümünde Çok Katmanlı Yapay Sinir Ağlarıyla Fonksiyon Yaklaşımlı Ödüllü Öğrenme Algoritması. SAUJS. 2013;17:307–307.
MLA Ocaktan, Mustafa Ahmet Beyazıt and Ufuk Kula. “Yarı Markov Karar Süreci Problemlerinin Çözümünde Çok Katmanlı Yapay Sinir Ağlarıyla Fonksiyon Yaklaşımlı Ödüllü Öğrenme Algoritması”. Sakarya University Journal of Science, vol. 17, no. 3, 2013, pp. 307-, doi:10.16984/saufbed.75737.
Vancouver Ocaktan MAB, Kula U. Yarı Markov Karar Süreci Problemlerinin Çözümünde Çok Katmanlı Yapay Sinir Ağlarıyla Fonksiyon Yaklaşımlı Ödüllü Öğrenme Algoritması. SAUJS. 2013;17(3):307-.