Sürekli Zaman Markov Karar Süreçlerinin Özgüleştirilmesi

Year 2018, Volume: 19, Issue: 2, 65-74, 01.07.2018

Abstract

The aim of the customization technique is to convert an exponential semi-Markov decision process (ESMDP) into another, equivalent ESMDP with a different formulation. This makes it easier, in particular, to prove structural properties of optimal policies. In its current form in the literature, the customization technique applies to ESMDPs that minimize the expected total discounted cost. This paper aims to show how the customization technique proposed for ESMDPs can be applied to continuous-time Markov decision processes (CTMDPs) under the assumptions of a bounded cost function and bounded-above transition rates. To this end, the given CTMDP is first converted into an ESMDP, and this new ESMDP is then customized.

References

  • Beutler, F. J., & Ross, K. W. (1987). Uniformization for semi-Markov decision processes under stationary policies. Journal of Applied Probability, 24(3), 644–656.
  • Çekyay, B. (2018). Customizing exponential semi-Markov decision processes under the discounted cost criterion. European Journal of Operational Research, 266(1), 168–178. doi:10.1016/j.ejor.2017.09.016
  • Feinberg, E. A., & Shwartz, A. (Eds.). (2012). Handbook of Markov decision processes: Methods and applications (Vol. 40). Springer Science & Business Media.
  • Feinberg, E. A. (2004). Continuous time discounted jump Markov decision processes: A discrete-event approach. Mathematics of Operations Research, 29(3), 492–524.
  • Guo, X., & Hernández-Lerma, O. (2009). Continuous-time Markov decision processes: Theory and applications. Stochastic Modelling and Applied Probability. Springer-Verlag.
  • Hu, Q., & Yue, W. (2007). Markov decision processes with their applications (Vol. 14). Springer.
  • Kakumanu, P. (1977). Relation between continuous and discrete time Markovian decision problems. Naval Research Logistics Quarterly, 24(3), 431–439.
  • Lippman, S. A. (1975). Applying a new device in the optimization of exponential queuing systems. Operations Research, 23(4), 687–710.
  • Puterman, M. L. (2005). Markov decision processes: Discrete stochastic dynamic programming. Wiley Series in Probability and Statistics. New Jersey: John Wiley & Sons.
  • Serfozo, R. F. (1979). An equivalence between continuous and discrete time Markov decision processes. Operations Research, 27(3), 616–620.

Customizing Continuous-time Markov Decision Processes

Year 2018, Volume: 19, Issue: 2, 65-74, 01.07.2018

Abstract

The customization technique can convert a given exponential semi-Markov decision process (ESMDP) into another, equivalent ESMDP whose formulation makes it easier to prove a specific structural property of the optimal policy. The customization technique was first proposed for ESMDPs under the expected total discounted cost criterion. This paper aims to show how the customization technique for ESMDPs can be applied to continuous-time Markov decision processes (CTMDPs) under the assumptions of a bounded cost function and bounded-above transition rates. This is achieved by converting the initial CTMDP into an ESMDP, which is then customized.
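The conversion of a CTMDP with bounded-above transition rates into an ESMDP is, in spirit, the uniformization device of Lippman (1975) and Serfozo (1979), cited in the reference list. The following is only a minimal sketch of that standard device, not necessarily the paper's exact construction; the symbols \Lambda, \lambda(i,a), p(j\mid i,a), c(i,a) and \alpha are illustrative assumptions rather than the paper's own notation.

% Sketch of a uniformization-style conversion (assumed notation):
% \lambda(i,a): transition rate in state i under action a,
% p(j|i,a): embedded transition probabilities, c(i,a): cost rate,
% \alpha > 0: discount rate, \Lambda: a common uniformization rate.
\[
  \Lambda \;\ge\; \sup_{i,a}\, \lambda(i,a)
  \qquad \text{(bounded-above transition rates)}
\]
\[
  \tilde{p}(j \mid i,a) \;=\;
  \begin{cases}
    \dfrac{\lambda(i,a)\, p(j \mid i,a)}{\Lambda}, & j \neq i,\\[2ex]
    1 \;-\; \dfrac{\lambda(i,a)\,\bigl(1 - p(i \mid i,a)\bigr)}{\Lambda}, & j = i,
  \end{cases}
\]
so that every sojourn time becomes exponential with the common rate \Lambda (i.e., the process is an ESMDP), and the discounted-cost optimality equation takes the familiar fixed-point form
\[
  v(i) \;=\; \min_{a}\;
  \Bigl\{ \tfrac{c(i,a)}{\alpha + \Lambda}
        \;+\; \tfrac{\Lambda}{\alpha + \Lambda}\, \textstyle\sum_{j} \tilde{p}(j \mid i,a)\, v(j) \Bigr\}.
\]
The resulting ESMDP is equivalent to the original CTMDP in expected total discounted cost, which is what allows the customization step to be applied afterwards.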


Details

Primary Language: Turkish
Section: Research Article
Authors

Bora Çekyay

Publication Date: July 1, 2018
Published in Issue: Year 2018, Volume: 19, Issue: 2

How to Cite

APA: Çekyay, B. (2018). Sürekli Zaman Markov Karar Süreçlerinin Özgüleştirilmesi. Doğuş Üniversitesi Dergisi, 19(2), 65-74.