Distribution of First Passage Times for Lumped States in Markov Chains
Murat Gül and Salih Çelebioğlu
DOI: 10.17265/2159-5291/2015.08.002
1. Department of Statistics, Giresun University, Güre, Giresun, Turkey. 2. Department of Statistics, Gazi University, Teknikokullar, Ankara, Turkey.
The first passage time in a Markov chain is defined as the first time the chain reaches a specified state or set of lumped states; this state or set of states may represent an interesting or rare event. In this study, we consider obtaining the distribution of the first passage time to lumped states, constructed by gathering states together through the lumping method, for an irreducible Markov chain with a finite state space. The lumping method preserves the Markov property of the chain, and a further practical benefit is that gathering states together reduces the size of the state space. Since the obtained first passage distributions are continuous, they may be used in many fields such as reliability and risk analysis.
Markov chain, distribution of first passage time, lumped states.
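The abstract describes deriving the distribution of the first passage time to a set of lumped states in a finite Markov chain. As a minimal sketch of the standard construction (not necessarily the authors' exact derivation), the Python snippet below treats the lumped target set as absorbing and computes P(T = n) from the substochastic matrix over the remaining states; the names P, A, start, and first_passage_pmf are illustrative assumptions, not notation from the paper.

```python
import numpy as np

def first_passage_pmf(P, A, start, n_max):
    """P(T = n | X_0 = start) for n = 1..n_max, where T is the first time
    the chain enters the lumped target set A (start assumed outside A)."""
    P = np.asarray(P, dtype=float)
    states = np.arange(P.shape[0])
    B = np.setdiff1d(states, A)          # non-target states
    Q = P[np.ix_(B, B)]                  # transitions within the complement of A
    r = P[np.ix_(B, A)].sum(axis=1)      # one-step probability of entering the lumped set
    mass = np.zeros(len(B))              # probability mass on each non-target state
    mass[np.where(B == start)[0][0]] = 1.0
    pmf = []
    for _ in range(n_max):
        pmf.append(mass @ r)             # absorbed into A exactly at this step
        mass = mass @ Q                  # survive one more step outside A
    return np.array(pmf)

# Example: 4-state chain with lumped target A = {2, 3}
P = np.array([[0.5, 0.3, 0.1, 0.1],
              [0.2, 0.5, 0.2, 0.1],
              [0.0, 0.0, 0.6, 0.4],
              [0.1, 0.1, 0.4, 0.4]])
print(first_passage_pmf(P, A=[2, 3], start=0, n_max=5))
```

The returned vector sums to the total probability of ever reaching the lumped set within n_max steps; for an irreducible chain it tends to 1 as n_max grows.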