Random Walks, Directed Cycles and Markov Chains

Kate Gingell, Franklin Mendivil

The American Mathematical Monthly (2022)

Abstract
A Markov chain is a random process that iteratively travels around its state space, with each transition depending only on the current position and not on the past. When the state space is discrete, we can think of a Markov chain as a special type of random walk on a directed graph. Although a Markov chain normally never settles down but keeps moving around, it usually does have a well-defined limiting behavior in a statistical sense. A given finite directed graph can potentially support many different random walks or Markov chains, and each one could have one or more invariant (stationary) distributions. In this paper we explore the question of characterizing the set of all possible invariant distributions. The answer turns out to be quite simple and natural, and involves the cycles on the graph.
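As a small illustration of the objects in the abstract (this example is not from the paper): a Markov chain on a discrete state space is specified by a row-stochastic transition matrix, and an invariant distribution π is a row vector with π P = π. The sketch below builds a hypothetical three-state chain on a directed graph and recovers π as a left eigenvector of P for eigenvalue 1.

```python
import numpy as np

# Hypothetical chain on states {0, 1, 2}; each row of P gives the
# transition probabilities out of that state and sums to 1.
P = np.array([
    [0.0, 0.5, 0.5],  # from 0: step to 1 or 2, each with prob 1/2
    [1.0, 0.0, 0.0],  # from 1: always return to 0
    [0.5, 0.5, 0.0],  # from 2: step to 0 or 1, each with prob 1/2
])

# An invariant (stationary) distribution pi satisfies pi P = pi,
# i.e., pi is a left eigenvector of P with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()  # normalize to a probability distribution

print(pi)
print(np.allclose(pi @ P, pi))  # True: pi is unchanged by one step
```

Since this chain's directed graph is strongly connected, the invariant distribution is unique; on graphs that are not strongly connected, different walks can admit many invariant distributions, which is the situation the paper characterizes in terms of cycles.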
Keywords
60J10