Markov Chain
Let (lambda_i) be a probability distribution on a state space S. Let (P_ij), for i,j in S, be such that P_ij >= 0 for all i,j and sum_j P_ij = 1 for all i. Let the time index be n = 0,1,2,…. Then the time-homogeneous discrete-time Markov chain (MC) (X_n) with initial distribution (lambda_i) and transition probabilities P_ij is defined by P(X_0 = i) = lambda_i and P(X_(n+1) = j | X_n = i, X_(n-1) = x_(n-1), …, X_0 = x_0) = P(X_(n+1) = j | X_n = i) = P_ij.
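A minimal sketch of this definition in Python (the two-state chain, its initial distribution lam, and the matrix P below are all hypothetical, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state chain: states 0 and 1.
lam = np.array([0.5, 0.5])           # initial distribution (lambda_i)
P = np.array([[0.9, 0.1],            # transition matrix: P[i, j] = P_ij
              [0.4, 0.6]])           # each row sums to 1

def simulate(lam, P, n_steps):
    """Sample X_0, ..., X_n from the chain defined by (lam, P)."""
    x = rng.choice(len(lam), p=lam)          # X_0 ~ lambda
    path = [x]
    for _ in range(n_steps):
        x = rng.choice(len(lam), p=P[x])     # X_{n+1} | X_n = x has law P[x, :]
        path.append(x)
    return path

print(simulate(lam, P, 10))
```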
Markov Property
Let (X_n) = (X_0, X_1, X_2, …) be a stochastic process in discrete time n = 0,1,2,… on a discrete state space S. Then (X_n) has the Markov property (MP) if, for all times n and all states x_0, x_1, …, x_n, x_(n+1) in S, we have P(X_(n+1) = x_(n+1) | X_n = x_n, X_(n-1) = x_(n-1), …, X_0 = x_0) = P(X_(n+1) = x_(n+1) | X_n = x_n).
Stochastic Process
An indexed sequence of random variables that are (usually) dependent on each other.
Accessible
Consider a MC on a state space S with transition matrix P. We say that state j in S is accessible from state i in S, and write i -> j, if p_ij(n) > 0 for some n, where p_ij(n) is the n-step transition probability.
Communicates
If i -> j and j -> i, we write i <-> j and say that i and j communicate.
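A sketch of how the classes could be computed for a small finite chain: states i and j communicate exactly when each is reachable from the other in the directed graph with an edge i -> j whenever P_ij > 0. The 4-state matrix below is made up for illustration.

```python
import numpy as np

# Hypothetical 4-state chain; state 3 is absorbing.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.0, 1.0]])

def reachable(P, i):
    """Set of states j with i -> j (including i itself, via n = 0)."""
    seen, stack = {i}, [i]
    while stack:
        x = stack.pop()
        for j in np.nonzero(P[x] > 0)[0]:
            if j not in seen:
                seen.add(j)
                stack.append(j)
    return seen

n = len(P)
reach = [reachable(P, i) for i in range(n)]
classes = {frozenset(j for j in reach[i] if i in reach[j]) for i in range(n)}
print(classes)   # communicating classes: {0, 1}, {2}, {3}
```

For this chain the classes are {0,1}, {2} and {3}; only {3} is closed, so the chain is not irreducible.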
Irreducible
If the entire state space S is one communicating class.
Closed
A communicating class is closed if no state outside the class is accessible from any state within the class. (A class C subset of S is closed if, whenever there exist i in C and j in S with i -> j, then j in C also.)
Open
If a class is not closed.
Absorbing
If a state i is in a communicating class {i} by itself and that class is closed.
Period
Consider a MC with transition matrix P. We say that a state i in S has period d_i, where d_i = gcd{n in {1,2,…} : p_ii(n) > 0}.
Periodic
If d_i > 1.
Aperiodic
If d_i = 1.
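A sketch of checking the period numerically, assuming a small matrix: compute p_ii(n) from powers of P and take the gcd of the n for which it is positive (truncated at some n_max, so this is only a heuristic check). The deterministic two-state flip below is a made-up example with period 2.

```python
import numpy as np
from math import gcd
from functools import reduce

# Hypothetical chain that alternates between its two states deterministically.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def period(P, i, n_max=50):
    """gcd of {n <= n_max : p_ii(n) > 0}; n_max truncates the true set."""
    ns, Q = [], np.eye(len(P))
    for n in range(1, n_max + 1):
        Q = Q @ P                    # Q = P^n, so Q[i, i] = p_ii(n)
        if Q[i, i] > 0:
            ns.append(n)
    return reduce(gcd, ns) if ns else None

print(period(P, 0))   # 2, so state 0 is periodic
```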
Hitting Time
Let (X_n) be a MC on state space S. Let H_A be a RV representing the hitting time of the set A subset of S, given by
H_A = min{n in {0,1,2,…} : X_n in A}.
Hitting Probability
The hitting probability h_iA of the set A starting from state i is
h_iA = P(X_n in A for some n >= 0 | X_0 = i) = P(H_A < ∞ | X_0 = i).
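For a finite chain, the hitting probabilities solve the linear system h_iA = 1 for i in A and h_iA = sum_j P_ij h_jA otherwise (taking the minimal non-negative solution). A sketch on a made-up gambler's-ruin-style chain, where pinning the states that can never reach A makes the system nonsingular:

```python
import numpy as np

# Hypothetical chain on {0, 1, 2, 3}; states 0 and 3 are absorbing.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
A = {3}          # target set
Z = {0}          # states from which A is unreachable (h_i = 0 there)

n = len(P)
M, b = np.eye(n), np.zeros(n)
for i in range(n):
    if i in A:
        b[i] = 1.0               # boundary condition: h_i = 1 on A
    elif i in Z:
        b[i] = 0.0               # minimal solution: h_i = 0 off-limits
    else:
        M[i] -= P[i]             # h_i - sum_j P_ij h_j = 0 elsewhere
h = np.linalg.solve(M, b)
print(h)                         # [0, 1/3, 2/3, 1] for this chain
```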
Expected Hitting Time
The expected hitting time eta_iA of the set A starting from state i is
eta_iA = E(H_A | X_0 = i).
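The expected hitting times satisfy eta_iA = 0 for i in A and eta_iA = 1 + sum_j P_ij eta_jA otherwise (again as the minimal solution). A sketch on the same made-up chain, now hitting the whole absorbing set {0, 3} so that eta is finite:

```python
import numpy as np

# Same hypothetical chain; A = {0, 3} is reached with probability 1.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
A = {0, 3}

n = len(P)
M, b = np.eye(n), np.zeros(n)
for i in range(n):
    if i not in A:
        M[i] -= P[i]             # eta_i - sum_j P_ij eta_j = 1 off A
        b[i] = 1.0
eta = np.linalg.solve(M, b)
print(eta)                       # [0, 2, 2, 0]: expected steps to absorption
```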
Return Probability
m_i = P(X_n = i for some n >= 1 | X_0 = i) = P(M_i < ∞ | X_0 = i), where M_i = min{n in {1,2,…} : X_n = i} is the return time to state i.
Expected Return Time
mu_i = E(M_i | X_0 = i)
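Both quantities can be estimated by simulation. A Monte Carlo sketch on a made-up irreducible two-state chain, whose states are recurrent, so the estimate of m_0 should be close to 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical irreducible two-state chain.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

def return_time(P, i, n_max=10_000):
    """Sample M_i = min{n >= 1 : X_n = i} starting from X_0 = i."""
    x = i
    for n in range(1, n_max + 1):
        x = rng.choice(len(P), p=P[x])
        if x == i:
            return n
    return None                   # no return observed within n_max steps

samples = [return_time(P, 0) for _ in range(2_000)]
print(np.mean([m is not None for m in samples]))       # estimate of m_0
print(np.mean([m for m in samples if m is not None]))  # estimate of mu_0
```

For this chain the stationary distribution is (0.8, 0.2), so the exact expected return time is mu_0 = 1/pi_0 = 1.25, and the chain is positive recurrent.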
Recurrent
If m_i = 1.
Transient
If m_i < 1.
Positive Recurrent
Let (X_n) be a Markov chain on a state space S. Let i in S be a recurrent state, so m_i = 1, and let mu_i be the expected return time. If mu_i < ∞, we say that state i is positive recurrent.
Null Recurrent
If mu_i = ∞.
Stopping Time
Let (X_n) be a stochastic process in discrete time, and let T be a random time. Then T is a stopping time if, for all n, whether or not the event {T = n} occurs is completely determined by the random variables X_0, X_1, …, X_n.
Stationary Distribution
Let (X_n) be a MC on a state space S with transition matrix P. Let pi = (pi_i) be a distribution on S, in that pi_i >= 0 for all i in S and sum_i pi_i = 1. We call pi a stationary distribution if pi_j = sum_i pi_i P_ij for all j in S, or, in matrix form, if pi = pi P.
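A sketch of computing pi for a small chain by solving pi(P − I) = 0 together with the normalisation sum_i pi_i = 1 (same made-up two-state matrix as above):

```python
import numpy as np

# Hypothetical two-state chain.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Stack the normalisation row onto the transposed balance equations.
n = len(P)
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)        # [0.8, 0.2] for this chain
print(pi @ P)    # equals pi: the distribution is stationary
```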
Reversed Chain
Let (X_n) be a MC on state space S. For a fixed integer N, define the reversed chain by Y_n := X_(N−n) for n = 0, 1, …, N.
Reversible
Let (X_n) be a MC on state space S with transition probabilities p(x,y). If there is a probability distribution pi on S such that (X_n) and pi satisfy the detailed balance conditions
pi(x) p(x,y) = pi(y) p(y,x) for all x,y in S,
then we say that (X_n) is reversible.
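Detailed balance is easy to verify numerically: the matrix of probability flows pi(x)p(x,y) must be symmetric. A sketch with the same made-up two-state chain and its stationary distribution:

```python
import numpy as np

# Hypothetical two-state chain and its stationary distribution.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
pi = np.array([0.8, 0.2])

# Detailed balance: pi(x) p(x, y) == pi(y) p(y, x) for all x, y.
flows = pi[:, None] * P              # flows[x, y] = pi(x) p(x, y)
print(np.allclose(flows, flows.T))   # True: this chain is reversible
```

Any two-state chain with a stationary distribution passes this check, since detailed balance across the single pair of states is exactly the stationarity condition; a three-state chain cycling 0 → 1 → 2 → 0 with unequal forward and backward rates would fail it.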
Graph of Xn
Let (X_n) be a MC on state space S. The graph of (X_n) is obtained by taking the transition diagram of (X_n), forgetting the directions of the edges, and removing multiple edges and self-loops.
Equilibrium Distribution
Let (X_n) be a Markov chain on a state space S with transition matrix P. Suppose there exists a distribution p* = (p*_i) on S (so p*_i >= 0 and sum_i p*_i = 1) such that, whatever the initial distribution lambda = (lambda_i), we have P(X_n = j) → p*_j as n → ∞ for all j in S. Then we say that p* is an equilibrium distribution.
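A sketch of convergence to equilibrium, reusing the made-up two-state chain (which is irreducible and aperiodic): iterate the distribution lambda P^n, or inspect the rows of P^n directly.

```python
import numpy as np

# Hypothetical irreducible, aperiodic two-state chain.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

lam = np.array([1.0, 0.0])            # one arbitrary initial distribution
dist = lam.copy()
for _ in range(50):
    dist = dist @ P                   # distribution of X_n after each step
print(dist)                           # ~ [0.8, 0.2], independent of lam
print(np.linalg.matrix_power(P, 50))  # every row ~ [0.8, 0.2]
```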