Markov Chain
Let (lambda_i) be a probability distribution on a sample space S. Let P_ij, for i,j in S, be such that P_ij >= 0 for all i,j and sum_j P_ij = 1 for all i. Let the time index be n = 0,1,2,… Then the "time-homogeneous discrete-time Markov process" or MC (X_n) with initial distribution (lambda_i) and transition probabilities P_ij is defined by P(X_0 = i) = lambda_i and P(X_(n+1) = j | X_n = i, X_(n-1) = x_(n-1), …, X_0 = x_0) = P(X_(n+1) = j | X_n = i) = P_ij.
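A minimal simulation sketch in Python with NumPy can make the definition concrete; the two-state chain below (lam and P) is an illustrative assumption, not an example from these notes.

```python
# Simulating a time-homogeneous discrete-time Markov chain.
# lam and P below are a made-up two-state example.
import numpy as np

rng = np.random.default_rng(0)

lam = np.array([0.5, 0.5])        # initial distribution (lambda_i)
P = np.array([[0.9, 0.1],         # transition matrix; each row sums to 1
              [0.2, 0.8]])

def simulate(lam, P, n_steps):
    """Draw X_0 from lam, then X_(n+1) from row X_n of P."""
    x = rng.choice(len(lam), p=lam)
    path = [x]
    for _ in range(n_steps):
        x = rng.choice(P.shape[0], p=P[x])
        path.append(x)
    return path

print(simulate(lam, P, 10))
```

Note that the next state depends only on the current state (the Markov property) and the same P is used at every step (time homogeneity).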
Markov Property
Let (X_n) = (X_0, X_1, X_2, …) be a stochastic process in discrete time n = 0,1,2,… and discrete space S. Then (X_n) has the MP if, for all times n and all states x_0, x_1, …, x_n, x_(n+1) in S, we have P(X_(n+1) = x_(n+1) | X_n = x_n, X_(n-1) = x_(n-1), …, X_0 = x_0) = P(X_(n+1) = x_(n+1) | X_n = x_n).
Stochastic Process
An indexed sequence of random variables that are (usually) dependent on each other.
Accessible
Consider a MC on a state space S with transition matrix P. We say that state j in S is accessible from state i in S and write i -> j if, for some n, P_ij(n) > 0.
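Since i -> j just means j is reachable in the directed graph whose edges are the positive entries of P, accessibility can be checked by a graph search. A minimal sketch, with a made-up three-state chain:

```python
# Check i -> j by searching the directed graph of positive entries of P.
# The three-state matrix P is a made-up example.
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.0, 1.0, 0.0],
              [0.3, 0.0, 0.7]])

def accessible(P, i, j):
    """Graph search along edges with P[x, y] > 0; n = 0 steps allowed."""
    seen, frontier = {i}, [i]
    while frontier:
        x = frontier.pop()
        for y in np.flatnonzero(P[x] > 0):
            if y not in seen:
                seen.add(y)
                frontier.append(y)
    return j in seen

print(accessible(P, 0, 1), accessible(P, 1, 0))   # True False
```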
Communicates
If i -> j and j -> i, we write i <-> j and say that i and j communicate.
Irreducible
If the entire state space S is one communicating class.
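One way to test irreducibility numerically is to build the full reachability matrix and check that every entry is positive; a sketch, assuming a small made-up chain:

```python
# Irreducibility check: the chain is irreducible iff every state reaches
# every other state. P is a made-up example (a deterministic 2-cycle).
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def reachability(P):
    """R[i, j] is True iff i -> j; repeated squaring of the adjacency matrix."""
    A = ((P > 0) | np.eye(len(P), dtype=bool)).astype(int)  # allow n = 0
    for _ in range(max(1, int(np.ceil(np.log2(len(P)))))):
        A = (A @ A > 0).astype(int)   # walks of doubled length
    return A.astype(bool)

R = reachability(P)
print(R.all())   # True iff irreducible
# i <-> j (communicates) iff R[i, j] and R[j, i] are both True
```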
Closed
A communicating class is closed if no state outside the class is accessible from any state within the class. (A class C subset of S is closed if, whenever i in C and j in S with i -> j, then j in C also.)
Open
If a class is not closed.
Absorbing
If a state i is in a communicating class {i} by itself and that class is closed.
Period
Consider a MC with transition matrix P. We say that a state i in S has period d_i, where d_i = gcd{n in {1,2,…} : P_ii(n) > 0}.
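In practice the gcd stabilises after the first few return lengths, so the period can be estimated by scanning powers of P up to a bound; a sketch, assuming a made-up two-state cycle:

```python
# Estimate the period d_i = gcd{n >= 1 : P_ii(n) > 0} by scanning P^n.
# P is a made-up deterministic 2-cycle (period 2).
from math import gcd
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def period(P, i, n_max=50):
    d, Pn = 0, np.eye(len(P))
    for n in range(1, n_max + 1):
        Pn = Pn @ P                  # Pn is now P^n
        if Pn[i, i] > 1e-12:         # a possible return at time n
            d = gcd(d, n)
    return d

print(period(P, 0))   # 2
```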
Periodic
If d_i > 1.
Aperiodic
If d_i = 1.
Hitting Time
Let (X_n) be a MC on state space S. Let H_A be a RV representing the hitting time to hit the set A subset of S, given by H_A = min{n in {0,1,2,…} : X_n in A}.
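A single realisation of H_A is easy to sample; the sketch below truncates at n_max since H_A can be infinite (the chain and the target set A are made-up examples):

```python
# Sample the hitting time H_A = min{n >= 0 : X_n in A} by simulation.
# P and A are made-up examples; None stands in for H_A = infinity.
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0]])
A = {2}

def hitting_time(P, A, x0, n_max=10_000):
    x, n = x0, 0
    while x not in A:                 # H_A = 0 if we start inside A
        if n >= n_max:
            return None
        x = rng.choice(len(P), p=P[x])
        n += 1
    return n

print(hitting_time(P, A, 0))
```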
Hitting Probability
The hitting probability h_iA of the set A starting from state i is h_iA = P(X_n in A for some n >= 0 | X_0 = i) = P(H_A < ∞ | X_0 = i).
Expected Hitting Time
The expected hitting time eta_iA of the set A starting from state i is eta_iA = E(H_A | X_0 = i).
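For a finite chain, both h_iA and eta_iA satisfy the standard first-step equations: h_iA = 1 and eta_iA = 0 for i in A, while h_iA = sum_j P_ij h_jA and eta_iA = 1 + sum_j P_ij eta_jA for i not in A. A sketch that solves both systems directly, assuming a made-up four-state chain and that the restricted system is non-singular (in general one takes the minimal non-negative solution):

```python
# Solve the first-step equations for hitting probabilities h and expected
# hitting times eta. P and the target set A are made-up examples.
import numpy as np

P = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
A = [3]
B = [i for i in range(len(P)) if i not in A]   # states outside A

Q = P[np.ix_(B, B)]                            # transitions within B
I = np.eye(len(B))
h, eta = np.ones(len(P)), np.zeros(len(P))     # boundary values on A
h[B] = np.linalg.solve(I - Q, P[np.ix_(B, A)].sum(axis=1))
eta[B] = np.linalg.solve(I - Q, np.ones(len(B)))
print(h)     # hitting probabilities h_iA
print(eta)   # expected hitting times eta_iA
```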
Return Time
The return time M_i to state i is M_i = min{n in {1,2,…} : X_n = i}.
Return Probability
The return probability m_i of state i is m_i = P(X_n = i for some n >= 1 | X_0 = i) = P(M_i < ∞ | X_0 = i).
Expected Return Time
The expected return time mu_i of state i is mu_i = E(M_i | X_0 = i).
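Both quantities can be estimated by simulating excursions from i; a Monte Carlo sketch, assuming a made-up two-state chain and truncating each excursion at n_max:

```python
# Monte Carlo estimates of the return probability m_i and the expected
# return time mu_i. P is a made-up example; truncation biases the estimates.
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def return_time(P, i, n_max=10_000):
    """One sample of M_i = min{n >= 1 : X_n = i}, or None if never seen."""
    x = i
    for n in range(1, n_max + 1):
        x = rng.choice(len(P), p=P[x])
        if x == i:
            return n
    return None

samples = [return_time(P, 0) for _ in range(2000)]
returned = [m for m in samples if m is not None]
print(len(returned) / len(samples))   # estimate of m_0
print(np.mean(returned))              # estimate of mu_0
```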
Recurrent
If m_i = 1.
Transient
If m_i < 1.
Positive Recurrent
Let (X_n) be a Markov chain on a state space S. Let i in S be a recurrent state, so m_i = 1, and let mu_i be the expected return time. If mu_i < ∞, we say that state i is positive recurrent.
Null Recurrent
If mu_i = ∞.
Stopping Time
Let (X_n) be a stochastic process in discrete time, and let T be a random time. Then T is a stopping time if, for all n, whether or not the event {T = n} occurs is completely determined by the random variables X_0, X_1, …, X_n.
Stationary Distribution
Let (X_n) be a MC on a state space S with transition matrix P. Let pi = (pi_i) be a distribution on S, in that pi_i >= 0 for all i in S and sum_i pi_i = 1. We call pi a stationary distribution if pi_j = sum_i pi_i P_ij for all j in S, or, in matrix form, pi = pi P.
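For a finite chain, pi = pi P together with the normalisation sum_i pi_i = 1 is a linear system that can be solved directly; a sketch, with a made-up two-state chain:

```python
# Find a stationary distribution: solve (P^T - I) pi = 0 with sum(pi) = 1.
# P is a made-up example; lstsq handles the overdetermined stacked system.
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

n = len(P)
A = np.vstack([P.T - np.eye(n), np.ones(n)])   # stationarity + normalisation
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)   # [2/3, 1/3] for this P; check: pi @ P equals pi
```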
Reversed Chain
Let (X_n) be a MC on state space S. For a fixed integer N, define the reversed chain Y_n := X_(N-n).
Reversible
Let (X_n) be a MC on state space S with transition probabilities p(x,y). If there is a probability distribution pi on S such that (X_n) and pi satisfy the detailed balance conditions pi(x) p(x,y) = pi(y) p(y,x) for all x,y in S, then we say that (X_n) is reversible.
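Detailed balance is easy to verify numerically: the matrix with entries pi(x) p(x,y) must be symmetric. A sketch, reusing the same made-up two-state chain and its stationary distribution:

```python
# Check detailed balance: pi(x) p(x, y) == pi(y) p(y, x) for all x, y,
# i.e. the matrix pi[:, None] * P is symmetric. P and pi are made up.
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([2/3, 1/3])                 # stationary for this P

balance = pi[:, None] * P                 # entry (x, y) = pi(x) p(x, y)
print(np.allclose(balance, balance.T))    # True: this chain is reversible
```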
Graph of (X_n)
Let (X_n) be a MC on state space S. The graph of (X_n) is obtained by taking the transition diagram of (X_n), forgetting the directions of all the edges, and removing multiple edges and self-loops.
Equilibrium Distribution
Let (X_n) be a Markov chain on a state space S with transition matrix P. Suppose there exists a distribution p* = (p*_j) on S (so p*_j >= 0 and sum_j p*_j = 1) such that, whatever the initial distribution lambda = (lambda_i), we have P(X_n = j) -> p*_j as n -> ∞ for all j in S. Then we say that p* is an equilibrium distribution.
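Convergence can be watched directly by iterating the distribution: the row vector lambda P^n approaches the same limit from any starting distribution when the chain is irreducible and aperiodic. A sketch with a made-up two-state chain:

```python
# Watch lambda P^n converge to the equilibrium distribution from two
# different initial distributions. P is a made-up two-state example.
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

for lam in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    dist = lam
    for _ in range(200):
        dist = dist @ P        # distribution of X_(n+1) from that of X_n
    print(dist)                # both print [2/3, 1/3]
```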