Discrete Markov Processes

27 Terms

1. Markov Chain

Let (λ_i) be a probability distribution on a state space S. Let P_ij, where i, j ∈ S, be such that P_ij ≥ 0 for all i, j and Σ_j P_ij = 1 for all i. Let the time index be n = 0, 1, 2, …. Then the 'time-homogeneous discrete-time Markov process', or MC, (X_n) with initial distribution (λ_i) and transition probabilities P_ij is defined by P(X_0 = i) = λ_i and P(X_(n+1) = j | X_n = i, X_(n−1) = x_(n−1), …, X_0 = x_0) = P(X_(n+1) = j | X_n = i) = P_ij.
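As a quick illustration of this definition, here is a minimal sketch of simulating such a chain: draw X_0 from λ, then repeatedly draw X_(n+1) from the row of P indexed by the current state. The two-state chain and the function name are my own hypothetical example, not from the course.

```python
import random

# Hypothetical two-state chain on S = {0, 1}.
lam = [0.5, 0.5]      # initial distribution (lambda_i)
P = [[0.9, 0.1],      # P[i][j] = P(X_{n+1} = j | X_n = i); each row sums to 1
     [0.2, 0.8]]

def simulate(lam, P, n_steps, rng=random):
    """Draw X_0 from lam, then draw X_{n+1} from row P[X_n] at each step."""
    states = list(range(len(lam)))
    x = rng.choices(states, weights=lam)[0]
    path = [x]
    for _ in range(n_steps):
        x = rng.choices(states, weights=P[x])[0]
        path.append(x)
    return path

path = simulate(lam, P, 10)   # one sample trajectory X_0, ..., X_10
```

Note that each row P[i] is itself a probability distribution, which is exactly the row-sum condition in the definition.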

2. Markov Property

Let (X_n) = (X_0, X_1, X_2, …) be a stochastic process in discrete time n = 0, 1, 2, … on a discrete space S. Then (X_n) has the Markov property if, for all times n and all states x_0, x_1, …, x_n, x_(n+1) ∈ S, we have P(X_(n+1) = x_(n+1) | X_n = x_n, X_(n−1) = x_(n−1), …, X_0 = x_0) = P(X_(n+1) = x_(n+1) | X_n = x_n).

3. Stochastic Process

An indexed sequence of random variables that are (usually) dependent on each other.

4. Accessible

Consider a MC on a state space S with transition matrix P. We say that state j ∈ S is accessible from state i ∈ S, and write i → j, if P_ij(n) > 0 for some n.

5. Communicates

If i → j and j → i, we say that i and j communicate, and write i ↔ j.

6. Irreducible

If the entire state space S is one communicating class.
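To make "one communicating class" concrete, this sketch (a hypothetical 3-state example; the helper names are my own) partitions a finite chain into communicating classes by checking mutual reachability through the positive entries of P. The chain is irreducible exactly when a single class comes back.

```python
from collections import deque

def reachable(adj, i):
    """All states j with i -> j (including i itself, via n = 0)."""
    seen = {i}
    queue = deque([i])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def communicating_classes(P):
    """Partition states of a row-stochastic matrix P into classes with i <-> j."""
    n = len(P)
    adj = [[j for j in range(n) if P[i][j] > 0] for i in range(n)]
    reach = [reachable(adj, i) for i in range(n)]
    classes, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        cls = {j for j in reach[i] if i in reach[j]}   # mutual accessibility
        classes.append(cls)
        assigned |= cls
    return classes

# Hypothetical chain: 0 and 1 communicate, 2 is absorbing, so NOT irreducible.
P = [[0.5, 0.5, 0.0],
     [0.4, 0.4, 0.2],
     [0.0, 0.0, 1.0]]
classes = communicating_classes(P)   # [{0, 1}, {2}]
```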

7. Closed

A communicating class is closed if no state outside the class is accessible from any state within the class. (A class C ⊆ S is closed if, whenever there exist i ∈ C and j ∈ S with i → j, then j ∈ C also.)

8. Open

If a class is not closed.

9. Absorbing

If a state i is in a communicating class {i} by itself and that class is closed.

10. Period

Consider a MC with transition matrix P. We say that a state i ∈ S has period d_i, where d_i = gcd{n ∈ {1, 2, …} : p_ii(n) > 0}.
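This gcd can be computed directly from the definition by scanning the diagonal entries of the matrix powers P^n. A sketch (my own helper names; for a finite chain, scanning n up to len(P)**2 is a safe heuristic bound, as the gcd stabilises once enough return lengths have been seen):

```python
from math import gcd

def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def period(P, i, max_n=None):
    """gcd of {n >= 1 : p_ii(n) > 0}, scanning n up to a bound."""
    if max_n is None:
        max_n = len(P) ** 2
    d = 0            # gcd(0, n) == n, so the first positive return sets d
    Pn = P           # P^1
    for n in range(1, max_n + 1):
        if Pn[i][i] > 0:
            d = gcd(d, n)
        Pn = mat_mul(Pn, P)   # advance to P^(n+1)
    return d

# Hypothetical two-state flip chain: returns to 0 only at even times, d_0 = 2.
P = [[0.0, 1.0],
     [1.0, 0.0]]
d0 = period(P, 0)   # 2, so state 0 is periodic
```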

11. Periodic

If d_i > 1.

12. Aperiodic

If d_i = 1.

13. Hitting Time

Let (X_n) be a MC on state space S. Let H_A be a RV representing the hitting time of the set A ⊆ S, given by H_A = min{n ∈ {0, 1, 2, …} : X_n ∈ A}.

14. Hitting Probability

The hitting probability h_iA of the set A starting from state i is h_iA = P(X_n ∈ A for some n ≥ 0 | X_0 = i) = P(H_A < ∞ | X_0 = i).
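A standard fact (not stated on this card) is that the hitting probabilities are the minimal non-negative solution of h_iA = 1 for i ∈ A and h_iA = Σ_j p_ij h_jA otherwise, so iterating that recursion from h = 0 converges to the right answer. A sketch on a hypothetical gambler's-ruin chain, where the closed form h_i{4} = i/4 is known:

```python
def hitting_probs(P, A, n_iter=1000):
    """Approximate h_iA by iterating h <- (1 on A, sum_j p_ij h_j off A),
    starting from h = 0; converges to the minimal non-negative solution."""
    n = len(P)
    h = [0.0] * n
    for _ in range(n_iter):
        h = [1.0 if i in A else sum(P[i][j] * h[j] for j in range(n))
             for i in range(n)]
    return h

# Hypothetical symmetric gambler's ruin on {0, ..., 4}, absorbing at 0 and 4.
P = [[1,   0,   0,   0,   0],
     [0.5, 0,   0.5, 0,   0],
     [0,   0.5, 0,   0.5, 0],
     [0,   0,   0.5, 0,   0.5],
     [0,   0,   0,   0,   1]]
h = hitting_probs(P, A={4})   # approximately [0, 0.25, 0.5, 0.75, 1]
```

Starting from zero matters: other non-negative solutions can exist, and the minimal one is the true hitting probability.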

15. Expected Hitting Time

The expected hitting time η_iA of the set A starting from state i is η_iA = E(H_A | X_0 = i).
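Expected hitting times satisfy the analogous recursion (a standard fact, not on this card): η_iA = 0 for i ∈ A and η_iA = 1 + Σ_j p_ij η_jA otherwise, with the true values again the minimal non-negative solution. A sketch on the same hypothetical gambler's-ruin chain, where the expected absorption time from i is i(4 − i):

```python
def expected_hitting_times(P, A, n_iter=2000):
    """Approximate eta_iA by iterating eta <- (0 on A, 1 + sum_j p_ij eta_j),
    starting from eta = 0; converges upward to the minimal solution."""
    n = len(P)
    eta = [0.0] * n
    for _ in range(n_iter):
        eta = [0.0 if i in A else 1.0 + sum(P[i][j] * eta[j] for j in range(n))
               for i in range(n)]
    return eta

# Hypothetical symmetric gambler's ruin on {0, ..., 4}, hitting A = {0, 4}.
P = [[1,   0,   0,   0,   0],
     [0.5, 0,   0.5, 0,   0],
     [0,   0.5, 0,   0.5, 0],
     [0,   0,   0.5, 0,   0.5],
     [0,   0,   0,   0,   1]]
eta = expected_hitting_times(P, A={0, 4})   # approximately [0, 3, 4, 3, 0]
```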

16. Return Probability

m_i = P(X_n = i for some n ≥ 1 | X_0 = i) = P(M_i < ∞ | X_0 = i), where M_i = min{n ∈ {1, 2, …} : X_n = i} is the return time to state i.

17. Expected Return Time

μ_i = E(M_i | X_0 = i)

18. Recurrent

If m_i = 1.

19. Transient

If m_i < 1.

20. Positive Recurrent

Let (X_n) be a Markov chain on a state space S. Let i ∈ S be a recurrent state, so m_i = 1, and let μ_i be the expected return time. If μ_i < ∞, we say that state i is positive recurrent.

21. Null Recurrent

If 𝜇_𝑖 = ∞.

22. Stopping Time

Let (X_n) be a stochastic process in discrete time, and let T be a random time. Then T is a stopping time if, for all n, whether or not the event {T = n} occurs is completely determined by the random variables X_0, X_1, …, X_n.

23. Stationary Distribution

Let (X_n) be a MC on a state space S with transition matrix P. Let π = (π_i) be a distribution on S, in that π_i ≥ 0 for all i ∈ S and Σ_i π_i = 1. We call π a stationary distribution if π_j = Σ_i π_i P_ij for all j ∈ S, or equivalently π = πP.
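As a small check of the condition π = πP, a sketch on a hypothetical two-state chain where the stationary distribution has a closed form: for P = [[1−a, a], [b, 1−b]], the distribution π = (b, a)/(a + b) is stationary.

```python
# Hypothetical two-state chain: verify pi = pi P by direct computation.
a, b = 0.3, 0.1
P = [[1 - a, a],
     [b, 1 - b]]
pi = [b / (a + b), a / (a + b)]   # candidate stationary distribution

# (pi P)_j = sum_i pi_i P_ij
pi_P = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]
assert all(abs(pi_P[j] - pi[j]) < 1e-12 for j in range(2))
```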

24. Reversed Chain

Let (X_n) be a MC on state space S. For a fixed integer N, define the reversed chain Y_n := X_(N−n) for n = 0, 1, …, N.

25. Reversible

Let (X_n) be a MC on state space S with transition probabilities p(x, y). If there is a probability distribution π on S such that (X_n) and π satisfy the detailed balance conditions
π(x) p(x, y) = π(y) p(y, x) for all x, y ∈ S,
then we say that (X_n) is reversible.
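A standard example (not from this card, and the variable names are my own): simple random walk on an undirected graph is reversible, with π(x) proportional to deg(x). Checking detailed balance directly on a small path graph 0–1–2:

```python
# Simple random walk on a hypothetical path graph 0 - 1 - 2.
neighbours = {0: [1], 1: [0, 2], 2: [1]}
deg = {x: len(ns) for x, ns in neighbours.items()}
total = sum(deg.values())                      # = 2 * (number of edges)
pi = {x: deg[x] / total for x in neighbours}   # {0: 0.25, 1: 0.5, 2: 0.25}

def p(x, y):
    """Transition probability of the walk: uniform over the neighbours of x."""
    return 1 / deg[x] if y in neighbours[x] else 0.0

# Detailed balance: pi(x) p(x, y) == pi(y) p(y, x) for all x, y.
ok = all(abs(pi[x] * p(x, y) - pi[y] * p(y, x)) < 1e-12
         for x in neighbours for y in neighbours)
```

Both sides of the balance condition equal 1/(2 |E|) across each edge, which is why the degree-proportional π works.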

26. Graph of X_n

Let (X_n) be a MC on state space S. The graph of X_n is obtained by taking the transition diagram of X_n, forgetting the directions of all edges, and removing multiple edges and self-loops.

27. Equilibrium Distribution

Let (X_n) be a Markov chain on a state space S with transition matrix P. Suppose there exists a distribution p* = (p*_i) on S (so p*_i ≥ 0 and Σ_i p*_i = 1) such that, whatever the initial distribution λ = (λ_i), we have P(X_n = j) → p*_j as n → ∞ for all j ∈ S. Then we say that p* is an equilibrium distribution.
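The "whatever the initial distribution" part can be seen numerically: iterate λ ← λP from two different starting distributions and watch both converge to the same limit. A sketch on a hypothetical aperiodic, irreducible two-state chain whose equilibrium distribution is p* = (0.25, 0.75):

```python
def step(lam, P):
    """One step of the distribution recursion: lam <- lam P."""
    n = len(P)
    return [sum(lam[i] * P[i][j] for i in range(n)) for j in range(n)]

# Hypothetical aperiodic, irreducible two-state chain.
P = [[0.7, 0.3],
     [0.1, 0.9]]
lam1, lam2 = [1.0, 0.0], [0.0, 1.0]   # two very different starts
for _ in range(200):
    lam1, lam2 = step(lam1, P), step(lam2, P)
# Both lam1 and lam2 are now close to p* = (0.25, 0.75).
```

For chains like this one, the equilibrium distribution coincides with the stationary distribution π from card 23.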