Discrete Markov Processes


28 Terms

1

Markov Chain

Let (λ_i) be a probability distribution on a sample space S. Let P_ij, for i, j ∈ S, be such that P_ij ≥ 0 for all i, j and ∑_j P_ij = 1 for all i. Let the time index be n = 0, 1, 2, …. Then the 'time-homogeneous discrete-time Markov process' or MC (X_n) with initial distribution (λ_i) and transition probabilities P_ij is defined by P(X_0 = i) = λ_i and P(X_(n+1) = j | X_n = i, X_(n-1) = x_(n-1), …, X_0 = x_0) = P(X_(n+1) = j | X_n = i) = P_ij.
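A minimal sketch of this definition in code (a hypothetical two-state chain; numpy assumed): X_0 is drawn from λ, and each X_(n+1) is drawn from row X_n of P.

```python
import numpy as np

# Hypothetical two-state chain on S = {0, 1} with transition matrix P.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
lam = np.array([1.0, 0.0])  # initial distribution lambda: start surely in state 0

rng = np.random.default_rng(0)

def simulate(P, lam, n_steps, rng):
    """Draw X_0 from lam, then draw each X_{n+1} from row P[X_n, :]."""
    x = rng.choice(len(lam), p=lam)
    path = [x]
    for _ in range(n_steps):
        x = rng.choice(P.shape[1], p=P[x])
        path.append(x)
    return path

path = simulate(P, lam, 1000, rng)
```

Note that the next state depends only on the current row `P[x]`, never on the earlier history — that is exactly the Markov property of the next card.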

2

Markov Property

Let (X_n) = (X_0, X_1, X_2, …) be a stochastic process in discrete time n = 0, 1, 2, … on a discrete space S. Then (X_n) has the Markov property if, for all times n and all states x_0, x_1, …, x_n, x_(n+1) ∈ S, we have P(X_(n+1) = x_(n+1) | X_n = x_n, X_(n-1) = x_(n-1), …, X_0 = x_0) = P(X_(n+1) = x_(n+1) | X_n = x_n).

3

Stochastic Process

An indexed sequence of random variables that are (usually) dependent on each other.

4

Accessible

Consider a MC on a state space S with transition matrix P. We say that state j ∈ S is accessible from state i ∈ S, and write i -> j, if P_ij(n) > 0 for some n ≥ 0, where P_ij(n) is the n-step transition probability.

5

Communicates

If i -> j and j -> i, we write i <-> j and say that i and j communicate.

6

Irreducible

If the entire state space S is one communicating class.

7

Closed

Communicating class is closed if no state outside the class is accessible from any state within the class. (Class C subset of S is closed if whenever there exist i in C and j in S with i -> j, then j in C also).

8

Open

If a class is not closed.

9

Absorbing

If a state i is in a communicating class {i} by itself and that class is closed.

10

Period

Consider a MC with transition matrix P. We say that a state i ∈ S has period d_i, where d_i = gcd{n ∈ {1, 2, …} : P_ii(n) > 0}.

11

Periodic

If d_i > 1.

12

Aperiodic

If d_i = 1.

13

Hitting Time

Let (๐‘‹๐‘›) be a MC on state space ๐’ฎ. Let ๐ป_๐ด be a RV representing
the hitting time to hit the set ๐ด โŠ‚ ๐’ฎ, given by
๐ป_๐ด= min{๐‘›โˆˆ{0,1,2,โ€ฆ} โˆถ ๐‘‹๐‘› โˆˆ ๐ด}.

14

Hitting Probability

The hitting probability h_iA of the set A starting from state i is h_iA = P(X_n ∈ A for some n ≥ 0 | X_0 = i) = P(H_A < ∞ | X_0 = i).

15

Expected Hitting Time

The expected hitting time η_iA of the set A starting from state i is η_iA = E(H_A | X_0 = i).

16

Return Time

M_i = min{n ∈ {1, 2, …} : X_n = i}.

17

Return Probability

๐‘š_๐‘– = โ„™(๐‘‹๐‘› = ๐‘– for some ๐‘› โ‰ฅ 1 โˆฃ ๐‘‹0 =๐‘–) = โ„™(๐‘€_๐‘–
< โˆž โˆฃ ๐‘‹0 = ๐‘–)

18

Expected Return Time

๐œ‡_๐‘– = ๐”ผ(๐‘€๐‘– โˆฃ ๐‘‹0 = ๐‘–)

19

Recurrent

If m_i = 1.

20

Transient

If m_i < 1.

21

Positive Recurrent

Let (๐‘‹๐‘›) be a Markov chain on a state space ๐’ฎ. Let ๐‘– โˆˆ ๐’ฎ be a recurrent state, so ๐‘š_๐‘– = 1, and let ๐œ‡_๐‘– be the expected return time. If ๐œ‡_๐‘– < โˆž, we say that state ๐‘– is positive recurrent.

22

Null Recurrent

If ๐œ‡_๐‘– = โˆž.

23

Stopping Time

Let (๐‘‹๐‘›) be a stochastic process in discrete time, and let ๐‘‡ be a random time. Then ๐‘‡ is a stopping time if for all ๐‘›, whether or not the event {๐‘‡ = ๐‘›} occurs is completely determined by the random variables ๐‘‹0, ๐‘‹1, โ€ฆ , ๐‘‹n.

24

Stationary Distribution

Let (๐‘‹๐‘›) be a MC on a state space ๐’ฎ with transition matrix P. Let ๐œ‹ = (๐œ‹_๐‘–)
be a distribution on ๐’ฎ, in that ๐œ‹_๐‘– โ‰ฅ 0 for all ๐‘– โˆˆ ๐’ฎ and โˆ‘๐œ‹_๐‘– = 1. We call ๐œ‹ a stationary distribution if ๐œ‹_๐‘— = โˆ‘๐œ‹_๐‘–P_๐‘–๐‘— for all ๐‘—โˆˆ๐’ฎ,
or if ๐œ‹=๐œ‹P.

25

Reversed Chain

Let (๐‘‹๐‘›) be a MC on state space ๐’ฎ. For a fixed integer ๐‘ define the
reversed chain ๐‘Œ๐‘› โˆถ= ๐‘‹_(๐‘โˆ’n).

26

Reversible

Let (๐‘‹๐‘›) be a MC on state space ๐’ฎ with transition probabilities P(๐‘ฅ,๐‘ฆ). If there is a probability distribution ๐œ‹ on ๐’ฎ such that ๐‘‹๐‘› and ๐œ‹ satisfy the detailed balance conditions ๐œ‹(๐‘ฅ)๐‘(๐‘ฅ,๐‘ฆ) = ๐œ‹(๐‘ฆ)๐‘(๐‘ฆ,๐‘ฅ) for all ๐‘ฅ,๐‘ฆ โˆˆ ๐’ฎ, then we say that ๐‘‹๐‘› is reversible.

27

Graph of Xn

Let (๐‘‹๐‘›) be a MC on state space ๐’ฎ. The graph of ๐‘‹๐‘› is obtained by taking the transition diagram of ๐‘‹๐‘› and forgetting all the directions of edges and removing multiple edges and self loops.

28

Equilibrium Distribution

Let (๐‘‹๐‘›) be a Markov chain on a state space ๐’ฎ with transition matrix P. Suppose there exists a distribution pโˆ— = (๐‘โˆ—_๐‘–) on ๐’ฎ (so ๐‘โˆ—_๐‘– โ‰ฅ 0 and โˆ‘ ๐‘โˆ—_๐‘– = 1) such that, whatever the initial distribution ๐œ† = (๐œ†_๐‘–), we have โ„™(๐‘‹๐‘› = ๐‘—) โ†’ ๐‘โˆ—
_๐‘— as ๐‘› โ†’ โˆž for all ๐‘— โˆˆ ๐’ฎ. Then we say that pโˆ— is an equilibrium distribution.
