Stochastic Modeling Midterm 1


51 Terms

1

stochastic process

a family of random variables X_t, one for each t in an index set S, e.g. {X(t) : t >= 0}

2

stochastic process example

Gaussian process (X_t), t in [0,1]: take functions f_1, …, f_n on [0,1] and independent Z_1, …, Z_n ~ N(0, sigma²), and set X_t = sum from l = 1 to n of Z_l f_l(t)

3

markov chain (X_n)

a stochastic process indexed by Z^+ and taking values in (a subset of) Z^+ that satisfies the Markov property

4

markov property

P(X_n = j | X_{n-1} = i_{n-1}, …, X_0 = i_0) = P(X_n = j | X_{n-1} = i_{n-1}) (the future depends on the past only through the present state)

5

a markov chain has a stationary transition probability matrix iff

P(X_n = j | X_{n-1} = i) = f(i,j), i.e. the transition probability depends only on i and j, not on the time n (the result only depends on the step taken, not on when it is taken)

6

example of a markov chain with stationary transitions

(U_n) iid and independent (⊥⊥, the upside-down-pi symbol) of X_0; e.g. a chain driven by such noise, X_n = f(X_{n-1}, U_n), has stationary transitions

7

markov chain completely defined by

transition probability matrix P and initial distribution pi^(0)

8

P_ij =

P(X_1 = j | X_0 = i)

9

what is the initial distribution symbolically?

pi_vector^(0) = (pi_0, …, pi_{N-1}, …), where pi_j = P(X_0 = j); therefore pi_vector^(n) = pi_vector^(0) P^n

10

Properties of Transition Prob. Matrix P

(i) by the law of total probability, for any fixed i, the sum over j of P_ij = 1 (each row must add to 1); (ii) P_ij^(n) = P(X_n = j | X_0 = i) (the n-step transition probability) = (P^n)_ij; (iii) P is doubly stochastic iff each column also adds to 1

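These properties are easy to check numerically. A minimal sketch in NumPy (the matrix P below is an arbitrary example, not one from these cards):

```python
import numpy as np

# Example 2-state transition matrix (rows sum to 1)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# (i) each row of P sums to 1 (law of total probability)
assert np.allclose(P.sum(axis=1), 1.0)

# (ii) the n-step transition probabilities are the entries of P^n
n = 3
Pn = np.linalg.matrix_power(P, n)

# distribution after n steps: pi^(n) = pi^(0) P^n
pi0 = np.array([0.5, 0.5])   # some initial distribution
pin = pi0 @ Pn
print(pin)                   # still a probability vector
```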
11

Absorbing markov chains example matrix

( 1 0 0 0 ; P_10 … P_13; P_20 … P_23; 0 0 0 1)

12

time to absorption

T = min{n : X_n in {0,3}} (T is the first step (or time index n) at which you enter one of the absorbing states, 0 or 3)

13

probability of getting trapped in state 0 starting at state i (prob. of getting absorbed first in 0 starting at i; absorption probabilities U_i)

U_i = P(X_T = 0 | X_0 = i), i in {1,2} (transient states); we know U_0 = 1, U_3 = 0 (absorbing states). (If instead U_i = P(X_T = 3 | X_0 = i), then U_0 = 0 and U_3 = 1.)

14

mean time to absorption (expected time to absorption)

t_i = E(T | X_0 = i), i in {1,2} (transient states); we know t_0 = t_3 = 0. Here t_i is the average number of steps (or time) it takes to reach an absorbing state (0 or 3 in this case); t_0 = t_3 = 0 because you are already trapped, so no time is needed.

15

equations for U (prob. of getting absorbed first in i), given an example of the form: prob. of getting absorbed first in 0

in this case, u_1 = P_10·u_0 + P_11·u_1 + P_12·u_2 + P_13·u_3, where u_0 = 1 and u_3 = 0 (when solving for the prob. of getting absorbed first in 0; the values swap if solving for absorption in 3. DO NOT SWITCH UNLESS THE PROBLEM STATEMENT SWITCHES!) Similarly, u_2 = P_20 + P_21·u_1 + P_22·u_2.

16

equations for t_i:

t_1 = 1 + P_11·t_1 + P_12·t_2; t_2 = 1 + P_21·t_1 + P_22·t_2. (The general equation is t_i = 1 + sum over j of P_ij·t_j; since t_0 = t_3 = 0, those terms drop out.)

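Both linear systems above can be solved numerically. A sketch with NumPy, using made-up transition probabilities for the 4-state chain with absorbing states 0 and 3 (the specific numbers are illustrative assumptions, not values from the cards):

```python
import numpy as np

# Hypothetical 4-state chain; states 0 and 3 are absorbing
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.3, 0.2, 0.3, 0.2],
              [0.1, 0.2, 0.3, 0.4],
              [0.0, 0.0, 0.0, 1.0]])

Q = P[1:3, 1:3]   # transitions among transient states {1, 2}
b = P[1:3, 0]     # one-step probabilities of absorbing into 0
I = np.eye(2)

# u_i = P(absorbed first in 0 | X_0 = i): solve (I - Q) u = b
u = np.linalg.solve(I - Q, b)

# t_i = E(T | X_0 = i): solve (I - Q) t = 1
t = np.linalg.solve(I - Q, np.ones(2))
print(u, t)
```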
17

General Absorbing Chain

P = (Q R; 0 I). Note: powers of this matrix keep the same block form. Q : r x r, R : r x (N - r), 0 : (N - r) x r, I : (N - r) x (N - r) (transient states 0, …, r-1; absorbing states r, …, N-1)

18

Time to absorption for general absorbing markov chain

T = min{n : X_n in {r, …, N-1}} → the absorbing states are r through N-1 in the general absorbing chain.

19

u_ik for general absorbing markov chain (probability of 1st getting absorbed in k starting at i)

Note: two indices. u_ik = P(X_T = k | X_0 = i), where i in {0, …, r-1} (transient states) and k in {r, …, N-1} (absorbing states). To find the matrix U of absorption probabilities, solve (I - Q)U = R, i.e. U = (I - Q)^(-1) R.

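A sketch of the matrix version with NumPy; the blocks Q and R below are made-up examples (the same hypothetical numbers as before, written in canonical block form P = (Q R; 0 I)):

```python
import numpy as np

# Transient-to-transient block Q and transient-to-absorbing block R
Q = np.array([[0.2, 0.3],
              [0.2, 0.3]])
R = np.array([[0.3, 0.2],   # rows: transient states; cols: absorbing states
              [0.1, 0.4]])

# Absorption probabilities: (I - Q) U = R, so U = (I - Q)^(-1) R
U = np.linalg.solve(np.eye(2) - Q, R)
print(U)   # each row sums to 1: absorbed somewhere with probability 1
```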
20

limiting distribution

a chain (X_n) has a limiting distribution pi_vector = (pi_0, pi_1, …) iff lim n→inf P_ij^(n) = pi_j > 0 for all i, j. In terms of the Markov chain (X_n): lim n→inf P(X_n = j | X_0 = i) = pi_j > 0 for all j. This convergence means that in the long run, the probability of finding the chain in state j is approx. pi_j, no matter in which state the chain began at time 0. In English: after the process has been in operation for a long duration, the prob. of finding the process in state j is pi_j, irrespective of the starting state. AKA the distribution the Markov chain converges to as n → inf.

21

limiting distribution example:

P = (1-a a; b 1-b), with limiting dist. pi = (b/(a+b), a/(a+b)). As you take powers of P, every row converges to this same vector; the limiting probabilities are b/(a+b) and a/(a+b).

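A quick numerical illustration of this convergence (a and b are arbitrary example values):

```python
import numpy as np

a, b = 0.3, 0.2             # example parameters for the chain (1-a a; b 1-b)
P = np.array([[1 - a, a],
              [b, 1 - b]])

# Powers of P: every row converges to the limiting distribution
Pn = np.linalg.matrix_power(P, 100)
limit = np.array([b / (a + b), a / (a + b)])
print(Pn)                   # both rows approach (0.4, 0.6)
```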
22

stationary distribution:

pi_vector is a stationary distribution of (X_n) iff pi_vector · P = pi_vector, with pi_j >= 0 for all j and the sum of the pi_j = 1

23

T/F a limiting dist is a stationary dist

T

24

why is it called a “stationary” distribution?

it describes the long-run behaviour of the chain: if X_n ~ pi, then X_{n+1} ~ pi, so the distribution does not change over time. Algebraically, pi is a left eigenvector of P corresponding to eigenvalue 1.

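The left-eigenvector characterization can be sketched in NumPy (the matrix P is a made-up example):

```python
import numpy as np

P = np.array([[0.5, 0.5],
              [0.1, 0.9]])

# Stationary distribution: left eigenvector of P for eigenvalue 1,
# i.e. a right eigenvector of P^T
vals, vecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(vals - 1))   # pick the eigenvalue closest to 1
pi = np.real(vecs[:, k])
pi = pi / pi.sum()                # normalize so the entries sum to 1

assert np.allclose(pi @ P, pi)    # pi P = pi
print(pi)
```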
25

(P²)_ij =

P^(2)_ij

26

compute the distribution of X_n in terms of P and distribution of X_0

pi_vector^(n) = (P(X_n = 0), P(X_n = 1), P(X_n = 2), …) = pi_vector^(0) P^n

27

uniform initial distribution

pi_vector^(0) = (1/n, …,1/n)

28

if i communicates with j

then i is recurrent iff j is recurrent

29

example for: if there is a limiting distribution, we find it by solving eigenvector problems, but the converse is not true (eg a stationary distribution does not imply that a limiting distribution exists)

The existence of a stationary distribution does not automatically guarantee that the chain's distribution converges to it as time goes to infinity (i.e. that a limiting distribution exists). Example: P = (0 1; 1 0). Finding the probability vector (pi_1, pi_2) s.t. pi = pi·P and pi_1 + pi_2 = 1: pi_1 = 0·pi_1 + 1·pi_2 and, for state 2, pi_2 = 1·pi_1 + 0·pi_2, so pi_1 = pi_2 = 1/2, and the stationary distribution is (1/2, 1/2). But the limiting distribution does not exist: the chain is periodic with period two. If you start in state 1 at n = 0, you are in state 1 w.p. 1, so the dist. is (1,0); at n = 1 you move to state 2 w.p. 1, so the dist. is (0,1); and so on, oscillating between the two and never settling to the stationary distribution (1/2, 1/2).

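The oscillation in this example is easy to see numerically:

```python
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# A stationary distribution exists: (1/2, 1/2) solves pi P = pi
pi = np.array([0.5, 0.5])
assert np.allclose(pi @ P, pi)

# But the limiting distribution does not: starting from state 0,
# the distribution oscillates between (1, 0) and (0, 1) forever
d = np.array([1.0, 0.0])
for n in range(4):
    print(n, d)
    d = d @ P
```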
30

T/F We can always solve pi = pi P to get a stationary distribution

T (for a chain with finitely many states, a solution always exists)

31

Regular Markov Chain

w/ finitely many states: (X_n) with states {0, …, N-1} is regular iff there is a time n s.t. P_ij^(n) > 0 for all i, j

32

Example of regular markov chain

(0.9, 0.1; 0.2, 0.8) Is regular because all entries are positive, meaning that in one step there is a nonzero chance of transitioning from any state to any other state. Irreducible: you can get from any state to any other state (Either directly or over several steps) Aperiodic: There is no fixed cycle; for example, because each state has a positive probability of remaining the same (self-transitioning), the chain does NOT force you into a cyclic pattern.

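A small helper (hypothetical, not from the cards) that tests regularity by checking whether some power of P has all positive entries:

```python
import numpy as np

def is_regular(P, max_n=100):
    """Return True if some power P^n (n <= max_n) has all entries > 0."""
    Pn = np.eye(len(P))
    for _ in range(max_n):
        Pn = Pn @ P
        if np.all(Pn > 0):
            return True
    return False

P_reg = np.array([[0.9, 0.1], [0.2, 0.8]])  # all entries already positive
P_per = np.array([[0.0, 1.0], [1.0, 0.0]])  # periodic: powers alternate
print(is_regular(P_reg), is_regular(P_per))
```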
33

T/F If (X_n) is regular, then it has a limiting distribution with all pi_i > 0

T

34

T/F If (X_n) has a limiting distribution with all pi_i > 0 then it is regular

F

35

Does every doubly stochastic matrix have a limiting distribution?

no (for example (0, 1; 1, 0)) but yes if it is ALSO regular

36

Irreducibility

the states i and j communicate iff i → j and j → i in some finite number of steps. (X_n) is irreducible iff there is only one equivalence class, i.e. every two states communicate.

37

example of non irreducibility (a reducible chain)

P = (1/3 2/3 0; 2/3 1/3 0; 0 0 1). Classes: {0,1}, {2}, because 2 does not communicate with the other states.

38

example of irreducibility

P = (0 1 0; 0 0 1; 1 0 0) (the chain cycles 0 → 1 → 2 → 0, so every state can reach every other state)

39

Periodicity

Define the period d(i) of state i as gcd{n : P^(n)_ii > 0} (gcd = greatest common divisor). A chain is aperiodic iff d(i) = 1 for all i (each state has period 1).

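The gcd definition can be computed directly; `period` below is a hypothetical helper for illustration:

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, i, max_n=50):
    """Period d(i) = gcd{ n : (P^n)_ii > 0 }, scanned up to max_n steps."""
    returns = []
    Pn = np.eye(len(P))
    for n in range(1, max_n + 1):
        Pn = Pn @ P
        if Pn[i, i] > 1e-12:
            returns.append(n)
    return reduce(gcd, returns) if returns else 0

P_cycle = np.array([[0.0, 1.0], [1.0, 0.0]])  # returns only at even n
P_lazy  = np.array([[0.5, 0.5], [0.5, 0.5]])  # P_ii > 0, so period 1
print(period(P_cycle, 0), period(P_lazy, 0))
```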
40

example of periodicity

suppose P(X_1 = i | X_0 = i) = P_ii > 0. Then P^(2)_ii = sum over j of P_ij·P_ji >= P_ii² > 0, so both n = 1 and n = 2 give a positive return probability and d(i) = gcd{1, 2, …} = 1. In English: the probability of going from state i back to i in one step is positive, and the two-step return probability (summing over intermediate states) is at least P_ii squared, hence also positive; therefore the period is 1 for all such i.

41

A chain is aperiodic if

its transition probability matrix has all positive entries, or all diagonal entries P_ii are positive (every state can stay put for a step)

42

Returning times to initial state (NOT TIME TO ABSORPTION)

T_i = min{n >= 1 : X_n = i}; return probability: P(T_i < inf | X_0 = i). In English: the probability of returning to i within a finite time.

43

when is i recurrent?

iff sum from n = 1 to inf of P^(n)_ii = inf, or equivalently f_ii = 1, where f_ii = prob. the chain returns to i in finite time

44

when is i transient?

iff sum from n = 1 to inf of P^(n)_ii < inf, or equivalently f_ii < 1, where f_ii = prob. the chain returns to i in finite time

45

if (X_n) has finitely many states and is aperiodic and irreducible, then

it is regular and recurrent

46

example: if (X_n) is regular (hence aperiodic),

then it is recurrent

47

give an example of a non recurrent P

(1/3 2/3 0; 1/3 1/3 1/3; 0 0 1) → not recurrent because you can get stuck in state 2 if you go from state 1 to state 2. For a state to be recurrent, starting from that state you must return to it with prob. 1. Here, if you start in state 0 or 1, there's a nonzero chance you'll eventually be absorbed into state 2 and never return to your starting state; therefore 0 and 1 are transient and NOT recurrent.

48

example: recurrent chain whose restriction to {0,1} is regular

(1/3 2/3 0; 2/3 1/3 0; 0 0 1) → every state is recurrent: {0,1} is a closed communicating class (you cannot reach 2 from it), and state 2 is absorbing; if you start in state 2, you return to it at every step. The full chain is not regular (every power of P has zeros), but the restriction of P to {0,1} has all positive entries and is therefore regular. Not to be confused with irreducibility, which means EVERY state can be reached from EVERY OTHER state; this chain is reducible.

49

Limit Theorem

If (X_n) is aperiodic, irreducible, and recurrent, then lim n→inf P^(n)_ji = 1/m_i for all j, i, where m_i is the mean return (waiting) time: m_i = E(T_i^(n+1) - T_i^(n)) = E(T_i | X_0 = i). Note that 1/m_i does not depend on the starting state j.

50

T/F i is positive recurrent iff lim n→inf P_ji^(n) > 0

T

51

equivalence class properties: assume i ←> j ie j E [i]

(i) d(i) = d(j); (ii) i recurrent → j recurrent; (iii) i aperiodic / recurrent / positive recurrent → j also; (iv) if i is aperiodic and recurrent, then the restriction of P to the states in [i] is an aperiodic, recurrent, and irreducible transition probability matrix, i.e. (X_n) restricted to the states in [i] is itself such a chain.

