Title: Stochastic processes and simulation in the natural sciences
Instructor: Giacomo Zanella
Date: March 4, 2025
Discrete-time Markov chains
1.1 Introduction to Stochastic Processes
1.2 Markov Chains
1.2.1 Chapman-Kolmogorov equations
1.3 Classification of states and recurrence
1.3.1 Communication classes
1.3.2 Transience and recurrence
1.3.3 Random walks in d dimensions
1.4 Convergence to stationarity
1.4.1 Stationary distributions
1.4.2 Convergence in distribution
1.4.3 Convergence of asymptotic frequencies
1.4.4 Ergodic averages and law of large numbers for MCs
1.5 Counter-examples where the convergence theorem fails
1.5.1 Infinite state spaces and non-existence of stationary distributions
1.5.2 Lack of irreducibility and hitting times
1.5.3 Time-inhomogeneity and self-reinforcing processes
1.6 Reversibility and detailed balance
Illustrative examples and applications of discrete-time Markov chains
2.1 Random walks on graphs and PageRank
2.2 Sequential testing
2.3 Hidden Markov models
Continuous-time Markov chains
3.1 Background and tools
3.1.1 Exponential distribution
3.1.2 Hazard rates
3.1.3 First-event properties
3.2 Markov property and transition probabilities
3.2.1 Generator/Jump rates
3.3 Examples of CTMCs
3.3.1 Binary chain
3.3.2 Poisson process
3.3.3 Birth process with linear birth rate
3.3.4 Process with birth, death and immigration
3.4 Kolmogorov differential equations
3.4.1 Kolmogorov backward and forward equations
3.5 Jump chain and holding time construction
3.6 Limiting behavior of CTMCs
3.7 Birth and death processes
3.8 The Poisson Process
3.8.1 One-dimensional, homogeneous case
3.8.2 One-dimensional, non-homogeneous case
3.8.3 General case
Stochastic processes are models for random quantities that change or evolve over time.
Formally, a stochastic process is a collection of random variables {Xt, t ∈ T}, where each Xt maps outcomes to states.
Definition: A discrete-time stochastic process is a Markov chain if, conditional on the present state, the future state is independent of the past states.
Transition Matrix: It characterizes the behavior of the Markov chain, with entries Pij = Pr(Xt+1 = j | Xt = i).
Examples include the Gambler's ruin and random walks.
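As a minimal sketch (the 3-state transition matrix here is made up for illustration, not taken from the course), a chain governed by a transition matrix can be simulated by repeatedly sampling the next state from the row of the current state:

```python
import numpy as np

# Hypothetical 3-state transition matrix; each row sums to 1
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

def simulate_chain(P, x0, n_steps, rng):
    """Simulate n_steps transitions of a Markov chain with matrix P from state x0."""
    path = [x0]
    for _ in range(n_steps):
        # Next state is drawn from the row of the current state
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return path

rng = np.random.default_rng(0)
path = simulate_chain(P, x0=0, n_steps=10, rng=rng)
```

The same loop covers the Gambler's ruin and random walk examples once their transition matrices are written down.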
Two states communicate if each is accessible from the other.
Communication is an equivalence relation, so the state space partitions into communication classes.
A state is recurrent if, starting from that state, it will eventually return with probability 1; otherwise, it's transient.
The simple symmetric random walk is recurrent in dimensions d = 1 and d = 2 and transient in all dimensions d ≥ 3 (Pólya's theorem).
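A crude Monte Carlo probe of this dichotomy (walk length, sample size, and seed are arbitrary choices, not from the course) counts how often a simple random walk revisits the origin within a fixed horizon:

```python
import numpy as np

def returned_within(d, n_steps, n_walks, rng):
    """Fraction of d-dimensional simple random walks that revisit the
    origin within n_steps steps."""
    count = 0
    for _ in range(n_walks):
        pos = np.zeros(d, dtype=int)
        for _ in range(n_steps):
            axis = rng.integers(d)          # pick a coordinate uniformly
            pos[axis] += rng.choice([-1, 1])  # step +/-1 along it
            if not pos.any():               # back at the origin
                count += 1
                break
    return count / n_walks

rng = np.random.default_rng(1)
f1 = returned_within(d=1, n_steps=200, n_walks=500, rng=rng)
f3 = returned_within(d=3, n_steps=200, n_walks=500, rng=rng)
# f1 should noticeably exceed f3, consistent with recurrence in d=1
# and transience in d=3
```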
Stationary Distribution: A distribution π satisfying πP = π; a chain started from π remains distributed as π at every time.
Convergence theorems establish that an irreducible, aperiodic Markov chain with stationary distribution π converges to π in distribution from any starting state.
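Under these conditions the stationary distribution can be approximated by power iteration, i.e. by repeatedly applying P to any initial distribution (sketched with a made-up 3-state matrix):

```python
import numpy as np

# Hypothetical irreducible, aperiodic transition matrix
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# Power iteration: the distribution of Xt converges to pi as t grows
pi = np.full(3, 1 / 3)          # arbitrary starting distribution
for _ in range(500):
    pi = pi @ P
# pi now approximates the stationary distribution: pi @ P ~= pi
```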
Counter-examples where the convergence theorem fails include infinite state spaces with no stationary distribution, chains that are not irreducible, and time-inhomogeneous or self-reinforcing processes.
A Markov chain is reversible with respect to π if it satisfies the detailed balance equations πi Pij = πj Pji for all states i and j.
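Detailed balance can be checked numerically; the sketch below uses a hypothetical birth-death chain on {0, 1, 2} (such chains are always reversible) with its stationary distribution worked out by hand:

```python
import numpy as np

def satisfies_detailed_balance(pi, P, tol=1e-10):
    """Check pi[i] * P[i, j] == pi[j] * P[j, i] for all pairs i, j."""
    F = pi[:, None] * P      # probability flow F[i, j] = pi_i * P_ij
    return np.allclose(F, F.T, atol=tol)

# Hypothetical birth-death chain on {0, 1, 2}
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])
# Solving the detailed balance recursion gives pi proportional to (3, 5, 3)
pi = np.array([3.0, 5.0, 3.0]) / 11.0
```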
PageRank ranks web pages by the stationary distribution of a random walk on the directed web graph, with random jumps to uniformly chosen pages added to guarantee irreducibility.
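A minimal PageRank sketch by power iteration, on a hypothetical 4-page link graph (every page here has an outgoing link, so dangling nodes are not handled; the damping factor 0.85 is the standard choice):

```python
import numpy as np

# Hypothetical link structure: adj[i, j] = 1 if page i links to page j
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)

d = 0.85                                      # damping factor
n = len(adj)
P = adj / adj.sum(axis=1, keepdims=True)      # random-walk transition matrix
G = d * P + (1 - d) / n                       # uniform jumps ensure irreducibility

rank = np.full(n, 1 / n)
for _ in range(100):
    rank = rank @ G                           # converges to the stationary distribution
```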
Sequential testing provides frameworks for comparing the effectiveness of drugs or treatments in randomized trials, with decisions made as the data accrue.
Hidden Markov models pair a latent Markov chain with noisy observations of it and are widely applied in signal processing.
Exponential distribution: the continuous distribution characterized by a constant hazard rate, which makes it memoryless.
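Memorylessness, P(X > s + t | X > s) = P(X > t), can be illustrated by simulation (the rate and thresholds below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
rate = 1.5
x = rng.exponential(1 / rate, size=200_000)   # Exp(rate) samples

s, t = 0.5, 1.0
cond = np.mean(x[x > s] > s + t)   # empirical P(X > s + t | X > s)
uncond = np.mean(x > t)            # empirical P(X > t)
# both should agree, up to Monte Carlo error, with exp(-rate * t)
```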
Continuous-time Markov chains satisfy the Markov property: given the present state, the future evolution is independent of the past.
Covers canonical examples of CTMCs, including the binary chain, the Poisson process, and birth-death processes.
Defines and derives the Kolmogorov backward and forward differential equations governing the transition probabilities of CTMCs.
Introduces techniques to simulate CTMC trajectories via the jump chain together with exponentially distributed holding times.
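A sketch of this construction for a generator matrix Q: hold in state x for an Exp(-Qxx) time, then jump according to the normalized off-diagonal rates (illustrated on a hypothetical binary chain with made-up rates):

```python
import numpy as np

def simulate_ctmc(Q, x0, t_max, rng):
    """Jump-chain / holding-time simulation of a CTMC with generator Q.
    Returns the visited states and the corresponding jump times."""
    states, times = [x0], [0.0]
    x, t = x0, 0.0
    while True:
        rate = -Q[x, x]                  # total rate of leaving state x
        if rate <= 0:                    # absorbing state: stop
            break
        t += rng.exponential(1 / rate)   # holding time ~ Exp(rate)
        if t > t_max:
            break
        probs = Q[x].clip(min=0) / rate  # jump-chain transition probabilities
        x = rng.choice(len(Q), p=probs)
        states.append(x)
        times.append(t)
    return states, times

# Hypothetical binary chain: 0 -> 1 at rate 2, 1 -> 0 at rate 1
Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])
rng = np.random.default_rng(3)
states, times = simulate_ctmc(Q, x0=0, t_max=10.0, rng=rng)
```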
Derives the long-run behavior of CTMCs and identifies their limiting distributions.
Explains how birth-death chains model populations and queues.
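As an illustration, the generator of a birth-death chain with constant birth and death rates (a truncated M/M/1-style queue; the rates and truncation level are arbitrary) can be assembled as:

```python
import numpy as np

def birth_death_generator(birth, death, n_max):
    """Generator of a birth-death chain on {0, ..., n_max} with constant
    birth rate `birth` and death rate `death` (truncated at n_max)."""
    Q = np.zeros((n_max + 1, n_max + 1))
    for n in range(n_max + 1):
        if n < n_max:
            Q[n, n + 1] = birth      # birth: n -> n + 1
        if n > 0:
            Q[n, n - 1] = death      # death: n -> n - 1
        Q[n, n] = -Q[n].sum()        # diagonal makes each row sum to zero
    return Q

Q = birth_death_generator(birth=1.0, death=1.5, n_max=20)
```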
Defines the Poisson process as a counting process and extends it to the non-homogeneous case, in which the rate λ(t) varies over time.
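One standard way to simulate the non-homogeneous case is thinning (Lewis-Shedler): generate candidate events from a homogeneous process whose rate dominates λ(t) and accept each candidate at time t with probability λ(t)/rate_max. The intensity function below is made up for illustration:

```python
import numpy as np

def thinning(rate_fn, rate_max, t_max, rng):
    """Sample a non-homogeneous Poisson process on [0, t_max] by thinning.
    Requires rate_fn(t) <= rate_max for all t in [0, t_max]."""
    events, t = [], 0.0
    while True:
        t += rng.exponential(1 / rate_max)      # candidate from the dominating process
        if t > t_max:
            return np.array(events)
        if rng.uniform() < rate_fn(t) / rate_max:  # accept with prob lambda(t)/rate_max
            events.append(t)

# Hypothetical intensity lambda(t) = 1 + sin(t)^2, bounded above by 2
rng = np.random.default_rng(4)
events = thinning(lambda t: 1 + np.sin(t) ** 2, rate_max=2.0, t_max=50.0, rng=rng)
```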