Probabilistic Reasoning Over Time - In Depth Notes
Introduction
- Motivation for modeling uncertainty in dynamic systems:
- Applications include robotics, medicine, finance, and weather forecasting.
- Importance of time-based modeling: dynamic systems evolve over time, necessitating effective forecasting and decision-making techniques.
Probabilistic Reasoning Over Time
- Core Concept:
- Helps track, predict, and act in uncertain dynamic systems.
- Models involve hidden states that evolve with observations dependent on those states.
Static vs. Dynamic Models
- Static Models:
- Assume independent and identically distributed (i.i.d.) samples, with no time dependence.
- Dynamic Models:
- Account for changes over time and temporal dependencies, making them more reflective of real-world systems.
The Markov Property
- Definition: the future state depends only on the current state, not on the earlier history: P(X_{t+1} | X_{0:t}) = P(X_{t+1} | X_t).
- The first-order Markov assumption simplifies models and leads to recursive, modular model structures.
Use-Cases for Probabilistic Models
- Applications in:
- Speech recognition
- Robot localization
- User attention metrics
- Medical monitoring
- Language processing and generation
Dynamic Bayesian Networks (DBNs)
- Structure:
- Nodes represent variables over time, and edges denote dependencies among them.
- DBNs can be unrolled for multiple time steps to visualize transitions.
Inference Tasks in Temporal Models
- Key Tasks include:
- Filtering: Estimate the current hidden state from all evidence observed so far.
- Prediction: Estimate future states of the system.
- Smoothing: Refine estimates of earlier states based on additional data.
- Decoding: Identify the most likely sequence of hidden states.
Markov Assumptions
- Restrict how current variables may depend on prior states.
- Key Characteristics:
- Only recent states are needed to estimate the current state (often just the previous one, the first-order assumption).
- The process is assumed stationary: the transition and sensor models do not change over time.
Bayes Net Construction
- Objective is to model hidden states using:
- Prior probabilities: P(X_0)
- Transition model: P(X_{t+1} | X_t)
- Sensor model: P(E_t | X_t)
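As a concrete sketch, the three components might look as follows for a hypothetical two-state "flu" model; every probability value here is an illustrative assumption, not a figure from these notes.

```python
# Hypothetical two-state "flu" model; every number below is an
# illustrative assumption chosen for the example.
prior = {"flu": 0.05, "healthy": 0.95}           # P(X_0)

# Transition model P(X_{t+1} | X_t): outer key is X_t, inner key is X_{t+1}
transition = {
    "flu":     {"flu": 0.70, "healthy": 0.30},
    "healthy": {"flu": 0.02, "healthy": 0.98},
}

# Sensor model P(E_t | X_t): probability of each observation given the state
sensor = {
    "flu":     {"fever": 0.90, "no_fever": 0.10},
    "healthy": {"fever": 0.05, "no_fever": 0.95},
}
```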
Filtering Process
- Compute the belief over the current state from the evidence observed so far.
- Example: given symptoms (runny nose, fever), compute the probability that the patient has influenza.
- Formula (recursive update):
P(X_{t+1} | e_{1:t+1}) = α P(e_{t+1} | X_{t+1}) P(X_{t+1} | e_{1:t})
- Where α is a normalization constant and the prediction term expands as P(X_{t+1} | e_{1:t}) = Σ_{x_t} P(X_{t+1} | x_t) P(x_t | e_{1:t}).
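Below is a minimal sketch of this recursive update for a hypothetical two-state flu model (index 0 = flu, 1 = healthy): predict through the transition model, weight by the sensor model, then normalize. All model numbers and the evidence sequence are assumptions for illustration.

```python
# Recursive filtering on an assumed two-state model: index 0 = flu, 1 = healthy.
prior = [0.05, 0.95]                       # P(X_0), assumed
T = [[0.70, 0.30], [0.02, 0.98]]           # T[i][j] = P(X_{t+1}=j | X_t=i), assumed
S = {"fever": [0.90, 0.05], "no_fever": [0.10, 0.95]}  # S[e][i] = P(e | X=i), assumed

def filter_step(belief, evidence):
    """One forward update: predict, weight by the evidence, normalize (the alpha step)."""
    predicted = [sum(belief[i] * T[i][j] for i in range(2)) for j in range(2)]
    unnorm = [S[evidence][j] * predicted[j] for j in range(2)]
    z = sum(unnorm)                        # 1/alpha
    return [p / z for p in unnorm]

belief = prior
for e in ["fever", "fever", "no_fever"]:
    belief = filter_step(belief, e)
    print(f"P(flu | evidence so far) = {belief[0]:.3f}")
```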
Prediction Steps
- Model Setup:
- Use transition models to predict future states based on historical evidence.
- Recursion Approach:
- The transition model is applied iteratively, with no new evidence, to estimate future states; as the horizon grows, predictions converge toward the stationary distribution.
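A short sketch of the prediction recursion, reusing the assumed transition matrix from the filtering example; the starting belief and horizons are illustrative.

```python
# Prediction sketch: push the current belief through the assumed
# transition model with no new evidence.
T = [[0.70, 0.30], [0.02, 0.98]]           # T[i][j] = P(X_{t+1}=j | X_t=i), assumed

def predict(belief, k):
    """Compute P(X_{t+k} | e_{1:t}) by k applications of the transition model."""
    for _ in range(k):
        belief = [sum(belief[i] * T[i][j] for i in range(2)) for j in range(2)]
    return belief

current = [0.60, 0.40]                     # assumed filtered belief at time t
for k in (1, 5, 50):
    print(k, [round(p, 3) for p in predict(current, k)])
# As k grows, the prediction approaches the chain's stationary distribution.
```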
Smoothing Techniques
- Purpose: revise estimates of earlier states in light of later evidence, typically via the forward-backward algorithm.
- Involves creating backward messages that carry evidence from future time steps to help refine past estimates.
- Distinguishes between the single most likely state sequence (found by decoding) and the sequence of individually most likely states (found by smoothing); the two can differ.
- Decoding maximizes path probabilities step by step (as in the Viterbi algorithm), rather than summing over paths as filtering and smoothing do.
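A compact sketch of the forward-backward algorithm on the same assumed two-state flu model; the evidence sequence is made up for illustration.

```python
# Forward-backward smoothing sketch on the assumed two-state flu model.
prior = [0.05, 0.95]
T = [[0.70, 0.30], [0.02, 0.98]]
S = {"fever": [0.90, 0.05], "no_fever": [0.10, 0.95]}

def normalize(v):
    z = sum(v)
    return [x / z for x in v]

def forward_backward(evidence):
    n = len(evidence)
    # Forward pass: fwd[t] = P(X_t | e_{1:t}).
    fwd, belief = [], prior
    for e in evidence:
        predicted = [sum(belief[i] * T[i][j] for i in range(2)) for j in range(2)]
        belief = normalize([S[e][j] * predicted[j] for j in range(2)])
        fwd.append(belief)
    # Backward pass: b[i] is proportional to P(e_{t+1:n} | X_t = i).
    smoothed, b = [None] * n, [1.0, 1.0]
    for t in range(n - 1, -1, -1):
        smoothed[t] = normalize([fwd[t][i] * b[i] for i in range(2)])
        b = [sum(T[i][j] * S[evidence[t]][j] * b[j] for j in range(2))
             for i in range(2)]
    return smoothed

for t, dist in enumerate(forward_backward(["fever", "no_fever", "fever"])):
    print(f"P(flu at t={t + 1} | all evidence) = {dist[0]:.3f}")
```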
Hidden Markov Models (HMM)
- An HMM consists of a single discrete hidden state variable with transition and observation (sensor) probabilities; it is useful for modeling systems whose states are not directly observable but can be inferred from data.
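Since decoding was listed among the inference tasks, here is a sketch of the standard Viterbi algorithm for HMM decoding on the assumed two-state model; the initial distribution and evidence sequence are illustrative.

```python
# Viterbi decoding sketch: most likely hidden-state sequence
# (0 = flu, 1 = healthy) under the assumed model below.
init = [0.10, 0.90]                        # assumed distribution of the first state
T = [[0.70, 0.30], [0.02, 0.98]]
S = {"fever": [0.90, 0.05], "no_fever": [0.10, 0.95]}

def viterbi(evidence):
    m = [init[j] * S[evidence[0]][j] for j in range(2)]  # best-path scores
    back = []                                            # back-pointers per step
    for e in evidence[1:]:
        new_m, ptr = [], []
        for j in range(2):
            best_i = 0 if m[0] * T[0][j] >= m[1] * T[1][j] else 1
            ptr.append(best_i)
            new_m.append(m[best_i] * T[best_i][j] * S[e][j])
        m, back = new_m, back + [ptr]
    # Trace the best path backwards from the most probable final state.
    state = m.index(max(m))
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return list(reversed(path))

print(viterbi(["fever", "fever", "no_fever"]))  # -> [0, 0, 1]
```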
Kalman Filter
- Handles continuous state variables in systems with linear dynamics and Gaussian noise.
- If the transition and sensor models are linear-Gaussian, the belief state remains Gaussian after every prediction and update step.
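A minimal one-dimensional Kalman filter sketch under assumed random-walk dynamics; the noise variances and measurements are illustrative, not from these notes.

```python
# Minimal 1-D Kalman filter under assumed random-walk dynamics.
def kalman_step(mu, var, z, q=1.0, r=2.0):
    """One predict/update cycle for x_{t+1} = x_t + w, z_t = x_t + v,
    with w ~ N(0, q) and v ~ N(0, r); returns the new Gaussian belief."""
    mu_pred, var_pred = mu, var + q          # predict through the dynamics
    k = var_pred / (var_pred + r)            # Kalman gain
    mu_new = mu_pred + k * (z - mu_pred)     # correct with the measurement
    var_new = (1 - k) * var_pred
    return mu_new, var_new

mu, var = 0.0, 10.0                          # broad initial Gaussian belief
for z in [1.2, 0.9, 1.1]:                    # assumed sensor readings
    mu, var = kalman_step(mu, var, z)
    print(f"mean={mu:.3f}, variance={var:.3f}")
```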
Particle Filtering
- Approximates the belief state over time with a set of weighted samples (particles); useful when the state space is large, continuous, or otherwise intractable for exact inference.
- Key Steps:
- Generate samples based on prior states.
- Propagate them through the transition model, weight each by the likelihood of the new evidence, and resample in proportion to the weights.
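A small particle-filter sketch over the same assumed two-state flu model, using Python's random.choices for propagation and resampling; the particle count and evidence are illustrative.

```python
# Particle-filter sketch for the assumed two-state flu model
# (0 = flu, 1 = healthy); all probabilities are illustrative.
import random

T = [[0.70, 0.30], [0.02, 0.98]]
S = {"fever": [0.90, 0.05], "no_fever": [0.10, 0.95]}

def particle_filter_step(particles, evidence):
    # 1. Propagate each particle through the transition model.
    moved = [random.choices([0, 1], weights=T[p])[0] for p in particles]
    # 2. Weight by the sensor model, then 3. resample in proportion.
    weights = [S[evidence][p] for p in moved]
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.choices([0, 1], weights=[0.05, 0.95])[0] for _ in range(1000)]
for e in ["fever", "fever"]:
    particles = particle_filter_step(particles, e)
    print("estimated P(flu) ≈", particles.count(0) / len(particles))
```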
Summary
- Dynamic models incorporate time and uncertainty in their structure, leveraging historical data for predictive insights.
- Key inference tasks (filtering, prediction, smoothing, decoding) are carried out through the recursive application of probabilistic updates, with DBNs generalizing both HMMs and Kalman filters.