L11 - Decision Making with Markov Models

20 Terms

1

Reliability Engineering:

The discipline to develop methods and tools to evaluate and demonstrate the reliability, maintainability, availability, and safety of components, equipment, and systems.

Reliability Engineering: Field that develops methods/tools to check and prove reliability, maintainability, availability, and safety of systems.

2

Reliability:

Probability that the required function will be provided under given conditions for a given time interval.

Reliability: Probability that a system performs its required function under given conditions for a given time.

3

Safety:

Ability of the item to cause neither injury to persons, nor significant material damage or other unacceptable consequences.

Safety: Ability to avoid harm to people or significant material damage.

4

Reliability Engineering – Common failure distributions

  • Exponential distribution:

    • For constant failure rates (random failures, e.g., electronics).

    • Formula: F(t) = 1 − e^(−λt)

    • λ = failure rate.

  • Weibull distribution:

    • Flexible; models different failure behaviors (matches parts of the bathtub curve). Both distributions are sketched in code after this card.
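
The two distributions above can be evaluated directly. The following is a minimal sketch (not from the lecture); the failure rate λ and the Weibull shape/scale parameters β and η below are made-up illustrative values.

```python
import math

def exponential_failure_prob(t, lam):
    """F(t) = 1 - exp(-lambda * t): probability of failure by time t for a constant failure rate."""
    return 1.0 - math.exp(-lam * t)

def weibull_failure_prob(t, beta, eta):
    """Weibull CDF F(t) = 1 - exp(-(t/eta)^beta); beta < 1, = 1, > 1 cover the bathtub-curve regions."""
    return 1.0 - math.exp(-((t / eta) ** beta))

# Illustrative numbers only
lam = 1e-4  # failures per hour (constant rate, e.g. electronics)
print(exponential_failure_prob(1000, lam))             # ≈ 0.095
print(weibull_failure_prob(1000, beta=2.0, eta=5000))  # wear-out-type behavior
```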

5

Reliability calculation for multiple components

  • Series configuration:

    • Example: Hydraulic circuit.

    • If one component fails → whole system fails.

    • Logical OR: the system fails if component 1 OR component 2 fails.

    • Example: Two components each with 0.90 reliability → System failure probability = 1 − 0.9 × 0.9 = 0.19.

  • Parallel configuration:

    • Example: Aircraft engines.

    • Redundancy improves reliability.

    • Logical AND: the system fails only if component 1 AND component 2 fail.

    • Example: Two components each with 0.90 reliability → System failure probability = 0.1 × 0.1 = 0.01 (both configurations are sketched in code below).
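
A minimal sketch that reproduces the two example numbers above, assuming independent components with the stated reliabilities:

```python
def series_failure(reliabilities):
    """Series (failure OR): the system works only if every component works."""
    r_system = 1.0
    for r in reliabilities:
        r_system *= r
    return 1.0 - r_system

def parallel_failure(reliabilities):
    """Parallel (failure AND): the system fails only if every component fails."""
    f_system = 1.0
    for r in reliabilities:
        f_system *= (1.0 - r)
    return f_system

print(series_failure([0.90, 0.90]))    # ≈ 0.19  (1 - 0.9 * 0.9)
print(parallel_failure([0.90, 0.90]))  # ≈ 0.01  (0.1 * 0.1)
```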

6

System reliability models (overview)

  • Reliability models combine component failure probabilities to estimate RAMS (Reliability, Availability, Maintainability, Safety).

  • Common models:

    • Part Count Method

    • Reliability Block Diagram

    • Fault Tree Analysis

    • Reliability Graph

    • Petri Nets

    • Markov Models/Processes

7

Fault Tree Analysis (FTA)

Tool for modeling failure dependencies in multi-component systems (tree structure).

8

Markov Models

State-based system representation

  • A system can be in one of N states at any time (e.g., intact, slightly degraded, severely degraded, failed).

  • From the current state, it either:

    • Moves to another state, or

    • Stays in the same state.

  • Each state is directly observable.

  • The transition probabilities (the numbers on the arrows of the state diagram) give the likelihood of moving between states at each time step (see the numeric sketch after this card).
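
A minimal numeric sketch of such a state-based representation; the three states (intact / degraded / failed) and all transition probabilities below are made-up illustrative values, not from the lecture:

```python
import numpy as np

# Illustrative 3-state machine-health model: 0 = intact, 1 = degraded, 2 = failed.
A = np.array([
    [0.97, 0.02, 0.01],   # a_0j: from "intact"
    [0.00, 0.90, 0.10],   # a_1j: from "degraded"
    [0.00, 0.00, 1.00],   # a_2j: "failed" is absorbing
])

p0 = np.array([1.0, 0.0, 0.0])   # initial state probabilities (start intact)

# State probabilities k steps ahead: p_k = p_0 * A^k
for k in (1, 10, 50):
    pk = p0 @ np.linalg.matrix_power(A, k)
    print(k, np.round(pk, 3))
```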

9

Markov Models

Deriving State transition probabilities

  • Shows how transition probabilities are calculated from observed data.

  • Example: 12 time series of state transitions are recorded.

  • For each state, count how many times the system moved to each other state, then convert the counts to percentages (see the counting sketch after this card).

  • Table example: From State 1, 97% of the time it stays in State 1, 2% goes to State 2, 1% to State 3.
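
A minimal sketch of the counting procedure; the two short state sequences below are made-up stand-ins for the recorded time series:

```python
def estimate_transition_matrix(sequences, n_states):
    """Count observed i -> j transitions over all recorded time series and normalize each row."""
    counts = [[0] * n_states for _ in range(n_states)]
    for seq in sequences:
        for i, j in zip(seq[:-1], seq[1:]):
            counts[i][j] += 1
    A = []
    for row in counts:
        total = sum(row)
        A.append([c / total if total else 0.0 for c in row])
    return A

# Tiny made-up example with states 0, 1, 2 (not the 12 series from the card)
sequences = [[0, 0, 0, 1, 1, 2], [0, 0, 1, 2, 2, 2]]
for row in estimate_transition_matrix(sequences, 3):
    print([round(p, 2) for p in row])
```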

10

Markov Models

Predicting States into the future

  • A Markov model describes how future states depend only on the current state (first-order Markov property).

  • Transition probability: a_ij = P(next state = S_j | current state = S_i).

  • If you know the initial state probability, you can model the whole process.

  • The time spent in a state follows an exponential (geometric) distribution: P_i(d) = a_ii^(d−1) · (1 − a_ii), where a_ii is the probability of staying in the same state and d is the number of time steps (see the sketch after this card).
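
A minimal sketch of this duration behaviour, assuming the standard discrete-time form P_i(d) = a_ii^(d−1) · (1 − a_ii); the value of a_ii is illustrative:

```python
def duration_probability(a_ii, d):
    """P_i(d) = a_ii**(d-1) * (1 - a_ii): probability of staying exactly d steps in state i."""
    return a_ii ** (d - 1) * (1.0 - a_ii)

a_ii = 0.97   # illustrative self-transition probability
probs = [duration_probability(a_ii, d) for d in range(1, 6)]
print([round(p, 4) for p in probs])              # geometrically decaying durations
print("expected duration:", 1.0 / (1.0 - a_ii))  # mean of the geometric distribution
```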

11

Hidden Markov Models

  • In HMM, states are often not directly observed.

  • Only sensor measurements are available.

  • Observations (e.g., vibrations, temperature, running status) give clues about the hidden state.

  • Extension of Markov Model:

    • Observations are probabilistic functions of states.

    • True states are hidden.

12

Hidden Markov Models

Observation based state probability

  • Extension of Markov Model:

    • Observations are probabilistic functions of states.

    • True states are hidden.

  • HMM defined by:

    • A: State transition matrix (a_ij)

    • π: Initial state probabilities

    • B: Emission matrix (b_jk = P(v_k | S_j)): probability of observation v_k given state S_j

    • N: Number of states

  • Full parameter set: λ = (A, B, π, N) (see the numeric sketch after this card).
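
A minimal sketch of the parameter set as plain arrays; the two hidden health states, the three observation symbols, and all numbers below are made-up illustrations, not values from the lecture:

```python
import numpy as np

N  = 2                           # number of hidden states
A  = np.array([[0.95, 0.05],     # a_ij: state transition matrix
               [0.00, 1.00]])
B  = np.array([[0.7, 0.2, 0.1],  # b_jk = P(v_k | S_j): emission matrix
               [0.1, 0.3, 0.6]])
pi = np.array([1.0, 0.0])        # initial state probabilities

# P(observation v_k at t = 1) = sum_j pi_j * b_jk
print(pi @ B)
```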

13

Three main training problems for HMMs:

  1. Find P(O | λ): Probability of the observation sequence given the model (Forward step).

  2. Find the best state sequence Q for the observations (Backward step).

  3. Adjust parameters (A,B,π,N) to maximize P(O∣λ) (Baum–Welch algorithm).

  • The Forward–Backward algorithm is the core method (a minimal forward-pass sketch follows this card).
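
A minimal sketch of the forward pass used for problem 1 (computing P(O | λ)); the model parameters are the same made-up example values as in the previous sketch:

```python
import numpy as np

def forward(observations, A, B, pi):
    """Forward pass: alpha[t, i] = P(o_1..o_t, q_t = S_i | lambda); P(O|lambda) = sum_i alpha[T-1, i]."""
    T = len(observations)
    N = A.shape[0]
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, observations[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, observations[t]]
    return alpha[-1].sum()

# Illustrative 2-state, 3-symbol HMM (made-up numbers)
A  = np.array([[0.95, 0.05], [0.0, 1.0]])
B  = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
pi = np.array([1.0, 0.0])
print(forward([0, 0, 2], A, B, pi))   # P(O | lambda) for the observation sequence v_0, v_0, v_2
```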

14

HMMs are widely used in:

  • Speech recognition (25%)

  • Human activity recognition (25%)

  • Bioinformatics (19%)

  • Musicology (9%)

  • Tool wear monitoring (9%)

  • Data processing like handwriting recognition (7%)

  • Network analysis (6%)

Example: Speech recognition, gesture recognition, gene sequencing, predicting melodies.

15

Hidden Semi Markov Models
Time-dependent presence probability

  • In a standard HMM, the time spent in a state follows an exponential (geometric) distribution:
    P_i(d) = a_ii^(d−1) · (1 − a_ii),
    where a_ii is the probability of staying in the same state.

  • In an HsMM, instead of a fixed self-transition probability, an explicit duration density P_i(d) is used, which can be adjusted (see the sketch after this card).

  • Other distributions (Gaussian, Gamma, etc.) can replace exponential.

  • HsMM parameters: λ=(A,B,π,D,N)

    • D: Duration matrix containing P_i(d) for each state.
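
A minimal sketch of the HsMM idea of explicit state durations, assuming Gaussian duration densities; the state names, transition structure, and all parameters below are made-up illustrations:

```python
import random

# The time spent in a state is drawn from an explicit duration distribution
# instead of being implied by a self-transition probability a_ii.
DURATION = {                       # D: one duration model P_i(d) per state
    "intact":   lambda: max(1, round(random.gauss(mu=50, sigma=10))),
    "degraded": lambda: max(1, round(random.gauss(mu=20, sigma=5))),
}
NEXT_STATE = {"intact": "degraded", "degraded": "failed"}  # simplified A without self-transitions

def simulate(start="intact"):
    """Generate a state timeline: stay in each state for a sampled duration, then move on."""
    timeline, state = [], start
    while state != "failed":
        timeline += [state] * DURATION[state]()
        state = NEXT_STATE[state]
    return timeline + ["failed"]

print(len(simulate()), "time steps until failure")
```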

16

Required inputs for AG

  • Engineering expertise needed:

    • Failure modes & mechanisms knowledge.

    • Suitable failure indicators.

    • Proper sensors & acquisition setup.

  • Data required:

    • Run-to-failure data, labeled anomalies.

  • Scope & user analysis:

    • Understand user goals, connected systems, automation level.

17

Epistemic (structural) Uncertainty:

Epistemic uncertainty is due to a lack of knowledge about the behaviour of the system that is conceptually resolvable.

  • Comes from lack of knowledge about system behavior.

  • Can be reduced with more study and expert judgment.

18

Aleatoric (statistical) uncertainty:

Aleatory uncertainty arises because of natural, unpredictable variation in the performance of the system.

  • Comes from natural, unpredictable variations.

  • Cannot be reduced by expert knowledge.

  • Also called irreducible uncertainty.

19

Visualization formats

Circular bar chart:

  • Pro: Shows degradation and failure probability.

  • Con: Only shows the current time, not the remaining useful life (RUL); uneven bar lengths may cause misperception.

Component model:

  • Pro: Clear mapping to components.

  • Con: Only shows current time; high visualization effort.

Network diagram:

  • Pro: Many systems/components; shows degradation & probabilities.

  • Con: Only normalized data; comparability issues due to radial layout.

Line chart:

  • Pro: Shows history and trends; can combine prognosis & diagnosis.

  • Con: Becomes cluttered with many systems.

20

Key Findings

  • Reliability engineering uses statistical analysis of past failure data; data analytics helps to understand real system health.

  • Failure rates of single components can be combined statistically to model complex systems.

  • Reliability engineering and PHM have differences, similarities, pros, and cons — PHM especially adds feedback through data analysis.

  • In Markov Models, state-to-state interactions are described by transition probabilities.

  • Hidden Markov Models extend this by linking states to probabilistic observations.

  • Machine learning & prognostics can support operators’ decision-making via automated onboard data processing.

  • Even with low structural uncertainty (large databases), statistical uncertainty must be considered.

  • PHM system requirements differ for different users and can be met via visualization techniques and specialized IT infrastructure.
