AI 5.2


Last updated 5:24 PM on 4/15/26

57 Terms

1
New cards

What is a random variable?

A random variable is a variable whose possible values are numerical outcomes of a random phenomenon.

2
New cards

What is probability?

Probability is a number between 0 and 1 that measures how likely an event is to occur.

3
New cards

What is a joint probability distribution?

A joint probability distribution gives the probability of every combination of values for a set of random variables.

4
New cards

Define conditional probability.

Conditional probability measures the probability of an event given that another event has occurred. It is written as P(A|B).

5
New cards

State Bayes' Rule in plain text.

P(A|B) = P(B|A) * P(A) / P(B)
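The rule can be sketched as a one-line function. The numbers below are taken from the Rain/Wet Grass cards later in this deck (P(R=1)=0.4, P(W=1|R=1)=0.9, P(W=1)=0.48):

```python
def bayes(p_b_given_a, p_a, p_b):
    """Posterior P(A|B) from likelihood P(B|A), prior P(A), evidence P(B)."""
    return p_b_given_a * p_a / p_b

# Diagnosis in the Rain/Wet Grass example: P(R=1 | W=1)
posterior = bayes(0.9, 0.4, 0.48)  # = 0.75
```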

6
New cards

What is marginalization?

Marginalization is the process of summing (or integrating) over unwanted variables to obtain the probability distribution of a subset.

7
New cards

What is a Bayesian network?

A Bayesian network is a graphical model that represents probabilistic relationships among a set of variables using a directed acyclic graph and conditional probability tables.

8
New cards

What are the two components of a Bayesian network?

A directed acyclic graph (DAG) and conditional probability tables (CPTs) attached to each node.

9
New cards

How is the joint distribution represented in a Bayesian network?

P(X1, X2, …, Xn) = product over i of P(Xi | Parents(Xi))
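A minimal sketch of this factorization, using the two-node Rain → Wet Grass network whose CPT values appear in the next few cards:

```python
# Local conditionals for the simple Rain -> Wet Grass network
P_R = {1: 0.4, 0: 0.6}                       # P(R)
P_W_given_R = {1: {1: 0.9, 0: 0.1},           # P(W | R=1)
               0: {1: 0.2, 0: 0.8}}           # P(W | R=0)

def joint(r, w):
    """P(R=r, W=w) = P(R=r) * P(W=w | R=r): the product over nodes."""
    return P_R[r] * P_W_given_R[r][w]

# A valid joint distribution sums to 1 over all assignments
total = sum(joint(r, w) for r in (0, 1) for w in (0, 1))
```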

10
New cards

In the simple Rain/Wet Grass example, what does P(R=1)=0.4 mean?

The prior probability that it rains on any given day is 40%.

11
New cards

In the Rain/Wet Grass example, what does P(W=1|R=1)=0.9 mean?

The probability that the grass gets wet given that it is raining is 90%.

12
New cards

In the Rain/Wet Grass example, what does P(W=1|R=0)=0.2 mean?

The probability that the grass gets wet given that it is not raining (e.g., sprinkler on) is 20%.

13
New cards

What is inference in a Bayesian network?

Inference is the process of answering questions about the underlying probability distribution given some observations.

14
New cards

What is a diagnosis question in inference?

A diagnosis question asks for the probability of a cause given an observed effect. Example: P(Rain | Wet Grass).

15
New cards

What is a prediction question in inference?

A prediction question asks for the probability of an effect given a cause. Example: P(Wet Grass | Rain).

16
New cards

Give one application of inference in Bayesian networks.

Classification: finding the most probable class label given data.

17
New cards

Give another application of inference in Bayesian networks.

Decision Making: combining probabilities with utilities to choose an action.

18
New cards

In the simple Rain/Wet Grass example, what is P(W=1)?

P(W=1) is the marginal probability that the grass is wet, calculated as P(W=1|R=1)P(R=1) + P(W=1|R=0)P(R=0).

19
New cards

In the simple Rain/Wet Grass example, what was the calculated value of P(W=1)?

0.48
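The calculation from the previous card, written out (values from cards 10–12):

```python
# Marginal P(W=1): sum out R
p_r1 = 0.4                      # P(R=1)
p_w1_r1, p_w1_r0 = 0.9, 0.2     # P(W=1|R=1), P(W=1|R=0)
p_w1 = p_w1_r1 * p_r1 + p_w1_r0 * (1 - p_r1)  # = 0.48
```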

20
New cards

In the simple Rain/Wet Grass example, what is the diagnosis result P(R=1|W=1)?

0.75

21
New cards

Interpret the result P(R=1|W=1)=0.75 in the simple example.

Knowing that the grass is wet increases the probability that it rained from the prior 0.4 to 0.75.

22
New cards

What are evidence variables?

Evidence variables are the variables whose values we know (observed).

23
New cards

What are query variables?

Query variables are the variables whose probability distribution we want to find.

24
New cards

What are non-evidence (hidden) variables?

Hidden variables are all other variables that are neither observed nor queried but must be accounted for in the joint distribution.

25
New cards

What is an unconditional probability query?

An unconditional probability query asks for the probability of a given value assignment for a subset of variables, e.g., P(W=1).

26
New cards

What is a conditional probability query?

A conditional probability query asks for the probability of query variables X given evidence about variables E, e.g., P(W|C=1).

27
New cards

What is Maximum a Posteriori (MAP) inference?

MAP inference finds the most likely assignment of values to the query variables X given evidence E = e.

28
New cards

Write the formula for MAP inference in plain text.

MAP(X | E=e) = argmax_x P(X=x | E=e)
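A sketch of MAP inference for the simple example: the most likely value of R given W=1. Since the evidence term P(W=1) is constant across candidate values of r, maximizing the unnormalized joint P(R=r) * P(W=1 | R=r) gives the same argmax:

```python
# MAP query: argmax_r P(R=r | W=1) in the Rain/Wet Grass network
p_r = {1: 0.4, 0: 0.6}            # prior P(R)
p_w1_given_r = {1: 0.9, 0: 0.2}   # likelihood P(W=1 | R)

# P(W=1) is the same for every r, so the unnormalized joint suffices
map_r = max((0, 1), key=lambda r: p_r[r] * p_w1_given_r[r])
# map_r == 1: rain is the most likely explanation for wet grass
```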

29
New cards

What does the symbol α represent in exact inference?

α is a normalizing constant equal to 1 / P(E), which ensures the posterior distribution sums to 1.

30
New cards

Write the general formula for exact inference in plain text.

P(X | E) = α * sum over Y of P(X, E, Y)
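A sketch of this formula by enumeration on the simple network: the unnormalized scores are joint probabilities P(X, E), and α = 1/P(E) rescales them into a distribution.

```python
# Exact inference: P(R | W=1) via enumeration and normalization
p_r = {1: 0.4, 0: 0.6}
p_w1_given_r = {1: 0.9, 0: 0.2}

unnorm = {r: p_r[r] * p_w1_given_r[r] for r in (0, 1)}  # P(R=r, W=1)
alpha = 1 / sum(unnorm.values())                         # 1 / P(W=1)
posterior = {r: alpha * unnorm[r] for r in (0, 1)}       # sums to 1
```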

31
New cards

In the full Wet Grass example, what are the four variables?

Cloudy (C), Sprinkler (S), Rain (R), Wet Grass (W).

32
New cards

Write the factorization for the full Wet Grass example in plain text.

P(C,S,R,W) = P(C) * P(S|C) * P(R|C) * P(W|S,R)

33
New cards

In the full Wet Grass example, if C=1, what is P(S=1|C=1)?

0.1

34
New cards

In the full Wet Grass example, if C=1, what is P(S=0|C=1)?

0.9

35
New cards

In the full Wet Grass example, if C=1, what is P(R=1|C=1)?

0.8

36
New cards

In the full Wet Grass example, if C=1, what is P(R=0|C=1)?

0.2

37
New cards

In the full Wet Grass example, what is P(W=1 | S=0, R=0)?

0.0

38
New cards

In the full Wet Grass example, what is P(W=1 | S=0, R=1)?

0.9

39
New cards

In the full Wet Grass example, what is P(W=1 | S=1, R=0)?

0.9

40
New cards

In the full Wet Grass example, what is P(W=1 | S=1, R=1)?

0.99

41
New cards

In the full Wet Grass example, what is the calculated value of P(W=1 | C=1)?

0.7452
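This value can be reproduced by summing out S and R, using the CPT values from cards 33–40:

```python
# P(W=1 | C=1) = sum over S,R of P(S|C=1) * P(R|C=1) * P(W=1|S,R)
p_s1_c1 = 0.1   # P(S=1 | C=1)
p_r1_c1 = 0.8   # P(R=1 | C=1)
p_w1 = {(0, 0): 0.0, (0, 1): 0.9, (1, 0): 0.9, (1, 1): 0.99}  # P(W=1 | S, R)

p_w1_c1 = sum(
    (p_s1_c1 if s else 1 - p_s1_c1)
    * (p_r1_c1 if r else 1 - p_r1_c1)
    * p_w1[(s, r)]
    for s in (0, 1) for r in (0, 1)
)  # = 0.7452
```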

42
New cards

What is the purpose of marginalization?

To eliminate variables from a joint distribution to obtain the distribution of a subset.

43
New cards

For discrete random variables, how is marginalization performed?

By summing over the unwanted variable.

44
New cards

For continuous random variables, how is marginalization performed?

By integrating over the unwanted variable.

45
New cards

What does the joint probability table on slide 15 illustrate?

It illustrates how to obtain marginal probabilities by summing rows or columns.

46
New cards

Name one advanced method for exact inference.

Variable Elimination.

47
New cards

Name another advanced method for exact inference.

Clustering Algorithm (Junction Tree Algorithm).

48
New cards

Name a third advanced method for exact inference.

Sum-Product Algorithm (Belief Propagation).

49
New cards

What is the key idea of Variable Elimination?

It systematically sums out variables one by one, reusing intermediate results to reduce computation.
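A minimal sketch of this idea on the full Wet Grass query P(W=1 | C=1): eliminate S first, caching the intermediate factor g(r) = sum over s of P(S=s|C=1) * P(W=1|S=s,R=r), then sum out R. The factor is computed once per value of r and reused, rather than re-deriving it inside every term of a flat enumeration:

```python
# Variable elimination sketch (CPT values from the full Wet Grass example)
p_s_c1 = {1: 0.1, 0: 0.9}   # P(S | C=1)
p_r_c1 = {1: 0.8, 0: 0.2}   # P(R | C=1)
p_w1 = {(0, 0): 0.0, (0, 1): 0.9, (1, 0): 0.9, (1, 1): 0.99}  # P(W=1 | S, R)

# Step 1: eliminate S -> intermediate factor over R, computed once
g = {r: sum(p_s_c1[s] * p_w1[(s, r)] for s in (0, 1)) for r in (0, 1)}

# Step 2: eliminate R using the cached factor
p_w1_c1 = sum(p_r_c1[r] * g[r] for r in (0, 1))  # = 0.7452, as in card 41
```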

50
New cards

What is the key idea of the Clustering Algorithm?

It groups variables into clusters to turn the network into a tree, then performs inference efficiently.

51
New cards

What is the key idea of the Sum-Product Algorithm?

It propagates messages along the graph to compute marginal probabilities, especially effective in tree-structured networks.

52
New cards

Write the formula for Bayes' Rule as in the summary.

P(A | B) = P(B | A) * P(A) / P(B)

53
New cards

Write the formula for marginalization of discrete variables as in the summary.

P(X=x) = sum over y of P(X=x, Y=y)

54
New cards

Write the formula for the joint distribution in a Bayesian network as in the summary.

P(X1, X2, …, Xn) = product over i of P(Xi | Parents(Xi))

55
New cards

Write the formula for posterior probability (exact inference) as in the summary.

P(X | E) = α * sum over Y of P(X, E, Y) where α = 1 / P(E)

56
New cards

Write the formula for MAP inference as in the summary.

MAP(X | E=e) = argmax_x P(X=x | E=e)

57
New cards

Write the formula for P(W=1 | C=1) from the Wet Grass example as in the summary.

P(W=1 | C=1) = sum over S,R of P(S | C=1) * P(R | C=1) * P(W=1 | S, R)