Key Concepts in Artificial Intelligence and Markov Decision Processes

Description and Tags

This set of flashcards covers key vocabulary and concepts related to Artificial Intelligence, specifically focused on Markov Decision Processes and decision-making frameworks.


15 Terms

1. Markov Decision Process (MDP)

A sequential decision problem in a fully observable, stochastic environment with a Markovian transition model and additive rewards.
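
Formally (the standard formulation, added here for reference rather than stated on the card), an MDP is specified by a set of states S, the actions A(s) available in each state, a transition model P(s’ | s, a), a reward function R(s, a, s’), and usually a discount factor 𝛾.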

2. Utility Function (U(s))

A function that assigns to each state a value reflecting how desirable that state is, guiding the agent's preferences in decision making.

3. Maximum Expected Utility (MEU)

The principle that an agent should choose the action with the highest expected utility among the available actions.
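
As a formula (a standard statement of the principle, added for reference): the agent picks a* = argmax_a Σ_{s’} P(s’ | s, a) U(s’).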

4. Transition Model (P(s’ | s, a))

The probability of ending up in a new state s’ given the agent was in state s and took action a.
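
Two standard properties worth keeping in mind (not stated on the card): the probabilities over next states sum to 1, Σ_{s’} P(s’ | s, a) = 1, and the model is Markovian, meaning the next state depends only on the current state s and action a, not on the earlier history.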

5. Policy (π)

A mapping that specifies which action the agent should take in each state; following the policy fully determines the agent's behaviour.

6. Rewards (R(s, a, s’))

The feedback received by an agent after transitioning from state s to s’ via action a, which can be positive or negative.

7. Expected Utility

The average utility of potential outcomes of an action, weighted by the probabilities of those outcomes occurring.
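
Written out (standard formula, added for reference): EU(a | s) = Σ_{s’} P(s’ | s, a) U(s’).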

8. Discount Factor (𝛾)

A value between 0 and 1 that weights future rewards relative to immediate ones; the closer 𝛾 is to 0, the more the agent prioritises immediate rewards.
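
For a sequence of states s0, s1, s2, … produced by actions a0, a1, …, the discounted additive utility is commonly written U = R(s0, a0, s1) + 𝛾 R(s1, a1, s2) + 𝛾² R(s2, a2, s3) + …, so a reward received t steps in the future is weighted by 𝛾^t.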

9. Value Iteration

An algorithm that computes state utilities (and from them an optimal policy) by repeatedly applying the Bellman update to every state until the utilities converge.
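
A minimal sketch in Python of how the update might look; the two-state MDP below (its states, actions, transition probabilities, and rewards) is invented purely for illustration and is not part of this flashcard set.

GAMMA = 0.9      # discount factor
EPSILON = 1e-6   # stop when utilities change by less than this

# Hypothetical toy MDP: P[(state, action)] = list of (probability, next_state, reward)
P = {
    ("s0", "stay"): [(1.0, "s0", 0.0)],
    ("s0", "go"):   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)],
    ("s1", "stay"): [(1.0, "s1", 0.5)],
    ("s1", "go"):   [(1.0, "s0", 0.0)],
}
states = ["s0", "s1"]
actions = ["stay", "go"]

U = {s: 0.0 for s in states}
while True:
    # Bellman update: best expected one-step reward plus discounted utility of successors
    new_U = {
        s: max(
            sum(p * (r + GAMMA * U[s2]) for p, s2, r in P[(s, a)])
            for a in actions
        )
        for s in states
    }
    delta = max(abs(new_U[s] - U[s]) for s in states)
    U = new_U
    if delta < EPSILON:
        break

print(U)

Each sweep applies the Bellman update to every state; iteration stops once the largest utility change (delta) falls below EPSILON.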

10. Policy Iteration

An algorithm that finds an optimal policy by alternating policy evaluation (computing the utilities of states under the current policy) with policy improvement (making the policy greedy with respect to those utilities).
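
In outline (the standard two-step scheme, added for reference): policy evaluation computes U^π(s), the utility of each state under the current policy π; policy improvement then replaces π with π’(s) = argmax_a Σ_{s’} P(s’ | s, a) [R(s, a, s’) + 𝛾 U^π(s’)]. The loop stops when the improvement step leaves the policy unchanged.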

11. Bellman Equation

Expresses the utility of a state in terms of the expected immediate reward plus the discounted utility of its successor states.
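
Written out for a utility-maximising agent (standard form, added for reference): U(s) = max_a Σ_{s’} P(s’ | s, a) [R(s, a, s’) + 𝛾 U(s’)].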

12. Q-function

Also known as action-utility function; it estimates the expected utility of taking a specific action in a given state.
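
A standard way to write it (added for reference): Q(s, a) = Σ_{s’} P(s’ | s, a) [R(s, a, s’) + 𝛾 max_{a’} Q(s’, a’)], with U(s) = max_a Q(s, a) and the greedy policy π(s) = argmax_a Q(s, a).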

13. Environment History

A sequence of states and actions that an agent follows during its interaction with the environment.

14. Convergence

The point at which state utilities stop changing (beyond a small tolerance) across successive iterations of value or policy iteration.
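
A common stopping test (added for reference): stop when max_s |U_{k+1}(s) − U_k(s)| < ε for a small tolerance ε; the value-iteration sketch above uses exactly this check.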

15. Non-terminal State

A state from which the decision process continues; in many example problems, non-terminal states carry a small negative reward (a "living cost") for each step spent in them.