Reinforcement Learning - Vocabulary Flashcards (Video Notes)

Description and Tags

Vocabulary-style flashcards covering key RL concepts from the notes.


61 Terms

1

Reinforcement Learning (RL)

A framework for solving control/decision problems by an agent learning from interaction with an environment to maximize rewards.

2

Agent

An intelligent program that learns to make decisions by moving through an environment and taking actions.

3

Environment

The world in which the agent operates, providing states, rewards, and transitions.

4

State

A representation of the current situation returned by the environment; typically a feature-vector.

5

Action

A possible decision the agent can take in a given state.

6

Reward

Numeric feedback from the environment evaluating the agent's action; can be positive or negative.

7

Policy

A rule mapping states to actions; can be deterministic or stochastic and aims to maximize cumulative rewards.

8

Value Function

An estimate of the expected return (cumulative reward) from a state or state-action pair.

9

State-Value Function (V(s))

Expected return starting from state s and following a given policy.

10

Action-Value Function (Q(s,a))

Expected return starting from state s, taking action a, then following a policy.
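
For reference, the two value functions above are commonly written as follows (shown without a discount factor, since discounting is not covered in these notes; this is a standard textbook form rather than a quote from the notes):

    V^{\pi}(s) = \mathbb{E}_{\pi}\left[ \sum_{t=0}^{T} r_{t+1} \mid s_0 = s \right]
    Q^{\pi}(s,a) = \mathbb{E}_{\pi}\left[ \sum_{t=0}^{T} r_{t+1} \mid s_0 = s,\ a_0 = a \right]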

11

Model of the Environment

A representation of environment dynamics used to predict next state and reward for planning.

12

Model-Based RL

RL that uses an explicit model of the environment for planning and optimization.

13

Model-Free RL

RL that learns from interactions without building an explicit model of the environment.

14

Immediate Reinforcement Learning (IRL)

RL where evaluation (reward) happens immediately after taking an action.

15

Bandit Problem

A simple RL scenario with one-step decisions and rewards but no state transitions.

16

Multi-Armed Bandit (MAB)

A bandit problem with multiple arms, each with an unknown reward distribution; the goal is to choose arms so as to maximize cumulative reward.

17

Exploration

Trying new actions to gather information about rewards.

18

Exploitation

Choosing the best-known action to maximize reward.

19

ε-Greedy

Policy that mostly exploits the best-known action while exploring randomly with probability ε.
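
A minimal sketch of ε-greedy action selection for a bandit, with reward estimates kept as running averages (function and variable names here are illustrative, not from the notes):

    import random

    def epsilon_greedy_action(q_estimates, epsilon):
        """Pick a random arm with probability epsilon, otherwise the best-known arm."""
        if random.random() < epsilon:
            return random.randrange(len(q_estimates))  # explore
        return max(range(len(q_estimates)), key=lambda a: q_estimates[a])  # exploit

    def update_estimate(q_estimates, counts, arm, reward):
        """Incremental (running-average) update of the chosen arm's value estimate."""
        counts[arm] += 1
        q_estimates[arm] += (reward - q_estimates[arm]) / counts[arm]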

20

Upper Confidence Bound (UCB)

Balances exploration and exploitation by adding a confidence bound to estimated values.
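
A sketch of the standard UCB1 selection rule, where sqrt(2 ln t / n_a) is the confidence bound added to each arm's estimated value (illustrative names; UCB1 is assumed as the concrete variant):

    import math

    def ucb1_action(q_estimates, counts, t):
        """Select the arm maximizing estimated value plus an exploration bonus."""
        for arm, n in enumerate(counts):
            if n == 0:
                return arm  # play each arm once before the bound is defined
        scores = [q + math.sqrt(2 * math.log(t) / n)
                  for q, n in zip(q_estimates, counts)]
        return max(range(len(scores)), key=lambda a: scores[a])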

21

Thompson Sampling

Bayesian method for bandits that samples actions from posterior reward distributions.
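
A sketch of Thompson Sampling for Bernoulli (e.g., click / no-click) rewards, keeping a Beta posterior per arm; this is a common textbook variant and an assumption of the sketch, not necessarily the version in the notes:

    import random

    def thompson_action(successes, failures):
        """Sample a reward probability from each arm's Beta posterior; play the argmax."""
        samples = [random.betavariate(s + 1, f + 1) for s, f in zip(successes, failures)]
        return max(range(len(samples)), key=lambda a: samples[a])

    def thompson_update(successes, failures, arm, reward):
        """Bernoulli reward: 1 counts as a success, 0 as a failure."""
        if reward == 1:
            successes[arm] += 1
        else:
            failures[arm] += 1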

22

PAC (Probably Approximately Correct)

Framework that guarantees near-optimal learning with high probability using a finite number of samples.

23

PAC-MDP

PAC framework applied to MDPs, guaranteeing near-optimal policy with high probability.

24

Regret

Difference between the reward of the optimal policy and the reward achieved by the algorithm.
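
Written out for the bandit setting (a standard definition, stated here as a reference form rather than quoted from the notes): after T steps the cumulative regret compares the best arm's mean reward with the mean rewards of the arms actually played,

    \text{Regret}(T) = T\,\mu^{*} - \mathbb{E}\left[ \sum_{t=1}^{T} \mu_{a_t} \right], \qquad \mu^{*} = \max_{a} \mu_{a}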

25

Bandit Optimality

The optimality criterion for bandit algorithms: minimizing cumulative regret, or equivalently maximizing cumulative reward.

26

Value-Based Methods

RL methods that learn value functions (V or Q) to guide action selection.

27

Q-Learning

Model-free, off-policy algorithm that learns the action-value function Q(s,a).
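
A minimal sketch of the tabular Q-learning update. The standard update uses a discount factor gamma, which these notes do not otherwise cover, and a learning rate alpha; all names are illustrative:

    from collections import defaultdict

    Q = defaultdict(float)  # maps (state, action) -> estimated value

    def q_learning_update(state, action, reward, next_state, actions, alpha=0.1, gamma=0.99):
        """Off-policy: bootstrap from the best action in the next state, not the one taken."""
        best_next = max(Q[(next_state, a)] for a in actions)
        td_target = reward + gamma * best_next
        Q[(state, action)] += alpha * (td_target - Q[(state, action)])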

28

SARSA

On-policy algorithm updating Q-values based on the action actually taken.
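
The corresponding SARSA update, for contrast with the Q-learning sketch above: it bootstraps from the action the policy actually took in the next state, which is what makes it on-policy (same illustrative names and assumed discount factor):

    from collections import defaultdict

    Q = defaultdict(float)  # (state, action) -> estimated value, as in the Q-learning sketch

    def sarsa_update(state, action, reward, next_state, next_action, alpha=0.1, gamma=0.99):
        """On-policy: bootstrap from the next action the current policy actually chose."""
        td_target = reward + gamma * Q[(next_state, next_action)]
        Q[(state, action)] += alpha * (td_target - Q[(state, action)])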

29

Deep Q-Network (DQN)

Neural-network approximation of the Q-function for large or continuous state spaces.
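
A minimal sketch of what the neural-network approximation could look like, using PyTorch (an assumed dependency, not mentioned in the notes): a small network mapping a state feature vector to one Q-value per discrete action; exploration, replay buffer, and target network are omitted.

    import torch
    import torch.nn as nn

    class QNetwork(nn.Module):
        """Maps a state feature vector to one Q-value per action."""
        def __init__(self, state_dim, num_actions, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, num_actions),
            )

        def forward(self, state):
            return self.net(state)

    # Greedy action for a single (toy) state.
    q_net = QNetwork(state_dim=4, num_actions=2)
    state = torch.zeros(1, 4)
    action = q_net(state).argmax(dim=1).item()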

30

Policy Gradient

Directly optimizing a parameterized policy by gradient ascent on expected return.
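
The gradient being ascended is usually written via the policy gradient theorem (standard form, given here as a reference rather than quoted from the notes):

    \nabla_{\theta} J(\theta) = \mathbb{E}_{\pi_{\theta}}\left[ \nabla_{\theta} \log \pi_{\theta}(a \mid s)\, Q^{\pi_{\theta}}(s, a) \right]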

31

REINFORCE

Basic policy gradient algorithm updating policy parameters to increase expected reward.
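
A sketch of the REINFORCE parameter update for one episode, assuming a PyTorch policy that exposes per-step log-probabilities; the policy network, optimizer, and return computation are assumptions of this sketch:

    import torch

    def reinforce_update(optimizer, log_probs, returns):
        """Increase log-probability of actions in proportion to the return that followed.

        log_probs: list of log pi(a_t | s_t) tensors collected during the episode.
        returns:   list of returns G_t observed from each step onward (undiscounted here).
        """
        loss = -torch.stack([lp * g for lp, g in zip(log_probs, torch.tensor(returns))]).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()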

32

TRPO (Trust Region Policy Optimization)

Policy optimization method enforcing updates within a trust region for stability.

33

PPO (Proximal Policy Optimization)

Practical policy optimization using a surrogate objective with clipping to limit updates.
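
The clipping refers to PPO's clipped surrogate objective; a sketch of the per-sample loss term (names such as ratio and advantage are illustrative), assuming PyTorch:

    import torch

    def ppo_clip_loss(new_log_prob, old_log_prob, advantage, clip_eps=0.2):
        """Clipped surrogate objective: limit how far the new policy moves from the old one."""
        ratio = torch.exp(new_log_prob - old_log_prob)       # pi_new(a|s) / pi_old(a|s)
        unclipped = ratio * advantage
        clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantage
        return -torch.min(unclipped, clipped).mean()         # negative because we minimize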

34

Policy Representation

How the policy is represented (e.g., table or neural network) mapping states to actions.

35

Policy-Based Reinforcement Learning

Learning the policy directly without relying on a value function.

36

Deterministic Policy

Policy that selects a single action for each state.

37

Stochastic Policy

Policy that assigns probabilities to multiple actions.

38

On-Policy

Learning in which the data is generated by the same policy that is being updated (e.g., SARSA).

39

Off-Policy

Learning in which the policy being updated differs from the policy used to generate the data (e.g., Q-learning).

40

Policy Update Stability

Techniques (e.g., TRPO/PPO) to keep updates from destabilizing learning.

41

Policy Gradient vs Value-Based

Policy gradient directly optimizes the policy; value-based learns value functions to derive a policy.

42

Policy Optimization

Adjusting policy parameters to maximize expected return.

43

Immediate Reward Example

An example of instant feedback, such as an ad click: the reward is observed immediately after the action.

44

Credit Assignment

Determining which action caused a reward; easier with immediate rewards.

45

Reward Signal

The goal-defining numerical feedback from the environment.

46

State-Action Value

Q(s,a); the value of taking action a in state s and following a policy thereafter.

47

Planning

Deciding on actions by considering possible future states using a model.

48

Exploration-Exploitation Trade-off

Balancing learning new information vs. using known good actions.

49

Optimal Policy

Policy that maximizes expected cumulative reward.

50

Cumulative Reward

Total reward accumulated over time under a policy.

51

Action Space

Set of all possible actions; can be discrete or continuous.

52

Continuous Action Spaces

Action spaces that are continuous (and often high-dimensional); policy-based methods typically handle them better than value-based methods.

53

Environment Dynamics (p(s'|s,a))

Probability of next state given current state s and action a.
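
One concrete way to picture p(s'|s,a) is as a lookup from (state, action) to a distribution over next states; a toy illustration (not from the notes):

    # p(s' | s, a) for a toy two-state problem: each entry maps a next state
    # to its probability, and each row sums to 1.
    dynamics = {
        ("s0", "left"):  {"s0": 0.9, "s1": 0.1},
        ("s0", "right"): {"s0": 0.2, "s1": 0.8},
        ("s1", "left"):  {"s0": 0.7, "s1": 0.3},
        ("s1", "right"): {"s0": 0.0, "s1": 1.0},
    }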

54

Model-Based Planning

Using a learned model to predict future states/rewards for planning.

55

Model-Free Learning

Learning from trial-and-error without modeling environment dynamics.

56

Greedy Policy

Policy that always selects the currently best-known action.

57

Stability in Learning

Maintaining reliable progress during iterative policy/value updates.

58

Credit Assignment Problem

Challenge of identifying which action led to a reward, alleviated by immediate rewards.

59

Environment-Observer Interaction

The loop where the agent observes a state, takes an action, receives a reward, and moves to a new state.
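
That loop is often only a few lines of code; a sketch assuming the Gymnasium API (the CartPole-v1 environment and the random stand-in policy are assumptions of this sketch, not part of the notes):

    import gymnasium as gym

    env = gym.make("CartPole-v1")
    state, info = env.reset()
    done = False
    total_reward = 0.0
    while not done:
        action = env.action_space.sample()  # stand-in for the agent's policy
        state, reward, terminated, truncated, info = env.step(action)
        total_reward += reward
        done = terminated or truncated
    env.close()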

60

Feature-Vector (State Representation)

Numeric vector describing relevant aspects of the current state.

61

Discounting (Note: not covered in provided notes)

Not defined here, since the source notes do not cover discounting.