Artificial Intelligence — Module 1: Introduction & Intelligent Agents (Vocabulary)

Description and Tags

Vocabulary flashcards covering key AI concepts from the notes.


25 Terms

1. Artificial Intelligence

The study of making computers perform tasks that currently require human intelligence.

2. Agent

An entity that perceives its environment through sensors and acts on it through actuators, capable of autonomous action and goal pursuit.

3. Environment

The surroundings with which an agent interacts; the world perceived by the agent and manipulated through actions.

4. Percept

The agent’s perceptual inputs at a given instant.

5. Percept Sequence

The complete history of everything the agent has perceived to date.

6. Agent Function

A map from the percept sequence to an action.
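As a minimal sketch, the agent function can be realized as a table-driven agent that looks up the full percept history; the percepts, table entries, and actions below are illustrative, not from the notes:

```python
# Sketch of an agent function: a map from the percept sequence
# (the complete history) to an action. All names are illustrative.

def table_driven_agent():
    percepts = []  # the percept sequence accumulated so far
    # Hypothetical lookup table keyed by whole percept sequences.
    table = {
        ("dirty",): "suck",
        ("clean",): "move",
        ("clean", "dirty"): "suck",
    }

    def act(percept):
        percepts.append(percept)
        # The agent function maps the entire history, not just the
        # latest percept, to an action.
        return table.get(tuple(percepts), "no-op")

    return act

agent = table_driven_agent()
print(agent("clean"))  # "move"
print(agent("dirty"))  # "suck"  (history is now ("clean", "dirty"))
```

The table grows with the length of the percept sequence, which is why practical agent programs compute actions instead of tabulating them.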

7. Rational Agent

An agent that behaves to achieve the best expected outcome given its percepts and knowledge.

8. Performance Measure

The criteria that determine how successful an agent is.

9. PEAS

Performance Measure, Environment, Actuators, and Sensors—the four components used to describe a task environment.
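A PEAS description can be jotted down as four lists; the automated-taxi entries below are illustrative examples, not an exhaustive specification:

```python
# Illustrative PEAS description of a task environment
# (an automated taxi); values are examples only.
taxi_peas = {
    "performance_measure": ["safety", "speed", "legality", "comfort"],
    "environment": ["roads", "traffic", "pedestrians", "customers"],
    "actuators": ["steering", "accelerator", "brake", "horn"],
    "sensors": ["cameras", "GPS", "speedometer", "odometer"],
}
```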

10. Task Environment

The setting in which an agent operates, specified by its PEAS description.

11. Fully Observable

An environment where the agent’s sensors provide complete information about the state at each point in time.

12. Partially Observable

An environment where some aspects of the state are hidden or obscured by noisy or incomplete sensor data.

13. Deterministic

An environment where the next state is completely determined by the current state and the agent’s action.

14. Stochastic

An environment where outcomes are probabilistic; the next state is not determined with certainty.

15. Episodic

A task environment where experiences are divided into atomic episodes; each episode is independent of previous ones.

16. Sequential

An environment where current decisions affect future decisions and outcomes.

17. Static

An environment that does not change while the agent deliberates.

18. Dynamic

An environment that can change while the agent is deliberating.

19. Discrete

An environment with discrete states, percepts, and actions (distinct, separate values).

20. Continuous

An environment with continuous states, percepts, and actions (no discrete steps).

21. Known

An environment where the outcomes for all actions are given or predictable.

22. Unknown

An environment whose dynamics are not known in advance, requiring the agent to learn.

23. Simple Reflex Agent

An agent that selects actions based only on the current percept, using condition–action rules; works best when the environment is fully observable.
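A condition-action rule agent for a hypothetical two-square vacuum world might look like this (the world and action names are assumptions for illustration):

```python
# Sketch of a simple reflex agent: the action depends ONLY on the
# current percept, via condition-action rules. No memory is kept.

def simple_reflex_vacuum_agent(percept):
    location, status = percept  # e.g. ("A", "dirty")
    if status == "dirty":       # rule 1: dirty square -> clean it
        return "suck"
    elif location == "A":       # rule 2: clean at A -> go right
        return "move_right"
    else:                       # rule 3: clean at B -> go left
        return "move_left"

print(simple_reflex_vacuum_agent(("A", "dirty")))  # "suck"
print(simple_reflex_vacuum_agent(("A", "clean")))  # "move_right"
```

Because it ignores history, this agent works only when the current percept alone identifies the right action, i.e. when the environment is fully observable.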

24. Model-Based Reflex Agent

An agent that maintains an internal state and a model of the world to handle partial observability and track unseen aspects.
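Extending the vacuum sketch above, a model-based version keeps internal state about squares it is not currently sensing; the world model here is an illustrative assumption:

```python
# Sketch of a model-based reflex agent: internal state (beliefs about
# each square) lets it act sensibly under partial observability.

class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal model of the world: status of each square,
        # "unknown" until observed.
        self.model = {"A": "unknown", "B": "unknown"}

    def act(self, percept):
        location, status = percept
        self.model[location] = status  # update model from percept
        if status == "dirty":
            return "suck"
        # Unlike the simple reflex agent, it can stop once its model
        # says every square it knows about is clean.
        if all(v == "clean" for v in self.model.values()):
            return "no-op"
        return "move_right" if location == "A" else "move_left"
```

The internal state is what lets the agent answer "is the other square clean?", a question the current percept alone cannot settle.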

25. Goal-Based Agent

An agent that selects actions to achieve specific goals, enabling flexible behavior by adding goal information.
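As a minimal sketch of goal-directed selection, the agent below picks the action whose predicted successor state is closest to the goal; the grid world, transition model, and greedy one-step lookahead are illustrative assumptions (real goal-based agents typically search or plan over many steps):

```python
# Sketch of a goal-based agent: actions are chosen by comparing
# predicted outcomes against an explicit goal state.

def goal_based_agent(state, goal, actions, transition):
    # Greedy one-step lookahead: pick the action minimizing the
    # Manhattan distance of the predicted next state to the goal.
    def distance(s):
        return abs(s[0] - goal[0]) + abs(s[1] - goal[1])
    return min(actions, key=lambda a: distance(transition(state, a)))

# Illustrative deterministic grid transition model.
moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def transition(state, action):
    dx, dy = moves[action]
    return (state[0] + dx, state[1] + dy)

print(goal_based_agent((0, 0), (2, 0), list(moves), transition))  # "right"
```

Changing the `goal` argument changes the behavior without rewriting any rules, which is the flexibility the definition refers to.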