Artificial Intelligence: A Modern Approach Chapter 2 Intelligent Agents


Last updated 12:12 PM on 3/5/25

39 Terms

1

Agent

Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

2

Percept

An agent's perceptual inputs at any given instant

3

Percept Sequence

The complete history of everything the agent has ever perceived.

4

Agent Function

A mapping of a given percept sequence to an action.

5

Agent Program

The internal implementation of an agent's agent function.
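As an illustration, an agent program can be realized as a lookup table indexed by the percept sequence, in the spirit of the chapter's table-driven agent. A minimal sketch in Python — the table entries below are hypothetical vacuum-world percepts, not taken from the book:

```python
def table_driven_agent_program(table):
    """Return an agent program that maps the percept sequence so far to an action."""
    percepts = []  # the percept sequence to date

    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))  # look up the full history

    return program

# Hypothetical lookup table for a two-location vacuum world.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

program = table_driven_agent_program(table)
```

The closure holds the percept sequence, which is exactly why table-driven agents do not scale: the table must cover every possible history.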

6

Performance Measure

The evaluation of the desirability of any given sequence of environment states.

7

Rationality

What is rational at any given time depends on four things:

The performance measure that defines the criterion of success.

The agent's prior knowledge of the environment.

The actions that the agent can perform.

The agent's percept sequence to date.

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

8

Information gathering

Doing actions in order to modify future percepts.

9

Learning

As the agent gains experience, its initial configuration could be modified or augmented

10

Autonomy

The ability to learn how to compensate for partial or incorrect prior knowledge

11

Task environments

The problems to which rational agents are the solution, specified by PEAS: Performance measure, Environment, Actuators, Sensors.

12

Environment

The part of the world in which the agent operates: everything external to the agent that it perceives, acts upon, or is affected by.

13

Actuators

Control of actions, production of results

14

Sensors

Receiving input from all sources necessary to solve the problem.

15

Observable

If an agent's sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable. (Fully/Partially)

16

Single/Multi-agent

Number of agents in an environment as well as the manner in which they interact (competitive, cooperative).

17

Stochastic/Deterministic

If the next state of the environment is completely determined by the current state and the action executed by the agent, then it is deterministic; otherwise, it is stochastic.

18

Non-deterministic

An environment in which actions are characterized by their possible outcomes, but no probabilities are attached to them.

19

Uncertain

A partially observable or stochastic environment.

20

Episodic/Sequential

In an episodic environment, experience can be divided into atomic episodes; in each episode the agent receives a percept and performs a single action, and the next episode does not depend on previous ones. In a sequential environment, the current decision could affect all future decisions.

21

Static/Dynamic

If the environment can change while the agent is deliberating, then it is dynamic; otherwise, it is static.

22

Semidynamic

The environment does not change but an agent's performance does.

23

Discrete/Continuous

If an agent's actions, percepts, or the state of the environment can range over a continuous (uncountably infinite) set of values, then the environment is continuous; if they take on a finite or countable number of distinct values, it is discrete.

24

Known/unknown

In a known environment, the outcomes for all actions are given.

25

Simple reflex agent

Selects actions on the basis of the current percept, ignoring the rest of the percept history.
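The chapter's two-location vacuum world gives the standard example of a simple reflex agent. A sketch in Python, with the percept assumed to be a (location, status) pair:

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent for the two-location vacuum world.
    Decides using only the current percept, never the history."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    # Current square is clean: move to the other square.
    return "Right" if location == "A" else "Left"
```

Note that no state is kept between calls, which is precisely what distinguishes it from the model-based agent on the next card.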

26

Model-based reflex agents

An agent that keeps track of unobservable aspects of the current state by maintaining an internal model of the world, updated from each percept.
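An illustrative sketch (not the book's pseudocode) of a model-based vacuum agent: it remembers the last known status of each square, so unlike the simple reflex agent it can recognize when both squares are believed clean and stop:

```python
def model_based_vacuum_agent():
    """Model-based reflex agent for the vacuum world.
    Keeps an internal model of each square's believed status."""
    model = {"A": None, "B": None}  # believed status of each square

    def program(percept):
        location, status = percept
        model[location] = status  # update the model from the current percept
        if model["A"] == model["B"] == "Clean":
            return "NoOp"         # believed goal state: nothing left to do
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

    return program
```

The internal `model` dict is the "unobservable aspect" being tracked: the agent never perceives both squares at once, but its model lets it act as if it had.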

27

Goal-based agent

An agent that combines a world model with a goal that describes desirable states to make decisions.

28

Utility

The measure of desirability

29

Utility Function

An internalization of an agent's performance measure

30

Expected Utility

The expected value of an action's outcome in a partially unobservable or stochastic environment.
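The computation itself is a probability-weighted average over an action's possible outcomes. A minimal sketch, with hypothetical probabilities and utilities:

```python
def expected_utility(outcomes):
    """Expected utility of an action: sum of probability * utility
    over its possible outcomes, given as (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Hypothetical stochastic action with two possible outcomes:
# 70% chance of a good outcome (utility 10), 30% chance of a bad one (utility -5).
eu = expected_utility([(0.7, 10.0), (0.3, -5.0)])  # 0.7*10 + 0.3*(-5) = 5.5
```

A utility-based agent would compute this quantity for each available action and choose the action that maximizes it.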

31

Learning Agent

An agent that can compensate for partial or incorrect knowledge.

32

Learning Element

Responsible for making improvements

33

Performance Element

Responsible for selecting external actions; it takes in percepts and decides on actions.

34

Critic

Responsible for telling the learning element how well the agent is doing with respect to a fixed performance standard.

35

Problem Generator

It is responsible for suggesting actions that will lead to new and informative experiences.

36

Atomic representation

Each state of the world is indivisible, it has no internal structure.

37

Factored representation

Splits up each state into a fixed set of variables or attributes, each of which can have a value.

38

Structured representation

Each state consists of objects, which may have attributes of their own and relationships to other objects.
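The three representations in the preceding cards can be contrasted with illustrative Python values — the names and values below are made up for a vacuum-world flavor, not taken from the book:

```python
# Atomic: the state is an indivisible label with no internal structure.
atomic = "state_42"

# Factored: a fixed set of variables, each with a value.
factored = {
    "location": "A",
    "dirt_A": True,
    "dirt_B": False,
}

# Structured: objects with attributes of their own plus relationships between them.
structured = {
    "objects": {
        "robot": {"type": "agent"},
        "square_A": {"type": "square", "dirty": True},
    },
    "relations": [("robot", "is_in", "square_A")],
}
```

Moving down the list, each representation can express everything the one above it can, at the cost of more complex reasoning, which is the trade-off the Expressiveness card describes.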

39

Expressiveness

The complexity of learning and reasoning increases with expressiveness, and so does the conciseness of state descriptions. A more expressive representation can capture at least as much information as a less expressive one, often more concisely, but reasoning and learning with it are more complex.