61 Terms
1
New cards
An agent is anything that can be viewed as perceiving its environment through ________ and acting upon that environment through ________
sensors, actuators
2
New cards
A ___________ might have cameras and infrared range finders for sensors and various motors for actuators
robotic agent
3
New cards
A ___________ receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets.
software agent
4
New cards
Percept
Agent's perceptual inputs at any given instant
5
New cards
Percept Sequence
The complete history of everything the agent has ever perceived
6
New cards
An agent's choice of action at any given instant can depend on the entire _________ observed to date, but not on anything it has not perceived
percept sequence
7
New cards
Agent function
abstract mathematical description - maps any given percept sequence to an action
8
New cards
Agent program
concrete implementation of agent function, running within some physical system
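The agent-function/agent-program distinction above can be sketched as a toy table-driven program in Python. All names here are illustrative, not from the source: the agent *function* is the abstract table mapping percept sequences to actions, while the agent *program* is the concrete code that accumulates percepts as they arrive.

```python
def table_driven_agent_program(table):
    """Return an agent program implementing the agent function given by
    `table`, which maps percept sequences (tuples) to actions."""
    percepts = []  # the percept sequence observed so far

    def program(percept):
        percepts.append(percept)          # extend the percept sequence
        return table.get(tuple(percepts)) # look up the agent function

    return program

# Example agent function for a two-step world:
table = {
    ("A",): "right",
    ("A", "B"): "suck",
}
agent = table_driven_agent_program(table)
assert agent("A") == "right"   # first percept
assert agent("B") == "suck"    # acts on the whole sequence ("A", "B")
```

Note that the program's action depends on the *entire* percept history, even though each call receives only the current percept.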
9
New cards
What does it mean to do the right thing?
When an agent is plunked down into an environment, it generates a sequence of actions according to the percepts it receives. This sequence of actions causes the environment to go through a sequence of states. If that sequence of states is desirable, then the agent has performed well.
10
New cards
Performance Measure
Evaluates actions for any given sequence of environment states
11
New cards
It is better to design performance measures according to
what one actually wants in the environment rather than how one thinks the agent should behave
12
New cards
What is rational at a given time depends on
1. Performance Measure that defines criterion of success 2. Agent's prior knowledge of the environment 3. Actions the agent can perform 4. The agent's percept sequence to date
13
New cards
Definition of rational agent
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
14
New cards
Omniscient agent
knows actual outcomes of actions and can act accordingly but is impossible in reality
15
New cards
An agent can be omniscient only in a small, finite, closed system
Information gathering and exploration: performing actions in order to modify future percepts. In an initially unknown environment, the agent must explore and learn as much as possible from what it perceives.
18
New cards
Autonomy
The agent relies on its own percepts rather than solely on the prior knowledge of its designer
19
New cards
Task Environment - PEAS
Problems to which rational agents are the solution. Specified by PEAS: Performance measure, Environment, Actuators, Sensors
20
New cards
Properties of task environment
1. Fully observable vs partially observable 2. Single agent vs multi agent 3. Dynamic vs static 4. Episodic vs sequential 5. Discrete vs continuous 6. Known vs unknown 7. Deterministic vs stochastic
21
New cards
Fully observable
Agent's sensors give it access to the complete state of the environment at each point in time
22
New cards
Fully observable environments are convenient because
the agent need not maintain any internal state to keep track of the world
23
New cards
Partially Observable
Noisy and inaccurate sensors, parts of the state are simply missing from the sensor data
24
New cards
If an agent has no sensors at all, the environment is
unobservable
25
New cards
Deterministic
If the next state of the environment is completely determined by the current state and action executed by the agent
26
New cards
Stochastic
randomly determined
27
New cards
An environment is _________ if it is partially observable or non-deterministic
uncertain
28
New cards
Non-deterministic environment
Actions are characterized by their possible outcomes, but no probabilities are attached to them. Usually associated with performance measures that require the agent to succeed for all possible outcomes of its actions.
29
New cards
Episodic Environment
An episodic task environment is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action. Crucially, the next episode does not depend on the actions taken in previous episodes.
30
New cards
Sequential Environment
Current decision could affect all future decisions
31
New cards
Dynamic Environment
If the environment can change when the agent is deliberating
32
New cards
Static Environment
Not dynamic environment
33
New cards
Semi-Dynamic Environment
If the environment itself does not change with the passage of time but the agent's performance score does
34
New cards
Discrete vs Continuous applies to
- state of the environment - the way time is handled - percepts and actions of the agent
35
New cards
Known environment
Outcome for all actions are given
36
New cards
Unknown environment
The agent has to learn how it works in order to make good decisions
37
New cards
Agent function input
entire percept history
38
New cards
Agent program input
current percept because nothing else is available from the environment
39
New cards
If the agent's actions need to depend on some or all of the percept sequence, then the agent will have to _________ .
remember the percepts. (By contrast, a simple reflex agent selects actions based only on the current percept, ignoring percept history; this suffices only when the environment is fully observable.)
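A minimal sketch of an agent that selects actions from the current percept alone, via condition-action rules. The vacuum-world percepts and actions here are illustrative assumptions:

```python
def simple_reflex_vacuum_agent(percept):
    """Choose an action from the current percept only: no memory,
    no percept history."""
    location, status = percept
    if status == "dirty":
        return "suck"
    return "right" if location == "A" else "left"

assert simple_reflex_vacuum_agent(("A", "dirty")) == "suck"
assert simple_reflex_vacuum_agent(("A", "clean")) == "right"
assert simple_reflex_vacuum_agent(("B", "clean")) == "left"
```

Because the rules inspect only the current percept, this agent works correctly only when that percept reveals everything relevant, i.e. in a fully observable environment.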
42
New cards
Model-based reflex agents
1. We need some information about how the world evolves independently of the agent
2. We need some information about how the agent's own actions affect the world
This knowledge is called a model of the world; an agent that uses such a model is a model-based reflex agent.
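The two kinds of knowledge above can be sketched as a single state-update function in Python. This is a toy, with hypothetical names: `update_state` plays the role of the world model, and the agent otherwise acts like a reflex agent over its internal state.

```python
class ModelBasedReflexAgent:
    def __init__(self, update_state, rules):
        self.state = {}                   # internal model of the world
        self.last_action = None
        self.update_state = update_state  # model: world evolution + action effects
        self.rules = rules                # condition-action rules over the state

    def __call__(self, percept):
        # Fold the new percept (and the last action) into the internal state.
        self.state = self.update_state(self.state, self.last_action, percept)
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action

# Toy model: remember the cleanliness of each location we have visited.
def update(state, last_action, percept):
    loc, status = percept
    new = dict(state)
    new[loc] = status
    new["loc"] = loc
    return new

rules = [
    (lambda s: s.get(s["loc"]) == "dirty", "suck"),
    (lambda s: s["loc"] == "A", "right"),
    (lambda s: True, "left"),
]
agent = ModelBasedReflexAgent(update, rules)
assert agent(("A", "dirty")) == "suck"
assert agent(("A", "clean")) == "right"
```

Unlike the simple reflex agent, this one retains knowledge (here, the status of previously seen locations), so it can cope with partial observability.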
43
New cards
The most effective way to handle ___________ is for the agent to keep track of the part of the world it can't see now
partial observability - the agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state.
44
New cards
Goal based agents
Sometimes it is not enough to know the current state of the environment to decide what to do. The agent needs some sort of goal information that describes situations that are desirable.
45
New cards
Goal based agent - definition
An agent that keeps track of the world state as well as a set of goals it is trying to achieve, and chooses an action that will eventually lead to the achievement of its goals
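A sketch of goal-based action selection: given a model of the world (a successor function) and an explicit goal test, search for an action that begins a sequence reaching the goal. The breadth-first search and the toy line world are illustrative assumptions, not the source's example.

```python
from collections import deque

def goal_based_action(state, successors, is_goal):
    """Breadth-first search; return the first action on a shortest
    action sequence from `state` to a goal state, or None."""
    frontier = deque([(state, None)])  # (state, first action taken)
    seen = {state}
    while frontier:
        s, first = frontier.popleft()
        if is_goal(s):
            return first
        for action, s2 in successors(s):
            if s2 not in seen:
                seen.add(s2)
                frontier.append((s2, first if first is not None else action))
    return None

# Toy line world: step left/right from position 0; the goal is position 3.
succ = lambda s: [("left", s - 1), ("right", s + 1)] if -5 <= s <= 5 else []
assert goal_based_action(0, succ, lambda s: s == 3) == "right"
```

The goal test is what distinguishes this from a reflex agent: the same rules of the world yield different actions when the goal changes.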
46
New cards
Model-based reflex agents - definition
Keeps track of the current state of the world using an internal model, and chooses an action in the same way as a simple reflex agent.
47
New cards
_____ and ____ are subfields of AI devoted to finding action sequences that achieve agent goals
Search and Planning
48
New cards
Difference between reflex agent and goal based agent
A reflex agent brakes when it sees brake lights. A goal-based agent reasons that if the car in front has its brake lights on, it will slow down.
49
New cards
An agent's ________ is an internalization of performance measure
utility function
50
New cards
Utility based agents
try to maximize their own expected "happiness"
1. When there are conflicting goals, only some of which can be achieved, the utility function specifies the appropriate tradeoff
2. When there are several goals the agent can aim for, none of which can be achieved with certainty, utility weighs the likelihood of success against the importance of each goal
51
New cards
A rational utility based agent chooses the action that maximizes the __________ of the action outcomes
expected utility
52
New cards
Utility-based agents - Definition
It uses a model of the world along with a utility function that measures its preferences among states of the world. It then chooses the action that leads to the best expected utility, where expected utility is computed by averaging over all possible outcome states, weighted by the probability of each outcome.
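The expected-utility computation described in this definition can be written out directly. The action names, outcome distributions, and utility values below are made-up assumptions for illustration:

```python
def expected_utility(action, outcomes, utility):
    """`outcomes[action]` is a list of (probability, state) pairs;
    average utility over outcome states, weighted by probability."""
    return sum(p * utility(s) for p, s in outcomes[action])

def best_action(actions, outcomes, utility):
    """Choose the action that maximizes expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

# Toy example: "safe" yields utility 5 for sure; "risky" yields 10 or 0.
outcomes = {"safe": [(1.0, "ok")],
            "risky": [(0.4, "win"), (0.6, "lose")]}
utility = {"ok": 5, "win": 10, "lose": 0}.get
assert expected_utility("risky", outcomes, utility) == 4.0   # 0.4*10 + 0.6*0
assert best_action(["safe", "risky"], outcomes, utility) == "safe"
```

Note how utility trades off likelihood of success against payoff: the risky action's higher best-case value does not outweigh its low probability of success.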
53
New cards
Representing States - Atomic representation
Each state of the world is indivisible, it has no internal structure
54
New cards
Representing States - Factored representation
Splits up each state into a fixed set of variables or attributes, each of which can have a value
55
New cards
Representing States - Structured representation
A state includes objects each of which may have attributes of its own as well as relationships to other objects
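The three representations above can be contrasted on one concrete situation. The example (a car with a flat tire) and all names here are illustrative assumptions:

```python
# Atomic: the state is an indivisible label with no internal structure.
atomic_state = "state_17"

# Factored: a fixed set of variables/attributes, each with a value.
factored_state = {"fuel": 0.5, "position": "Arad", "flat_tire": True}

# Structured: objects with attributes of their own, plus relationships
# between objects.
structured_state = {
    "objects": {"car1": {"type": "car"}, "tire1": {"type": "tire"}},
    "relations": [("part_of", "tire1", "car1"), ("flat", "tire1")],
}

# A factored representation lets two states share everything except the
# attributes in which they differ:
other = dict(factored_state, flat_tire=False)
assert factored_state["position"] == other["position"]
assert factored_state["flat_tire"] != other["flat_tire"]
```

Each step up in expressiveness (atomic to factored to structured) allows more of the world's internal structure to be exploited by the agent's reasoning.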