agent
anything that perceives its environment through sensors and acts upon that environment through actuators
percept
the agent's perceptual inputs at any given instant
percept sequence
complete history of everything the agent has ever perceived
external agent function
maps any given percept sequence to an action
internal agent function
the agent's program
performance measure
notion of desirability that evaluates any given sequence of environment states
rational agent
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
omniscient agent
knows the actual outcome of its actions and can act accordingly; but omniscience is impossible in reality
information gathering
doing actions in order to modify future percepts
exploration
example of information gathering
autonomy
a rational agent should learn what it can to compensate for partial or incorrect prior knowledge
task environment
problems to which rational agents are the solutions
PEAS
Performance, Environment, Actuators, Sensors
software agent (softbot)
software robots
stochastic
the next state of the environment is not determined solely by the current state and the action executed by the agent
uncertain
environment that is not fully observable or not deterministic
nondeterministic
environment in which actions are characterized by their possible outcomes
episodic task
the agent's experience is divided into atomic episodes; in each episode the agent receives a percept and then performs a single action. Crucially, the next episode does not depend on the actions taken in previous episodes
task environment categories
fully vs. partially observable, single-agent vs. multiagent, deterministic vs. stochastic, episodic vs. sequential, static vs. dynamic, discrete vs. continuous
environment class
environments are drawn from this class so that the agent cannot take advantage of any single environment's particular characteristics
environment generator
selects particular environments from the environment class in which to run the agent
agent program
implements the agent function - the mapping from percepts to actions
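The distinction between the agent function (a mapping from percept sequences to actions) and the agent program (its concrete implementation) can be sketched in code. The following is a minimal illustrative sketch of a table-driven agent program; the vacuum-world table and the `NoOp` fallback are hypothetical, not from the source.

```python
# Sketch of a table-driven agent program: it appends each new percept to
# the percept sequence and looks the full sequence up in a table.

def make_table_driven_agent(table):
    percepts = []  # the percept sequence observed so far

    def program(percept):
        percepts.append(percept)
        # the table maps complete percept sequences to actions
        return table.get(tuple(percepts), "NoOp")

    return program

# hypothetical vacuum-world table, for illustration only
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Dirty")))  # Suck
```

The table grows exponentially with the length of the percept sequence, which is why the later agent designs replace explicit lookup with more compact programs.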
architecture
some sort of computing device with physical sensors and actuators (agent = architecture + program)
simple reflex agent
select actions on the basis of the current percept, ignoring the rest of the percept history
condition-action rule, situation-action rule, production
if car-in-front-is-braking then initiate-braking.
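A simple reflex agent built from condition-action rules can be sketched as follows; the rule format, percept dictionary, and `NoOp` default are illustrative assumptions, not a definitive implementation.

```python
# Sketch of a simple reflex agent: it matches only the CURRENT percept
# against condition-action rules, ignoring the rest of the percept history.

def simple_reflex_agent(rules):
    def program(percept):
        for condition, action in rules:
            if condition(percept):
                return action
        return "NoOp"  # no rule matched
    return program

# hypothetical driving rule, mirroring the card's example
rules = [
    (lambda p: p.get("car_in_front_is_braking"), "initiate-braking"),
]
agent = simple_reflex_agent(rules)
print(agent({"car_in_front_is_braking": True}))   # initiate-braking
print(agent({"car_in_front_is_braking": False}))  # NoOp
```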
internal state
in model-based reflex agents, the agent maintains this state, which depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state
model-based agent
an agent that uses an internal model to keep track of the current state of the world
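A model-based reflex agent can be sketched by threading an internal state through the rule matching; the vacuum-world `update_state` model and the rules below are hypothetical examples, assumed for illustration.

```python
# Sketch of a model-based reflex agent: internal state is updated from the
# previous state, the last action, and the current percept, then rules
# are matched against the state rather than the raw percept.

def model_based_agent(update_state, rules):
    state = {}
    last_action = None

    def program(percept):
        nonlocal state, last_action
        state = update_state(state, last_action, percept)
        for condition, action in rules:
            if condition(state):
                last_action = action
                return action
        last_action = "NoOp"
        return "NoOp"
    return program

def update_state(state, last_action, percept):
    # hypothetical model: remember each location's last observed status
    loc, status = percept
    new = dict(state)
    new[loc] = status
    new["loc"] = loc
    return new

rules = [
    (lambda s: s.get(s["loc"]) == "Dirty", "Suck"),
    (lambda s: s["loc"] == "A", "Right"),
    (lambda s: s["loc"] == "B", "Left"),
]
agent = model_based_agent(update_state, rules)
print(agent(("A", "Dirty")))  # Suck
print(agent(("A", "Clean")))  # Right
```

Unlike the simple reflex agent, the remembered state lets the agent act on aspects of the world it is not currently perceiving.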
goal-based agents
more flexible because the knowledge that supports its decisions is represented explicitly and can be modified
utility
a measure of how "happy" or "unhappy" a state would make the agent, distinguishing more and less desirable ways of reaching a goal
utility function
internalization of the performance measure
expected utility
the utility the agent expects to derive, on average, given the probabilities and utilities of each outcome
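Expected utility is just a probability-weighted average of outcome utilities; a minimal sketch (with made-up numbers for illustration):

```python
# Sketch: expected utility of an action, given (probability, utility)
# pairs for each of its possible outcomes.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# hypothetical action: 70% chance of utility 10, 30% chance of utility -5
print(expected_utility([(0.7, 10.0), (0.3, -5.0)]))  # 5.5
```

A utility-based agent would compute this quantity for each candidate action and choose the action that maximizes it.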
learning element
responsible for making improvements
performance element
responsible for selecting external actions
critic
element that provides feedback on how the agent is doing with respect to the performance standard; the learning element uses this feedback to determine how the performance element should be modified
problem generator
part responsible for suggesting actions that will lead to new and informative experiences
factored representation
splits up each state into a fixed set of variables or attributes, each of which can have a value
structured representation
representation in which objects and their various and varying relationships can be described explicitly
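The three representation styles (atomic, factored, structured) can be contrasted with a small sketch; the state contents are hypothetical examples, not from the source.

```python
# Illustrative sketch of the three representation styles.

atomic = "s42"  # an opaque label: no internal structure the agent can inspect

factored = {    # a fixed set of variables/attributes, each with a value
    "fuel": 0.6,
    "lane": 2,
    "oil_warning_light": False,
}

structured = {  # objects and their relationships described explicitly
    ("Block", "A"), ("Block", "B"),
    ("On", "A", "B"), ("OnTable", "B"),
}
print(("On", "A", "B") in structured)  # True
```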
expressiveness
the axis along which atomic, factored, and structured representations lie, in order of increasing expressiveness