robots that interact with the physical world
can interact with the world, e.g. pick up objects, drive around etc.
autonomy for living agents
the degree to which an agent determines its own goals
robots that interact with the social world
communicate with people using the same interaction modalities used by people
autonomy for robots
the degree to which the robot operates without direct user control, though goals etc. can be pre-determined
cognitive architecture
a framework that allows a robot to solve any task that is within its (intended) abilities
cognitivist cognitive architecture
framework that proposes computational processes that act like a person, or act intelligent under some definition; this is based on human cognition and focusses on symbolic information processing
cognitive model
cognitive architecture + knowledge
cognitivism
learning theory that focusses on the processes involved in learning rather than on the observed behaviour
morphological computation
an agent's behaviour can exploit the body's morphological properties and the dynamics of its interaction with the physical environment
a robot according to Mel Siegel
senses, thinks, acts and communicates
agent
a computer system that is capable of autonomous actions in its environment, in order to meet its delegated objectives; has control over its internal states, and its interactions with the environment
criteria for intelligence (Wooldridge and Jennings)
autonomy, social ability, reactivity, pro-activeness
intentional notions
attribution of attitudes such as beliefs, desires, hopes, fears etc.
intentional system
entity whose behaviour can be predicted by the method of attributing beliefs, desires and rational acumen
first-order intentional system
has beliefs and desires
second-order intentional system
has beliefs and desires, including intentional states of itself and other agents
intentional stance
describing behaviour in terms of mental properties
physical stance
describing behaviour through the laws of physics
design stance
describing behaviour through knowledge of the purpose of the system
abstract architecture
abstract models of agents and environments
the synthesis problem
given a task environment, automatically find an agent that can solve it
sound synthesis
if every agent it returns is a successful agent
complete synthesis
if it always returns an agent
utility of a run
the score the agent achieves on that run
success rate of an agent
sum of the utilities of all runs, weighted by their likelihood
optimal agent
maximizes the success rate
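The utility and success-rate cards above amount to an expected-utility computation; a minimal sketch, with made-up utilities and run probabilities:

```python
# Success rate of an agent: the sum of each run's utility weighted
# by that run's likelihood (an expected utility over all runs).
# The utilities and probabilities below are illustrative values.
runs = [
    {"utility": 10.0, "probability": 0.5},
    {"utility": 4.0,  "probability": 0.3},
    {"utility": 0.0,  "probability": 0.2},
]

def success_rate(runs):
    return sum(r["utility"] * r["probability"] for r in runs)

print(success_rate(runs))  # 10*0.5 + 4*0.3 + 0*0.2 = 6.2
```

An optimal agent is then simply the one whose run distribution maximizes this quantity.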
bounded optimality
when considering only agents implementable on a specific system
symbolic reasoning agents
agents as a type of knowledge-based system to which methods from old-fashioned AI are applied, containing an explicitly represented, symbolic model of the world
deductive reasoning agents
agents use a formal language to create formulas that describe facts or beliefs about the world, and act by deducing the appropriate action from the current state and the set of rules
logic reasoning agents
agents whose representations have clear logical meanings; they must reason not only about the next action, but also about sequences of actions and their possible outcomes
practical reasoning agents
agents whose reasoning is directed towards actions/the process of finding out what to do
practical reasoning
weighing conflicting considerations for and against competing actions; reasoning directed towards actions
theoretical reasoning
reasoning directed towards beliefs
closed world assumption
facts that are not stated as true are false
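Under the closed world assumption, anything not present in (or derivable from) the knowledge base is treated as false rather than unknown; a minimal sketch with a hypothetical fact set:

```python
# Closed world assumption: absence from the knowledge base means false.
knowledge_base = {"on(A, table)", "clear(A)"}

def holds(fact: str) -> bool:
    return fact in knowledge_base  # not stated => assumed false

print(holds("on(A, table)"))  # True
print(holds("on(B, A)"))      # False: never stated, so assumed false
```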
the transduction problem
how to translate the real world into a symbolic description that is accurate and adequate, and ready in time to be useful
the frame problem
how to determine which statements/logical descriptions are necessary and sufficient for describing the actions, and whether something changes when an action is performed
the representation/reasoning problem
how to symbolically represent information about complex real-world entities and processes, and get agents to reason with this information in time for the results to be useful
intentions
desires to which the agent is committed
three roles of intentions (Bratman)
drive means-ends reasoning, provide constraints on options, influence beliefs
desires
goals/aims that can be conflicting; options for an agent
Wooldridge's roles of intentions
drive means-ends reasoning, persist, constrain future deliberation, influence beliefs on which future practical reasoning is based
beliefs
current state of the world according to the agent
intention-belief inconsistency
having an intention which you believe you won't achieve
intention-belief incompleteness
having an intention without believing it will happen
means-ends reasoning
provide an agent with representations of goal/intention to achieve, actions it can perform, and the environment, then have it generate a plan to achieve the goal
proofs in deductive agents
if the statement that remains after carrying out all actions follows from the premises, the original statement is proven and the plan is thus valid given the initial state of the world
actions in the STRIPS planner have
a name, a pre-condition list, a delete list, an add list
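The four parts of a STRIPS action can be sketched as a simple record over sets of facts; the blocks-world action and predicate names below are hypothetical examples:

```python
from dataclasses import dataclass

@dataclass
class StripsAction:
    name: str
    preconditions: set   # facts that must hold before the action
    delete_list: set     # facts the action removes from the state
    add_list: set        # facts the action adds to the state

    def applicable(self, state: set) -> bool:
        return self.preconditions <= state

    def apply(self, state: set) -> set:
        assert self.applicable(state)
        return (state - self.delete_list) | self.add_list

# Hypothetical action: move block A from the table onto block B.
move = StripsAction(
    name="move(A, table, B)",
    preconditions={"clear(A)", "clear(B)", "on(A, table)"},
    delete_list={"clear(B)", "on(A, table)"},
    add_list={"on(A, B)"},
)

state = {"clear(A)", "clear(B)", "on(A, table)", "on(B, table)"}
print(move.apply(state))
```

A STRIPS planner searches for a sequence of such applications that transforms the initial state into one satisfying the goal.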
deliberation
option generation and filtering; choosing between options and committing to some
blind commitment
continue to maintain an intention until it has been achieved
single-minded commitment
continue to maintain an intention until the agent believes that either the intention has been achieved, or it is no longer possible to achieve the intention
open-minded commitment
maintain an intention as long as it is still believed optimal
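The three commitment strategies differ only in the condition for dropping an intention; a minimal sketch of single-minded commitment, where the belief tests (`achieved`, `impossible`) and the toy counter goal are hypothetical placeholders:

```python
def single_minded_pursue(intention, achieved, impossible, execute_step):
    """Maintain `intention` until the agent believes it is achieved,
    or believes it can no longer be achieved (single-minded commitment).
    Blind commitment would omit the `impossible` check; open-minded
    commitment would instead drop when the intention stops being optimal."""
    while not achieved(intention):
        if impossible(intention):
            return False  # drop the intention
        execute_step(intention)
    return True

# Toy usage: the intention is to drive a counter up to 3.
state = {"x": 0}
ok = single_minded_pursue(
    intention=3,
    achieved=lambda goal: state["x"] >= goal,
    impossible=lambda goal: False,  # never believed impossible here
    execute_step=lambda goal: state.update(x=state["x"] + 1),
)
print(ok, state["x"])  # True 3
```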
overcommitment to means
if an agent does not re-plan if things go wrong
overcommitment to ends
if an agent never reconsiders whether or not its intentions are still appropriate
bold agents
agents that never pause to reconsider intentions; they do well in environments that don't change much
cautious agents
agents that stop to reconsider after every action; they do well in environments that change a lot
procedural reasoning system
first BDI agent architecture by Georgeff et al
procedural reasoning agents
agents that are equipped with a plan library, and have explicit representations of beliefs, desires, intentions
BDI architecture
a reasoning model with explicit representations of beliefs, desires and intentions, that implements reasoning through deliberation followed by means-ends reasoning
reactive agents
agents that decide actions very quickly ("immediately" from sensor information) and are perceived as just reacting to the environment without reasoning about it
emergent behaviour
complex patterns can arise from interacting simple entities
biological cognition
the essence of being and reacting allows complex behaviours such as problem solving, language and reasoning to emerge
embodied cognition
bodily interaction with the environment is primary to cognition
ecological niche
goals, world and sensorimotor possibilities
Umwelt
surrounding environment; what is perceived, what can be done, what is the agent trying to achieve?
affordances
perceivable action possibilities
reflexes
a relationship between a specific event (stimulus) and a simple involuntary response to that event
taxes
movement in a particular direction relative to a stimulus
fixed action patterns
a sequence of rigid order; once started, continues until completion; triggered by a sign stimulus; done by all members of the species
sequencing of innate behaviour
behaviour coordination mechanisms through (self-created) environmental stimuli
equilibrium
concurrent behaviours balance out (indecision)
dominance
one of the concurrent behaviours wins
cancellation
some other behaviour than the concurrent behaviours takes over
subsumption architecture
hierarchy of task-accomplishing behaviours
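A subsumption hierarchy can be sketched as behaviours checked in priority order, with higher layers suppressing lower ones; the two behaviours here are hypothetical examples:

```python
# Subsumption architecture sketch: behaviours are tried in priority
# order; the first applicable behaviour wins and suppresses the rest.
def avoid_obstacle(percept):
    return "turn_away" if percept.get("obstacle") else None

def wander(percept):
    return "move_forward"  # always-applicable fallback behaviour

# Highest priority first: obstacle avoidance subsumes wandering.
layers = [avoid_obstacle, wander]

def act(percept):
    for behaviour in layers:
        action = behaviour(percept)
        if action is not None:
            return action

print(act({"obstacle": True}))   # turn_away
print(act({"obstacle": False}))  # move_forward
```

Each layer couples sensing directly to action; there is no central world model or deliberation.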
traditional AI
tried to demonstrate sophisticated reasoning in impoverished domains, hoping to generalise to robust behaviour in more complex domains
nouvelle AI
tries to demonstrate less sophisticated tasks operating in noisy, complex domains, hoping to generalise to more complex tasks
Dynamic Field Theory
a neuro-inspired theory of sensorimotor cognition, and how to implement such cognition in robots; dynamics of sensory neurons drive decisions of motor neurons
PID control
control loop mechanism employing feedback
proportional control (P-term)
multiply the error signal by a constant; the result determines the output sent to the actuator
rate of change of an error (D-term)
take the derivative of the error signal and multiply it by a constant
critical damping
decreases the error quickly, then corrects perfectly; overshoots by just a little but stays stable
history of the error (I-term)
multiply the integral of an error over some time (its past) by a constant
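The P, I and D terms above combine into a single control law, u = Kp·e + Ki·∫e dt + Kd·de/dt; a minimal discrete-time sketch against a toy plant (the gains and setpoint are illustrative values, not tuned ones from any source):

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*sum(e*dt) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0    # history of the error (I-term)
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement       # current error (P-term)
        self.integral += error * self.dt     # accumulate error history
        if self.prev_error is None:
            derivative = 0.0
        else:                                # rate of change (D-term)
            derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: the value moves by exactly the control output each step.
pid = PID(kp=0.2, ki=0.02, kd=0.1, dt=1.0)
value = 0.0
for _ in range(100):
    value += pid.update(setpoint=10.0, measurement=value)
print(f"settled near setpoint: {value:.3f}")
```

With these gains the loop overshoots slightly and then settles on the setpoint, illustrating the interplay of the three terms.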
Kalman filter
the best estimate of the current position is obtained by predicting the position from the initial position and the elapsed time, and combining this prediction with the noisy sensor measurements
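That predict-then-combine cycle is a variance-weighted average: the Kalman gain trusts whichever of prediction and measurement is less uncertain. A minimal 1D sketch with made-up noise values:

```python
def kalman_1d(position, variance, velocity, dt,
              measurement, process_var, measurement_var):
    """One predict/update cycle of a 1D Kalman filter.
    Predict: move the estimate by velocity*dt and grow its uncertainty.
    Update: blend prediction and noisy measurement via the Kalman gain."""
    # Predict step
    pred_pos = position + velocity * dt
    pred_var = variance + process_var
    # Update step
    gain = pred_var / (pred_var + measurement_var)
    new_pos = pred_pos + gain * (measurement - pred_pos)
    new_var = (1 - gain) * pred_var
    return new_pos, new_var

# Toy example: robot believed at 0.0 m, moving at 1.0 m/s for 1 s,
# while a noisy sensor reads 1.2 m.
pos, var = kalman_1d(position=0.0, variance=0.5, velocity=1.0, dt=1.0,
                     measurement=1.2, process_var=0.1, measurement_var=0.6)
print(pos, var)
```

Here prediction and measurement are equally uncertain (variance 0.6 each), so the gain is 0.5 and the estimate lands halfway between them, with reduced variance.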