Introduction to Robotics | Quizlet


83 Terms

1
New cards

robots that interact with the physical world

can interact with the world, e.g. pick up objects, drive around etc.

2
New cards

autonomy for living agents

the degree to which an agent determines its own goals

3
New cards

robots that interact with the social world

communicate with people using the same interaction modalities used by people

4
New cards

autonomy for robots

the degree to which there is no direct user control, but goals etc. can be pre-determined

5
New cards

cognitive architecture

a framework that allows a robot to solve any task that is within its (intended) abilities

6
New cards

cognitivist cognitive architecture

framework that proposes computational processes that act like a person, or act intelligently under some definition; it is based on human cognition and focuses on symbolic information processing

7
New cards

cognitive model

cognitive architecture + knowledge

8
New cards

cognitivism

learning theory that focuses on the processes involved in learning rather than on the observed behaviour

9
New cards

morphological computations

in its behaviour, an agent can exploit the body's morphological properties and the dynamics of its interaction with the physical environment

10
New cards

a robot according to Mel Siegel

senses, thinks, acts and communicates

11
New cards

agent

a computer system that is capable of autonomous action in its environment in order to meet its delegated objectives; it has control over its internal states and its interactions with the environment

12
New cards

criteria for intelligence (Wooldridge and Jennings)

autonomy, social ability, reactivity, pro-activeness

13
New cards

intentional notions

attribution of attitudes such as beliefs, desires, hopes, fears etc.

14
New cards

intentional system

entity whose behaviour can be predicted by the method of attributing beliefs, desires and rational acumen

15
New cards

first-order intentional system

has beliefs and desires

16
New cards

second-order intentional system

has beliefs and desires about intentional states, both its own and those of other agents

17
New cards

intentional stance

describing behaviour in terms of mental properties

18
New cards

physical stance

describing behaviour through the laws of physics

19
New cards

design stance

describing behaviour through knowledge of the purpose of the system

20
New cards

abstract architecture

abstract models of agents and environments

21
New cards

the synthesis problem

given a task environment, automatically find an agent that can solve it

22
New cards

sound synthesis

a synthesis algorithm is sound if every agent it returns is a successful agent

23
New cards

complete synthesis

a synthesis algorithm is complete if it is guaranteed to return an agent whenever a successful agent exists

24
New cards

utility of a run

the score an agent achieves on that particular run

25
New cards

success rate of an agent

sum of the utilities of all runs, weighted by their likelihood
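This weighted sum can be illustrated with a short sketch; the runs and their probabilities and utilities below are invented numbers, not from the course:

```python
# Hedged sketch: success rate as the likelihood-weighted sum of run utilities.

def success_rate(runs):
    """Each run is a (probability, utility) pair; probabilities sum to 1."""
    return sum(p * u for p, u in runs)

# Three possible runs of a hypothetical agent in its environment:
runs = [(0.5, 1.0),   # full success half the time
        (0.3, 0.5),   # partial success
        (0.2, 0.0)]   # failure

print(round(success_rate(runs), 2))  # 0.65
```

An optimal agent (the next card) is then simply the agent whose runs maximize this value.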

26
New cards

optimal agent

maximizes the success rate

27
New cards

bounded optimality

optimality when considering only those agents that can be implemented on a specific machine

28
New cards

symbolic reasoning agents

agents as a type of knowledge-based system to which methods from old-fashioned AI are applied, containing an explicitly represented, symbolic model of the world

29
New cards

deductive reasoning agents

agents use a formal language to create formulas that describe facts or beliefs about the world, and act by deducing the appropriate action from the current state and the set of rules
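The deduction step can be sketched as a tiny rule matcher, assuming the formulas are ground facts in a set; the vacuum-world facts and rules here are invented illustrations:

```python
# Hedged sketch of a deductive reasoning agent: rules have the form
# "if these facts hold, do this action", and the agent deduces the first
# action whose premises follow from its current state description.

rules = [({"dirt_here"}, "vacuum"),
         ({"at_a", "no_dirt"}, "move_to_b")]

def deduce_action(state):
    for premises, action in rules:
        if premises <= state:        # all premises hold in the current state
            return action
    return "no_op"

print(deduce_action({"dirt_here", "at_a"}))  # vacuum
print(deduce_action({"at_a", "no_dirt"}))    # move_to_b
```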

30
New cards

logic reasoning agents

the agent's formulas have clear (logical) meanings; the agent needs to reason not just about the next action, but also about sequences of actions and their possible outcomes

31
New cards

practical reasoning agents

agents whose reasoning is directed towards actions/the process of finding out what to do

32
New cards

practical reasoning

weighing conflicting considerations for and against competing actions; reasoning directed towards actions

33
New cards

theoretical reasoning

reasoning directed towards beliefs

34
New cards

closed world assumption

facts that are not stated as true are false
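A minimal sketch of this assumption, using an invented knowledge base of ground facts:

```python
# Hedged sketch of the closed world assumption: any fact not explicitly
# stated in the knowledge base is treated as false.

facts = {("on", "block_a", "table"), ("clear", "block_a")}

def holds(fact):
    # Absence from the knowledge base means false, not unknown.
    return fact in facts

print(holds(("on", "block_a", "table")))  # True
print(holds(("on", "block_b", "table")))  # False: never stated, so assumed false
```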

35
New cards

the transduction problem

how to translate the real world into a symbolic description that is accurate and adequate, and ready in time to be useful

36
New cards

the frame problem

how to determine which statements/logical descriptions are necessary and sufficient for describing the actions, and whether something changes when an action is performed

37
New cards

the representation/reasoning problem

how to symbolically represent information about complex real-world entities and processes, and get agents to reason with this information in time for the results to be useful

38
New cards

intentions

desires to which the agent is committed

39
New cards

three roles of intentions (Bratman)

drive means-ends reasoning, provide constraints on options, influence beliefs

40
New cards

desires

goals/aims that can be conflicting; options for an agent

41
New cards

Wooldridge's roles of intentions

drive means-ends reasoning, persist, constrain future deliberation, influence beliefs on which future practical reasoning is based

42
New cards

beliefs

current state of the world according to the agent

43
New cards

intention-belief inconsistency

having an intention which you believe you won't achieve

44
New cards

intention-belief incompleteness

having an intention without believing it will happen

45
New cards

means-ends reasoning

provide an agent with representations of goal/intention to achieve, actions it can perform, and the environment, then have it generate a plan to achieve the goal

46
New cards

proofs in deductive agents

if the statement that remains after carrying out all actions follows from the premises, the original statement is proven and the plan is thus valid given the initial state of the world

47
New cards

actions in the STRIPS planner have

a name, a pre-condition list, a delete list, an add list
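These components can be sketched as state-set operations; the blocks-world action and its lists below are illustrative, not from the course:

```python
# Hedged sketch of a STRIPS-style action: applicable when its precondition
# list holds in the state; applying it removes the delete list and adds
# the add list.

def apply_action(state, precond, delete, add):
    if not precond <= state:           # every precondition must hold
        return None                    # action not applicable
    return (state - delete) | add

state = {("on", "a", "table"), ("clear", "a"), ("handempty",)}

# Action pickup(a):
new_state = apply_action(
    state,
    precond={("on", "a", "table"), ("clear", "a"), ("handempty",)},
    delete={("on", "a", "table"), ("clear", "a"), ("handempty",)},
    add={("holding", "a")},
)
print(("holding", "a") in new_state)  # True
```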

48
New cards

deliberation

option generation and filtering; choosing between options and committing to some

49
New cards

blind commitment

continue to maintain an intention until it has been achieved

50
New cards

single-minded commitment

continue to maintain an intention until the agent believes that either the intention has been achieved, or it is no longer possible to achieve the intention

51
New cards

open-minded commitment

maintain an intention as long as it is still believed optimal

52
New cards

overcommitment to means

if an agent does not re-plan if things go wrong

53
New cards

overcommitment to ends

if an agent never reconsiders whether or not its intentions are still appropriate

54
New cards

bold agents

agents that never pause to reconsider intentions; they do well in environments that don't change much

55
New cards

cautious agents

agents that stop to reconsider after every action; they do well in environments that change a lot

56
New cards

procedural reasoning system

the first BDI agent architecture (Georgeff et al.)

57
New cards

procedural reasoning agents

agents that are equipped with a plan library, and have explicit representations of beliefs, desires, intentions

58
New cards

BDI architecture

a reasoning model with explicit representations of beliefs, desires and intentions, that implements reasoning through deliberation followed by means-ends reasoning
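A toy sketch of that two-stage loop; the beliefs, desires and plan library here are invented placeholders, and the deliberation rule is deliberately simplistic:

```python
# Hedged sketch of the BDI reasoning cycle: deliberation commits to one
# desire as an intention, then means-ends reasoning retrieves a plan for it.

beliefs = {"door_closed"}
desires = ["enter_room", "stay_outside"]
plan_library = {"enter_room": ["open_door", "walk_through"],
                "stay_outside": ["wait"]}

def deliberate(beliefs, desires):
    # Toy filter: commit to the first desire that has a plan in the library.
    for d in desires:
        if d in plan_library:
            return d
    return None

intention = deliberate(beliefs, desires)   # deliberation
plan = plan_library[intention]             # means-ends reasoning
print(intention, plan)  # enter_room ['open_door', 'walk_through']
```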

59
New cards

reactive agents

agents that decide actions very quickly ("immediately" from sensor information) and are perceived as just reacting to the environment without reasoning about it

60
New cards

emergent behaviour

complex patterns can arise from interacting simple entities

61
New cards

biological cognition

the essence of being and reacting allows complex behaviours such as problem solving, language and reasoning to emerge

62
New cards

embodied cognition

bodily interaction with the environment is primary to cognition

63
New cards

ecological niche

goals, world and sensorimotor possibilities

64
New cards

Umwelt

an agent's surrounding environment as it experiences it: what does it perceive, what can it do, what is it trying to achieve?

65
New cards

affordances

perceivable action possibilities

66
New cards

reflexes

a relationship between a specific event (stimulus) and a simple involuntary response to that event

67
New cards

taxes

directed movement in response to a stimulus (towards or away from it)

68
New cards

fixed action patterns

a sequence with a rigid order; once started, it continues until completion; triggered by a sign stimulus; performed by all members of the species

69
New cards

sequencing of innate behaviour

behaviour coordination mechanisms through (self-created) environmental stimuli

70
New cards

equilibrium

concurrent behaviours balance out (indecision)

71
New cards

dominance

one of the concurrent behaviours wins

72
New cards

cancellation

some other behaviour than the concurrent behaviours takes over

73
New cards

subsumption architecture

hierarchy of task-accomplishing behaviours
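The hierarchy can be sketched as a priority-ordered list of behaviours, where a higher layer suppresses the ones below it; the sensor fields and actions are invented examples:

```python
# Hedged sketch of a subsumption architecture: each behaviour either claims
# an action or defers; the highest-priority behaviour that fires wins.

def avoid_obstacle(sensors):
    return "turn_away" if sensors["obstacle_near"] else None

def wander(sensors):
    return "move_forward"   # lowest layer: always has something to do

# Ordered from highest to lowest priority.
layers = [avoid_obstacle, wander]

def act(sensors):
    for behaviour in layers:
        action = behaviour(sensors)
        if action is not None:   # higher layer subsumes the ones below
            return action

print(act({"obstacle_near": True}))   # turn_away
print(act({"obstacle_near": False}))  # move_forward
```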

74
New cards

traditional AI

tried to demonstrate sophisticated reasoning in impoverished domains, hoping to generalise to robust behaviour in more complex domains

75
New cards

nouvelle AI

tries to demonstrate less sophisticated tasks operating in noisy, complex domains, hoping to generalise to more complex tasks

76
New cards

Dynamic Field Theory

a neuro-inspired theory of sensorimotor cognition, and how to implement such cognition in robots; dynamics of sensory neurons drive decisions of motor neurons

77
New cards

PID control

control loop mechanism employing feedback

78
New cards

proportional control (P-term)

multiply the error signal by a constant gain; the result determines the control output sent to the actuator

79
New cards

rate of change of an error (D-term)

take the derivative of the error signal and multiply it by a constant

80
New cards

critical damping

decreases the error as quickly as possible and then settles; it overshoots at most slightly and stays stable

81
New cards

history of the error (I-term)

multiply the integral of an error over some time (its past) by a constant
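The three terms from the PID cards above can be combined in a minimal discrete-time sketch; the gains, error value and time step are arbitrary example numbers:

```python
# Hedged sketch of a discrete PID controller: the P-term scales the error,
# the I-term its accumulated history, and the D-term its rate of change.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt                  # history of the error
        derivative = (error - self.prev_error) / dt  # rate of change
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=1.0, ki=0.1, kd=0.05)
out = pid.update(error=2.0, dt=0.1)
# P + I + D = 2.0 + 0.02 + 1.0, so out ≈ 3.02
```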

82
New cards

Kalman filter

the best estimate of the current position is obtained by predicting the position from the previous position and the elapsed time, and combining this prediction with the noisy measurements of the sensors
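A one-dimensional predict-then-update step can be sketched as follows; all positions, velocities and noise variances are invented numbers:

```python
# Hedged sketch of a 1-D Kalman filter step: predict the position from the
# previous estimate and elapsed time, then blend the prediction with a noisy
# measurement, weighting each by its (inverse) variance via the Kalman gain.

def kalman_step(x, p, velocity, dt, q, z, r):
    """x, p: previous estimate and its variance; q, r: process and
    measurement noise variances; z: the new sensor measurement."""
    # Predict: move the estimate forward; uncertainty grows.
    x_pred = x + velocity * dt
    p_pred = p + q
    # Update: combine prediction and measurement.
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

x, p = kalman_step(x=0.0, p=1.0, velocity=1.0, dt=1.0, q=0.5, z=1.2, r=1.5)
# prediction 1.0, gain 1.5/3.0 = 0.5, estimate 1.0 + 0.5*0.2 ≈ 1.1
print(round(x, 6), p)  # 1.1 0.75
```

Note how the updated variance (0.75) is smaller than both the prediction's (1.5) and the measurement's (1.5): combining the two sources reduces uncertainty.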
