COMP 5600 Exam 1


107 Terms

1
New cards

Turing Test

One method of assessing the strength of an artificial intelligence, in which a human judge tries to decide whether the intelligence at the other end of a text chat is a human or a machine.

2
New cards

Chinese Room Argument

An argument against the Turing test (and functionalism) that contends that computers are merely syntax-manipulating devices and have no grasp of semantics and the meaning of what they do.

3
New cards

Components of an AI System

An agent perceives its environment through sensors and acts on the environment through actuators.

Human: sensors are eyes, ears, actuators (effectors) are hands, legs, mouth.

Robot: sensors are cameras, sonar, lasers, lidar, and bump sensors; effectors are grippers, manipulators, and motors.

The agent's behavior is described by its agent function, which maps percepts to actions.
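As a minimal illustrative sketch (not from the slides), the agent function can be modeled as a mapping from the percept history to an action; agent_function, policy, and percept_history are hypothetical names:

def agent_function(percept, percept_history, policy):
    # The agent's behavior: map the percept sequence seen so far to an action.
    percept_history.append(percept)
    return policy(tuple(percept_history))  # policy: percept sequence -> action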

4
New cards

Agent

An entity (software, hardware, hybrid, ...) that can perceive its environment and act on it based on those percepts

5
New cards

Rational agent

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

6
New cards

PAGE method

Percepts, Actions, Goals, Environment

7
New cards

Percepts

Percepts refer to the sensory inputs or information that the AI agent receives from its environment through various sensors.

8
New cards

Actions

Actions represent the possible operations or behaviors that the AI agent can perform to interact with its environment or achieve its goals.

9
New cards

Goals

Goals are the objectives or desired outcomes that the AI agent aims to achieve within its environment.

10
New cards

Environment

The environment encompasses the external surroundings and conditions in which the AI agent operates and interacts.

11
New cards

Environment Properties

Fully observable vs. partially observable

Deterministic vs. stochastic / strategic

Episodic vs. sequential

Static vs. dynamic

Discrete vs. continuous

Single agent vs. multiagent

12
New cards

Fully observable vs partially observable

Fully observable: the agent's sensors give access to the complete state of the relevant environment at each point in time. Partially observable: they do not.

13
New cards

Deterministic vs. Stochastic

•Deterministic Environment: The next state of the environment is completely determined by the current state and the actions taken by the agent. Chess, with its deterministic moves and outcomes, represents a deterministic environment.

• Stochastic Environment: The outcomes are probabilistic, and the same action in the same state may lead to different results with certain probabilities. Backgammon, with its reliance on dice rolls introducing an element of chance, represents a stochastic environment.

14
New cards

Static vs dynamic

Dynamic: the environment can change while the agent is deliberating. Static: it cannot.

15
New cards

Discrete vs continuous

• Discrete Environment: Both the state space and the action space are discrete, with a finite number of possible states and actions.

• Continuous Environment: The state and action spaces are continuous, with an infinite number of possible states and actions.

16
New cards

Single vs multi agent

How do we distinguish an agent from the environment?

If the other entity's behavior is best described as maximizing a performance measure that depends on our agent's behavior, then it is an agent and the environment is multi-agent.

17
New cards

Agent Types

•Simple reflex agents

•Reflex agents with state

•Goal-based agents

•Utility-based agents

•Learning agent

18
New cards

Simple reflex agents

Condition-action rules applied to the current percept only; the environment must be fully observable

•Use simple "if then" rules

•Can be short sighted

Ex: Roomba

SimpleReflexAgent(percept)

state = InterpretInput(percept) # abstract the raw percept into a state description

rule = RuleMatch(state, rules) # find a condition-action rule matching the state

action = RuleAction(rule) # look up the action part of the rule

return action

19
New cards

Reflex Agent With State

•Store previously-observed information

•Can reason about unobserved aspects of current state

ReflexAgentWithState(percept)

state = UpdateState(state, action, percept) # fold the latest percept (and last action) into the stored state

rule = RuleMatch(state, rules)

action = RuleAction(rule)

return action

EXPLICIT MODELS OF THE ENVIRONMENT

•Blackbox models

•Factored models

•Logical models

•Probabilistic models

20
New cards

Goal-Based Agents

•Goal reflects desires of agents

•May project actions to see if consistent with goals

•Takes time, world may change during reasoning

•It is not always obvious what action to do now given a set of goals

•You woke up in the morning. You want to attend a class. What should your action be?

•Search (Find a path from the current state to goal state; execute the first op)

Planning (does the same for structured, non-blackbox state models)

21
New cards

Utility-Based Agents

•Evaluation function to measure the utility

•f(state) -> value

•Useful for evaluating competing goals

•Examples:

•Decision Theoretic Planning

Sequential Decision Problems

22
New cards

Learning Agents

•What can be learned?

•Any of the boxes representing the agent's knowledge

•action descriptions, effect probabilities, causal relations in the world (and the probabilities of causation), utility models (sort of, through credit assignment), sensor data interpretation models

•What feedback is available?

•Supervised, unsupervised, "reinforcement" learning

•Credit assignment problem

•What prior knowledge is available?

•the agent's head is a blank slate, or it has pre-existing knowledge

23
New cards

Explicit knowledge

Inference-focused

•Assume that the model is known

•Representations for states and actions exist

•Focus only on the inference

•No learning yet.

•Good for known (domain-specific) environments (chess, Go, etc.)

•Much classical AI has followed this approach

24
New cards

Tacit knowledge

•Assume agent has no prior knowledge

•Learn models and representations for state and actions

•Reflex agents!

•Focus only on learning

•Very good for domains (NLP, Vision, Robotics) with no explicit knowledge

•Recent AI research has focused on this approach

25
New cards

System 1

•Fast, automatic, intuitive processing that requires little conscious attention

•Examples: recognizing a face, reading a word on a billboard, braking reflexively

•In this course's framing, corresponds to tacit knowledge and reflex agents: learned models rather than explicit inference

26
New cards

System 2

•Everything else that requires more focused attention and complex decision-making skills

•Examples:

•Find the voice of the speaker in a noisy room (Cocktail party problem)

•Remember the name of the person you are meeting after a long time

•Park in a crowded lot

•Check for logical/factual correctness in a technical document

•These tasks require focus for good performance; less attention brings poor performance. They also require integration with System 1 functions.

27
New cards

Static environment

Unchanging surroundings in which one navigates

28
New cards

Dynamic environment

An environment that can change while the agent is deliberating

29
New cards

Deterministic Environment

When the next state of the environment is completely and only determined by the current state and the action taken by the agent.

30
New cards

Episodic Environment

When the agent's current action is independent of its past actions.

31
New cards

representational dimensions

Dimensions used to classify domains

Deterministic vs stochastic

Static vs sequential

Representation schemes (states vs. features/propositions vs. relations)

32
New cards

Deterministic domains

operates on a pre-defined set of rules and algorithms, producing the same output for the same input, making it highly predictable and consistent

33
New cards

Stochastic domain

Randomness or probability is part of the domain: the same action in the same state may lead to different outcomes, described with probabilities.

34
New cards

Static Domain

The agent needs to take a single action

35
New cards

Sequential domain

The agent needs to take a sequence of actions

• navigate through an environment to reach a goal state

36
New cards

Explicit states

The domain is modeled directly as a set of explicit states, one for each way the world could be

37
New cards

Feature-based representation

States are described in terms of features (propositions); each state assigns a value to every feature

38
New cards

Relations-based representation

States are described in terms of objects and relationships; there is a proposition for each relationship on each tuple of objects

39
New cards

Other dimensions of representational complexity

Flat vs. hierarchical representation

• Knowledge given vs. knowledge learned from experience

• Goals vs. complex preferences

• Single-agent vs. multi-agent

• Perfect rationality vs. bounded rationality

40
New cards

Flat representation

Representation via single level of abstraction

41
New cards

Hierarchical representation

Multiple levels of abstraction

42
New cards

Knowledge given

Providing an agent with a model of the world once

43
New cards

Knowledge learned

The agent can learn how the world works based on experience provided.

44
New cards

Goal based agent

An agent may have a goal that it wants to achieve

E.g., there is some state or set of states of the world that the agent wants to be in

• E.g., there is some proposition or set of propositions that the agent wants to make true

45
New cards

Preference based agent

An agent may have preferences

• E.g., a preference/utility function describes how happy the agent is in each state of the world

• Agent's task is to reach a state which makes it as happy as possible

• Preferences can be complex

• E.g., diagnostic assistant faces multi-objective problem

• Life expectancy, suffering, risk of side effects, costs, ...

46
New cards

Perfect rationality

The agent can derive what the best course of action is (making a 'perfect' plan)

47
New cards

Bounded rationality

The agent must make good decisions based on its perceptual, computational, and memory limitations

48
New cards

Uninformed Search

Start State: Possible state(s) that your agent can start from

• Goal State(s): Target state(s) for your agent to reach

• Path: A sequence of states connected by the sequence of actions

• Frontier: The set of paths which could be explored next by your agent

• Predecessor: Any node that is higher up on the tree than the one being considered

• Successor: Any node that is a child of a node, or a child of a child, etc.

• Solution: A path from the initial state to the goal state

49
New cards

Depth First Search

Depth-first search treats the frontier as a stack

• It always selects one of the last elements added to the frontier.

• If the list of paths on the frontier is [p1, p2, . . .]

• First, p1 is selected.

• Paths that extend p1 are added to the front of the stack (in front of p2).

• p2 is only selected when all paths from p1 have been explored

Incomplete, not optimal; time complexity O(b^m), space complexity O(bm), for branching factor b and maximum depth m

50
New cards

Breadth First Search

Breadth-first search treats the frontier as a queue

• It always selects one of the earliest elements added to the frontier.

• If the list of paths on the frontier is [p1, p2, . . .]

• First, p1 is selected.

• Its neighbors are added to the end of the queue (behind p2, etc.).

• p2 is selected next

Complete; optimal if path cost is a non-decreasing function of the depth of the node; time complexity O(b^d), space complexity O(b^d), for branching factor b and shallowest-goal depth d
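The same loop implements both searches; only the frontier's container differs. A minimal sketch (not the lecture's code; is_goal and neighbors are placeholder callables, and there is no cycle checking, as in the tree-search formulation):

from collections import deque

def graph_search(start, is_goal, neighbors, use_stack=True):
    # The frontier holds paths; a stack (LIFO) yields DFS, a queue (FIFO) yields BFS.
    frontier = deque([[start]])
    while frontier:
        path = frontier.pop() if use_stack else frontier.popleft()
        node = path[-1]
        if is_goal(node):
            return path                  # solution: a path from start to a goal
        for n in neighbors(node):
            frontier.append(path + [n])  # extend the selected path by one step
    return None                          # frontier exhausted: no solution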

51
New cards

Uniform Cost Search

Use a priority queue to order nodes on the Frontier list, sorted by path cost

• Let g(n) = cost of the path from start node s to current node n

• Sort nodes by increasing value of g

Called Dijkstra's Algorithm in the algorithms literature

Complete, Optimal; time and space complexity O(b^(1 + ⌊C*/ε⌋)), where C* is the cost of the best goal path and ε is the smallest action cost
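A minimal uniform-cost search sketch using Python's heapq, assuming a hypothetical graph dict that maps each node label to a list of (neighbor, step_cost) pairs:

import heapq

def uniform_cost_search(graph, start, goal):
    # Priority queue of (g, node, path), popped in order of increasing path cost g.
    frontier = [(0, start, [start])]
    best_g = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(nbr, float('inf')):  # found a cheaper path to nbr
                best_g[nbr] = new_g
                heapq.heappush(frontier, (new_g, nbr, path + [nbr]))
    return None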

52
New cards

Iterative Deepening Search

Requires modification to DFS search algorithm:

• do DFS to depth 1 and treat all children of the start node as leaves

• if no solution found, do DFS to depth 2

• repeat by increasing "depth bound" until a solution found

• Start node is at depth 0

Complete; optimal if path cost is a non-decreasing function of the depth of the node; time complexity O(b^d), space complexity O(bd)
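A sketch of the modification described above, reusing the hypothetical is_goal and neighbors callables from the earlier search sketch:

def depth_limited(node, is_goal, neighbors, limit, path=None):
    # DFS that treats nodes at depth == limit as leaves.
    path = path or [node]
    if is_goal(node):
        return path
    if limit == 0:
        return None
    for n in neighbors(node):
        result = depth_limited(n, is_goal, neighbors, limit - 1, path + [n])
        if result:
            return result
    return None

def iterative_deepening(start, is_goal, neighbors, max_depth=50):
    # Re-run depth-limited DFS with an increasing depth bound (start node is depth 0).
    for limit in range(max_depth + 1):
        result = depth_limited(start, is_goal, neighbors, limit)
        if result:
            return result
    return None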

53
New cards

Informed search

Blind search algorithms do not take into account the goal until they are at a goal node.

• Rigid procedure with no knowledge of the cost of a given node to the goal.

• Often there is extra knowledge that can be used to guide the search:

• an estimate of the distance/cost from node n to a goal node.

• This estimate is called a search heuristic.

• Informed searches use domain knowledge to guide selection of the best path to continue searching

54
New cards

Search Heuristic

h(n), an estimate of the cost of the optimal path from node n to a goal node

• uses domain-specific information in some way

• is computable from the current state description

• it estimates

• the "goodness" of node n

• how close node n is to a goal

• the cost of the minimal-cost path from node n to a goal state

55
New cards

Best-First Search

Sort nodes in the Frontier list by increasing values of an evaluation function, f(n), that incorporates domain-specific information

• This is a generic way of referring to the class of informed search methods

56
New cards

Greedy Best-First Search

• Use as an evaluation function, f(n) = h(n), sorting nodes in the Frontier list by increasing values of f

• Selects the node to expand that is believed to be closest (i.e., smallest f value) to a goal node

57
New cards

Beam Search

• Use an evaluation function f(n) = h(n) as in greedy best-first search, but restrict the maximum size of the Frontier list to a constant, k

• Only keep the k best nodes as candidates for expansion, and throw away the rest

• More space efficient than Greedy Search, but may throw away a node on a solution path

• Not complete

• Not optimal/admissible

58
New cards

Algorithm A Search

• Use as an evaluation function f(n) = g(n) + h(n), where g(n) is the minimal cost path from the start to current node n (as defined in Uniform Cost Search)

• The g term adds a "breadth-first component" to the evaluation function

• Nodes on the Frontier are ranked by the estimated cost of a solution, where g(n) is the cost from the start node to node n, and h(n) is the estimated cost from node n to a goal

• Not complete

• Not optimal/admissible

• (In the lecture's example graph, Algorithm A never expands node E because h(E) = ∞)

59
New cards

Algorithm A* Search

• Use the same evaluation function used by Algorithm A, except add the constraint that for all nodes n in the search space, h(n) ≤ h*(n), where h*(n) is the true cost of the minimal-cost path from n to a goal

• The cost to the nearest goal is never over-estimated

• Complete

• Optimal / Admissible

Downsides

• A* can use lots of memory: O(number of states)

• For really big search spaces, A* will run out of memory
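A compact A* sketch, reusing the hypothetical graph structure from the uniform-cost sketch plus a heuristic callable h; admissibility of h is assumed, not checked:

import heapq

def a_star(graph, start, goal, h):
    # Priority queue ordered by f = g + h; g is the cost so far, h the heuristic.
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(nbr, float('inf')):  # cheaper route to nbr found
                best_g[nbr] = new_g
                heapq.heappush(frontier, (new_g + h(nbr), new_g, nbr, path + [nbr]))
    return None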

60
New cards

Admissible heuristic

A heuristic that never overestimates the cost to reach the goal.

61
New cards

Backtracking Search

• Depth-first search algorithm

• Goes down one variable at a time

• At a dead end, back up to the last variable whose value can be changed without violating any constraints, and change it

• If you backed up to the root and tried all values, then there are no solutions

• Algorithm is complete

• Will find a solution if one exists

• Will expand the entire (finite) search space if necessary

• Depth-limited search with limit = n (the number of variables)
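A minimal backtracking sketch for a CSP, assuming hypothetical variables and domains containers and a consistent(assignment) constraint check:

def backtrack(assignment, variables, domains, consistent):
    if len(assignment) == len(variables):
        return assignment                 # every variable has a value: solution
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):        # only extend consistent partial assignments
            result = backtrack(assignment, variables, domains, consistent)
            if result:
                return result
        del assignment[var]               # dead end: undo and try the next value
    return None                           # no value works: backtrack further up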

62
New cards

Forward checking

• At start, for each variable, record the current set of all possible legal values for it

• When you assign a value to a variable in the search, update the set of legal values for all unassigned variables. Backtrack immediately if you empty a variable's set of possible values.

• (Slide exercise, graph coloring) What happens if we do DFS with the order of assignments as B tried first, then G, then R?

63
New cards

Advanced Search

• Search over possible states to find the "best" one!

• Not necessarily the optimal one

• How?

• Different algorithms like: Hill Climbing, Simulated Annealing, Genetic Algorithms

64
New cards

Hill climbing

• Very simple idea:

• Start from some state s

• Move to a neighbor t with better score. Repeat

• Question: what's a neighbor?

• Defined by you!

• The neighborhood of a state is the set of neighbors

• Also called 'move set'

• Similar to successor function
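A minimal hill-climbing sketch, assuming a hypothetical score function f to maximize and a neighbors(state) callable returning the move set:

def hill_climb(state, f, neighbors):
    # Greedy local search: repeatedly move to the best-scoring neighbor.
    while True:
        best = max(neighbors(state), key=f, default=None)
        if best is None or f(best) <= f(state):
            return state        # local optimum: no strictly better neighbor
        state = best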

65
New cards

Anneal:

To subject (glass or metal) to a process of heating and slow cooling in order to toughen and reduce brittleness.

66
New cards

Simulated Annealing

1. Pick an initial state s

2. Randomly pick t in neighbors(s)

3. IF f(t) is better THEN accept s <- t

4. ELSE (t is worse than s) accept s <- t anyway with some small probability, typically controlled by a decreasing "temperature"

5. Goto 2 until bored
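A sketch of that loop in Python, assuming maximization of a hypothetical score f and a simple geometric cooling schedule:

import math, random

def simulated_annealing(s, f, neighbors, temp=1.0, cooling=0.995, steps=10000):
    for _ in range(steps):
        t = random.choice(neighbors(s))
        delta = f(t) - f(s)
        # Always accept improvements; accept worse moves with probability e^(delta/temp).
        if delta > 0 or random.random() < math.exp(delta / temp):
            s = t
        temp *= cooling               # cool down: worse moves become rarer over time
    return s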

67
New cards

Genetic algorithms

Inspired by natural evolution: living things evolved into more successful organisms

• offspring exhibit some traits of each parent

• hereditary traits are determined by genes

• genetic instructions are contained in chromosomes

• chromosomes are strands of DNA

• DNA is composed of base pairs (A, C, G, T) that, in meaningful combinations, encode hereditary traits

68
New cards

Knowledge base

A set of representations of facts believed true

Each representation is called a sentence

Sentences are expressed in a knowledge representation language

69
New cards

Logics

Formal languages for representing information so that conclusions can be drawn

70
New cards

Syntax

Defines the sentences in the language

71
New cards

Semantics

Define the sentence's meaning

72
New cards

Entailment

One thing follows from another

73
New cards

Model

Formally structured worlds with respect to which truth can be evaluated

74
New cards

Soundness

A deductive argument is sound when it is valid and all its premises are actually true. For an inference procedure: every sentence it derives from a knowledge base is actually entailed by that knowledge base (KB ⊢ α implies KB ⊨ α).

75
New cards

Completeness

Whenever a sentence is entailed by a knowledge base, that sentence can also be derived from the knowledge base (KB ⊨ α implies KB ⊢ α)

76
New cards

Knowledge representation

Defined by syntax and semantics

Syntax: defines all possible sequences of symbols that constitute sentences of the language

Semantics: determines facts in the world to which the sentences refer

77
New cards

Ontology

The study of what there is—an inventory of what exists. An ontological commitment is a commitment to an existence claim.

78
New cards

Epistemology

A major branch of philosophy that concerns the forms, nature, and preconditions of knowledge.

79
New cards

Valid sentence/Tautology

A sentence that's true under all interpretations, no matter what the world is actually like

80
New cards

Inconsistent sentence/contradiction

A sentence that's False under all interpretations. The world is never like what it describes

81
New cards

P entails Q

written P |= Q, means that whenever P is True, so is Q

82
New cards

P implies Q

P -> Q

If P then Q

Equivalent to Not(P) Or Q
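A quick sanity check of this equivalence (not from the deck), enumerating all four rows of the truth table in Python:

from itertools import product

for p, q in product([False, True], repeat=2):
    p_implies_q = q if p else True      # "if P then Q": false only when P true, Q false
    assert p_implies_q == ((not p) or q)
print("P -> Q is equivalent to Not(P) Or Q on every row")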

83
New cards

P iff Q

P <-> Q

P if and only if Q

Equivalent to (P -> Q) ^ (Q -> P)

84
New cards

Resolution

A valid inference rule producing a new clause implied by two clauses containing complementary literals; e.g., from (A ∨ B) and (¬B ∨ C), resolution derives (A ∨ C)

85
New cards

Proof

A sequence of sentences, where each is a premise (i.e., a given) or is derived from earlier sentences in the proof by an inference rule

86
New cards

Theorem

The last sentence of a proof: the sentence we set out to prove

87
New cards

Pros of Propositional logic

•Simple KR language good for many problems

•Lays foundation for higher logics (e.g., FOL)

•Reasoning is decidable, though NP-complete; efficient techniques exist for many problems

88
New cards

Cons of Propositional logic

•Not expressive enough for most problems

•Even when it is, it can be very “unconcise”

•Hard to identify individuals (e.g., Mary, 3)

•Can’t directly represent properties of individuals or relations between them (e.g., “Bill height tall”)

•Generalizations, patterns, regularities hard to represent (e.g., “all triangles have 3 sides”)

First-Order Logic (FOL) represents this information via relations, variables & quantifiers

89
New cards

First Order Logic

Models the world in terms of

•Objects, which are things with individual identities

•Properties of objects that distinguish them from others

•Relations that hold among sets of objects

•Functions, a subset of relations where there is only one "value" for any given "input"

90
New cards

Quantifiers

Universal and Existential

91
New cards

Term

A constant or variable symbol, or an n-place function applied to n terms

92
New cards

Ground terms

Have no variables in them

93
New cards

Atomic sentences

An n-place predicate applied to n terms; atomic sentences are either true or false, e.g.:

•green(Kermit)

•between(Philadelphia, Baltimore, DC)

•loves(X, mother(X))

94
New cards

Complex sentences

•are formed from atomic sentences connected by logical connectives:

Not(P), P and Q, P or Q, P implies Q, P iff Q

where P and Q are sentences

95
New cards

Unary predicates

Typically encode a type

Ex:

•Dolphin(flipper): flipper is a kind of dolphin

•Green(kermit): kermit is a kind of green thing

•Integer(x): x is a kind of integer

96
New cards

Non unary predicates

Typically encode relations

Ex:

•Loves(john, mary)

•Greater_than(2, 1)

•Between(newYork, philadelphia, baltimore)

•hasName(John, "John Smith")

97
New cards

Well-formed formula (wff)

A sentence with no free variables; all variables are bound by either a universal or existential quantifier

98
New cards

Universal quantification

•forAll(x)P(x) means P holds for all values of x in domain associated with variable

•E.g., ForAll(x) dolphin(x) implies mammal(x)

99
New cards

Existential quantification

•Exists(x)P(x) means P holds for some value of x in domain associated with variable

•E.g., Exists(x) mammal(x) and lays_eggs(x)

•This lets us make a statement about some object without identifying it

100
New cards

Axioms

Facts and rules that capture (important) facts & concepts in a domain; axioms are used to prove theorems

•Mathematicians dislike unnecessary (dependent) axioms, i.e. ones that can be derived from others

•Dependent axioms can make reasoning faster, however

•Choosing a good set of axioms is a design problem