Turing Test
One method of determining the strength of artificial intelligence, in which a human tries to decide if the intelligence at the other end of a text chat is human.
Chinese Room Argument
An argument against the Turing test (and functionalism) that contends that computers are merely syntax-manipulating devices and have no grasp of semantics and the meaning of what they do.
Components of an AI System
An agent perceives its environment through sensors and acts on the environment through actuators.
Human: sensors are eyes, ears, actuators (effectors) are hands, legs, mouth.
Robot: sensors are cameras, sonar, lasers, ladar, bump, effectors are grippers, manipulators, motors
The agent's behavior is described by its agent function, which maps percept sequences to actions.
Agent
An entity (software, hardware, hybrid, ...) that can perceive its environment and act on those percepts
Rational agent
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
PAGE method
Percepts, Actions, Goals, Environment
Percepts
Percepts refer to the sensory inputs or information that the AI agent receives from its environment through various sensors.
Actions
Actions represent the possible operations or behaviors that the AI agent can perform to interact with its environment or achieve its goals.
Goals
Goals are the objectives or desired outcomes that the AI agent aims to achieve within its environment.
Environment
The environment encompasses the external surroundings and conditions in which the AI agent operates and interacts.
Environment Properties
Fully observable vs. partially observable
Deterministic vs. stochastic / strategic
Episodic vs. sequential
Static vs. dynamic
Discrete vs. continuous
Single agent vs. multiagent
Fully observable vs partially observable
Fully observable: the agent's sensors give access to the complete state of the relevant environment at each point in time; otherwise the environment is only partially observable.
Deterministic vs. Stochastic
•Deterministic Environment: The next state of the environment is completely determined by the current state and the actions taken by the agent. Chess, with its deterministic moves and outcomes, represents a deterministic environment.
• Stochastic Environment: The outcomes are probabilistic, and the same action in the same state may lead to different results with certain probabilities. Backgammon, with its reliance on dice rolls introducing an element of chance, represents a stochastic environment.
Static vs dynamic
Dynamic: the environment can change while the agent is deliberating. Static: it does not.
Discrete vs continuous
• Discrete Environment: Both the state space and the action space are discrete, with a finite number of possible states and actions.
• Continuous Environment: The state and action spaces are continuous, with an infinite number of possible states and actions.
Single vs multi agent
How do we distinguish another agent from the environment? If the other entity's behavior is best described as maximizing a performance measure that depends on our agent's behavior, then it is an agent and the environment is multi-agent.
Agent Types
•Simple reflex agents
•Reflex agents with state
•Goal-based agents
•Utility-based agents
•Learning agent
Simple reflex agents
Select actions using condition-action rules applied to the current percept only; the environment must be fully observable
•Use simple "if then" rules
•Can be short sighted
Ex: Roomba
SimpleReflexAgent(percept)
state = InterpretInput(percept)
rule = RuleMatch(state, rules)
action = RuleAction(rule)
Return action
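A minimal Python sketch of this idea (the vacuum-style percepts and rule table are hypothetical, not from the course):

# Simple reflex agent: chooses an action from condition-action rules
# applied to the current percept only. Rules and percept format are illustrative.
RULES = {
    ("A", "dirty"): "suck",
    ("B", "dirty"): "suck",
    ("A", "clean"): "move-right",
    ("B", "clean"): "move-left",
}

def simple_reflex_agent(percept):
    location, status = percept           # interpret the input
    return RULES[(location, status)]     # match a rule and return its action

print(simple_reflex_agent(("A", "dirty")))   # -> suck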
Reflex Agent With State
•Store previously-observed information
•Can reason about unobserved aspects of current state
ReflexAgentWithState(percept)
state = UpdateState(state,action,percept)
rule = RuleMatch(state, rules)
action = RuleAction(rule)
Return action
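A sketch of the stateful variant, assuming a hypothetical two-cell world where the agent remembers which cells it has already seen clean:

# Reflex agent with state: keeps an internal model updated from percepts,
# so it can act on aspects of the world it is not currently observing.
class ReflexAgentWithState:
    def __init__(self):
        self.known_clean = set()             # internal state

    def act(self, percept):
        cell, status = percept
        if status == "dirty":
            self.known_clean.add(cell)       # after sucking, this cell will be clean
            return "suck"
        self.known_clean.add(cell)
        # move toward a cell not yet known to be clean, else stop (toy rule)
        return "stop" if self.known_clean >= {"A", "B"} else "move"

agent = ReflexAgentWithState()
print(agent.act(("A", "dirty")))   # -> suck
print(agent.act(("A", "clean")))   # -> move (cell B is still unknown)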
Explicit Models of the Environment
•Blackbox models
•Factored models
•Logical models
•Probabilistic models
Goal-Based Agents
•Goal reflects desires of agents
•May project actions to see if consistent with goals
•Takes time, world may change during reasoning
•It is not always obvious what action to do now given a set of goals
•You woke up in the morning. You want to attend a class. What should your action be?
• Search (find a path from the current state to a goal state; execute the first op)
• Planning (does the same for structured, non-blackbox state models)
Utility-Based Agents
•Evaluation function to measure the utility
•f(state) -> value
•Useful for evaluating competing goals
•Examples:
• Decision-Theoretic Planning
• Sequential Decision Problems
Learning Agents
•What can be learned?
•Any of the boxes representing the agent's knowledge
• action descriptions, effect probabilities, causal relations in the world (and the probabilities of causation), utility models (sort of, through credit assignment), sensor-data interpretation models
•What feedback is available?
•Supervised, unsupervised, "reinforcement" learning
•Credit assignment problem
•What prior knowledge is available?
• Is the agent's head a blank slate, or does it start with pre-existing knowledge?
Explicit knowledge
Inference-focused
•Assume that the model is known
•Representations for states and actions exist
•Focus only on the inference
•No learning yet.
•Good for known (domain-specific) environments (chess, Go, etc.)
• Much of classical AI has followed this approach
Tacit knowledge
•Assume agent has no prior knowledge
•Learn models and representations for state and actions
•Reflex agents!
•Focus only on learning
•Very good for domains (NLP, Vision, Robotics) with no explicit knowledge
•Recent AI research has focused on this approach
System 1
• Fast, automatic, intuitive processing that requires little or no focused attention
• Examples: recognizing a familiar face, understanding simple sentences
• Corresponds to the tacit-knowledge, learned reflex-agent approach above
System 2
•Everything else that requires more focused attention and complex decision-making skills
•Examples:
•Find the voice of the speaker in a noisy room (Cocktail party problem)
•Remember the name of the person you are meeting after a long time
•Park in a crowded lot
•Check for logical/factual correctness in a technical document
• All of these require focus for good performance; less attention brings poor performance. System 2 requires integration with System 1 functions.
Static environment
Unchanging surroundings in which one navigates
Dynamic environment
An environment that can change while the agent is deliberating (the rate of change is fast relative to the agent's decision making)
Deterministic Environment
When the next state of the environment is completely determined by the current state and the action taken by the agent.
Episodic Environment
When the agent's current action is independent of its past actions.
representational dimensions
Dimensions used to classify domains
Deterministic vs stochastic
Static vs sequential
Representation schemes (states vs. propositions vs. relations)
Deterministic domains
operates on a pre-defined set of rules and algorithms, producing the same output for the same input, making it highly predictable and consistent
Stochastic domain
The introduction of randomness or probability into algorithms and models. Using probability
Static Domain
The agent needs to take a single action
Sequential domain
The agent needs to take a sequence of actions
• navigate through an environment to reach a goal state
Explicit states
The domain is modeled by enumerating its states explicitly, one state at a time
Feature-based representation
States can be described in terms of features (propositions)
Relations-based representation
There is a proposition for each relationship on each tuple of objects
Other dimensions of representational complexity
• Flat vs. hierarchical representation
• Knowledge given vs. knowledge learned from experience
• Goals vs. complex preferences
• Single-agent vs. multi-agent
• Perfect rationality vs. bounded rationality
Flat representation
Representation via single level of abstraction
Hierarchical representation
Multiple levels of abstraction
Knowledge given
Providing an agent with a model of the world once
Knowledge learned
The agent can learn how the world works based on experience provided.
Goal based agent
An agent may have a goal that it wants to achieve
• E.g., there is some state or set of states of the world that the agent wants to be in
• E.g., there is some proposition or set of propositions that the agent wants to make true
Preference based agent
An agent may have preferences
• E.g., a preference/utility function describes how happy the agent is in each state of
the world
• Agent's task is to reach a state which makes it as happy as possible
• Preferences can be complex
• E.g., diagnostic assistant faces multi-objective problem
• Life expectancy, suffering, risk of side effects, costs, ...
Perfect rationality
The agent can derive what the best course of action is (making a 'perfect' plan)
Bounded rationality
The agent must make good decisions based on its
perceptual, computational and memory limitations
Uninformed Search
• Start State: Possible state(s) that your agent can start from
• Goal State(s): Target state(s) for your agent to reach
• Path: A sequence of states connected by a sequence of actions
• Frontier: The set of paths which could be explored next by your agent
• Predecessor: Any node that is higher up in the tree than the node being considered
• Successor: Any node that is a child of a node, or a child of a child of a node, etc.
• Solution: A path from the initial state to a goal state
Depth First Search
Depth-first search treats the frontier as a stack
• It always selects one of the last elements added to the frontier.
• If the list of paths on the frontier is [p1, p2, . . .]
• First, p1 is selected.
• Paths that extend p1 are added to the front of the stack (in front of p2).
• p2 is only selected when all paths from p1 have been explored.
Incomplete; not optimal; time complexity O(b^m); space complexity O(bm)
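A compact Python sketch (the adjacency-dict graph and node names are hypothetical):

def depth_first_search(graph, start, goal):
    # Frontier is a stack of paths; the most recently added path is expanded first.
    frontier = [[start]]
    while frontier:
        path = frontier.pop()                # LIFO: take the last path added
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in path:              # avoid cycles along this path
                frontier.append(path + [nbr])
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["A"]}
print(depth_first_search(graph, "S", "G"))   # -> ['S', 'B', 'A', 'G']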
Breadth First Search
Breadth-first search treats the frontier as a queue
• It always selects one of the earliest elements added to the frontier.
• If the list of paths on the frontier is [p1, p2, . . .]
• First, p1 is selected.
• Paths that extend p1 are added to the end of the queue (after p2 and the other paths already on the frontier).
• p2 is selected next.
Complete; optimal if the path cost is a non-decreasing function of the depth of the node; time complexity O(b^d); space complexity O(b^d)
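The same search with a FIFO frontier, again over a hypothetical graph:

from collections import deque

def breadth_first_search(graph, start, goal):
    # Frontier is a queue of paths; the earliest path added is expanded first.
    frontier = deque([[start]])
    while frontier:
        path = frontier.popleft()            # FIFO: take the oldest path
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in path:
                frontier.append(path + [nbr])    # extensions go to the end of the queue
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["A"]}
print(breadth_first_search(graph, "S", "G"))     # -> ['S', 'A', 'G']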
Uniform Cost Search
Use a "Priority Queue" to order nodes on the Frontier list, sorted by
path cost
• Let g(n) = cost of path from start node s to current node n
• Sort nodes by increasing value of g
Called Dijkstra's Algorithm in the algorithms literature
Complete; optimal; time and space complexity O(b^(1 + ⌊C*/ε⌋)), where C* is the cost of the best (optimal) goal path and ε is the minimum action cost
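A sketch using Python's heapq as the priority queue (the weighted graph is hypothetical):

import heapq

def uniform_cost_search(graph, start, goal):
    # Frontier is a priority queue of paths ordered by path cost g(n).
    frontier = [(0, [start])]                         # (g, path)
    best_g = {start: 0}
    while frontier:
        g, path = heapq.heappop(frontier)             # cheapest path so far
        node = path[-1]
        if node == goal:
            return g, path
        for nbr, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(nbr, float("inf")):
                best_g[nbr] = new_g
                heapq.heappush(frontier, (new_g, path + [nbr]))
    return None

graph = {"S": [("A", 1), ("B", 5)], "A": [("B", 1)], "B": [("G", 1)]}
print(uniform_cost_search(graph, "S", "G"))   # -> (3, ['S', 'A', 'B', 'G'])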
Iterative Deepening Search
Requires modification to DFS search algorithm:
• do DFS to depth 1 and treat all children of the start node as leaves
• if no solution found, do DFS to depth 2
• repeat by increasing "depth bound" until a solution found
• Start node is at depth 0
Complete; optimal if the path cost is a non-decreasing function of the depth of the node; time complexity O(b^d); space complexity O(bd)
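A sketch of the depth-bound loop around a recursive depth-limited DFS (graph is hypothetical):

def depth_limited_dfs(graph, path, goal, limit):
    node = path[-1]
    if node == goal:
        return path
    if limit == 0:                               # depth bound reached: treat as a leaf
        return None
    for nbr in graph.get(node, []):
        if nbr not in path:
            result = depth_limited_dfs(graph, path + [nbr], goal, limit - 1)
            if result:
                return result
    return None

def iterative_deepening_search(graph, start, goal, max_depth=20):
    for depth in range(max_depth + 1):           # repeat DFS with an increasing depth bound
        result = depth_limited_dfs(graph, [start], goal, depth)
        if result:
            return result
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["A"]}
print(iterative_deepening_search(graph, "S", "G"))   # -> ['S', 'A', 'G']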
Informed search
Blind search algorithms do not take into account the goal until they are at a goal node.
• Rigid procedure with no knowledge of the cost of a given node to the goal.
• Often there is extra knowledge that can be used to guide the search:
• an estimate of the distance/cost from node n to a goal node.
• This estimate is called a search heuristic.
• Informed searches use domain knowledge to guide selection of the best path to continue searching
Search Heuristic
h(n), an estimate of the cost of the optimal path from node n to a goal node
uses domain-specific info. in some way
• is computable from the current state description
• it estimates
• the "goodness" of node n
• how close node n is to a goal
• the cost of minimal cost path from node n to a goal state
Best-First Search
Sort nodes in the Frontier list by increasing values of an evaluation function, f(n), that incorporates domain-specific information
• This is a generic way of referring to the class of informed search methods
Greedy Best-First Search
• Use as an evaluation function, f(n) = h(n), sorting nodes in the Frontier list by increasing values of f
• Selects the node to expand that is believed to be closest (i.e., smallest f value) to a goal node
Beam Search
• Use an evaluation function f(n) = h(n) as in greedy best-first search,
but restrict the maximum size of the Frontier list to a constant, k
• Only keep k best nodes as candidates for expansion, and throw away
the rest
• More space efficient than Greedy Search, but may throw away a node
on a solution path
• Not complete
• Not optimal/admissible
Algorithm A Search
• Use as an evaluation function f(n) = g(n) + h(n), where g(n) is the cost of the minimal-cost path from the start to the current node n (as defined in Uniform Cost Search)
• The g term adds a "breadth-first component" to the evaluation
function
• Nodes on the Frontier are ranked by the estimated cost of a solution,
where g(n) is the cost from the start node to node n, and h(n) is the
estimated cost from node n to a goal
• Not complete
• Not optimal/admissible
• Any node n with h(n) = ∞ is never expanded (e.g., Algorithm A never expands a node E for which h(E) = ∞)
Algorithm A* Search
• Use the same evaluation function used by Algorithm A, except add
the constraint that for all nodes n in the search space, h(n) ≤ h*(n),
where h*(n) is the true cost of the minimal cost path from n to a goal
• The cost to the nearest goal is never over-estimated
• Complete
• Optimal / Admissible
Downsides
• A* can use lots of memory: O(number of states)
• For really big search spaces, A* will run out of memory
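A sketch built on the uniform-cost skeleton above, ordering the frontier by f(n) = g(n) + h(n); the graph and heuristic values are hypothetical, with h admissible:

import heapq

def a_star_search(graph, start, goal, h):
    frontier = [(h(start), 0, [start])]                # (f, g, path)
    best_g = {start: 0}
    while frontier:
        f, g, path = heapq.heappop(frontier)           # smallest f = g + h first
        node = path[-1]
        if node == goal:
            return g, path
        for nbr, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(nbr, float("inf")):
                best_g[nbr] = new_g
                heapq.heappush(frontier, (new_g + h(nbr), new_g, path + [nbr]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 3)], "B": [("G", 1)]}
h = {"S": 3, "A": 2, "B": 1, "G": 0}.get               # never overestimates the true cost
print(a_star_search(graph, "S", "G", h))               # -> (4, ['S', 'A', 'G'])

Setting h(n) = 0 everywhere reduces this to uniform cost search; using f(n) = h(n) alone would give greedy best-first search.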
Admissible heuristic
A heuristic that never overestimates the cost to reach the goal.
Backtracking Search
• Depth-first search algorithm
• Goes down one variable at a time
• In a dead end, back up to last variable whose value can be changed without
violating any constraints, and change it
• If you backed up to the root and tried all values, then there are no solutions
• Algorithm is complete
• Will find a solution if one exists
• Will expand the entire (finite) search space if necessary
• Equivalent to depth-limited search with limit = n, the number of variables
Forward checking
• At start, for each variable, record the current set of all possible legal values for it
• When you assign a value to a variable in the search, update the set of legal values for all unassigned variables. Backtrack immediately if you empty a variable's set of possible values.
• Example (map coloring): what happens if we do DFS with values tried in the order Blue first, then Green, then Red?
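A compact sketch of backtracking search with forward checking on a tiny, hypothetical map-coloring CSP (variable names, domains, and constraints are illustrative):

def backtracking_fc(assignment, domains, neighbors):
    # Backtracking with forward checking on a binary "adjacent regions differ" CSP.
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)    # pick an unassigned variable
    for value in domains[var]:
        # forward checking: remove this value from unassigned neighbors' domains
        pruned = {n: [x for x in domains[n] if x != value]
                  for n in neighbors[var] if n not in assignment}
        if all(pruned.values()):                              # no neighbor domain emptied
            result = backtracking_fc({**assignment, var: value},
                                     {**domains, var: [value], **pruned}, neighbors)
            if result:
                return result
        # otherwise backtrack and try the next value
    return None

domains = {"WA": ["R", "G", "B"], "NT": ["R", "G", "B"], "SA": ["R", "G", "B"]}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
print(backtracking_fc({}, domains, neighbors))   # -> {'WA': 'R', 'NT': 'G', 'SA': 'B'}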
Advanced Search
• Search over possible states to find the "best" one!
• Not necessarily the optimal one
• How?
• Different algorithms like: Hill Climbing, Simulated Annealing, Genetic Algorithms
Hill climbing
• Very simple idea:
• Start from some state s
• Move to a neighbor t with better score. Repeat
• Question: what's a neighbor?
• Defined by you!
• The neighborhood of a state is the set of neighbors
• Also called 'move set'
• Similar to successor function
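A minimal sketch with a made-up one-dimensional objective; the neighbor function is defined by the user, as noted above:

import random

def hill_climbing(initial, neighbors, score, max_steps=1000):
    # Repeatedly move to a better-scoring neighbor; stop at a local optimum.
    current = initial
    for _ in range(max_steps):
        better = [n for n in neighbors(current) if score(n) > score(current)]
        if not better:
            return current                   # no better neighbor: local optimum
        current = random.choice(better)
    return current

score = lambda x: -(x - 7) ** 2              # hypothetical objective, maximized at x = 7
neighbors = lambda x: [x - 1, x + 1]
print(hill_climbing(0, neighbors, score))    # -> 7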
Anneal:
To subject (glass or metal) to a process of heating and slow cooling in order to toughen and reduce brittleness.
Simulated Annealing
1. Pick an initial state s
2. Randomly pick t in neighbors(s)
3. IF f(t) is better THEN accept s <- t
4. ELSE (t is worse than s): accept s <- t anyway with a small probability that depends on how much worse t is and on a "temperature" that decreases over time
5. Goto 2 until bored
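A sketch that makes the acceptance rule concrete: worse neighbors are accepted with probability exp(delta/T), where T (the "temperature") decreases over time. The objective and cooling schedule are hypothetical:

import math, random

def simulated_annealing(initial, neighbors, score, t0=10.0, cooling=0.95, steps=2000):
    current, temp = initial, t0
    for _ in range(steps):
        t = random.choice(neighbors(current))
        delta = score(t) - score(current)
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = t                      # accept: better, or worse with small probability
        temp = max(temp * cooling, 1e-6)     # cool down
    return current

score = lambda x: -(x - 7) ** 2              # same hypothetical objective as above
neighbors = lambda x: [x - 1, x + 1]
print(simulated_annealing(0, neighbors, score))   # usually -> 7 (stochastic)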
Genetic algorithms
Inspired by natural evolution: Living things evolved into more
successful organisms
• offspring exhibit some traits of each parent
• hereditary traits are determined by genes
• genetic instructions are contained in chromosomes
• chromosomes are strands of DNA
• DNA is composed of base pairs (A,C,G,T), when in meaningful combinations,
encode hereditary traits
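A toy sketch of the evolutionary loop (selection, single-point crossover, mutation) on the "count the 1s" problem; all parameters are illustrative:

import random

def genetic_algorithm(fitness, length=8, pop_size=20, generations=100, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]                                    # elitism: keep the two best
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)    # parents from the fitter half
            cut = random.randrange(1, length)                 # single-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]   # mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

print(genetic_algorithm(fitness=sum))    # usually -> [1, 1, 1, 1, 1, 1, 1, 1]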
Knowledge base
A set of representations of facts believed true
Each representation is called a sentence
Sentences are expressed in a knowledge representation language
Logics
Formal languages for representing information so that conclusions can be drawn
Syntax
Defines the sentences in the language
Semantics
Define the sentence's meaning
Entailment
One thing follows from another
Model
Formally structured worlds w.r.t which truth can be evaluated
Soundness
For arguments: a deductive argument is sound when it is valid and all its premises are true. For an inference procedure: every sentence it derives from a knowledge base is actually entailed by that knowledge base.
Completeness
Whenever a sentence is entailed by a knowledge base, that sentence can also be derived from the knowledge base (by the inference procedure)
Knowledge representation
Defined by
syntax and semantics
Syntax: defines all possible sequences of symbols that constitute sentences of the language
Semantics: determines facts in the world to which the sentences refer
Ontology
The study of what there is—an inventory of what exists. An ontological commitment is a commitment to an existence claim.
Epistemology
A major branch of philosophy that concerns the forms, nature, and preconditions of knowledge.
Valid sentence/Tautology
Sentence that's True under all interpretations, no matter what the world is actually like or what the semantics is.
Inconsistent sentence/contradiction
A sentence that's False under all interpretations. The world is never like what it describes
P entails Q
written P |= Q, means that whenever P is True, so is Q
P implies Q
P -> Q
If P then Q
Equivalent to Not(P) Or Q
P iff Q
P <-> Q
P if and only if Q
Equivalent to (P -> Q) ^ (Q -> P)
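Both equivalences can be verified mechanically with a truth table; a tiny Python check (illustrative, not from the deck):

from itertools import product

def implies(p, q):
    return not (p and not q)        # material implication: false only when P is true and Q is false

for p, q in product([False, True], repeat=2):
    assert implies(p, q) == ((not p) or q)                    # P -> Q   is  Not(P) Or Q
    assert (p == q) == (implies(p, q) and implies(q, p))      # P <-> Q  is  (P -> Q) ^ (Q -> P)
print("Both equivalences hold for all four truth assignments.")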
Resolution
A valid inference rule producing a new clause implied by two clauses containing complementary literals
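For example, resolving (P Or Q) with (Not(Q) Or R) on the complementary literals Q and Not(Q) yields the new clause (P Or R), which is entailed by the two parent clauses.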
Proof
A sequence of sentences, where each is a premise (i.e., a given) or is derived from earlier sentences in the proof by an inference rule
Theorem
The last sentence of a proof, i.e., the sentence we set out to prove
Pros of Propositional logic
•Simple KR language good for many problems
•Lays foundation for higher logics (e.g., FOL)
•Reasoning is decidable, though NP complete; efficient techniques exist for many problems
Cons of Propositional logic
•Not expressive enough for most problems
•Even when it is, it can be very “unconcise”
•Hard to identify individuals (e.g., Mary, 3)
•Can’t directly represent properties of individuals or relations between them (e.g., “Bill height tall”)
•Generalizations, patterns, regularities hard to represent (e.g., “all triangles have 3 sides”)
First-Order Logic (FOL) represents this information via relations, variables & quantifiers
First Order Logic
Models the world in terms of
•Objects, which are things with individual identities
•Properties of objects that distinguish them from others
•Relations that hold among sets of objects
•Functions, a subset of relations where there is only one "value" for any given "input"
Quantifiers
Universal and Existential
Term
A constant or variable symbol, or n-place function of n terms
Ground terms
Have no variables in them
Atomic sentences
Sentences that are either true or false; an atomic sentence is an n-place predicate applied to n terms, e.g.:
•green(Kermit)
•between(Philadelphia, Baltimore, DC)
•loves(X, mother(X))
Complex sentences
•are formed from atomic sentences connected by logical connectives:
Not(P), P and Q, P or Q, P implies Q, P iff Q
where P and Q are sentences
Unary predicates
Typically encode a type
Ex:
•Dolphin(flipper): flipper is a kind of dolphin
•Green(kermit): kermit is a kind of green thing
•Integer(x): x is a kind of integer
Non unary predicates
Typically encode relations
Ex:
•Loves(john, mary)
•Greater_than(2, 1)
•Between(newYork, philadelphia, baltimore)
•hasName(John, "John Smith")
Well-formed formula (wff)
A sentence with no free variables; all variables are bound by either a universal or existential quantifier
Universal quantification
•forAll(x)P(x) means P holds for all values of x in domain associated with variable
•E.g., ForAll(x) dolphin(x) implies mammal(x)
Existential quantification
•Exists(x)P(x) means P holds for some value of x in domain associated with variable
•E.g., Exists(x) mammal(x) and lays_eggs(x)
•This lets us make a statement about some object without identifying it
Axioms
Facts and rules that capture (important) facts & concepts in a domain; axioms are used to prove theorems
•Mathematicians dislike unnecessary (dependent) axioms, i.e. ones that can be derived from others
•Dependent axioms can make reasoning faster, however
•Choosing a good set of axioms is a design problem