FAI - 03 - Problem Solving by Searching - Blind Search
Introduction to Problem Solving by Searching
Agenda
- Introduction
- Problem-solving agents
- Problem types
- Problem formulation
- Example problems
- Basic search algorithms
Declarative Knowledge and Search
- Declarative knowledge involves representing knowledge as statements or facts, often using formal languages.
- It creates alternatives by allowing the AI to consider different pieces of knowledge and how to use them.
- Search is a major approach in AI to explore these alternatives.
- Search is a key component in various AI fields, including:
- Knowledge representation
- Planning
- Reasoning
- Learning
- Agent systems
- Robotics
- Perception
- Natural language processing
- Expert Systems
- Constraint satisfaction
Advantages of Declarative Knowledge
- Declarative knowledge facilitates the creation of alternatives, allowing for multiple options in problem-solving.
- Alternatives are encoded as different sets of facts, rules, or logical statements.
- AI systems can explore and evaluate these alternatives to make decisions.
Example: Recommendation System
- A recommendation system uses declarative knowledge to suggest movies based on user preferences.
- Facts:
- Movie A is a comedy.
- Movie B is a drama.
- Movie C is a thriller.
- Rules:
- If a user likes comedies, recommend Movie A.
- If a user likes dramas, recommend Movie B.
- If a user likes thrillers, recommend Movie C.
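The facts and rules above can be sketched as data plus a generic matching step. This is a minimal illustration, not a real recommender; the names `FACTS` and `recommend` are invented for the example:

```python
# Declarative knowledge: facts are data, the rule is applied generically.
FACTS = {"Movie A": "comedy", "Movie B": "drama", "Movie C": "thriller"}

def recommend(liked_genre):
    """Rule: if a user likes a genre, recommend every movie of that genre."""
    return [movie for movie, genre in FACTS.items() if genre == liked_genre]

print(recommend("comedy"))  # ['Movie A']
```

Because the knowledge is declarative, adding a new movie means adding a fact; the rule itself never changes.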
Problem-Solving Agents
Problem-solving agents follow a cycle:
- Perception: Observe the environment or receive input.
- Problem Formulation: Define the initial state, goal state, and possible actions.
- Search/Planning: Search for actions to transform the current state into the goal state using algorithms and heuristics.
- Execution: Execute the chosen plan, interacting with the environment.
- Evaluation: Evaluate the outcome; determine if further actions are needed or if the problem is solved.
Simple Problem-Solving Agent Algorithm
- s₀ ← sense/read initial state
- GOAL? ← select/read goal test
- Succ ← read successor function
- solution ← search(s₀, GOAL?, Succ)
- perform(solution)
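As a sketch, the agent loop above can be made concrete with a toy search routine (a tiny breadth-first planner is used here only to make the example runnable; the domain and names are invented):

```python
from collections import deque

def search(s0, goal_test, succ):
    """Tiny breadth-first planner: succ(s) returns (action, result) pairs."""
    frontier = deque([(s0, [])])
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for action, result in succ(state):
            frontier.append((result, plan + [action]))
    return None

def solve_and_perform(s0, goal_test, succ):
    solution = search(s0, goal_test, succ)  # search for a plan
    # perform(solution) would execute the plan in the environment
    return solution

# Toy domain: states are integers, "inc" moves from n to n+1.
plan = solve_and_perform(0, lambda s: s == 3, lambda s: [("inc", s + 1)])
print(plan)  # ['inc', 'inc', 'inc']
```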
State Space
- Each state is an abstract representation of possible worlds, sharing crucial properties but differing in unimportant details.
- Example: In assembly planning, a state does not define the exact absolute position of each part.
- The state space is discrete and can be finite or infinite.
Successor Function
- Represents all feasible actions in each state.
- Returns the successor states and their costs.
- The successor function can be treated as a "black box": the search algorithm does not need to know how it is implemented.
- Example: In assembly planning, it may involve collision, stability, and grasping considerations.
Path Cost
- Arc cost is a positive number measuring the "cost" of performing an action.
- Example: 1 in the 8-puzzle example or the expected time to merge two sub-assemblies.
- For any given problem, the cost c of an arc always satisfies c \geq \epsilon > 0, where \epsilon is a small positive constant.
- This condition ensures that if a path becomes arbitrarily long, its cost also becomes arbitrarily large.
Data Structure of a Node
- PARENT-NODE
- STATE
- DEPTH: length of the path from the root to node N (depth of the root = 0)
- PATH-COST: cost of the path from the root to node N
- ACTION: the action applied to the parent to generate this node
- Bookkeeping: Expanded (yes/no), Children
Node Expansion
- The expansion of a node N in the search tree involves:
- Evaluating the successor function on STATE(N).
- Generating a child of N for each state returned by the function.
Note: Node generation is not the same as node expansion.
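As a sketch, the node record and the expansion step might look like this in Python (field names mirror the slide; the successor function is assumed to return (action, state, cost) triples):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    state: object
    parent: Optional["Node"] = None   # PARENT-NODE
    action: Optional[str] = None      # ACTION
    path_cost: float = 0.0            # PATH-COST
    depth: int = 0                    # DEPTH (root = 0)

def expand(node, successor_fn):
    """Evaluate the successor function on STATE(node) and generate
    one child node per (action, state, cost) triple returned."""
    return [Node(state=s, parent=node, action=a,
                 path_cost=node.path_cost + c, depth=node.depth + 1)
            for a, s, c in successor_fn(node.state)]
```

Note how expansion (calling `expand`) generates child nodes, which matches the distinction above: a node is generated when created, but expanded only when its own successors are computed.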
Goal State
- The goal state can be:
- Explicitly described (e.g., specific arrangement of tiles).
- Partially described (e.g., some tiles in specific positions).
- Defined by a condition (e.g., the sum of every row, column, and diagonal equals 30).
Puzzles and Games in AI
- Puzzles and games requiring exploration of alternatives have been considered a challenge for human intelligence throughout history:
- Chess originated in Persia and India about 4000 years ago.
- Checkers appear in 3600-year-old Egyptian paintings.
- Go originated in China over 3000 years ago.
- AI uses games to design and test algorithms.
(N^2 - 1)-Puzzle
- A common example used to illustrate search algorithms.
- Involves arranging numbered tiles in a grid.
Example: 8-Puzzle
- State: Any arrangement of 8 numbered tiles and an empty tile on a 3x3 board.
- Initial State: A given starting configuration of tiles.
- Goal State: A specific desired configuration of tiles.
8-Puzzle: Successor Function
- SUCC(state) → subset of states
- The successor function provides knowledge about the 8-puzzle game.
- It doesn't dictate which outcome to use or which state to apply it to, necessitating search.
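A minimal sketch of such a successor function, representing a state as a tuple of 9 entries (0 for the blank) read row by row:

```python
def succ_8puzzle(state):
    """Successor function for the 8-puzzle: returns (action, state) pairs.
    `state` is a tuple of 9 entries (0 = blank) on the 3x3 board."""
    i = state.index(0)                  # position of the blank
    row, col = divmod(i, 3)
    moves = []
    for action, dr, dc in (("Up", -1, 0), ("Down", 1, 0),
                           ("Left", 0, -1), ("Right", 0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:   # not every action is legal
            j = 3 * r + c
            s = list(state)
            s[i], s[j] = s[j], s[i]     # slide the tile into the blank
            moves.append((action, tuple(s)))
    return moves
```

As the slide says, this encodes what moves exist, not which one to take; choosing among them is the job of search.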
Fringe (Frontier) of Search Tree
- The fringe is the set of all search nodes that haven’t been expanded yet.
15-Puzzle
- Introduced in 1878 by Sam Loyd.
- Loyd offered $1,000 to the first person who could solve a specific problem: reach the ordered goal from a board with tiles 14 and 15 swapped.
The 14-15 Puzzle
- No one ever won the prize: the starting configuration lies in the half of the state space that is unreachable from the goal.
State Space Size of the (N^2 - 1)-Puzzle
- 8-puzzle: 9! = 362,880 states
- 15-puzzle: 16! \approx 2.09 \times 10^{13} states
- 24-puzzle: 25! \approx 10^{25} states
- Only half of these states are reachable from any given state.
What is the Actual State Space?
- Is it:
- a) The set of all states (e.g., 16! states for the 15-puzzle)?
- b) The set of all states reachable from a given initial state (e.g., 16!/2 states for the 15-puzzle)?
- In general, the answer is a), because one does not know in advance which states are reachable.
- A fast test to check if a state is reachable is useful because search techniques can be inefficient when there is no solution.
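One such fast test for the 8-puzzle is the classic inversion-parity check (sketch below; for even board widths such as the 15-puzzle, the row of the blank also enters the parity, which this simplified version omits):

```python
def same_component(s, t):
    """Fast reachability test for the 8-puzzle (odd board width):
    t is reachable from s iff the tile permutations (ignoring the
    blank, 0) have the same inversion parity."""
    def inversions(state):
        tiles = [x for x in state if x != 0]   # drop the blank
        return sum(1 for i in range(len(tiles))
                     for j in range(i + 1, len(tiles))
                     if tiles[i] > tiles[j])
    return inversions(s) % 2 == inversions(t) % 2
```

Each legal move preserves this parity, which is why exactly half of the 9! configurations are reachable from any given state.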
Searching the State Space
- It is often not feasible (or too expensive) to build a complete representation of the state graph.
8-, 15-, 24-Puzzles: Search Limitations
- Example performance with 100 million states/sec:
- 8-puzzle: 362,880 states → 0.036 sec
- 15-puzzle: 2.09 \times 10^{13} states → ~55 hours
- 24-puzzle: 10^{25} states → > 10^9 years
Searching the State Space: Efficient Exploration
- A problem solver must construct a solution by exploring only a small portion of the graph.
Representing a Problem: The Formalism
To formulate a problem as a search problem:
- State Space: Define the state space over which to perform the search.
- Actions or State Space Transitions: Formulate actions that allow movement between different states.
- Initial or Start State and Goal: Identify the initial conditions and the desired goal or condition.
- Heuristics: Formulate heuristics to help guide the search process.
Example: Romania
- State space: The various cities you could be located in, ignoring the low-level details of driving.
- Actions: Driving between neighboring cities.
- Initial state: In Arad.
- Desired condition (Goal): Be in Bucharest.
- Solution: The route, the sequence of cities to travel through to get to Bucharest.
State-Space Problem Formulation
A problem is defined by five items:
- Initial state: e.g., "at Arad"
- Actions Actions(s): set of actions available in state s
- Transition model Results(s,a): state that results from action a in state s
- Alternative: successor function S(x) = set of action–state pairs, e.g., S(Arad) = {⟨Arad → Zerind, Zerind⟩, …}
- Goal test (or goal state): e.g., x = "at Bucharest", Checkmate(x)
- Path cost (additive):
- e.g., sum of distances, number of actions executed, etc.
- c(x,a,y) is the step cost, assumed to be ≥0
- A solution is a sequence of actions leading from the initial state to a goal state
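The five items above can be instantiated for a fragment of the Romania example (only a few cities are included here; distances are those of the standard textbook map):

```python
# A fragment of the Romania road map: city -> {neighbor: distance}.
ROADS = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
}
INITIAL = "Arad"                # initial state

def actions(s):                 # Actions(s): drivable neighboring cities
    return list(ROADS.get(s, {}))

def result(s, a):               # transition model Results(s, a)
    return a                    # driving to city a puts us in city a

def goal_test(s):               # goal test
    return s == "Bucharest"

def step_cost(s, a, t):         # c(x, a, y), assumed >= 0
    return ROADS[s][a]
```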
Selecting a State Space
- Real world is complex; abstraction is necessary.
- Selecting the correct abstraction and the resulting state space is difficult.
- Abstract states ↔ real-world states.
- Abstract operators ↔ sequences of real-world actions
- Example: going from city i to city j costs L_{ij} ↔ actually drive from city i to j
- Abstract solution ↔ set of real actions to take in the real world to solve the problem
Last Lecture: Environment Types
The environment types largely determine the agent design:
| Environment | Accessible | Deterministic | Episodic | Static | Discrete |
|---|---|---|---|---|---|
| Operating System | Yes | Yes | No | No | Yes |
| Virtual Reality | Yes | Yes | Yes/No | No | Yes/No |
| Office Environment | No | No | No | No | No |
| Mars | No | Semi | No | Semi | No |
Problem Types
Classifying the environment:
- Static / Dynamic: Whether the environment changes during problem-solving.
- Deterministic / Stochastic: Whether the outcome of actions is predictable.
- Observable / Partially Observable / Unobservable: The degree to which the agent can perceive the environment.
- Discrete / Continuous: Whether the possibilities in the environment are countable.
Problem Types and Solutions
- Deterministic, fully observable: single-state problem; solution is a sequence.
- Non-observable: sensorless problem (multiple-state problem); solution is a sequence.
- Nondeterministic or partially observable: contingency problem, percepts provide more information; solution is a tree or policy.
- Unknown state space: exploration problem.
Problem Types: Detailed Description
- Single-state problem: deterministic, accessible; the agent knows everything about the world.
- Multiple-state problem: deterministic, inaccessible; the agent must assume states while working towards the goal state.
- Contingency problem: nondeterministic, inaccessible; must use sensors during execution.
- Exploration problem: unknown state space; involves discovering and learning about the environment while taking actions.
Example: Vacuum World
- Simplified world: 2 locations, each may or may not contain dirt; each may or may not contain a vacuuming agent.
- Goal of agent: clean up the dirt.
Last Time: Problem-Solving
- Problem-solving involves:
- Goal formulation
- Problem formulation (states, operators)
- Search for solution
- Problem formulation includes:
- Initial state
- Operators
- Goal test
- Path cost
- Problem types:
- Single state: accessible and deterministic environment
- Multiple state: inaccessible and deterministic environment
- Contingency: inaccessible and nondeterministic environment
- Exploration: unknown state-space
The Formalism: Solving the Problem
- Once the problem has been formulated as a state space search, various algorithms can be utilized to solve the problem:
- A solution is a sequence of actions/moves that transform the current state into a state where the desired condition holds.
Example: The 8-Puzzle
- Rule: Can slide a tile into the blank spot or move the blank spot around.
Example: The 8-Puzzle - State Space Details
- State space: the different configurations of the tiles.
- Actions: moving the blank up, down, left, right; not every action can be performed in every state.
- Initial state: a given starting state.
- Desired condition (Goal): a state where the tiles are all in the desired positions.
- Solution: a sequence of moves of the blank that transform the initial state to a goal state.
Example: The 8-Puzzle - State Space Properties
- There are 9! different configurations of the tiles, but the state space is divided into two disjoint parts.
- All four actions are possible only when the blank is in the middle.
- The goal condition can be satisfied by a single state or multiple states.
Example: Vacuum World - States
- Single-state: start in #5. Solution? [Right, Suck]
- Sensorless: start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}. Solution?
Example: Vacuum World - Sensorless
- Sensorless, start in {1,2,3,4,5,6,7,8}, e.g., Right goes to {2,4,6,8} Solution: [Right,Suck,Left,Suck]
Example: Vacuum World - Contingency
- Nondeterministic: Suck may dirty a clean carpet
- Partially observable: location, dirt at current location.
- Percept: [L, Clean], i.e., start in #5 or #7 Solution? [Right, if dirt then Suck]
Single-State Problem Formulation
A problem is defined by four items:
- initial state e.g., "at Arad"
- actions or successor function S(x) = set of action–state pairs
- e.g., S(Arad) = {⟨Arad → Zerind, Zerind⟩, …}
- goal test, can be
- explicit, e.g., x = "at Bucharest"
- implicit, e.g., Checkmate(x)
- path cost (additive)
- e.g., sum of distances, number of actions executed, etc.
- c(x,a,y) is the step cost, assumed to be ≥ 0
- A solution is a sequence of actions leading from the initial state to a goal state
Selecting a State Space
- Real world is complex → state space must be abstracted for problem solving
- (Abstract) state = set of real states
- (Abstract) action = complex combination of real actions e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc.
- For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind"
- (Abstract) solution = set of real paths that are solutions in the real world
- Each abstract action should be "easier" than the original problem
Vacuum World State Space Graph
- states? integer dirt and robot location
- actions? Left, Right, Suck
- goal test? no dirt at all locations
- path cost? 1 per action
Example: The 8-Puzzle - Details
- states? locations of tiles
- actions? move blank left, right, up, down
- goal test? = goal state (given)
- path cost? 1 per move
- Note: optimal solution of n-Puzzle family is NP-hard
Ex: 8-Queens
Goal: Place 8 queens on the chess board so that no queen attacks another.
- states?
- any arrangement of n ≤ 8 queens
- or: arrangements of n ≤ 8 queens in the leftmost n columns, one per column, such that no queen attacks any other
- initial state? no queens on the board
- actions?
- add a queen to any empty square
- or: add a queen to a square in the leftmost empty column such that it is not attacked by any other queen
- goal test? 8 queens on the board, none attacked
- path cost? 1 per move (not relevant for this problem)
Example: Robotic Assembly
- states?: real-valued coordinates of robot joint angles and of the parts of the object to be assembled
- actions?: continuous motions of robot joints
- goal test?: complete assembly
- path cost?: time to execute
Architectures for Intelligence
- Search? Determine how to achieve some goal; “what to do”
- Logic & inference?
- Reason about what to do
- Encode knowledge / “expert” systems
- Know what to do
- Learning? learn what to do
- Modern view: complex & multi-faceted
What is Search?
- Search is a class of techniques for systematically finding or constructing solutions to problems.
- Example technique: generate-and-test.
- Example problem: Combination lock.
- Generate a possible solution.
- Test the solution.
- If solution found THEN done ELSE return to step 1.
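The three steps above can be sketched for a toy 3-digit combination lock (the `secret` combination and function names are invented for the example):

```python
import itertools

def crack(test):
    """Generate-and-test: enumerate candidates, test each, stop at success."""
    for combo in itertools.product(range(10), repeat=3):  # 1. generate
        if test(combo):                                   # 2. test
            return combo                                  # done
    return None                                           # 3. else continue

secret = (4, 7, 1)
print(crack(lambda c: c == secret))  # (4, 7, 1)
```

Exhaustive generate-and-test is the simplest search technique; everything that follows in this lecture is about generating candidates more systematically.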
Why is Search Interesting?
Many (all?) AI problems can be formulated as search problems!
- Examples:
- Path planning
- Games
- Natural Language Processing
- Machine learning
- …
Applications of Search
Search plays a key role in many applications, e.g.:
- Route finding: airline travel, networks
- Package/mail distribution
- Pipe routing, VLSI routing
- Comparison and classification of protein folds
- Pharmaceutical drug design
- Design of protein-like molecules
- Video games
Search Problems
- A search problem consists of:
- A state space
- For each state, a set Actions(s) of allowable actions
- A transition model Result(s,a)
- A step cost function c(s,a,s’)
- A start state and a goal test
- A solution is a sequence of actions (a plan) which transforms the start state to a goal state
What's in a State Space?
- Problem: Pathing
- State representation: (x,y) location
- Actions: NSEW
- Transition model: update location
- Goal test: is (x,y)=END
- Problem: Eat-All-Dots
- State representation: {(x,y), dot booleans}
- Actions: NSEW
- Transition model: update location and possibly a dot boolean
- Goal test: dots all false
- The real world state includes every last detail of the environment
- A search state abstracts away details not needed to solve the problem
State Space Sizes?
- World state:
- Agent positions: 120
- Food count: 30
- Ghost positions: 12
- Agent facing: NSEW
- How many
- World states? 120x(2^{30})x(12^2)x4
- States for pathing? 120
- States for eat-all-dots? 120x(2^{30})
Blind vs. Heuristic Strategies
- Blind (or un-informed) strategies do not exploit state descriptions to order FRINGE. They only exploit the positions of the nodes in the search tree
- Heuristic (or informed) strategies exploit state descriptions to order FRINGE (the most “promising” nodes are placed at the beginning of FRINGE)
Uninformed (Blind) Search Strategies
- Uninformed search strategies use only the information available in the problem definition
- Breadth-first search
- Depth-first search
- Depth-limited search
- Iterative deepening search
Search
- Formulate “what to do” as a search problem
- Solution tells the agent what to do
- If no solution in the current search space?
- Find space that does contain a solution (use search!)
- Solve original problem in new search space
- Many powerful extensions to these ideas
- Constraint satisfaction; planning; game playing; …
- Human problem-solving often looks like search
Why Search Can Be Difficult
- At the start of the search, the search algorithm does not know
- the size of the tree
- the shape of the tree
- the depth of the goal states
- How big can a search tree be?
- say there is a constant branching factor b
- and one goal exists at depth d
- search tree which includes a goal can have b^d different branches in the tree (worst case)
- Examples:
- b = 2, d = 10: b^d = 2^{10} = 1024
- b = 10, d = 10: b^d = 10^{10} = 10,000,000,000
Uninformed Search Strategies
- These are strategies that adopt a fixed rule for selecting the next state to be expanded.
- The rule does not change irrespective of the search problem being solved.
- These strategies do not take into account any domain-specific information about the particular search problem.
- Uninformed search techniques:
- Breadth-First, Uniform-Cost, Depth-First, Depth-Limited, and Iterative-Deepening search
Tree Search Algorithms
Basic idea:
- offline, simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states)
Search Strategy-Based on Fringe
- The fringe is the set of all search nodes that haven’t been expanded yet
- The fringe is implemented as a priority queue
- INSERT(node,FRINGE)
- REMOVE(FRINGE)
- The ordering of the nodes in FRINGE defines the search strategy
Implementation: General Tree Search
function TREE-SEARCH(problem, fringe) returns a solution, or failure
  fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
  loop do
    if fringe is empty then return failure
    node ← REMOVE-FRONT(fringe)
    if GOAL-TEST[problem](STATE[node]) then return SOLUTION(node)
    fringe ← INSERT-ALL(EXPAND(node, problem), fringe)

function EXPAND(node, problem) returns a set of nodes
  successors ← the empty set
  for each action, result in SUCCESSOR-FN[problem](STATE[node]) do
    s ← a new NODE
    PARENT-NODE[s] ← node; ACTION[s] ← action; STATE[s] ← result
    PATH-COST[s] ← PATH-COST[node] + STEP-COST(node, action, s)
    DEPTH[s] ← DEPTH[node] + 1
    add s to successors
  return successors
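A minimal Python transcription of TREE-SEARCH, tracking (state, action-path) pairs instead of full node records; the `pop_index` parameter (an implementation choice here, not part of the pseudocode) selects the strategy — 0 gives a FIFO fringe (breadth-first), -1 a LIFO fringe (depth-first):

```python
def tree_search(initial, goal_test, successor_fn, pop_index=0):
    """successor_fn(state) returns (action, result-state) pairs."""
    fringe = [(initial, [])]                 # fringe of (state, path) pairs
    while fringe:
        state, path = fringe.pop(pop_index)  # REMOVE-FRONT
        if goal_test(state):                 # goal test after pop
            return path                      # SOLUTION: sequence of actions
        for action, result in successor_fn(state):   # EXPAND
            fringe.append((result, path + [action]))
    return None                              # failure
```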
Implementation: States vs. Nodes
- A state is a (representation of) a physical configuration
- A node is a data structure constituting part of a search tree includes state, parent node, action, path cost g(x), depth
- The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.
Search Strategies Evaluation
Search algorithms are commonly evaluated according to the following four criteria:
Completeness: does it always find a solution if one exists?
Time complexity: how long does it take, as a function of the number of nodes generated?
Space complexity: how much memory does it require?
Optimality: does it guarantee the least-cost solution?
Time and space complexity are measured in terms of:
- b – max branching factor of the search tree
- d – depth of the least-cost solution
- m – max depth of the search tree (may be infinity)
Time Complexity
If a goal node is found at depth d of the tree, all nodes up to that depth may be created.
Thus: O(b^d)
General Tree Search
function TREE-SEARCH(problem, fringe) returns a solution, or failure
  fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
  loop do
    if fringe is empty then return failure
    node ← REMOVE-FRONT(fringe)
    // Goal test after pop
    if GOAL-TEST[problem](STATE[node]) then return SOLUTION(node)
    fringe ← INSERT-ALL(EXPAND(node, problem), fringe)
General Graph Search
function GRAPH-SEARCH(problem, fringe) returns a solution, or failure
  closed ← an empty set
  fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
  loop do
    if fringe is empty then return failure
    node ← REMOVE-FRONT(fringe)
    // Goal test after pop
    if GOAL-TEST[problem](STATE[node]) then return SOLUTION(node)
    if STATE[node] is not in closed then
      add STATE[node] to closed
      fringe ← INSERT-ALL(EXPAND(node, problem), fringe)
Breadth-First Search
Expand shallowest unexpanded node.
Implementation:
- fringe is a FIFO queue (new successors go at the end)
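A sketch of breadth-first search with the fringe as a FIFO queue (`collections.deque`), following the tree-search scheme above:

```python
from collections import deque

def breadth_first_search(initial, goal_test, successor_fn):
    """BFS: fringe is a FIFO queue; new successors go at the end,
    so the shallowest unexpanded node is always popped first."""
    fringe = deque([(initial, [])])
    while fringe:
        state, path = fringe.popleft()       # shallowest node first
        if goal_test(state):
            return path
        for action, result in successor_fn(state):
            fringe.append((result, path + [action]))
    return None
```

Because nodes are expanded in order of depth, the first solution found is a shallowest one (hence optimal when every step costs 1).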
Properties of Breadth-First Search
- Complete? Yes (if b is finite)
- Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d - 1) = O(b^{d+1})
- Space? O(b^{d+1}) (keeps every node in memory)
- Optimal? Yes (if cost = 1 per step)
Space is the bigger problem (more than time)
Uniform-Cost Search (UCS)
- Breadth-first is only optimal if path cost is a non-decreasing function of depth, i.e., f(d) ≥ f(d−1); e.g., constant step cost, as in the 8-puzzle.
- Can we guarantee optimality for variable positive step costs ≥ ε?
- (Why ≥ ε? To avoid infinite paths with step costs 1, \frac{1}{2}, \frac{1}{4}, …)
Uniform-cost Search: Expand node with smallest path cost g(n).
- Frontier is a priority queue, i.e., new successors are merged into the queue sorted by g(n).
- Can remove successors already on the queue w/higher g(n).
- Saves memory, costs time; another space-time trade-off.
- Goal-Test when node is popped off queue.
Uniform-Cost Search (Continued)
- Implementation: frontier = queue ordered by path cost. Equivalent to breadth-first if all step costs are equal.
- Complete? Yes, if b is finite and step cost ≥ ε> 0. (otherwise it can get stuck in infinite loops)
- Time? # of nodes with path cost ≤ cost of optimal solution: O(b^{1+\lfloor C*/ε \rfloor}) ≈ O(b^{d+1})
- Space? # of nodes with path cost ≤ cost of optimal solution: O(b^{1+\lfloor C*/ε \rfloor}) ≈ O(b^{d+1})
- Optimal? Yes, for any step cost ≥ ε >0.
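A sketch of UCS with the frontier as a binary-heap priority queue ordered by g(n). Instead of removing dearer duplicates from the queue, this version uses lazy deletion via a `best_g` table (a common implementation choice, trading a larger queue for simpler code — the space-time trade-off mentioned above):

```python
import heapq
import itertools

def uniform_cost_search(initial, goal_test, successor_fn):
    """successor_fn(state) returns (action, result, step_cost) triples.
    Returns (action path, path cost); goal test when popped off queue."""
    counter = itertools.count()              # tie-breaker for equal g(n)
    frontier = [(0, next(counter), initial, [])]
    best_g = {}                              # cheapest g seen per expanded state
    while frontier:
        g, _, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g                   # optimal for step costs >= eps > 0
        if state in best_g and best_g[state] <= g:
            continue                         # stale, dearer copy: skip it
        best_g[state] = g
        for action, result, cost in successor_fn(state):
            heapq.heappush(frontier,
                           (g + cost, next(counter), result, path + [action]))
    return None
```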
Uniform-Cost Search: Proofs
Proof of Completeness:
Assume (1) finite max branching factor = b; (2) min step cost ≥ ε > 0; (3) cost to optimal goal = C. Then a node at depth \lfloor 1+C/ε \rfloor must have a path cost > C. There are O( b^{\lfloor 1+C/ε \rfloor} ) such nodes, so a goal will be found.
Proof of Optimality (given completeness):
Suppose that UCS is not optimal. Then there must be an (optimal) goal state with a path cost smaller than the found (suboptimal) goal state (invoking completeness). However, this is impossible because UCS would have expanded that node first, by definition. Contradiction.
Uniform-Cost Search: Example (Search Tree Version)
Steps labeled w/cost:
| Nodes | g(n) |
|---|---|
| S | 0 |
| B | 5 |
| C | 15 |
| A | 1 |
| G | 11 / 10 |
Order of node expansion: S, A, B, G
Path found: S -> B -> G
Cost of path found: 10
Uniform-Cost Search: Example (Virtual Queue Version)
Steps labeled w/cost:
Expanded nodes: S/g=0, A/g=1, B/g=5, G/g=10
Depth-First Search
Expand deepest unexpanded node.
Implementation:
- fringe is a LIFO queue, i.e., a stack (new successors go at the front)
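A sketch of DFS with the fringe as an explicit stack. It also keeps the states on the current path so repeated states along a path are skipped, the modification that makes DFS complete in finite spaces:

```python
def depth_first_search(initial, goal_test, successor_fn):
    """DFS: fringe is a stack; each entry carries the states already
    on its path so loops along that path can be avoided."""
    fringe = [(initial, [initial], [])]   # (state, states on path, actions)
    while fringe:
        state, on_path, actions = fringe.pop()    # deepest node first
        if goal_test(state):
            return actions
        for action, result in successor_fn(state):
            if result not in on_path:             # avoid repeated states
                fringe.append((result, on_path + [result],
                               actions + [action]))
    return None
```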
Properties of Depth-First Search
- Complete? No: fails in infinite-depth spaces, spaces with loops
- Modify to avoid repeated states along path → complete in finite spaces
- Time? O(b^m) terrible if m is much larger than d
- but if solutions are dense, may be much faster than breadth-first
- Space? O(bm), i.e., linear space!
- Optimal? No
Depth-Limited Search
- Depth-first search with depth limit L, i.e., nodes at depth L have no successors
Depth-Limited Search: DLS(OPEN, Successors, Goal?)
/* Call with OPEN = {<START>} */
CutOffOccurred = FALSE
WHILE (OPEN not EMPTY) {
  n = select first node from OPEN
  Curr = terminal state of n
  IF (Goal?(Curr)) return n
  IF (Depth(n) < D)
    OPEN = (OPEN – {n}) ∪ { <n,s> : s ∈ Successors(Curr) }
  ELSE {  // Don't add successors if Depth(n) = D
    OPEN = OPEN – {n}
    CutOffOccurred = TRUE
  }
}
return FAIL
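The same idea in Python: plain depth-first search that refuses to add successors at the depth limit and reports whether a cutoff occurred (returning the string `"cutoff"` rather than a flag is an implementation choice for this sketch):

```python
def depth_limited_search(initial, goal_test, successor_fn, limit):
    """DFS with depth limit: nodes at depth `limit` get no successors.
    Returns a path, "cutoff" if the limit pruned anything, else None."""
    fringe = [(initial, [], 0)]           # (state, actions, depth)
    cutoff_occurred = False
    while fringe:
        state, path, depth = fringe.pop()
        if goal_test(state):
            return path
        if depth < limit:
            for action, result in successor_fn(state):
                fringe.append((result, path + [action], depth + 1))
        else:
            cutoff_occurred = True        # successors pruned at the limit
    return "cutoff" if cutoff_occurred else None
```

The "cutoff" result is what iterative deepening relies on: it distinguishes "no solution at all" from "no solution within this limit".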
Iterative Deepening Search
function ITERATIVE-DEEPENING-SEARCH(problem) returns a solution, or failure
  inputs: problem, a problem
  for depth ← 0 to ∞ do
    result ← DEPTH-LIMITED-SEARCH(problem, depth)
    if result ≠ cutoff then return result
Iterative Deepening Search - Node Generation
- Number of nodes generated in a depth-limited search to depth d with branching factor b:
- N_{DLS} = b^0 + b^1 + b^2 + … + b^{d-2} + b^{d-1} + b^d
- Number of nodes generated in an iterative deepening search to depth d with branching factor b:
- N_{IDS} = (d+1)b^0 + d b^1 + (d-1)b^2 + … + 3b^{d-2} + 2b^{d-1} + 1b^d
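The two formulas can be checked numerically; with b = 10 and d = 5 this gives the classic comparison of 123,456 nodes for IDS against 111,111 for a single depth-limited search, i.e. only about 11% overhead for repeating the shallow levels:

```python
def n_dls(b, d):
    """Nodes generated by one depth-limited search to depth d."""
    return sum(b**i for i in range(d + 1))

def n_ids(b, d):
    """Nodes generated by iterative deepening: level i is regenerated
    (d + 1 - i) times, once per iteration that reaches it."""
    return sum((d + 1 - i) * b**i for i in range(d + 1))

print(n_dls(10, 5), n_ids(10, 5))  # 111111 123456
```

Because the deepest level dominates the total, regenerating the shallow levels is cheap, which is why IDS keeps DFS's linear space while matching BFS's completeness.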