Artificial Intelligence: Solving Problems by Searching
Introduction to AI and Problem Solving
- AI encompasses automation, AI logic, pattern recognition, and machine learning (ML).
- Applications include self-driving cars, speech recognition (e.g., Alexa), ATMs, reasoning, semantics, classification, train reservations, online shopping, travel planning, deep learning, representation, language, data science, vending machines, harvesting machines, and autopilot systems.
- Problem Solving in AI involves techniques like efficient algorithms, heuristics, and root cause analysis to achieve desired solutions.
- Goal of AI: To solve problems akin to human problem-solving capabilities.
- Businesses exist to solve human problems.
- AI techniques are used to automate systems, improving resource and time efficiency.
- Methods for Solving Problems: Searching algorithms, evolutionary computations, and knowledge representation.
- Examples of AI Problem Solving: Games (Chess, Sudoku), Traveling Salesman Problem, Water Jug Problem, Tower of Hanoi.
- Steps: Define the problem statement, then generate solutions that satisfy the set conditions.
Lesson Outcomes
- Explain key concepts of problem-solving in AI, including problem spaces and states.
- Distinguish between uninformed and informed search algorithms and how they are applied.
- Implement basic search algorithms to solve a given problem using pseudocode or programming.
What is Searching?
- Searching: The process of finding a solution.
- Examples:
- Website: Finding answers to questions.
- City: Finding the correct path.
- Chess: Finding the next best move.
- Robot Navigation: A general version of route finding in continuous space (3D).
- Search Algorithms: A core area in AI and a commonly used technique for problem solving.
Search Problems in AI
- A problem-solving task where an agent seeks a sequence of actions to move from an initial state to a goal state.
- Search: The process of exploring possible states and actions to find this sequence.
Components of a Search Problem
- State Space: All possible states the system can be in, each representing a specific configuration.
- Initial State: The starting point of the system.
- Goal State: The desired state or condition the system aims to reach.
- Operators/Actions: Actions that transition the system from one state to another (building blocks of the search process).
- Transition Model: Describes how the system moves from one state to another when an action is applied.
- Path Cost: The cost associated with a specific path; the goal is to find the path with the minimum cost.
Searching Process
- Repeat until a solution is found or the state space is exhausted:
- Check the current state.
- Execute allowable actions to find successor states.
- Pick one of the new states.
- Check if the new state is a solution state. If not, the new state becomes the current state and the process repeats.
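The loop above can be sketched in Python. This is a generic, made-up example (the goal test and successor function are assumptions for illustration), not a specific algorithm from this lesson:

```python
from collections import deque

def generic_search(initial, is_goal, successors):
    """Sketch of the searching process above: repeatedly pick a state,
    expand it, and stop when a goal is found or the space is exhausted."""
    frontier = deque([initial])        # states waiting to be examined
    seen = {initial}                   # avoid revisiting states
    while frontier:                    # repeat until the state space is exhausted
        state = frontier.popleft()     # check the current state
        if is_goal(state):             # is it a solution state?
            return state
        for nxt in successors(state):  # allowable actions -> successor states
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None                        # state space exhausted, no solution

# Toy usage: states are integers 0..9; each state n has successors n+1 and n+2.
print(generic_search(0, lambda s: s == 7,
                     lambda s: [n for n in (s + 1, s + 2) if n < 10]))  # 7
```

How the "pick one of the new states" step is made determines the search strategy; here a FIFO queue is used, but a stack or priority queue would give different algorithms.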
Search Algorithms
- Uninformed/Blind Search:
- Breadth-first search
- Uniform-cost search
- Depth-first search
- Depth-limited search
- Iterative deepening depth-first search
- Bidirectional search
- Informed Search
- Best-First Search
- A* Search
Vacuum World Problem
- States: Defined by agent location and dirt locations.
- Possible World States: N · 2^N (where N is the number of locations). In a two-location scenario: 2 · 2^2 = 8.
- Initial State: Can be any state.
- Actions: Left (L), Right (R), Suck (S).
- Transition Model: Actions have expected effects, except:
- Moving left in the leftmost square.
- Moving right in the rightmost square.
- Sucking in a clean square (no effect).
- Goal Test: Check if all squares are clean.
- Path Cost: Each step costs 1; the path cost is the number of steps.
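A minimal Python sketch of the transition model above, assuming two locations numbered 0 (left) and 1 (right); the state representation (location, set of dirty squares) is an illustrative choice:

```python
# State: (agent_location, dirt) where dirt is a frozenset of dirty squares.
# With two locations there are 2 * 2**2 = 8 possible world states.

def vacuum_result(state, action):
    loc, dirt = state
    if action == 'L':                 # moving left in the leftmost square: no effect
        return (max(loc - 1, 0), dirt)
    if action == 'R':                 # moving right in the rightmost square: no effect
        return (min(loc + 1, 1), dirt)
    if action == 'S':                 # sucking a clean square: no effect
        return (loc, dirt - {loc})
    raise ValueError(action)

def is_goal(state):
    return not state[1]               # goal test: no dirty squares remain

s = (0, frozenset({0, 1}))            # agent on the left, both squares dirty
for a in ['S', 'R', 'S']:             # path cost 3 (each step costs 1)
    s = vacuum_result(s, a)
print(is_goal(s))  # True
```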
Robotic Lawn Mower Problem
- Creates a map of the garden and systematically tackles the task.
- Uses specialized sensors to avoid obstacles and can stop when it rains.
Problem Solving Steps:
- Define the problem.
- Create a search space with states and actions.
- Employ search algorithms.
- Consider heuristics and optimization criteria.
- Adapt to a dynamic environment, possibly with a learning mechanism.
Detailed Breakdown
- Problem Definition:
- Objective: Efficiently mow a lawn.
- Constraints: Avoid obstacles, stay within boundaries, optimize for time and energy.
- Search Space:
- Represents all possible states and actions for the mower.
- States: Positions and orientations of the mower.
- Actions: Movements in different directions.
- State Space:
- Subset of the search space including all possible states.
- Includes current position, orientation.
- Operators/Actions:
- Steps to transition from one state to another.
- Actions: Moving forward, turning left/right, adjusting speed.
- Solution:
- A sequence of actions to mow the entire lawn.
- Search Algorithms:
- Examples: Depth-first search, breadth-first search, A* search.
- Heuristics:
- Rules to narrow down the search space.
- Examples: Prioritizing unexplored areas, avoiding previously mowed regions, navigating around obstacles.
- Optimization:
- Finding the best solution based on criteria.
- Examples: Minimizing time, maximizing battery life, ensuring even grass cutting.
- Dynamic Environment:
- Adapting to new obstacles or changes in the lawn.
- Learning:
- Improving performance over time using machine learning algorithms.
Problem-Solving Agents
- Goal-based agents find sequences of actions leading to desirable states.
- Uninformed algorithms use only the problem definition without extra information or heuristics.
Goal-Based Agent
- Percepts (via sensors) tell the agent what the world is like now.
- Actions (via actuators) answer the question of what should be done now.
- The agent keeps track of its environment, goals, and internal state.
- Its world model captures how the world evolves and what its actions do.
- Before acting, it predicts what the world will be like if it does action A.
Function of Simple Problem-Solving Agent
function SIMPLE-PROBLEM-SOLVING-AGENT( percept ) returns an action
  inputs: percept
  static: seq, state, goal, problem
  state <- UPDATE-STATE( state, percept )
  if seq is empty then
    goal <- FORMULATE-GOAL( state )
    problem <- FORMULATE-PROBLEM( state, goal )
    seq <- SEARCH( problem )
  action <- RECOMMENDATION( seq )
  seq <- REMAINDER( seq )
  return action
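A minimal Python rendering of this pseudocode. UPDATE-STATE, FORMULATE-GOAL, FORMULATE-PROBLEM, and SEARCH are problem-specific, so they are passed in as functions here (an assumption for illustration):

```python
class SimpleProblemSolvingAgent:
    def __init__(self, update_state, formulate_goal, formulate_problem, search):
        self.seq = []                       # pending action sequence (static)
        self.state = None
        self.update_state = update_state
        self.formulate_goal = formulate_goal
        self.formulate_problem = formulate_problem
        self.search = search

    def __call__(self, percept):
        self.state = self.update_state(self.state, percept)
        if not self.seq:                    # plan only when the old plan is used up
            goal = self.formulate_goal(self.state)
            problem = self.formulate_problem(self.state, goal)
            self.seq = self.search(problem)
        action = self.seq[0]                # RECOMMENDATION: first action of the plan
        self.seq = self.seq[1:]             # REMAINDER: the rest of the plan
        return action

# Toy usage with stub functions: the "search" always returns the plan ['a', 'b'].
agent = SimpleProblemSolvingAgent(lambda s, p: p, lambda s: 'goal',
                                  lambda s, g: (s, g), lambda prob: ['a', 'b'])
print(agent(1), agent(2), agent(3))  # a b a  (replans once the plan runs out)
```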
Assumptions of Goal-Based Agents
- Static: The world does not change while the agent is planning, so the plan remains valid.
- Observable: Agent knows the initial state.
- Discrete: Agent can enumerate the choices.
- Deterministic: Agent can plan a sequence of actions leading to an intermediate state.
- Under these assumptions, the agent can carry out its plan "with its eyes closed", certain of what is going on (an open-loop system).
Well-Defined Problems and Solutions
- Problem components:
- Initial state
- Actions and Successor Function
- Goal test
- Path cost
Example: Water Pouring
- Problem: Given a 3-gallon bucket and a 4-gallon bucket, measure exactly 2 gallons into one bucket. A state is written (x, y), where x is the amount in the 3-gallon bucket and y the amount in the 4-gallon bucket.
- Constraints: The buckets have no markings, so a bucket can only be filled completely, emptied completely, or poured into the other.
- Initial state: Both buckets are empty, (0, 0).
- Goal state: One bucket holds exactly 2 gallons: (2, y) or (x, 2).
- Path cost: 1 per action.
Actions and Successor Function
- Fill a bucket:
- (x, y) -> (3, y)
- (x, y) -> (x, 4)
- Empty a bucket:
- (x, y) -> (0, y)
- (x, y) -> (x, 0)
- Pour the contents of one bucket into the other:
- (x, y) -> (0, x+y) if x+y <= 4, else (x+y−4, 4)
- (x, y) -> (x+y, 0) if x+y <= 3, else (3, x+y−3)
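The successor function above can be combined with a simple breadth-first search to solve the problem. A sketch (the state encoding follows the notes: x is the 3-gallon bucket, y the 4-gallon bucket):

```python
from collections import deque

def successors(state):
    x, y = state
    yield (3, y)                      # fill the 3-gallon bucket
    yield (x, 4)                      # fill the 4-gallon bucket
    yield (0, y)                      # empty the 3-gallon bucket
    yield (x, 0)                      # empty the 4-gallon bucket
    pour = min(x, 4 - y)              # pour 3-gal into 4-gal
    yield (x - pour, y + pour)
    pour = min(y, 3 - x)              # pour 4-gal into 3-gal
    yield (x + pour, y - pour)

def solve(start=(0, 0)):
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if 2 in state:                # goal: some bucket holds exactly 2 gallons
            return path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))

print(solve())  # a shortest state sequence, e.g. (0,0) ... to a state containing 2
```

Because the search is breadth-first and each action costs 1, the first solution found is also a minimum-cost one (four actions for this problem).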
Other Toy Examples
- Jug Fill
- Black White Marbles
- Row Boat Problem
- Sliding Blocks
- Triangle Tee
Example: Map Planning
- Initial State: e.g., “At Arad.”
- Successor Function: Set of action-state pairs e.g., S(Arad) = {(Arad->Zerind, Zerind), …}.
- Goal Test: e.g., x = “at Bucharest.”
- Path Cost: Sum of the distances traveled.
Searching for Solutions
- Search through a state space using a search tree.
- The search tree is generated with an initial state and successor functions that define the state space.
Key Definitions
- State: A representation of a physical configuration.
- Node: A data structure in a search tree that includes parent, children, depth, and path cost.
- States do not inherently have children, depth, or path cost; these are properties of nodes in the search tree.
- The EXPAND function creates new nodes, using the SUCCESSOR function to create corresponding states.
Tree Search Algorithm
function TREE-SEARCH( problem, strategy ) returns a solution, or failure
  initialize the search tree using the initial state of problem
  loop do
    if there are no candidates for expansion then return failure
    choose a leaf node for expansion according to strategy
    if the node contains a goal state then return the corresponding solution
    else expand the node and add the resulting nodes to the search tree
Detailed Tree Search Algorithm
function TREE-SEARCH( problem, fringe ) returns a solution, or failure
  fringe <- INSERT( MAKE-NODE( INITIAL-STATE[problem] ), fringe )
  loop do
    if fringe is empty then return failure
    node <- REMOVE-FRONT( fringe )
    if GOAL-TEST[problem] applied to STATE[node] succeeds then return node
    fringe <- INSERT-ALL( EXPAND( node, problem ), fringe )

function EXPAND( node, problem ) returns a set of nodes
  successors <- the empty set
  for each action, result in SUCCESSOR-FN[problem]( STATE[node] ) do
    s <- a new NODE
    PARENT-NODE[s] <- node; ACTION[s] <- action; STATE[s] <- result
    PATH-COST[s] <- PATH-COST[node] + STEP-COST( node, action, s )
    DEPTH[s] <- DEPTH[node] + 1
    add s to successors
  return successors
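A minimal Python sketch of the node structure and EXPAND function above. The successor function is assumed to yield (action, result) pairs, and the step cost defaults to 1 per action (both illustrative assumptions):

```python
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0, depth=0):
        self.state = state          # the physical configuration
        self.parent = parent        # parent node in the search tree
        self.action = action        # action that produced this node
        self.path_cost = path_cost  # cost of the path from the root
        self.depth = depth          # depth in the search tree

def expand(node, successor_fn, step_cost=lambda n, a, result: 1):
    successors = []
    for action, result in successor_fn(node.state):
        s = Node(result, parent=node, action=action,
                 path_cost=node.path_cost + step_cost(node, action, result),
                 depth=node.depth + 1)
        successors.append(s)
    return successors

# Toy usage: expanding a root node with two hypothetical successors.
root = Node('A')
children = expand(root, lambda s: [('go', 'B'), ('go', 'C')])
print([c.state for c in children])  # ['B', 'C']
```

Note that, as the text says, children, depth, and path cost belong to the Node objects, not to the states themselves.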
Uninformed Search
- Uses only the information available in the problem definition (blind searching).
- Types:
- Breadth-first search
- Uniform-cost search
- Depth-first search
- Depth-limited search
- Iterative deepening search
Evaluating Search Strategies
- Completeness: Will a solution always be found if one exists?
- Time Complexity: How long to find the solution (number of nodes searched).
- Space Complexity: How much memory is needed (maximum number of nodes stored).
- Optimality: Will the optimal (least cost) solution be found?
Time and Space Complexity Metrics
- b – maximum branching factor of the search tree
- m – maximum depth of the state space
- d – depth of the least-cost solution
Breadth-First Search
- Expand the shallowest unexpanded node.
- Place all new successors at the end of a FIFO queue.
- Complete: Yes, if b is finite.
- Time: 1 + b + b^2 + … + b^d + b(b^d − 1) = O(b^(d+1)), exponential in d.
- Space: O(b^(d+1)) (keeps every node in memory).
- Optimal: Yes (if cost is 1 per step); not optimal in general.
- The memory requirements are a bigger problem for breadth-first search than the execution time. Exponential-complexity search problems cannot be solved by uninformed methods for any but the smallest instances.
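A breadth-first search sketch on a small made-up graph. New successors go to the back of a FIFO queue, so the shallowest unexpanded node is always expanded first:

```python
from collections import deque

def bfs(start, goal, neighbors):
    frontier = deque([[start]])            # FIFO queue of paths from the root
    visited = {start}
    while frontier:
        path = frontier.popleft()          # shallowest unexpanded node
        if path[-1] == goal:
            return path
        for nxt in neighbors[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])  # successors go to the back
    return None

# Toy graph (an illustrative assumption, not from the notes).
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs('A', 'D', graph))  # ['A', 'B', 'D'] - a shallowest path
```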
Uniform-Cost Search
- Expand the least-cost unexpanded node.
- The fringe is a queue ordered by path cost (a priority queue).
- Equivalent to breadth-first search if all step costs are equal.
- Complete: Yes, if every step cost is at least some positive constant (step cost >= ε > 0).
- Time Complexity: O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution.
- Space Complexity: O(b^⌈C*/ε⌉).
- Optimal: Yes; nodes are expanded in increasing order of path cost.
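A uniform-cost search sketch using a priority queue ordered by path cost g(n); the weighted graph is made up for illustration:

```python
import heapq

def ucs(start, goal, edges):
    frontier = [(0, start, [start])]        # (path_cost, state, path)
    best = {start: 0}                       # cheapest known cost to each state
    while frontier:
        cost, state, path = heapq.heappop(frontier)  # least-cost node first
        if state == goal:
            return cost, path
        for nxt, step in edges.get(state, []):
            new_cost = cost + step
            if new_cost < best.get(nxt, float('inf')):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
    return None

# Toy weighted graph: the direct edge A->C costs 5, the detour via B costs 2.
edges = {'A': [('B', 1), ('C', 5)], 'B': [('C', 1)], 'C': []}
print(ucs('A', 'C', edges))  # (2, ['A', 'B', 'C'])
```

With all step costs equal to 1 this behaves exactly like breadth-first search, matching the equivalence noted above.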
Depth-First Search
- Expand the deepest unexpanded node.
- Unexplored successors are placed on a stack.
- Complete: No (fails in infinite-depth spaces or spaces with loops). If modified to avoid repeated states along the current path, it is complete in finite spaces.
- Time: O(b^m). Poor if m is much larger than d, but if solutions are dense this may be faster than breadth-first search.
- Space: O(bm), i.e., linear space.
- Optimal: No.
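A depth-first search sketch: unexplored successors go on a stack (LIFO), so the deepest unexpanded node is expanded next. The graph is made up for illustration, and a visited set guards against loops:

```python
def dfs(start, goal, neighbors):
    stack = [[start]]                 # stack of paths from the root
    visited = set()
    while stack:
        path = stack.pop()            # deepest unexpanded node
        state = path[-1]
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in neighbors.get(state, []):
            if nxt not in visited:
                stack.append(path + [nxt])
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(dfs('A', 'D', graph))  # ['A', 'C', 'D'] - a path, not necessarily shortest
```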
Depth-Limited Search
- A variation of depth-first search with a depth limit l.
- Nodes at depth l have no successors.
- Same as depth-first search if l=∞.
- Can terminate for failure and cutoff.
- Complete: Yes, if l >= d (the depth limit reaches the shallowest solution).
- Time: O(b^l).
- Space: O(bl).
- Optimal: No (even when l > d, the first solution found need not be the shallowest).
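A recursive depth-limited search sketch, distinguishing the cutoff outcome (limit reached) from outright failure; the graph is made up for illustration:

```python
CUTOFF = 'cutoff'

def dls(state, goal, neighbors, limit, path=None):
    path = (path or []) + [state]
    if state == goal:
        return path
    if limit == 0:                    # nodes at depth l have no successors
        return CUTOFF
    cutoff_occurred = False
    for nxt in neighbors.get(state, []):
        result = dls(nxt, goal, neighbors, limit - 1, path)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result is not None:
            return result
    # cutoff means the goal may lie deeper; None means it does not exist
    return CUTOFF if cutoff_occurred else None

graph = {'A': ['B'], 'B': ['C'], 'C': []}
print(dls('A', 'C', graph, limit=1))  # 'cutoff' (goal lies below the limit)
print(dls('A', 'C', graph, limit=2))  # ['A', 'B', 'C']
```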
Iterative Deepening Search
- Uses depth-first search iteratively.
- Finds the best depth limit by gradually increasing it: 0, 1, 2, … until a goal is found.
- Complete: Yes.
- Time: O(b^d).
- Space: O(bd), linear like depth-first search.
- Optimal: Yes, if step cost = 1. Can be modified to explore the uniform cost tree.
- Can be faster than breadth-first search (BFS) even though iterative deepening search (IDS) regenerates shallow nodes repeatedly: BFS generates nodes down to level d+1, while IDS only generates nodes down to level d.
- In general, iterative deepening search is the preferred uninformed search method when there is a large search space and the depth of the solution is not known.
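An iterative deepening sketch built on a simple depth-limited DFS, trying limits 0, 1, 2, … until the goal is found; the graph is made up for illustration:

```python
def depth_limited(state, goal, neighbors, limit, path=None):
    path = (path or []) + [state]
    if state == goal:
        return path
    if limit == 0:                        # depth limit reached: stop this branch
        return None
    for nxt in neighbors.get(state, []):
        found = depth_limited(nxt, goal, neighbors, limit - 1, path)
        if found:
            return found
    return None

def ids(start, goal, neighbors, max_depth=50):
    for limit in range(max_depth + 1):    # gradually increase the depth limit
        result = depth_limited(start, goal, neighbors, limit)
        if result:
            return result                 # first hit is at the shallowest depth
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(ids('A', 'D', graph))  # ['A', 'B', 'D'] (found at depth limit 2)
```

Because each iteration is a depth-first pass, the memory use stays linear while the result matches breadth-first search's shallowest-solution guarantee for unit step costs.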
Avoiding Repeated States
- Expanding states that have already been encountered and expanded before wastes time.
- Failure to detect repeated states can turn a linear problem into an exponential one.
- Repeated states are sometimes unavoidable, especially in problems where the actions are reversible, e.g.:
- Route finding
- Sliding blocks puzzles
Graph Search Algorithm
function GRAPH-SEARCH( problem, fringe ) returns a solution, or failure
  closed <- an empty set
  fringe <- INSERT( MAKE-NODE( INITIAL-STATE[problem] ), fringe )
  loop do
    if fringe is empty then return failure
    node <- REMOVE-FRONT( fringe )
    if GOAL-TEST[problem]( STATE[node] ) then return node
    if STATE[node] is not in closed then
      add STATE[node] to closed
      fringe <- INSERT-ALL( EXPAND( node, problem ), fringe )
  end
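A Python sketch of GRAPH-SEARCH: identical to tree search except for the closed set, which ensures each state is expanded at most once. The fringe here is a FIFO queue (breadth-first order), an arbitrary choice for illustration, and the looping graph is made up:

```python
from collections import deque

def graph_search(initial, goal_test, successor_fn):
    closed = set()                              # states already expanded
    fringe = deque([(initial, [initial])])      # (state, path from the root)
    while fringe:
        state, path = fringe.popleft()          # REMOVE-FRONT
        if goal_test(state):
            return path
        if state not in closed:                 # expand each state at most once
            closed.add(state)
            for nxt in successor_fn(state):     # EXPAND
                fringe.append((nxt, path + [nxt]))
    return None                                 # fringe empty: failure

# Toy graph with reversible actions (loops), where tree search would not terminate.
graph = {'A': ['B', 'A'], 'B': ['A', 'C'], 'C': []}
print(graph_search('A', lambda s: s == 'C', lambda s: graph[s]))  # ['A', 'B', 'C']
```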