Lecture Notes on Problem Solving, Search Strategies, and Algorithms in AI
Introduction to Problem Solving
- Problem solving is a central goal of artificial intelligence: identifying solutions through computational methods.
- The process consists of providing actions or operations that transition from initial states to goal states, adhering to set constraints or conditions.
- Solutions are typically found systematically by applying a search algorithm.
Steps for Problem Solving (Russell & Norvig, 2021)
- Formulate the Problem: Define the initial state, goal states, and rules governing transitions.
- Generate Solution Alternatives: Develop possible solutions by exploring various paths.
- Evaluate Solution Alternatives: Assess the generated solutions based on criteria like cost or feasibility.
- Choose a Solution: Select the most suitable solution from the evaluated alternatives.
- Implement the Solution: Execute the chosen solution.
- Test and Evaluate the Solution: Verify if the solution leads to the desired goal and make adjustments if necessary.
Example Problem
- Scenario: Travel from Arad to Bucharest.
- Goal: Be in Bucharest.
- Problem Formulation:
- States: Various cities (e.g., Arad, Sibiu, Fagaras).
- Actions: Driving between cities.
- Possible Solution: Sequence of cities to travel: Arad → Sibiu → Fagaras → Bucharest.
Problem Types
- Deterministic, Fully Observable: Agent knows its state; solution is a known sequence.
- Non-Observable (Sensorless): Agent lacks information about its current state; the solution must be a plan that works regardless of which state the agent is actually in (a conformant plan).
- Nondeterministic/Partially Observable: Knowledge must be derived as new information becomes available; involves interleaving search and execution.
- Unknown State Space: The problem requires exploration to obtain information about states.
Problem Formulation
- Defines a state space which includes:
- Initial States: Where the problem starts.
- Goal States: Desired outcomes.
- Rules/Behaviors: Define how states transition.
Single State Problem Formulation
A problem is characterized by five components:
- Initial State (s): Starting point of the problem (e.g., Arad).
- Actions (ACTIONS(s)): Available transitions from a state.
- Transition Model (RESULT(s, a)): How actions modify the current state.
- Goal Test: Checks if a state is a goal (can be explicit or implicit).
- Path Cost: Numeric cost associated with paths (should be non-negative).
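The five components above can be sketched as a small Python class. The road-map fragment below is an assumed subset of the Romania map used in the running example; the class and method names mirror the ACTIONS/RESULT notation but are otherwise illustrative.

```python
# Minimal sketch of the five-component problem formulation,
# using a fragment of the Romania road map (assumed data).
ROAD_MAP = {
    "Arad": {"Sibiu": 140, "Zerind": 75, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Zerind": {"Arad": 75},
    "Timisoara": {"Arad": 118},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}

class RouteProblem:
    def __init__(self, initial, goal):
        self.initial = initial            # initial state s
        self.goal = goal

    def actions(self, s):                 # ACTIONS(s): drivable neighbours
        return list(ROAD_MAP[s])

    def result(self, s, a):               # RESULT(s, a): driving to a city lands you there
        return a

    def goal_test(self, s):               # explicit goal test
        return s == self.goal

    def step_cost(self, s, a):            # non-negative contribution to path cost
        return ROAD_MAP[s][a]

problem = RouteProblem("Arad", "Bucharest")
print(problem.actions("Arad"))            # the cities reachable from Arad
```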
State Space
- Comprises states, actions, and transition models, with real-world complexities handled through abstraction.
- Abstract State and Abstract Action: Condensed representations that encapsulate multiple real-world instances (e.g., multiple routes between two cities).
State Space Search Strategies
Data Driven Search
- Forward Chaining: Begins with known facts and progresses towards a goal.
Goal Driven Search
- Backward Chaining: Starts from the goal and works backward to determine actions that can fulfill that goal.
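Data-driven search can be sketched as a tiny forward-chaining loop: fire any rule whose premises are all known facts until nothing new is derived. The rules and facts below are illustrative, not from the notes.

```python
# Hedged sketch of forward chaining: repeatedly fire rules whose
# premises are all established facts, adding their conclusions.
rules = [
    ({"battery_ok", "fuel_ok"}, "engine_starts"),  # illustrative rules
    ({"engine_starts"}, "car_moves"),
]
facts = {"battery_ok", "fuel_ok"}                  # initial known facts

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)                  # data drives us toward the goal
            changed = True

print("car_moves" in facts)  # True: the goal follows from the data
```

Backward chaining would run the same rules in reverse, starting from the query "car_moves" and recursively checking whether its premises can be established.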
Tree-Based Search Algorithms
- Simulate the exploration of the state space by branching to generate successors from already explored states. Key components include the state, parent node, action taken, and path cost.
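The four node components named above translate directly into a small data structure; this is a generic sketch, not tied to any particular search algorithm.

```python
# Sketch of a search-tree node: state, parent node, the action that
# generated it, and the accumulated path cost.
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0.0):
        self.state = state
        self.parent = parent
        self.action = action
        self.path_cost = path_cost

    def path(self):
        """Reconstruct the state sequence from the root to this node."""
        node, states = self, []
        while node is not None:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))

root = Node("Arad")
child = Node("Sibiu", parent=root, action="drive", path_cost=140)
print(child.path())  # ['Arad', 'Sibiu']
```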
Evaluating Search Strategies
- Evaluated based on:
- Completeness: Is it guaranteed to find a solution whenever one exists?
- Time Complexity: Duration to reach a solution.
- Space Complexity: Memory usage during search.
- Optimality: Does it find a least-cost solution?
- Time and space complexity are measured in terms of the branching factor (b) and the depth of the shallowest solution (d).
Uninformed Search Strategies
- Strategies operate without additional problem-specific knowledge:
- Breadth First Search (BFS): Expands the shallowest unexpanded node first.
- Depth First Search (DFS): Expands the deepest unexpanded node.
- Depth Limited Search: Similar to DFS but with a depth limit.
- Iterative Deepening Search: Combines BFS and DFS by gradually increasing depth limits.
- Uniform Cost Search: Expands the least cost unexpanded node and is equivalent to BFS when step costs are uniform.
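BFS, the first strategy in the list, can be sketched with a FIFO queue of paths. The graph below reuses an assumed fragment of the Romania map; BFS counts driving legs and ignores road distances.

```python
from collections import deque

# Sketch of breadth-first search: always expand the shallowest
# unexpanded node first (FIFO frontier). Graph data is illustrative.
GRAPH = {
    "Arad": ["Sibiu", "Zerind", "Timisoara"],
    "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Zerind": ["Arad"],
    "Timisoara": ["Arad"],
    "Bucharest": [],
}

def bfs(start, goal):
    frontier = deque([[start]])          # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()        # shallowest path comes out first
        if path[-1] == goal:
            return path
        for nxt in GRAPH[path[-1]]:
            if nxt not in explored:
                explored.add(nxt)
                frontier.append(path + [nxt])
    return None

print(bfs("Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'] — fewest legs, not shortest distance
```

Swapping the `deque` for a stack (LIFO) turns this into depth-first search; adding a depth cutoff gives depth-limited search.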
Advantages of Algorithms
- BFS is complete: it is guaranteed to find a solution if one exists, and with uniform step costs that solution is the shallowest one.
- DFS uses little memory because it stores only the current path and its unexpanded siblings, but in infinite or cyclic state spaces it can fail to terminate and thus miss solutions.
Informed Search Strategies
- Utilizes heuristics to efficiently direct the search towards promising states. Strategies include:
- Best First Search: Expands the most promising node based on an evaluation function.
- Greedy Best First: Always expands the node that appears closest to the goal according to the heuristic h(n), ignoring the cost already incurred.
- A*: Considers both step cost to reach a node (g(n)) and heuristic cost to the goal (h(n)), optimizing for the best total cost path (f(n) = g(n) + h(n)).
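A* can be sketched with a priority queue ordered by f(n) = g(n) + h(n). The edge costs and straight-line-distance heuristic below follow the figures commonly used with the Russell & Norvig Romania map; treat them as assumed data.

```python
import heapq

# Sketch of A* search: the frontier is a priority queue ordered by
# f(n) = g(n) + h(n). Map data and heuristic values are assumed.
EDGES = {
    "Arad": {"Sibiu": 140},
    "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Bucharest": 211},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Pitesti": {"Bucharest": 101},
    "Bucharest": {},
}
H = {"Arad": 366, "Sibiu": 253, "Fagaras": 176,     # straight-line
     "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def a_star(start, goal):
    frontier = [(H[start], 0, start, [start])]       # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)  # lowest f first
        if state == goal:
            return path, g
        for nxt, cost in EDGES[state].items():
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + H[nxt], g2, nxt, path + [nxt]))
    return None, float("inf")

path, cost = a_star("Arad", "Bucharest")
print(path, cost)
# ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'] 418
```

Note the contrast with greedy best-first: ordering by h(n) alone would commit to the Fagaras route (h = 176 < 193 at Sibiu), which costs 450, while A* finds the cheaper 418-cost path through Pitesti.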
Local and Global Search
- Local Search: Explores nearby solutions iteratively looking for improvements.
- Global Search: Looks for the best possible solution through systematic exploration of the solution space.
Hill Climbing and Simulated Annealing
- Hill Climbing: Moves towards better neighboring solutions iteratively.
- Simulated Annealing: Starts with a high "temperature" at which worse neighboring solutions are accepted with high probability, enabling exploration; as the temperature cools, the acceptance probability drops and the search settles into refinement.
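The temperature-controlled acceptance rule can be sketched on a toy minimisation problem; the objective f(x) = (x - 3)^2, the cooling schedule, and all parameter values below are illustrative assumptions.

```python
import math
import random

# Sketch of simulated annealing minimising f(x) = (x - 3)^2.
# Objective, neighbour move, and cooling schedule are illustrative.
def f(x):
    return (x - 3) ** 2

def simulated_annealing(x=0.0, temp=10.0, cooling=0.95, steps=2000):
    random.seed(0)                                 # reproducible run
    for _ in range(steps):
        candidate = x + random.uniform(-1, 1)      # random nearby neighbour
        delta = f(candidate) - f(x)
        # Always accept improvements; accept worse moves with a
        # probability exp(-delta/temp) that shrinks as temp cools.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        temp *= cooling                            # geometric cooling
    return x

best = simulated_annealing()
print(round(best, 2))  # lands close to the optimum x = 3
```

With `temp` effectively zero, the acceptance test degenerates to pure hill climbing, which is exactly the relationship the bullet points above describe.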
Genetic Algorithms
- Evolves a population of solutions through selection, crossover, and mutation processes, generating better solutions over iterations.
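The selection / crossover / mutation loop can be sketched on the classic OneMax toy problem (evolve a bit-string toward all ones); population size, generation count, and mutation rate below are illustrative choices, not prescribed values.

```python
import random

# Sketch of a genetic algorithm on OneMax: fitness is the number of
# one-bits, so the optimum is the all-ones string. Parameters assumed.
random.seed(1)
LENGTH, POP, GENS = 20, 30, 60

def fitness(bits):
    return sum(bits)

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    def pick():                                    # selection: 2-way tournament
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b
    next_pop = []
    for _ in range(POP):
        p1, p2 = pick(), pick()
        cut = random.randrange(1, LENGTH)          # one-point crossover
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.1:                  # mutation: flip one bit
            i = random.randrange(LENGTH)
            child[i] = 1 - child[i]
        next_pop.append(child)
    pop = next_pop                                 # next generation

best = max(pop, key=fitness)
print(fitness(best))  # typically at or near the optimum LENGTH
```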
Tabu Search
- Implements a tabu list to avoid revisiting solutions, improving search efficiency.
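The tabu-list idea can be sketched on a one-dimensional toy problem: at each step the search moves to the best neighbour that is not on the tabu list, so it cannot immediately cycle back. The objective f(x) = (x - 7)^2, the neighbourhood, and the tenure value are illustrative assumptions.

```python
from collections import deque

# Sketch of tabu search minimising f(x) = (x - 7)^2 over the integers,
# with neighbours x-1 and x+1. All values are illustrative.
def f(x):
    return (x - 7) ** 2

def tabu_search(x=0, iters=50, tenure=5):
    best = x
    tabu = deque(maxlen=tenure)            # fixed-length list of recent states
    for _ in range(iters):
        neighbours = [n for n in (x - 1, x + 1) if n not in tabu]
        if not neighbours:
            break
        x = min(neighbours, key=f)         # best admissible neighbour,
        tabu.append(x)                     # even if worse than the current x
        if f(x) < f(best):
            best = x                       # remember the best state seen
    return best

print(tabu_search())  # 7
```

Because the move to the best admissible neighbour is made even when it worsens the objective, the search keeps moving after reaching x = 7 instead of stalling, while the separately tracked `best` preserves the optimum.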
Practice Exercise
- Task: Use Greedy Best First and A* Search strategies to navigate from Oradea to Bucharest, illustrated with a simplified road map showing distances between cities.