Source: Artificial Intelligence: A Modern Approach, 4th Edition, Stuart Russell and Peter Norvig (Pearson), pp. 76–117

Chapter 3: Solving Problems by Searching

Overview of Problem Solving Agents

  • Problem-Solving Agent: An agent that plans a sequence of actions to achieve a goal when the immediate action is not obvious.

  • Search: The computational process that the agent undertakes to find the appropriate action sequence.

  • Atomic Representations: Problem-solving agents consider states of the world as wholes with no visible internal structure.

  • Planning Agents: Use factored or structured representations of states (discussed in Chapters 7 and 11).

  • Search Algorithms: Several search algorithms are introduced, focusing on simple environments. They fall into two families:

    • Informed Algorithms: Have an estimate of how far a given state is from the goal.

    • Uninformed Algorithms: Have no such estimate of goal proximity.

Problem-Solving Process

  1. Goal Formulation:

    • The agent sets a goal, such as reaching Bucharest.

    • A goal organizes behavior by limiting the objectives, and hence the actions, the agent needs to consider.

  2. Problem Formulation:

    • The agent describes states and actions to reach the goal, creating an abstract model.

  3. Search:

    • Simulate sequences of actions in the model until one reaches the goal. Such a sequence is called a solution (e.g., Arad → Sibiu → Fagaras → Bucharest).

  4. Execution:

    • The agent executes actions from the solution consecutively.

    • In a fully observable, deterministic, known environment, the solution is a fixed sequence of actions, so the agent can ignore its percepts while executing them (an open-loop system).
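The four phases above can be sketched end to end in Python. The road map below is an assumed, simplified subset of the book's Romania map (city names are from the text, the adjacency list is abridged), and `search` is a minimal breadth-first stand-in for the search phase:

```python
from collections import deque

# Hypothetical simplified road map (a subset of the book's Romania map);
# the adjacency structure here is an assumption for illustration.
ROAD_MAP = {
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Sibiu": ["Arad", "Oradea", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Bucharest": ["Fagaras", "Pitesti"],
    "Zerind": ["Arad", "Oradea"],
    "Oradea": ["Zerind", "Sibiu"],
    "Timisoara": ["Arad"],
}

def search(start, goal):
    """Search phase: simulate action sequences in the model (breadth-first)."""
    frontier = deque([[start]])   # queue of partial paths
    reached = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:      # goal test
            return path           # a solution
        for city in ROAD_MAP[path[-1]]:
            if city not in reached:
                reached.add(city)
                frontier.append(path + [city])
    return None

# Goal formulation: reach Bucharest.  Problem formulation: the map above.
solution = search("Arad", "Bucharest")
print(solution)  # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']

# Execution phase: perform the solution's actions one by one, ignoring percepts.
for city in solution:
    pass  # drive to `city`
```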

Search Problems and Solutions

  • Problem Components:

    • State Space: Possible states the environment could be in.

    • Initial State: The starting point for the agent (e.g., Arad).

    • Goal States: The set of states the agent aims to reach (e.g., Bucharest).

    • Actions: The actions available to the agent in a given state.

    • Transition Model: Describes the state that results from performing a given action in a given state.

    • Action Cost Function: Gives the numeric cost of applying an action in one state to reach another.

  • Optimal Solution: A solution with the lowest path cost among all solutions (assuming all action costs are positive).
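The components above map naturally onto an abstract class. The following is a minimal sketch, not the book's code: the method names (`is_goal`, `actions`, `result`, `action_cost`) and the tiny `RouteProblem` subclass are illustrative assumptions, though the three road distances match the book's Romania map.

```python
class Problem:
    """Sketch of a search problem's six components (names are assumptions)."""
    def __init__(self, initial, goals):
        self.initial = initial      # initial state
        self.goals = set(goals)     # goal states

    def is_goal(self, state):
        return state in self.goals

    def actions(self, state):
        """Actions applicable in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state reached by doing `action` in `state`."""
        raise NotImplementedError

    def action_cost(self, state, action, new_state):
        """Cost of doing `action` in `state` to reach `new_state`."""
        return 1

class RouteProblem(Problem):
    """Illustrative route-finding problem over a small weighted road map."""
    ROADS = {("Arad", "Sibiu"): 140, ("Sibiu", "Fagaras"): 99,
             ("Fagaras", "Bucharest"): 211}

    def actions(self, state):
        return [b for (a, b) in self.ROADS if a == state]

    def result(self, state, action):
        return action  # here, an action is simply the city driven to

    def action_cost(self, state, action, new_state):
        return self.ROADS[(state, new_state)]

p = RouteProblem("Arad", ["Bucharest"])
print(p.actions("Arad"))                        # ['Sibiu']
print(p.action_cost("Arad", "Sibiu", "Sibiu"))  # 140
```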

Problem Formulation and Abstraction

  • Abstraction: The process of ignoring irrelevant details to simplify the problem representation (e.g., focusing only on city connections).

  • A good abstraction balances detail with ease of solution execution.

Example Problems

  • Standardized Problems: Ideal for testing various search algorithms (e.g., Grid World).

    • Attributes: Defined by states, actions, transition models, and costs.

  • Real-World Problems: Involve complex environments like airline route planning, where states include details such as location and time.

Search Algorithms

  • Best-First Search: A general framework that expands the node with the minimum value of an evaluation function f(n).

    • Breadth-First Search: Explores nodes level by level and finds a solution with the fewest actions (cost-optimal when all actions have the same cost).

    • Uniform-Cost Search: Expands nodes based on cumulative cost, prioritizing least cost first.

    • Depth-First Search: Expands the deepest node first; not cost-optimal, but requires far less memory.

    • Bidirectional Search: Searches from both the start and goal states simultaneously, potentially expanding fewer nodes.
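These strategies can all be viewed as instances of best-first search with different evaluation functions. The sketch below (function names are assumptions) implements the general pattern with a priority queue; passing f(g, s) = g, the cumulative path cost alone, yields uniform-cost search. The road distances follow the book's Romania map.

```python
import heapq

def best_first(start, goal, neighbors, f):
    """Generic best-first search; `f(path_cost, state)` orders the frontier."""
    frontier = [(f(0, start), 0, start, [start])]
    best = {start: 0}                     # cheapest known cost per state
    while frontier:
        _, cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path
        for nxt, step in neighbors(state):
            new_cost = cost + step
            if nxt not in best or new_cost < best[nxt]:
                best[nxt] = new_cost
                heapq.heappush(frontier,
                               (f(new_cost, nxt), new_cost, nxt, path + [nxt]))
    return None

# Road distances from the book's Romania map (subset).
ROADS = {"Arad": [("Sibiu", 140), ("Timisoara", 118)],
         "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
         "Rimnicu Vilcea": [("Pitesti", 97)],
         "Pitesti": [("Bucharest", 101)],
         "Fagaras": [("Bucharest", 211)],
         "Timisoara": [], "Bucharest": []}

cost, path = best_first("Arad", "Bucharest", lambda s: ROADS[s],
                        f=lambda g, s: g)   # f = g gives uniform-cost search
print(cost, path)  # 418 ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest']
```

Breadth-first search corresponds to a FIFO queue rather than a priority queue, and depth-first search to a LIFO stack; the priority-queue form above is the most general.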

Heuristic Functions

  1. Definition: A heuristic h(n) estimates the cost of the cheapest path from a given state to a goal.

  2. Greedy Best-First Search: Expands the node that appears closest to the goal (minimum h(n)); often fast, but not guaranteed optimal.

  3. A*: A commonly used algorithm that orders nodes by f(n) = g(n) + h(n), the path cost so far plus the estimated cost to the goal.

  4. Admissible Heuristic: Never overestimates the true cost to reach a goal; with an admissible heuristic, A* is cost-optimal.

  5. Pattern Database Heuristics: Store optimal costs for subproblem configurations for more efficient searches.

  6. Landmark Heuristics: Precompute distances to landmarks to improve heuristic estimations.
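A small worked example of points 3 and 4, on assumed grid data: A* on a 4-connected grid where every move costs 1. Manhattan distance can never overestimate the remaining cost on such a grid, so it is admissible and A* returns a shortest path.

```python
import heapq

# Assumed example grid; '#' marks a wall.
GRID = ["....",
        ".##.",
        "...."]

def astar(start, goal):
    """A* with f(n) = g(n) + h(n); h is the Manhattan distance (admissible)."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]     # (f, g, state)
    g = {start: 0}
    while frontier:
        _, cost, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return cost
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
                    and GRID[nr][nc] != "#"):
                if (nr, nc) not in g or cost + 1 < g[(nr, nc)]:
                    g[(nr, nc)] = cost + 1
                    heapq.heappush(frontier,
                                   (cost + 1 + h((nr, nc)), cost + 1, (nr, nc)))
    return None

print(astar((0, 0), (2, 3)))  # 5: shortest path length around the wall
```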

Learning Search Optimization

  • Metalevel Learning: Techniques that allow agents to adjust their searching strategies based on previous searches, improving efficiency.

  • Learning Heuristics: Uses experience from previously solved problems to learn new heuristics, which may be approximate.