Artificial Intelligence: A Modern Approach Chapter 3: Solving Problems By Searching

62 Terms

1
New cards

Atomic Representation

States of the world are considered as wholes, with no internal structure visible to the problem-solving algorithm.

2
New cards

Problem Solving Agent

An agent that uses atomic representation to solve problems.

3
New cards

Planning agent

A goal-based agent that uses factored or structured representations.

4
New cards

Uninformed Search

Algorithms that are given no information about the problem other than its definition.

5
New cards

Informed Search

Algorithms that are given some guidance on where to look for solutions, or that can judge whether one state is more promising than another. They use problem-specific knowledge beyond the definition of the problem itself.

6
New cards

Goal formulation

Based on the current situation and the agent's performance measure. The first step of problem solving.

7
New cards

Goal

A desired outcome that is used to limit what an agent can and will do.

8
New cards

Problem Formulation

The process of deciding what actions and states to consider, given a goal.

9
New cards

Search

The process of looking for a sequence of actions that reaches the goal.

10
New cards

Solution

The action sequence output by a search algorithm that takes a problem as input: a path through the state space from the initial state to a goal state, connected by actions.

11
New cards

Execution

Performing the action sequence recommended by a solution.

12
New cards

Open-loop

Ignoring the percepts received from an environment because the agent already knows what will happen based on the solution.

13
New cards

Problem

Consists of five components:

1. Initial State

2. Actions

3. Transition Model

4. Goal Test

5. Path Cost

14
New cards

Initial State

The state the agent starts in.

15
New cards

Actions

Possible moves that an agent can make in its current state. An action is applicable in a state if it can be executed there.

16
New cards

Transition Model

A description of what each action does.

17
New cards

Successor

The result states reachable from a given state by performing an action.

18
New cards

State Space

The initial state, actions, and transition model together. Represented by a graph. A path through the state space is a sequence of states connected by a sequence of actions.

19
New cards

Goal Test

Determines if a given state is a goal state.

20
New cards

Path cost

A function that assigns a numeric cost to each path. Step cost is the cost of moving from a state by an action to its successor.

21
New cards

Optimal Solution

The path with the lowest path cost among all solutions.

22
New cards

Abstraction

Removing details from a representation.

23
New cards

Search Tree

A graph with the initial state as the root, actions as branches, and successor states as nodes.

24
New cards

Expanding

Applying the legal actions to the current state.

25
New cards

Generating

Creating a new set of states by performing an action on a given node.

26
New cards

Parent Node

The predecessor(s) of a given node

27
New cards

Child Node

The successor(s) of a given node

28
New cards

Leaf Node

A node with no children

29
New cards

Frontier

The set of all leaf nodes available for expansion at a given point. Also known as the open list.

30
New cards

Search Strategy

The search algorithm's strategy for choosing which node to expand next.

31
New cards

Repeated State

A state that is exactly the same as a state previously expanded; reached by a loopy path.

32
New cards

Redundant Paths

More than one way to get from one state to another.

33
New cards

Explored Set

The set of all states that have been expanded. AKA closed list.

34
New cards

Completeness

Is the algorithm guaranteed to find a solution when there is one?

35
New cards

Optimality

Does the strategy find the optimal solution?

36
New cards

Time Complexity

How long does it take to find a solution?

37
New cards

Space Complexity

How much memory is needed to perform the search?

38
New cards

Branching Factor

The maximum number of successors of any node.

39
New cards

Depth

The number of steps along the path from the root to a given node.

40
New cards

Total Cost

The path cost plus the search cost; the search cost is the time complexity, but it can also include the space complexity.

41
New cards

Breadth-First Search

A strategy in which the root node is expanded first, then all the successors of the root node, then their successors, and so on. Complete if the goal is at a finite depth. Optimal if the path cost is a nondecreasing function of depth. Not practical for large problems because of its time and space requirements.
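
A minimal sketch of the idea, using an assumed toy state space (the graph, states, and function name are illustrative, not from the text):

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Return a path from start to goal, expanding shallowest nodes first."""
    if start == goal:
        return [start]
    frontier = deque([[start]])          # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        for successor in graph[path[-1]]:
            if successor in explored:
                continue
            if successor == goal:        # goal test applied at generation
                return path + [successor]
            explored.add(successor)
            frontier.append(path + [successor])
    return None                          # no solution exists

# Assumed toy unweighted state space:
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}
print(breadth_first_search(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```

The FIFO queue is what makes the search breadth-first: all paths of depth d are dequeued before any path of depth d+1.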

42
New cards

Uniform-cost search

Expands the node with the lowest path cost, by storing the frontier as a priority queue. The goal test is applied when a node is expanded, not when it is generated, so nodes are expanded in order of optimal path cost. Complete as long as every step cost is greater than some small positive constant.
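
A sketch of the priority-queue version, over an assumed toy weighted graph (all names here are illustrative); note the goal test happens on expansion, as the definition requires:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the frontier node with the lowest path cost g(n)."""
    frontier = [(0, start, [start])]     # priority queue keyed by path cost
    explored = {}
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:                # goal test on expansion, not generation
            return cost, path
        if state in explored and explored[state] <= cost:
            continue                     # a cheaper path to state was already expanded
        explored[state] = cost
        for successor, step in graph[state].items():
            heapq.heappush(frontier, (cost + step, successor, path + [successor]))
    return None

# Assumed toy graph: the direct A->C edge is pricier than going via B.
graph = {"A": {"B": 1, "C": 5}, "B": {"C": 1}, "C": {}}
print(uniform_cost_search(graph, "A", "C"))  # (2, ['A', 'B', 'C'])
```

Testing the goal only at expansion is what guarantees the returned path is optimal: a cheap-looking generated node may still be beaten by a path waiting in the queue.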

43
New cards

Depth-first search

Expands the deepest node first. Implemented using a LIFO queue (a stack). Not optimal, but has low space complexity.
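
The LIFO behaviour can be sketched as follows (the graph and names are assumed for illustration; the in-path check avoids loopy paths):

```python
def depth_first_search(graph, start, goal):
    """Expand the deepest node first using a LIFO stack of paths."""
    frontier = [[start]]                 # stack of paths
    while frontier:
        path = frontier.pop()            # LIFO: the last path pushed is expanded first
        state = path[-1]
        if state == goal:
            return path
        for successor in graph[state]:
            if successor not in path:    # avoid loopy paths
                frontier.append(path + [successor])
    return None

# Assumed toy state space:
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(depth_first_search(graph, "A", "D"))  # ['A', 'C', 'D']
```

Only the current path and its siblings' stubs live on the stack, which is why the space cost stays low compared to breadth-first search.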

44
New cards

Backtracking Search

A depth-first search that only generates one successor at a time.

45
New cards

Depth-limited search

A depth-first search with a predetermined depth limit. Incomplete if the limit is less than the depth of the shallowest goal; not optimal if the limit is greater than that depth.

46
New cards

Diameter

The maximum number of steps needed to reach any state from any other state.

47
New cards

Iterative Deepening Search

A depth limited search that gradually increases the limit. Generates states multiple times.
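
A sketch of the limit-increasing loop on top of a recursive depth-limited search (graph and names assumed for illustration):

```python
def depth_limited_search(graph, node, goal, limit):
    """Recursive depth-first search cut off at the given depth limit."""
    if node == goal:
        return [node]
    if limit == 0:                       # cutoff reached
        return None
    for successor in graph[node]:
        result = depth_limited_search(graph, successor, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iterative_deepening_search(graph, start, goal, max_depth=20):
    """Run depth-limited search with limits 0, 1, 2, ... until a goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit)
        if result is not None:
            return result                # found at the shallowest possible limit
    return None

# Assumed toy state space:
graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(iterative_deepening_search(graph, "A", "D"))  # ['A', 'B', 'D']
```

Shallow states are regenerated on every iteration, but because most nodes in a tree sit at the deepest level, the repeated work adds only a modest constant factor.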

48
New cards

Bidirectional Search

Two simultaneous searches: one forward from the initial state and one backward from the goal. The goal test is replaced with a test for frontier intersection. Requires a method of computing predecessors.

49
New cards

Best-first search

Expands nodes based on an evaluation function, using a priority queue ordered by estimated cost.

50
New cards

Heuristic Function

The estimated cost of the cheapest path from the state at a node to a goal state.

51
New cards

Greedy Best First Search

Expands the node that appears closest to the goal according to the heuristic. Incomplete even in a finite state space.

52
New cards

A* Search

Evaluates nodes by combining g(n), the cost to reach the node, and h(n), the estimated cost from the node to the goal: f(n) = g(n) + h(n) is the estimated cost of the cheapest solution through n. Complete and optimal: tree search is optimal if the heuristic is admissible; graph search is optimal if the heuristic is consistent.
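
A sketch of the f(n) = g(n) + h(n) ordering, on an assumed toy weighted graph with an admissible heuristic table (all names illustrative):

```python
import heapq

def a_star_search(graph, start, goal, h):
    """Order the frontier by f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}                          # cheapest known cost to each state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        for successor, step in graph[state].items():
            g2 = g + step
            if g2 < best_g.get(successor, float("inf")):
                best_g[successor] = g2
                heapq.heappush(frontier,
                               (g2 + h(successor), g2, successor, path + [successor]))
    return None

# Assumed toy graph and heuristic (h never overestimates the true cost to D):
graph = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
h = {"A": 3, "B": 2, "C": 1, "D": 0}
print(a_star_search(graph, "A", "D", h.get))  # (3, ['A', 'B', 'C', 'D'])
```

With h(n) = 0 everywhere this reduces to uniform-cost search; the heuristic only changes which optimal-cost node gets expanded first.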

53
New cards

Admissible Heuristic

A heuristic that never overestimates the cost to reach the goal.

54
New cards

Consistency

AKA monotonicity. The estimated cost of reaching the goal from a node n is no greater than the step cost of getting to a successor n' plus the estimated cost of reaching the goal from n': h(n) ≤ c(n, a, n') + h(n').
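
The inequality is easy to check mechanically over every edge; a small sketch, with the graph and heuristic tables assumed for illustration:

```python
def is_consistent(graph, h):
    """Check h(n) <= c(n, n') + h(n') for every edge (n, n')."""
    return all(h[n] <= step + h[s]
               for n, successors in graph.items()
               for s, step in successors.items())

# Assumed toy weighted graph:
graph = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
print(is_consistent(graph, {"A": 3, "B": 2, "C": 1, "D": 0}))  # True
print(is_consistent(graph, {"A": 5, "B": 2, "C": 1, "D": 0}))  # False: 5 > 1 + 2 on A->B
```

Consistency implies admissibility, which is why it is the stronger condition needed for A* graph search to be optimal.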

55
New cards

Pruned

Eliminating possibilities from consideration without having to examine them

56
New cards

Iterative Deepening A*

Uses the f-cost as the cutoff limit. Each iteration's limit is the smallest f-cost that exceeded the previous iteration's limit.

57
New cards

Recursive best-first search

Uses an f-limit variable to keep track of the best alternative path available from any ancestor of the current node. As the recursion unwinds, it replaces each node's f-value with the best f-value of its children.

58
New cards

SMA*

Drops the node with the highest f-value when memory is full, backing up the forgotten node's f-value to its parent.

59
New cards

Relaxed Problem

A problem with fewer restrictions on the actions.

60
New cards

Pattern Database

Stores the exact solution costs for all possible instances of a subproblem.

61
New cards

disjoint pattern database

The combination of two or more subproblem pattern databases whose costs can be added together, because each database counts only the moves involving its own subproblem's pieces.

62
New cards

Subproblem

A problem definition with a more general goal than the original problem's; any solution to the original problem also solves the subproblem.
