420 midterm (if you even care)


god save me from this suffering called TAKING CLASSES AT A&M


26 Terms

1

Uninformed Search

Weak (blind) search, because it doesn't know how far it is from the goal (like BFS, DFS, Iterative Deepening, Uniform Cost).

  • DFS: Expand children of children before siblings.

  • BFS: Expand children of children after siblings.

  • Frontier: Nodes that have been generated but not yet expanded. (BFS: stored in a queue (FIFO); DFS: stored in a stack (LIFO).)
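
In code, the only difference between the two is which end of the frontier you pop from. A minimal sketch (the toy graph and `successors` function are invented for illustration):

```python
from collections import deque

def graph_search(start, goal, successors, use_stack=False):
    """Generic uninformed search: BFS with a FIFO queue, DFS with a LIFO stack."""
    frontier = deque([[start]])          # frontier holds whole paths
    explored = set()
    while frontier:
        # DFS pops the newest path (stack), BFS pops the oldest (queue)
        path = frontier.pop() if use_stack else frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for child in successors(node):
            if child not in explored:
                frontier.append(path + [child])
    return None

# toy graph, invented for this example
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
succ = lambda n: graph[n]
print(graph_search("A", "D", succ))                   # ['A', 'B', 'D']
print(graph_search("A", "D", succ, use_stack=True))   # ['A', 'C', 'D']
```

Note how BFS finds the goal via B (siblings first), while DFS dives through C (children of children first).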

2

Computational Complexity of BFS

Time complexity: O(b^(d+1))

Space complexity: O(b^(d+1))

Completeness: Yes, it will find goal if it exists

Optimality: Yes, it will find the goal with minimum path cost, but only if all operators have the same cost

3

Computational Complexity of DFS

Time complexity: O(b^m), where m is the maximum depth

Space complexity: O(b·m) (linear: just the current path plus siblings)

Completeness: No, it can get stuck infinitely going down one branch with infinite depth

Optimality: No, because it isn’t complete

4

Iterative Deepening

BFS/DFS Top 40 Hits. Maintaining a linear frontier size (DFS) while searching level by level (BFS).

  • Depth limited search: DFS down to depth 1. Then to depth 2, 3, so on…until goal is found.

  • Complete, and optimal when all step costs are equal.

  • Time complexity: O(b^d)

  • Space complexity: O(b·d)

Trade off: More time for less memory usage.
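
The depth-limited loop above can be sketched as follows (assumes a tree-shaped search space, since this version has no cycle checking):

```python
def depth_limited_dfs(node, goal, successors, limit):
    """DFS that refuses to expand below the given depth limit."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in successors(node):
        result = depth_limited_dfs(child, goal, successors, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iterative_deepening(start, goal, successors, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_dfs(start, goal, successors, limit)
        if result is not None:
            return result
    return None

# toy tree, invented for this example
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": [], "D": [], "E": []}
print(iterative_deepening("A", "E", lambda n: tree[n]))  # ['A', 'B', 'E']
```

Each new limit redoes the shallow levels, which is the "more time" half of the trade-off; the frontier never holds more than one path, which is the "less memory" half.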

5

Uniform Cost

The shortest path is not necessarily the cheapest path (fewest steps doesn't always mean lowest sum of operator costs).

  • Use of priority queue as frontier sorted in order of increasing path cost.

  • Complete and Optimal.

  • Time complexity: O(b^(1+⌊C*/ε⌋)), where C* is the cost of the cheapest path and ε is the minimum step cost.

  • Space complexity: O(b^(1+⌊C*/ε⌋))
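
A sketch of the priority-queue idea, using a toy weighted graph where the path with fewer steps is not the cheaper one:

```python
import heapq

def uniform_cost_search(start, goal, successors):
    """successors(n) yields (child, step_cost); frontier is a priority queue keyed on path cost g."""
    frontier = [(0, start, [start])]
    best_g = {}                              # cheapest cost found so far per node
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue                         # already reached this node more cheaply
        best_g[node] = g
        for child, cost in successors(node):
            heapq.heappush(frontier, (g + cost, child, path + [child]))
    return None

# one-step route A->C->D costs more than the two-step route A->B->D
edges = {"A": [("B", 1), ("C", 5)], "B": [("D", 1)], "C": [("D", 1)], "D": []}
print(uniform_cost_search("A", "D", lambda n: edges[n]))  # (2, ['A', 'B', 'D'])
```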

6

Heuristic/A*

h(n) is a function that estimates how close a node is to the goal node. A* uses a priority queue sorted by f(n) = g(n) + h(n) = path cost + heuristic.

  • h(n) should be admissible: should never over-estimate the true distance to goal.

  • Time complexity: roughly O(b^(max heuristic error × path length to goal)); exponential unless the heuristic's error stays small.
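
A minimal A* sketch, reusing the uniform-cost structure but ordering the queue by f(n) = g(n) + h(n); the toy graph and Manhattan-distance heuristic are invented for illustration:

```python
import heapq

def astar(start, goal, successors, h):
    """Priority queue ordered by f(n) = g(n) + h(n); h must be admissible (never overestimate)."""
    frontier = [(h(start), 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue
        best_g[node] = g
        for child, cost in successors(node):
            heapq.heappush(frontier, (g + cost + h(child), g + cost, child, path + [child]))
    return None

# toy grid positions; Manhattan distance to G is admissible here
coords = {"A": (0, 0), "B": (1, 0), "C": (0, 1), "G": (2, 0)}
edges = {"A": [("B", 1), ("C", 1)], "B": [("G", 1)], "C": [("G", 3)], "G": []}
h = lambda n: abs(coords[n][0] - coords["G"][0]) + abs(coords[n][1] - coords["G"][1])
print(astar("A", "G", lambda n: edges[n], h))  # (2, ['A', 'B', 'G'])
```

With h(n) = 0 everywhere this degenerates into uniform cost search, which is why UCS is often described as A* without a heuristic.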

7

Hill Climbing

Lvl. 1 Iterative Improvement Search. O(1) space: explores one path at a time.

  • Generates successors, picks the best one.

  • Rinse and repeat until every neighbor is worse than the current state.

  • Issues: Can get stuck in local maxima, plateau, or ridge.

Ways to improve Hill Climbing (besides random restart):

  • First-Choice: Keep generating successors until better state than current state is found.

  • Stochastic (~Simulated Annealing): Pick random better successor (not necessarily “best”).

  • Memory (~Beam): Keep memory of previous states so you can do backtracking.
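
The generate-successors-and-pick-the-best loop can be sketched as steepest-ascent hill climbing on a one-dimensional toy function:

```python
def hill_climb(state, neighbors, value, max_steps=1000):
    """Steepest-ascent hill climbing: move to the best successor until no neighbor improves."""
    for _ in range(max_steps):
        best = max(neighbors(state), key=value, default=None)
        if best is None or value(best) <= value(state):
            return state          # local maximum (or edge of a plateau)
        state = best
    return state

# maximize f(x) = -(x - 3)^2 over integers, stepping +/- 1 (toy landscape)
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climb(0, neighbors, value))  # 3
```

On this single-hill landscape it always reaches the peak; the local-maxima and plateau issues only show up on bumpier value functions.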

8

Beam Search

Lvl. 2 Iterative Improvement Search. Hill Climbing: Memory DLC.

  • Keep track of K (10-100) best previous nodes (More space-efficient than Greedy, which keeps track of all nodes).

  • Issues: While it is capable of some backtracking, it can still miss the global maximum by getting stuck in long plateaus.

9

Simulated Annealing

Lvl. 3 Iterative Improvement Search.

  • Pick next child randomly.

  • Always accept better states; accept worse states probabilistically, with probability e^((value(next) - value(curr))/T)

  • T determines how many backward steps we allow

  • Start with high T to explore many local maxima, then gradually lower T, become more selective over time and climb best hill.
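
The acceptance rule can be sketched as follows (the cooling schedule, starting temperature, seed, and toy landscape are arbitrary choices for illustration, not values from the notes):

```python
import math
import random

def simulated_annealing(state, neighbors, value, t0=10.0, cooling=0.95, steps=500, seed=0):
    """Always accept better moves; accept worse moves with probability e^((value(next)-value(curr))/T)."""
    rnd = random.Random(seed)          # seeded so the sketch is reproducible
    t = t0
    for _ in range(steps):
        nxt = rnd.choice(neighbors(state))
        delta = value(nxt) - value(state)
        if delta > 0 or rnd.random() < math.exp(delta / t):
            state = nxt
        t *= cooling                   # high T early (explore), low T late (exploit)
    return state

value = lambda x: -(x - 3) ** 2        # one smooth hill with its peak at x = 3
print(simulated_annealing(0, lambda x: [x - 1, x + 1], value))
```

Early on, T is large, so e^(delta/T) is close to 1 and backward steps are common; as T decays, that probability collapses toward 0 and the search behaves like plain hill climbing.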

10

Genetic Algorithms

Lvl. 4 Iterative Improvement Search. Alternative to non-linear regression for numeric problems: represent as syntax trees and swap subtrees.

  • Maintain a population of multiple candidate states.

  • Generate successors by combining parts of existing candidate states.

  • Mutations: Making random changes to a state (low frequency) and good mutations will be passed down.

  • Issues: Loss of diversity (snowflakes smh), so after so many iterations the changes aren’t as significant.
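
A tiny sketch of the population/crossover/mutation loop, using bit strings and a count-the-ones fitness function as a stand-in problem (population size, mutation rate, and seed are arbitrary):

```python
import random

def genetic_algorithm(pop_size=20, length=12, generations=60, p_mut=0.05):
    """Tiny GA maximizing the number of 1-bits (a stand-in fitness function)."""
    random.seed(1)                                # seeded so the sketch is reproducible
    fitness = lambda s: sum(s)
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # selection: keep the fitter half
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)     # crossover: combine parts of two parents
            child = a[:cut] + b[cut:]
            for i in range(length):               # mutation: rare random bit flips
                if random.random() < p_mut:
                    child[i] ^= 1
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = genetic_algorithm()
print(sum(best))  # fitness of the best individual found
```

Loss of diversity shows up here too: after enough generations the parents are nearly identical, so crossover stops producing anything new and only mutation moves the population.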

11

Minimax

Lvl.1 Game Search. Used to determine one move at a time to maximize score for one of the players (u1(s)), and minimize score for the other (u2(s)).

Time Complexity: O(b^m), Space complexity: O(b·m), where b = branching factor and m = max depth.

  • Alpha-Beta pruning: Technique used to prevent examining every possible state in minimax for complex games (like chess), (α, β).

  • Board Evaluation functions: Used to assign scores to nodes based on expected probability of winning at that state.
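
The minimax recursion with alpha-beta cutoffs can be sketched as follows (the tiny game tree and leaf scores are made up):

```python
def alphabeta(state, depth, alpha, beta, maximizing, moves, evaluate):
    """Minimax with alpha-beta pruning; alpha/beta bound the scores each player can force."""
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for child in children:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False, moves, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:     # the opponent would never allow this branch: prune
                break
        return best
    best = float("inf")
    for child in children:
        best = min(best, alphabeta(child, depth - 1, alpha, beta, True, moves, evaluate))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

# hand-built two-ply game tree: leaves carry scores for the maximizing player
tree = {"R": ["A", "B"], "A": ["a1", "a2"], "B": ["b1", "b2"],
        "a1": [], "a2": [], "b1": [], "b2": []}
scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
val = alphabeta("R", 2, float("-inf"), float("inf"), True,
                lambda s: tree[s], lambda s: scores.get(s, 0))
print(val)  # 3
```

Here leaf b2 (score 9) is never evaluated: once b1 returns 2, the min player at B can already force a score below the 3 the max player has guaranteed via A, so the rest of B is pruned.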

12

Expectiminimax

For games with randomness (dice, cards). Place chance nodes between min/max nodes. A chance node's score is the probability-weighted sum of its children's scores, i.e., the "expected outcome".

13

Forward Pruning

Prune moves that appear to be bad moves (faster, but risks pruning a good move down the line).

14

Late move reduction

Assumes move ordering was done well, so that moves later in queue don’t get expanded as much.

15

Monte Carlo Tree Search

Lvl. 2 Game Search. Occasionally explores seemingly bad moves to reduce uncertainty. More resilient against missteps than alpha-beta pruning, but it can fail in situations where there is an "obvious move", because it can miss it by mere chance.

  • Select: Pick a leaf, using a function that balances exploitation (favoring the best moves so far) and exploration (trying less-promising moves to reduce uncertainty about them)

  • Expand: Make a child (uncertain value)

  • Simulate: Play game starting at that child to see outcome (use heuristic to bias better moves)

  • Back-propagation: Update win/total ratio for nodes along path.
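
The Select step's exploitation/exploration balance is commonly implemented with the UCB1 formula (a standard choice, though the notes don't name it): win rate plus an exploration bonus that shrinks as a child gets visited more.

```python
import math

def ucb1(wins, visits, parent_visits, c=math.sqrt(2)):
    """UCB1 score for the Select step: exploitation term + exploration bonus.
    c = sqrt(2) is a conventional default for the exploration weight."""
    if visits == 0:
        return float("inf")       # always try unvisited children first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

# a well-explored strong child vs. a barely-explored weaker one
print(ucb1(8, 10, 30))   # high win rate, small exploration bonus
print(ucb1(1, 3, 30))    # lower win rate, but a bigger bonus wins out here
```

Selection descends the tree by repeatedly picking the child with the highest UCB1 score, which is how under-visited moves keep getting chances.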

16

Constraint Satisfaction Problems

Framework:

  • Variables

  • Domains

  • Constraints

  • Solution: Needs to be both consistent (variable assignments do not violate constraints) and complete (all variables assigned a value)

17

Basic Search CSP

Similar to DFS except all goals occur at the fringe; as soon as a constraint is violated, prune that subtree and backtrack to try the next domain value.
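
The prune-and-backtrack idea can be sketched as plain backtracking search on a toy map-coloring instance (regions and adjacency are made up):

```python
def backtrack(assignment, variables, domains, consistent):
    """DFS-style CSP search: assign variables one at a time, prune on violation."""
    if len(assignment) == len(variables):
        return assignment                      # complete and consistent
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):
            result = backtrack(assignment, variables, domains, consistent)
            if result is not None:
                return result
        del assignment[var]                    # constraint violated: prune this subtree
    return None

# toy map coloring: adjacent regions must get different colors
variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
adjacent = [("WA", "NT"), ("WA", "SA"), ("NT", "SA")]
consistent = lambda a: all(a[x] != a[y] for x, y in adjacent if x in a and y in a)
print(backtrack({}, variables, domains, consistent))
# {'WA': 'red', 'NT': 'green', 'SA': 'blue'}
```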

18

Forward checking

When a variable is assigned, remove conflicting values from the other variables' domains. Related to MRV, except MRV passively recounts the number of consistent values at each iteration, while FC removes them eagerly and allows immediate backtracking when a variable's domain becomes empty.

19

Constraint propagation

Reducing legal moves for a variable so that we’re left with fewer choices when assigning values to the next variable.

20

AC-3

Constraint propagation as a graph algorithm, basically. It returns an equivalent CSP in which each variable has the same or a smaller domain, which makes the subsequent search faster.

Algorithm:

  • Put all arcs in a queue (each binary constraint makes 2 arcs)

  • Pop arc from queue (Xi, Xj)

  • Make Xi arc-consistent with Xj. If this shrinks Xi's domain Di, add all of Xi's neighboring arcs (Xk) to the queue as (Xk, Xi); otherwise move on to the next arc.

  • If Di empty, return failure

Time complexity: O(c·d³) = O(n²d³), where c = number of constraints, d = domain size, n = number of variables.
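
The steps above can be sketched as follows, on a toy X &lt; Y constraint:

```python
from collections import deque

def ac3(domains, neighbors, allowed):
    """allowed(xi, x, xj, y): True if xi=x, xj=y satisfies the constraint between them.
    Returns False if some domain empties; otherwise prunes domains in place."""
    queue = deque((xi, xj) for xi in domains for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        revised = False
        for x in list(domains[xi]):
            # remove x if no value in Xj's domain supports it
            if not any(allowed(xi, x, xj, y) for y in domains[xj]):
                domains[xi].remove(x)
                revised = True
        if revised:
            if not domains[xi]:
                return False                   # empty domain: failure
            for xk in neighbors[xi]:
                if xk != xj:
                    queue.append((xk, xi))     # re-check arcs into Xi
    return True

# toy binary constraint: X < Y
domains = {"X": [1, 2, 3], "Y": [1, 2, 3]}
neighbors = {"X": ["Y"], "Y": ["X"]}
allowed = lambda xi, x, xj, y: x < y if xi == "X" else x > y
print(ac3(domains, neighbors, allowed), domains)  # True {'X': [1, 2], 'Y': [2, 3]}
```

As the card says, the output is the same CSP, just with 3 pruned from X's domain and 1 pruned from Y's.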

21

Maintaining Arc Consistency (MAC)

Wrapper algorithm around AC-3: during backtracking search it iteratively makes another assignment and calls AC-3 to propagate, repeating while some variables still have multiple values in their domains.

22

Path consistency

Arc consistency DLC. With binary constraints, check consistency across triples of variables instead of pairs, which catches inconsistencies that checking only 2 variables at a time misses.

23

K consistency

Generalizes arc (k = 2) and path (k = 3) consistency to sets of k variables. Takes more processing time because we check consistency across k nodes at once.

24

MRV (Minimum Remaining Values)

Start with the variable that has the fewest remaining domain values (that way failures, and hence backtracking, happen sooner).

25

LCV (Least Constraining Value)

Pick the value for a variable that leaves the most choices open for the remaining variables.

26

Min Conflicts Heuristic

Approach is to select a value that results in the minimum number of conflicts with other variables. Usually has plateaus, which can be combated with tricks such as allowing sideways moves or random restarts.
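
A sketch of the repair loop on a toy map-coloring instance (the step cap, seed, and instance are arbitrary choices for illustration):

```python
import random

def min_conflicts(variables, domains, conflicts, max_steps=1000, seed=2):
    """Start from a complete random assignment, then repeatedly repair one conflicted variable."""
    rnd = random.Random(seed)                  # seeded so the sketch is reproducible
    assignment = {v: rnd.choice(domains[v]) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables if conflicts(v, assignment[v], assignment) > 0]
        if not conflicted:
            return assignment                  # complete and consistent
        var = rnd.choice(conflicted)
        # pick the value with the fewest conflicts; ties are where plateaus come from
        assignment[var] = min(domains[var], key=lambda x: conflicts(var, x, assignment))
    return None

# toy map coloring: adjacent regions must get different colors
adjacent = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"), ("SA", "Q")]
variables = ["WA", "NT", "SA", "Q"]
domains = {v: ["red", "green", "blue"] for v in variables}
def conflicts(var, value, assignment):
    return sum(1 for x, y in adjacent
               if (x == var and assignment.get(y) == value)
               or (y == var and assignment.get(x) == value))
solution = min_conflicts(variables, domains, conflicts)
print(solution)
```

Unlike backtracking search, every state here is a complete assignment; the search moves through inconsistent states toward a consistent one.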