CMPT375 Test 1

Last updated 1:51 AM on 4/26/26

77 Terms

1
New cards

What is a transposition in game-playing AI?

A transposition occurs when the same board position is reached through different sequences of moves.

2
New cards

What is a transposition table?

A cache that stores previously evaluated board positions so the AI can reuse results instead of recomputing them

3
New cards

Why do transposition tables turn a game tree into a graph?

Because different move orders can lead to the same board position, creating shared nodes instead of separate branches.

4
New cards

What problem do transposition tables solve in minimax search?

They prevent repeated evaluation of identical board positions, saving time by trading memory for speed.

5
New cards

Why is comparing entire boards directly impractical?

Full board comparisons are too slow when performed millions of times during deep search.

6
New cards

How do AI engines efficiently recognize identical board positions?

By computing a hash (fingerprint) of the board and using it as a key in a transposition table.
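
One widely used fingerprinting scheme is Zobrist hashing. Below is a minimal sketch for a 3×3 tic-tac-toe board (the board encoding, square numbering, and piece names are assumptions for illustration):

```python
import random

random.seed(42)  # fixed seed so the table is reproducible in this sketch

# One random 64-bit key per (square, piece) pair.
ZOBRIST = {(sq, p): random.getrandbits(64)
           for sq in range(9) for p in ("X", "O")}

def zobrist_hash(board):
    """board: length-9 tuple of 'X', 'O', or None."""
    h = 0
    for sq, piece in enumerate(board):
        if piece is not None:
            h ^= ZOBRIST[(sq, piece)]  # XOR in each occupied square
    return h

# Positions reached by different move orders hash identically,
# so the hash can serve directly as a transposition-table key.
pos = ("X", "O", None, None, "X", None, None, None, "O")
same_pos = tuple(pos)
```

A practical bonus of the XOR construction is incremental updating: making or unmaking a move only XORs the keys for the affected squares, rather than rehashing the whole board.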

7
New cards

What is the main tradeoff of using transposition tables?

Increased memory usage in exchange for significantly faster search.

8
New cards

How do transposition tables interact with alpha-beta pruning?

They reduce repeated work and can dramatically increase effective search depth, especially when combined with good move ordering.

9
New cards

Summarize transposition tables in one sentence.

A transposition table caches evaluated board positions so identical states reached by different move orders are only evaluated once.
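
The one-sentence summary can be sketched as a dictionary wrapped around plain minimax. The game here is an assumed toy Nim variant (remove 1 or 2 stones; whoever takes the last stone wins), not anything from the cards; a value of 1 means the maximizer wins:

```python
def minimax(stones, maximizing, table=None):
    """Minimax for toy Nim with a transposition table: `table` caches
    (stones, maximizing) -> value so each position is evaluated once."""
    if table is None:
        table = {}
    key = (stones, maximizing)
    if key in table:
        return table[key]          # transposition hit: reuse the cached result
    if stones == 0:
        # The previous player took the last stone, so the side to move lost.
        value = -1 if maximizing else 1
    elif maximizing:
        value = max(minimax(stones - t, False, table)
                    for t in (1, 2) if t <= stones)
    else:
        value = min(minimax(stones - t, True, table)
                    for t in (1, 2) if t <= stones)
    table[key] = value
    return value
```

Different take-1/take-2 orders reach the same stone count, so the cache turns the game tree into a graph of shared positions.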

10
New cards

What is scouting in game-playing AI?

A variant of alpha-beta pruning that uses a zero-width search window to test whether a guessed minimax value is too high or too low.

11
New cards

What does it mean that scouting uses a zero-width window?

Alpha and beta are set equal (α = β), so the search only checks whether the true value is above or below the guess.

12
New cards

What question does a scouting search answer?

Whether the true minimax value is ≥ the guessed value or ≤ the guessed value.

13
New cards

What is a fail-high result in scouting?

The search finds a value greater than the guess, meaning the guess was too low.

14
New cards

What is a fail-low result in scouting?

The search finds all values below the guess, meaning the guess was too high.

15
New cards

Why is scouting usually very fast?

The zero-width window causes extremely aggressive pruning, stopping the search as soon as the guess is proven wrong.

16
New cards

How does iterative deepening help scouting?

The best value from a previous depth provides a strong initial guess for the next scouting search.

17
New cards

How does scouting differ from standard alpha-beta pruning?

Standard alpha-beta searches with a wide window to find exact values, while scouting uses a zero-width window to test whether a guessed value is too high or too low.
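
The difference can be sketched with a null-window test on a tiny hand-made tree (the tree and its leaf values are invented for illustration; with integer scores, "α = β" is realized in practice as the one-unit window (g−1, g)):

```python
def alphabeta(node, alpha, beta, maximizing):
    """Fail-soft alpha-beta over a nested-list tree (leaves are ints)."""
    if isinstance(node, int):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break  # alpha cutoff
    return value

def scout_test(root, guess):
    """Null-window test: 'fail-high' means the true value is >= guess,
    'fail-low' means it is < guess (integer scores assumed)."""
    v = alphabeta(root, guess - 1, guess, True)
    return "fail-high" if v >= guess else "fail-low"

tree = [[3, 12], [2, 8], [1, 4]]  # max over three min nodes; true value is 3
```

The test never learns the exact value; it only proves which side of the guess the value lies on, which is why the window prunes so aggressively.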

18
New cards

What is Memory-enhanced Test Driver (MTD)?

An algorithm that repeatedly applies scouting (zero-width alpha-beta searches) while using a transposition table to efficiently converge on the true minimax value.

19
New cards

What problem does MTD solve compared to basic scouting?

It automates the guessing process and avoids repeated work by storing results in a transposition table.

20
New cards

What type of alpha-beta window does MTD use?

A zero-width window (α = β), the same as scouting

21
New cards

How does MTD adjust its guess?

A fail-high result increases the guess; a fail-low result decreases the guess, narrowing bounds until convergence.

22
New cards

Why is MTD described as “memory-enhanced”?

Because it relies on a transposition table to reuse results from previous searches and avoid recomputation.

23
New cards

What does MTD compute first: an exact value or bounds?

Bounds (upper and lower), which are tightened over repeated searches until the exact minimax value is found.

24
New cards

Why does MTD require transposition tables to be practical?

Without memory, repeated zero-width searches would redo the same work and negate performance gains.
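
The MTD driver loop can be sketched as repeated null-window tests that tighten bounds until they meet. The alpha-beta helper is repeated so the sketch stands alone, and the transposition table is deliberately omitted for brevity; as the card says, that omission is exactly why a real implementation needs one:

```python
def alphabeta(node, alpha, beta, maximizing):
    """Fail-soft alpha-beta over a nested-list tree (leaves are ints)."""
    if isinstance(node, int):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

def mtdf(root, first_guess):
    """Converge on the exact minimax value via repeated null-window tests.
    A real implementation backs alphabeta with a transposition table so the
    repeated searches reuse earlier work instead of redoing it."""
    g = first_guess
    lower, upper = float("-inf"), float("inf")
    while lower < upper:
        beta = g + 1 if g == lower else g
        g = alphabeta(root, beta - 1, beta, True)  # null-window test at beta
        if g < beta:
            upper = g   # fail-low: true value <= g
        else:
            lower = g   # fail-high: true value >= g
    return g

tree = [[3, 12], [2, 8], [1, 4]]  # invented example tree; true value is 3
```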

25
New cards

Can minimax with alpha-beta pruning be parallelized?

Yes, but only to a limited extent; benefits drop quickly due to pruning and synchronization overhead.

26
New cards

Where does parallelization work best in minimax?

Near the root of the tree, where different branches (moves) can be searched independently.

27
New cards

How can hardware help speed up minimax besides parallel tree search?

By accelerating evaluation functions using specialized hardware or vectorized computation.

28
New cards

Why does alpha-beta pruning reduce the effectiveness of parallel search?

Because one processor may discover a cutoff while others are still exploring branches that should have been pruned, resulting in wasted work.

29
New cards

What causes diminishing returns when adding more processors to minimax search?

Communication overhead, synchronization of alpha/beta bounds, and wasted computation due to pruning.

30
New cards

About how many processors can help significantly?

Roughly up to 10 processors; beyond that, overhead often outweighs benefits.

31
New cards

Why doesn’t massive parallelism solve exponential game-tree search?

Because alpha-beta pruning relies on sequential best-first information, and parallel exploration increases redundant work and coordination costs.

32
New cards

What is a perfect information game?

A game where all players can see the entire game state at all times.

33
New cards

Give three examples of perfect information games.

Chess, Checkers, Go.

34
New cards

What is an imperfect information game?

A game where some information is hidden from one or more players.

35
New cards

Give three examples of imperfect information games.

Battleship, Bridge, Poker.

36
New cards

What is a deterministic game?

A game where the same move always results in the same outcome.

37
New cards

Give examples of deterministic games.

Chess, Checkers, Go.

38
New cards

What is a non-deterministic game?

A game where actions can have multiple outcomes due to chance.

39
New cards

What introduces non-determinism in games?

Random events such as dice rolls or card draws.

40
New cards

Give examples of non-deterministic games.

Backgammon, Ludo, Poker.

41
New cards

Why is classifying games important in AI?

Because different game types require different AI techniques (e.g., minimax vs probabilistic reasoning).

42
New cards

What defines a game with chance in AI?

A game where actions can have multiple possible outcomes due to randomness.

43
New cards

What is the key idea behind strategy abstraction?

Grouping moves that are very similar into a single abstract move.

44
New cards

What is the main benefit of grouping similar moves?

A reduced branching factor.

45
New cards

Why did opponent modeling fail against human players?

Humans adapted their playing style to mislead the AI.

46
New cards

Why does minimax fail in Poker?

Minimax assumes perfect information and deterministic outcomes.

47
New cards

What weakness does opponent modeling introduce?

It can be exploited by adaptive opponents.

48
New cards

In one sentence, summarize Poker’s AI challenge.

Poker combines hidden information and randomness, requiring game-theoretic strategies.

49
New cards

Why is counterfactual regret not immediately useful in Poker AI?

Because it only becomes meaningful when averaged over many games.

50
New cards

Why is regret-based learning effective in imperfect-information games like Poker?

It converges toward optimal play without relying on hidden information.
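
Regret matching, the update at the core of counterfactual-regret methods, can be sketched for rock-paper-scissors against a fixed opponent mixture (the mixture and iteration count are invented for illustration):

```python
ACTIONS = ("rock", "paper", "scissors")

def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    beats = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
    return 0 if a == b else (1 if beats[a] == b else -1)

def regret_matching(regrets):
    """Play each action in proportion to its positive cumulative regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0:
        return [1.0 / len(ACTIONS)] * len(ACTIONS)
    return [p / total for p in positive]

def train(opponent_mix, iterations=1000):
    """Accumulate regret against a fixed opponent mixture and return the
    strategy averaged over all iterations, which is what converges."""
    regrets = [0.0] * len(ACTIONS)
    strategy_sum = [0.0] * len(ACTIONS)
    for _ in range(iterations):
        strategy = regret_matching(regrets)
        # Expected utility of each pure action against the opponent mixture.
        util = [sum(q * payoff(a, b) for q, b in zip(opponent_mix, ACTIONS))
                for a in ACTIONS]
        node_util = sum(p * u for p, u in zip(strategy, util))
        for i in range(len(ACTIONS)):
            regrets[i] += util[i] - node_util  # regret for not playing i
            strategy_sum[i] += strategy[i]
    return [s / iterations for s in strategy_sum]

# The opponent over-plays rock, so the learner converges to (almost) pure paper.
avg = train(opponent_mix=(0.5, 0.25, 0.25))
```

This ties back to the previous card: the regret from any single iteration means little, and it is only the strategy averaged over many games that approaches optimal play.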

51
New cards

How did researchers reduce the storage requirements?

Using integers, ordering data for compression, and distributing storage.

52
New cards

How does Monte Carlo tree search work?

Expands parts of the tree based on heuristics

Uses a form of random sampling

Does not need an explicit evaluation function

Converges slowly to the minimax solution
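
The steps above can be sketched as UCT-style MCTS for an assumed toy Nim game (remove 1 or 2 stones; whoever takes the last stone wins). Note there is no evaluation function, only random playouts:

```python
import math
import random

random.seed(0)  # fixed seed so this small example is reproducible

MOVES = (1, 2)  # remove 1 or 2 stones; taking the last stone wins

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones    # stones remaining at this position
        self.parent = parent
        self.move = move        # the move that led into this node
        self.children = []
        self.visits = 0
        self.wins = 0.0         # wins for the player who moved into this node

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in MOVES if m <= self.stones and m not in tried]

def uct_select(node):
    """Pick the child maximizing win rate + UCB1 exploration bonus."""
    return max(node.children,
               key=lambda c: c.wins / c.visits
               + math.sqrt(2 * math.log(node.visits) / c.visits))

def rollout(stones):
    """Random playout; True if the side to move at `stones` wins."""
    side = True
    while stones > 0:
        stones -= random.choice([m for m in MOVES if m <= stones])
        if stones == 0:
            return side  # this side took the last stone
        side = not side
    return False  # no stones left: the side to move has already lost

def mcts(root_stones, iterations=2000):
    root = Node(root_stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while fully expanded and non-terminal.
        while node.stones > 0 and not node.untried_moves():
            node = uct_select(node)
        # 2. Expansion: add one untried child, if the node is non-terminal.
        if node.stones > 0:
            move = random.choice(node.untried_moves())
            child = Node(node.stones - move, parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: no evaluation function, just a random playout.
        result = rollout(node.stones)
        # 4. Backpropagation: flip the winner's perspective at each level.
        while node is not None:
            node.visits += 1
            if not result:       # the player who moved into `node` won
                node.wins += 1.0
            result = not result
            node = node.parent
    return max(root.children, key=lambda c: c.visits).move  # most-visited move
```

From 4 stones the winning move is to take 1 (leaving a losing 3 for the opponent); the random sampling converges on that move without ever scoring a position heuristically.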

53
New cards

How did AlphaGo get its moves from a neural network?

– Learned from play

– First play from expert human players

– Further play against itself using reinforcement learning

54
New cards

Why did IBM choose Jeopardy as an AI challenge?

Because it requires natural language understanding, uncertainty reasoning, speed, confidence estimation, and strategy.

55
New cards

What makes Jeopardy fundamentally different from board games like chess or Go?

It uses natural language clues rather than a structured game state.

56
New cards

Why was Jeopardy chosen by IBM?

  • Regarded as hard, requiring intelligence

  • A challenge due to English nuance, an area where computer systems have traditionally performed poorly

57
New cards

What insight motivated Watson’s design?

Different algorithms perform well under different circumstances.

58
New cards

What was Watson’s core architectural idea?

Use many algorithms and select the best answer.

59
New cards

Why did Watson need custom algorithms?

Some clue types involve wordplay or structure.

60
New cards

Name two categories requiring custom algorithms for Watson.

Puns; Rhyme Time.

61
New cards

What are nested clues?

Clues that contain multiple constraints.

62
New cards

How does Watson handle nested clues?

By processing them in multiple passes.

63
New cards

What clue types could Watson not handle?

Picture, Video and Audio clues

64
New cards

What type of data sources did Watson prefer?

Encyclopedia-style summaries.

65
New cards

How did Watson choose which algorithm to trust?

By learning correlations from past games.

Clue and response known

Forms a training set

66
New cards

How was Watson sped up?

Pre-parsed the data so nouns, verbs, etc. were already determined

Parallelized across 2880 processors

Copied all data into RAM

67
New cards

Could ChatGPT play a good game of Jeopardy?

Probably not

– Not designed as a question/response system

– Not trained for specialized clue types (e.g. Rhyme Time)

– Can be out of date relative to its training set (but so could Watson)

– Can be a little too slow to ring in on time

Probably yes

– Can bring up relevant information, but may need a human or another algorithm to curate the response

68
New cards

What is a Hierarchical Task Network?

  • Plan strategies to follow rather than cards to play

  • Refine strategies into sub-strategies

  • When the strategies are small enough, search the entire tree to determine the best card

69
New cards

What is the problem with quiescent positions?

  • The search terminates and chooses an outcome without realizing it will soon change

  • Example

    • Capturing a knight without realizing the queen will then be lost

70
New cards

What is the horizon problem?

  • Computer assumes an inevitable outcome can be avoided by stalling

  • Example

    • King trapped in the corner

71
New cards

What are the two problems discussed in class about evaluation functions?

  • Quiescent Positions

  • Horizon Problem

72
New cards

In the best case, by how much can alpha-beta pruning reduce the search?

  • It can halve the exponent, equivalent to searching only the square root of the number of positions

73
New cards

When could alpha-beta pruning make minimax take longer?

When it does little or no pruning, because the program still pays extra overhead to check alpha and beta values.

74
New cards

When is alpha-beta pruning optimal?

When the best moves are searched first.

75
New cards

What is iterative deepening?

Iterative deepening is a search strategy where the program searches to a shallow depth first, such as 2 ply, saves the best move, then searches deeper, such as 4 ply, 6 ply, and so on. This helps it always have a good move ready if time runs out.
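
The strategy can be sketched on an assumed toy Nim game using a depth-limited negamax (the neutral horizon score of 0 and the 1-ply stepping, rather than the 2-ply stepping described above, are simplifications):

```python
import time

MOVES = (1, 2)  # Nim: remove 1 or 2 stones; taking the last stone wins

def depth_limited(stones, depth):
    """Negamax to a fixed depth; neutral heuristic (0) at the horizon.
    Returns (value, best_move) from the side to move's perspective."""
    if stones == 0:
        return -1, None          # the previous player took the last stone
    if depth == 0:
        return 0, None           # horizon reached: unknown, score 0
    best_value, best_move = -2, None
    for m in MOVES:
        if m <= stones:
            value, _ = depth_limited(stones - m, depth - 1)
            if -value > best_value:
                best_value, best_move = -value, m
    return best_value, best_move

def iterative_deepening(stones, time_limit=0.05, max_depth=16):
    """Search depth 1, then 2, then 3, ..., always keeping the best move
    found so far, so a reasonable move is ready if time runs out."""
    deadline = time.monotonic() + time_limit
    best_move = None
    for depth in range(1, max_depth + 1):
        if time.monotonic() >= deadline:
            break
        _, best_move = depth_limited(stones, depth)
    return best_move
```

Shallow passes look redundant, but they are cheap relative to the deepest pass, and (as the scouting cards note) each pass's value also seeds the next search's guess.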

76
New cards

What is a window in alpha-beta pruning?

A window is the range between the current alpha and beta values. If a search result falls outside this range, that branch can be pruned.

77
New cards

What is an aspiration window?

An aspiration window is a narrower alpha-beta search window chosen around an expected score, often based on a previous search. If the guess is correct, search is faster; if not, the search must be redone with a wider window.
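
The idea can be sketched over a tiny hand-made tree (the tree values and the margin of 2 are invented for illustration):

```python
def alphabeta(node, alpha, beta, maximizing):
    """Fail-soft alpha-beta over a nested-list tree (leaves are ints)."""
    if isinstance(node, int):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

def aspiration_search(root, expected, margin=2):
    """Search a narrow window around an expected score (for instance the
    value from the previous iterative-deepening pass). If the result falls
    outside the window, the guess was wrong and we must re-search wide."""
    alpha, beta = expected - margin, expected + margin
    value = alphabeta(root, alpha, beta, True)
    if value <= alpha or value >= beta:
        value = alphabeta(root, float("-inf"), float("inf"), True)
    return value

tree = [[3, 12], [2, 8], [1, 4]]  # true minimax value is 3
```

When the guess is near the true value, the narrow window prunes far more than a full-width search; a bad guess costs one extra (wide) search, which is the tradeoff the card describes.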