Intro to AI Midterms

138 Terms

1

What are the foundational disciplines of AI?

Philosophy, Mathematics, Economics, Neuroscience, Psychology, Computer Engineering, Control Theory, and Linguistics.

2

What was the significance of the Dartmouth meeting in 1956?

It marked the formal inception of the field of Artificial Intelligence, led by John McCarthy.

3

What early AI program was developed by Newell and Simon?

The Logic Theorist and General Problem Solver.

4

What was the focus of AI research from 1966 to 1973?

Limited progress, due to the challenges of computational complexity and the limitations of methods based on introspection.

5

What are expert systems, and when did they emerge?

Knowledge-based systems that emerged between 1969 and 1986, such as DENDRAL and MYCIN.

6

What characterized the AI Winter from 1988 to 1993?

A decline in funding and interest in AI, leading to reduced research and development.

7

What advancements in AI occurred from 1986 onwards?

The resurgence of neural networks, probabilistic reasoning, machine learning, big data, and deep learning.

8

What are some applications of AI today?

Robotic vehicles, machine translation, speech recognition, recommender systems, game playing, and medical diagnosis.

9

What is the Turing Test?

A test for intelligent behavior where a system passes if an interrogator cannot distinguish its answers from a human's.

10

What is the difference between systems acting like humans and systems thinking like humans?

Acting like humans focuses on behavior, while thinking like humans involves understanding cognitive processes.

11

What is a rational agent in AI?

An agent that acts to achieve the best outcome based on its percepts and past experiences.

12

What is the agent function in AI?

A mapping from percepts to actions, written a = F(p); in general p is the full percept sequence, though a simple reflex agent uses only the current percept.
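The mapping a = F(p) can be sketched as a tiny table-driven agent. The percept and action names below are illustrative (borrowed from the Vacuum World cards later in the deck), not a fixed API:

```python
# A minimal sketch of the agent function a = F(p): a lookup table
# mapping each percept to an action.
PERCEPT_TO_ACTION = {
    ("A", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
}

def agent_function(percept):
    """Return the action a = F(p) for percept p."""
    return PERCEPT_TO_ACTION[percept]

print(agent_function(("A", "Dirty")))  # Suck
```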

13

What does PEAS stand for in task environment specification?

Performance Measure, Environment, Actuators, and Sensors.

14

What are the properties of environments in AI?

Fully vs. partially observable, single-agent vs. multi-agent, deterministic vs. stochastic, episodic vs. sequential, static vs. dynamic, discrete vs. continuous, known vs. unknown.

15

What is an example of a simple AI environment?

The Vacuum World, where an agent can perceive and act on two squares that may be dirty.

16

What is the main objective of a self-driving car in AI?

To reach its destination safely while considering trade-offs between trip progress and injury risk.

17

What is the significance of generative AI as of 2021?

It represents a new frontier in AI, focusing on creating content and solutions based on learned data.

18

What is the role of neural networks in contemporary AI?

They are used for deep learning and have become central to many AI applications since 1986.

19

What is the value alignment problem in AI?

The challenge of ensuring that AI objectives align with human values and objectives.

20

What is the importance of rational thinking in AI systems?

It allows systems to solve problems using precise laws of thought, such as syllogisms and probability.

21

What are some examples of AI in medicine?

Disease diagnosis, such as LYNA for metastatic breast cancer, and automated conversation through chatbots.

22

What does the term 'agent' refer to in AI?

Anything that perceives and acts on its environment.

23

What is the difference between deterministic and stochastic environments?

Deterministic environments have outcomes fully determined by current states and actions, while stochastic environments involve randomness.

24

What is an example of a rational behavior in AI?

An agent choosing the action that maximizes its performance measure based on its percepts.

25

What is the performance measure in the context of AI agents?

It refers to the criteria used to evaluate the success of an agent's actions, such as cleaning both rooms with the fewest steps.

26

What types of agents are identified in AI?

Types include Reflex Agent, Model-Based Reflex Agent, Goal-Based Agent, Utility-Based Agent, and Learning Agent.

27

What distinguishes a Reflex Agent from a Model-Based Reflex Agent?

A Reflex Agent responds to current percepts, while a Model-Based Reflex Agent incorporates a model of the world and maintains state based on percept history.
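The distinction can be sketched in a few lines of Python, reusing the Vacuum World percepts; the names are illustrative, not from the lecture:

```python
def reflex_agent(percept):
    # Simple reflex: decides from the current percept alone.
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

class ModelBasedReflexAgent:
    # Maintains internal state: a model of the world updated from
    # the percept history.
    def __init__(self):
        self.model = {"A": None, "B": None}  # None = status unknown

    def act(self, percept):
        location, status = percept
        self.model[location] = status
        if status == "Dirty":
            self.model[location] = "Clean"  # sucking will leave it clean
            return "Suck"
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"  # the model says both squares are clean
        return "Right" if location == "A" else "Left"

agent = ModelBasedReflexAgent()
print(agent.act(("A", "Dirty")))  # Suck
```

Unlike the reflex agent, the model-based agent can stop acting once its model says both squares are clean.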

28

How does a Goal-Based Agent function?

It incorporates goals into its decision-making process, using knowledge of future consequences to determine actions.

29

What is the role of a Utility-Based Agent?

It selects actions that maximize the agent's 'happiness' based on a utility function that maps states to a measure of satisfaction.

30

What is the function of a Learning Agent?

It improves its performance over time by learning from feedback and experiences, adapting its actions based on past outcomes.

31

What does PEAS stand for in AI?

PEAS stands for Performance measure, Environment, Actuators, and Sensors, which specify the task environment of an agent.

32

What is the Vacuum World Problem in AI?

It involves an agent navigating between two locations (A and B) to clean dirt, with actions like moving and sucking dirt, aiming for a goal state where both locations are clean.
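A minimal sketch of this formulation in Python, representing a state as (agent location, set of dirty squares); the representation is one reasonable choice, not the only one:

```python
def result(state, action):
    # Transition model: maps a state-action pair to the resulting state.
    loc, dirty = state                    # dirty: frozenset of dirty squares
    if action == "Suck":
        return (loc, dirty - {loc})
    if action == "Right":
        return ("B", dirty)
    if action == "Left":
        return ("A", dirty)
    return state

def is_goal(state):
    return not state[1]                   # goal: no square is dirty

state = ("A", frozenset({"A", "B"}))      # initial state: both squares dirty
for action in ["Suck", "Right", "Suck"]:  # cost 1 per action, total cost 3
    state = result(state, action)
print(is_goal(state))  # True
```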

33

What is the difference between Breadth-First Search (BFS) and Depth-First Search (DFS)?

BFS explores all nodes at the current depth before moving deeper, guaranteeing a shortest path when all step costs are equal, while DFS goes as deep as possible before backtracking and offers no such guarantee.
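A minimal BFS sketch on an illustrative graph; swapping the FIFO `popleft()` for a LIFO `pop()` turns it into DFS:

```python
from collections import deque

def bfs(start, goal, neighbors):
    # FIFO frontier: shallower nodes are expanded first, so the first
    # time the goal is reached, the path uses the fewest actions.
    frontier = deque([[start]])
    reached = {start}                 # avoids revisiting states
    while frontier:
        path = frontier.popleft()     # use frontier.pop() for DFS
        node = path[-1]
        if node == goal:
            return path
        for nxt in neighbors(node):
            if nxt not in reached:
                reached.add(nxt)
                frontier.append(path + [nxt])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs("A", "D", graph.__getitem__))  # ['A', 'B', 'D']
```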

34

What is Depth-Limited Search?

It is a variant of DFS that restricts the depth of exploration to avoid infinite loops, but may miss optimal solutions if the limit is too low.

35

What is the advantage of Iterative Deepening Search?

It combines the benefits of BFS and DFS, finding the shortest path while using less memory than BFS.

36

What is Uniform Cost Search (UCS)?

UCS is a search algorithm that expands the least costly node first, ensuring the optimal path is found in terms of action cost.
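A minimal UCS sketch using a priority queue keyed on path cost; the graph and edge costs are illustrative:

```python
import heapq

def ucs(start, goal, edges):
    # Priority queue ordered by path cost g(n): the least-cost node is
    # expanded first, so the first goal popped is optimal (costs > 0).
    frontier = [(0, start, [start])]
    best = {start: 0}                      # cheapest known cost per state
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for nxt, step in edges.get(node, []):
            new_cost = cost + step
            if nxt not in best or new_cost < best[nxt]:
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
    return None

edges = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 7)], "C": [("D", 2)]}
print(ucs("A", "D", edges))  # (5, ['A', 'B', 'C', 'D'])
```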

37

What are the limitations of Depth-First Search?

DFS can get stuck in infinite loops if cycles are present and does not guarantee the shortest path, though it requires less memory than BFS.

38

What is the significance of state space graphs in search algorithms?

They represent the states and transitions produced by an agent's actions, helping visualize the search process and possible paths to the goal.

39

How does a Model-Based Reflex Agent manage state?

It maintains a model of the world that updates based on percept history, allowing it to determine the next action based on the current state.

40

What is the goal state in the context of the Vacuum World Problem?

The goal state is any configuration where both locations A and B are clean.

41

What is the action cost in the Vacuum World Problem?

The action cost is 1 unit for each action taken, with the total cost being the number of steps taken to reach the goal state.

42

What does the term 'static' mean in the context of environment properties?

'Static' means that the environment does not change while the agent is deliberating, in contrast to a dynamic environment where changes can occur.

43

What is a problem-solving agent?

An agent that operates in environments with a goal where the correct action is not immediately obvious, planning a sequence of actions to reach the goal.

44

What is a state in the context of problem solving?

A description of the environment, including the agent's interaction with it.

45

Define state space.

The set of all possible states an agent can be in within the environment.

46

What is a transition model?

A model that defines the result of an action, mapping state-action pairs to resulting states.

47

What are the steps in the problem-solving process?

1. Goal formulation
2. Problem formulation
3. Search
4. Execution

48

What defines a search problem?

A search problem consists of a state space, initial state, goal state(s), actions, transition model, and action cost function.
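These components can be collected in one plain container. Below, the Number game from a later card (reach 5 from 4) is used as the instance; the field names are illustrative:

```python
from dataclasses import dataclass
from typing import Callable

# A minimal sketch of the components of a search problem.
@dataclass
class SearchProblem:
    initial: object              # initial state
    is_goal: Callable            # state -> bool
    actions: Callable            # state -> iterable of actions
    result: Callable             # (state, action) -> state (transition model)
    action_cost: Callable        # (state, action, next_state) -> number

# The Number game: reach 5 starting from 4, with +1/-1 actions.
number_game = SearchProblem(
    initial=4,
    is_goal=lambda s: s == 5,
    actions=lambda s: ["+1", "-1"],
    result=lambda s, a: s + 1 if a == "+1" else s - 1,
    action_cost=lambda s, a, s2: 1,
)
print(number_game.is_goal(number_game.result(number_game.initial, "+1")))  # True
```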

49

What is the action cost function?

It defines the cost of applying an action, which may vary based on the action or the state.

50

.

51

What is the goal state in the Vacuum World problem?

Any state where both squares A and B are clean.

52

What does the search tree represent?

A structure where the root node is the initial state, and each subsequent node represents the result of applying actions to reach new states.

53

What is the significance of the initial state in a search tree?

It serves as the starting point from which all possible actions and resulting states are explored.

54

How is a node in a search tree expanded?

By applying the transition model for each valid action, generating children nodes with different path costs and depths.

55

What is the 8-Puzzle problem?

A problem where the states are the locations of numbered tiles, with the goal state being a specific arrangement of these tiles.

56

What is the goal of the Boat Puzzle?

To safely transport 3 missionaries and 3 cannibals across a river without violating safety rules.

57

What does restricting actions in problem-solving entail?

Eliminating actions that are invalid or useless based on the current state to reduce the branching factor.

58

What is the purpose of the transition model in problem-solving?

To define the effects of actions taken from a given state, determining the resulting state.

59

What is the significance of the action cost in problem-solving?

It allows for evaluating the efficiency of different action sequences based on their costs.

60

What is the goal state in the Number game example?

To reach a specific number, such as 5, starting from an initial number like 4.

61

What does the 8-Queens problem involve?

Positioning 8 queens on a chessboard so that no two queens can attack each other.

62

What is the role of a search algorithm in problem-solving?

To navigate the search tree and find a path to a goal state.

63

What is meant by 'uninformed search'?

A search strategy that does not use additional information about states to guide the search process.

64

What does a search tree's root node represent?

The initial state of the problem being solved.

65

What are the components of a problem definition?

States, initial state, actions, transition model, goal state, and action cost.

66

What is the significance of the state space in problem-solving?

It enumerates all possible states that can be reached from the initial state through valid actions.

67

What is the Vacuum World action sequence example?

An example sequence includes actions like 'suck dirt' and 'move left' to navigate the environment.

68

What is a transition model also known as?

A successor function.

69

What does solving a problem correspond to in search algorithms?

Finding a goal node.

70

What is the purpose of the frontier in tree-search algorithms?

To contain nodes that are yet to be explored.

71

What is the order of node expansion in a FIFO frontier?

Nodes are expanded in the order they were added, processing shallower nodes first.

72

What is the order of node expansion in a LIFO frontier?

Nodes are expanded in reverse order of their addition, going as deep as possible before backtracking.

73

What is the difference between reached nodes and expanded nodes?

Reached nodes are those that have been encountered, while expanded nodes are those that have been processed to generate child nodes.

74

What are uninformed search strategies?

Strategies that do not use additional information beyond states and successors.

75

What is the Breadth-First Search (BFS) algorithm?

An algorithm that processes nodes level by level, expanding shallower nodes before deeper ones.

76

What is the time complexity of Breadth-First Search?

O(b^d), where b is the branching factor and d is the depth of the shallowest goal node.

77

What is the Uniform-Cost Search (UCS) algorithm?

An algorithm that expands nodes based on the least path-cost, using a priority queue.

78

What is the optimality condition for Uniform-Cost Search?

It is complete and optimal provided every step cost is at least some ε > 0; zero-cost steps must be handled specially to avoid infinite loops.

79

What is the Depth-First Search (DFS) algorithm?

An algorithm that goes as deep as possible into the tree before backtracking.

80

What is a limitation of Depth-First Search?

It is not complete and may not terminate.

81

What is the Depth-Limited Search?

A variant of DFS that imposes a limit on the depth to ensure termination.

82

What is Iterative Deepening Search?

A search strategy that combines the benefits of DFS and BFS by incrementally increasing the depth limit.
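A minimal sketch for tree-shaped state spaces (cycle checking is omitted for brevity); the graph and names are illustrative:

```python
def depth_limited(node, goal, neighbors, limit):
    # DFS that refuses to descend deeper than `limit`.
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for nxt in neighbors(node):
        sub = depth_limited(nxt, goal, neighbors, limit - 1)
        if sub is not None:
            return [node] + sub
    return None

def iterative_deepening(start, goal, neighbors, max_depth=20):
    # Re-run depth-limited DFS with limits 0, 1, 2, ...:
    # BFS-like optimality (in actions) with DFS-like memory use.
    for limit in range(max_depth + 1):
        path = depth_limited(start, goal, neighbors, limit)
        if path is not None:
            return path
    return None

tree = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(iterative_deepening("A", "E", tree.__getitem__))  # ['A', 'C', 'E']
```

Re-expanding shallow nodes on every iteration looks wasteful, but since most nodes in a tree sit at the deepest level, the overhead is a small constant factor.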

83

What is the time complexity of Iterative Deepening Search?

O(b^d), similar to BFS.

84

What is Bi-directional Search?

A strategy that runs two simultaneous searches, one from the initial state and one from the goal state.

85

What is the main advantage of Bi-directional Search?

It can be quicker than a full search by meeting in the middle.

86

What defines a problem in search algorithms?

Its states, initial state, actions, transition model, goal states, and action cost function.

87

What is the role of a search strategy?

To specify the order of node expansion based on the frontier queueing policy.

88

What is the difference between informed and uninformed search strategies?

Informed strategies use additional problem-specific information to expand more promising states.

89

What is the time complexity of Depth-First Search?

O(b^m), where m is the maximum path length in the state space.

90

What is the space complexity of Depth-First Search?

O(bm) for the tree-like version, where b is the branching factor and m the maximum depth; this linear space usage is DFS's main advantage over BFS.

91

What is the purpose of a goal test in search algorithms?

To determine if the current node is the goal state.

92

What is the significance of node expansion in search algorithms?

It generates child nodes from the current node to explore further possibilities.

93

What is a priority queue in the context of search algorithms?

A data structure that allows nodes to be processed based on their priority, often used in UCS.

94

What does the term 'cumulative cost of actions' refer to?

The total cost incurred from the initial state to a given node in the search tree.

95

What does the Greedy algorithm use to determine paths?

The table of straight-line distances to the goal: it always expands the node that appears closest to the goal according to that heuristic.

96

What is the main feature of the A* algorithm?

It uses both the edge costs traversed so far and the straight-line distances, expanding nodes in order of f(n) = g(n) + h(n).
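A minimal A* sketch ordering the frontier by f(n) = g(n) + h(n); the graph and straight-line estimates below are illustrative (and admissible):

```python
import heapq

def a_star(start, goal, edges, h):
    # f(n) = g(n) + h(n): path cost so far plus heuristic estimate.
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}                    # cheapest known g per state
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, step in edges.get(node, []):
            g2 = g + step
            if nxt not in best_g or g2 < best_g[nxt]:
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

edges = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
sld = {"S": 4, "A": 5, "B": 1, "G": 0}     # straight-line estimates to G
print(a_star("S", "G", edges, sld.get))    # (5, ['S', 'B', 'G'])
```

With an admissible h, the first goal popped from the frontier carries an optimal path cost, just as in UCS (which is A* with h = 0).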

97

What does UCS stand for in search algorithms?

Uniform Cost Search.

98

What is the primary goal of Uniform Cost Search (UCS)?

To find the least-cost path to the goal.

99

What is the significance of the Straight Line Distance (SLD) in search algorithms?

It is always equal to or less than the actual distance on the graph, so it never overestimates the remaining cost (it is an admissible heuristic).

100

What is the sequence of visited nodes in UCS starting from node A?

A (cost 0), B (cost 2), C (cost 5), D (cost 7), E (cost 15).