Fundamentals of Artificial Intelligence Concepts

86 Terms

1
New cards

Which of the following is not a fundamental property of a rational agent?

Inference capability

2
New cards

Which category of agent is defined by the Turing test approach?

Acting humanly

3
New cards

Which category of agent corresponds to Aristotle’s philosophical approach?

Thinking rationally

4
New cards

The General Problem Solver (GPS) program is an example of:

Thinking humanly

5
New cards

Which of the following is not required to understand how the human brain works?

6
New cards

Which of the following capabilities is not required to pass the Turing test?

Computer Vision

7
New cards

Which of the following capabilities is not required to pass the total Turing test?

Introspection

8
New cards

Cognitive Science is a type of:

Thinking humanly

9
New cards

A rational agent is one that only uses a logical approach to reasoning. 

False

10
New cards

To explain a reflexive reaction like withdrawing your hand from a hot stove, you would look to models of acting rationally.

False

11
New cards

“Acting appropriately when there is not enough time to do all the computation one might like.” This is the definition of _____________.

limited rationality

12
New cards

The problem of achieving agreement between our true preferences and the objectives we put into the machine is called the ___________.

value alignment problem

13
New cards

The theory of ____________ provides a basis for analyzing the tractability of problems.

NP-completeness

14
New cards

The _______________ showed that in any formal theory, there are necessarily true statements that have no proof within the theory.

incompleteness theorem

15
New cards

____________ is a branch of philosophy and mathematics that deals with the study of reasoning and inference through the use of formal systems.

Logic

16
New cards

_____________, which combines probability theory with utility theory, provides a framework for making decisions under uncertainty.

Decision theory
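
A minimal sketch of the idea (the actions, probabilities, and utilities below are hypothetical, not from the original card): a decision-theoretic agent picks the action with the highest expected utility, i.e. the probability-weighted sum of outcome utilities.

    # Expected-utility choice under uncertainty (hypothetical numbers).
    def expected_utility(outcomes):
        """outcomes: list of (probability, utility) pairs for one action."""
        return sum(p * u for p, u in outcomes)

    actions = {
        "take_highway": [(0.8, 10), (0.2, -5)],   # fast unless there is traffic
        "take_side_roads": [(1.0, 4)],            # slower but predictable
    }

    best = max(actions, key=lambda a: expected_utility(actions[a]))
    print(best, expected_utility(actions[best]))  # take_highway 7.0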

17
New cards

Which agent can only make decisions in a fully observable environment?

Simple reflex agent

18
New cards

Which of the following is not a difference between a “performance measure” and a “utility function”?

A performance measure improves the performance of an agent, while a utility function is useful in designing the agent’s architecture.

19
New cards

What are the properties of the time-controlled chess game environment?

Semi-dynamic, deterministic, discrete, sequential

20
New cards

What are the properties of the medical diagnosis system environment?

Fully observable, non-deterministic, discrete

21
New cards

What is the best program to design an autonomous taxi driver?

Utility-based agent

22
New cards

Which choice is not correct about an agent that relies only on its prior knowledge?

Fully autonomous

23
New cards

Which one is not an element of a learning agent?

Knowledge element

24
New cards

A self-driving car navigating through a busy city is an example of a non-observable environment.

False

25
New cards

An agent consists of ________.

Architecture and program

26
New cards

The more an agent relies on its prior knowledge and the less attention it pays to its percepts, the _____ autonomous it is, and a completely autonomous system is _____ intelligent.

less, more

27
New cards

What type of agents are problem-solving agents?

Goal-based agent

28
New cards

What is the computational process for considering a sequence of actions that form a path to a goal state?

Search

29
New cards

Which of the following is NOT a main step in the problem-solving process?

Learning from experience

30
New cards

How is a search problem formally defined?

By specifying the initial state, actions and their costs, and goal test
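
As a sketch of what that formal definition can look like in code (the grid-world problem and all names here are illustrative assumptions, not part of the card):

    # A search problem bundles the initial state, actions, transition model,
    # action costs, and goal test. Hypothetical 4-connected grid example.
    class GridSearchProblem:
        def __init__(self, initial, goal, obstacles, size):
            self.initial = initial      # initial state, e.g. (0, 0)
            self.goal = goal            # goal state, e.g. (4, 4)
            self.obstacles = obstacles  # set of blocked cells
            self.size = size            # board is size x size

        def actions(self, state):
            """Legal moves from a state: up, down, left, right."""
            x, y = state
            moves = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
            return [m for m in moves
                    if 0 <= m[0] < self.size and 0 <= m[1] < self.size
                    and m not in self.obstacles]

        def result(self, state, action):
            """Transition model: here an action is simply the next cell."""
            return action

        def action_cost(self, state, action, next_state):
            return 1                    # unit step cost

        def is_goal(self, state):
            return state == self.goal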

31
New cards

What is the main difference between a graph and a tree in search problems?

Graphs can contain cycles, while trees cannot

32
New cards

When is graph search more effective than tree search?

When the state space contains repeated states

33
New cards

A ________ is a representation of a problem that includes the states, actions, and the transition model.

search problem

34
New cards

The branching factor of a search tree refers to the number of ________ from any given node. (use small letters to answer this question)

possible actions

35
New cards

The transition model defines how the world evolves in response to an action.

True

36
New cards

In a tree search, nodes can represent repeated states.

True

37
New cards

A state space includes all possible configurations of a problem that can be reached through a series of actions.

True

38
New cards

The performance of a problem-solving agent is solely measured by the speed of finding a solution.

False

39
New cards

Completeness

Whether the algorithm is guaranteed to find a solution when one exists.

40
New cards

Optimality

Whether the strategy finds the best possible solution

41
New cards

Time Complexity

How long it takes to find a solution.

42
New cards

Space Complexity

How much memory is required to perform the search.

43
New cards

Why is an abstract mathematical description used in problem-solving?

To omit irrelevant factors and focus only on what is necessary for finding solutions

44
New cards

Three admissible heuristic functions, h1(n), h2(n), and h3(n), satisfy the following relation: h1(n)>h2(n)>h3(n). Which of these heuristics will require the least amount of time to find the solution using the A* algorithm?

h1(n): it is the most informed heuristic and will result in the fewest explored nodes.

45
New cards

Which of the following statements is true about RBFS (Recursive Best-First Search)?

It is more efficient than IDA*

46
New cards

You are tasked with solving the Robot Navigation Problem, where a robot must navigate through a grid from a starting point to a goal point while avoiding obstacles. The robot can move up, down, left, and right but cannot pass through grid cells that contain obstacles. Which of the following options are admissible heuristics to solve this problem?

Weighted Manhattan distance: h(n) = weight × (|x_goal − x_n| + |y_goal − y_n|)
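
A small sketch of this heuristic for the grid robot (function and variable names are assumptions). With unit step costs the plain Manhattan distance never overestimates; the weighted form stays admissible only if the weight is at most 1, which is assumed below.

    # Weighted Manhattan distance: h(n) = weight * (|x_goal - x_n| + |y_goal - y_n|)
    def manhattan(node, goal, weight=1.0):
        (xn, yn), (xg, yg) = node, goal
        return weight * (abs(xg - xn) + abs(yg - yn))

    print(manhattan((1, 1), (4, 5)))  # 7.0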

47
New cards

You are implementing an A* search algorithm for pathfinding in a weighted graph. Consider a heuristic function h(n) used to estimate the cost from a node n to the goal.

Which of the following statements correctly describes the implications of using a non-consistent heuristic h(n) in the A* algorithm?

  1. The algorithm may fail to find the optimal path to the goal, as it could overlook potentially shorter paths due to inflated heuristic values.

  2. The algorithm will always terminate without finding a solution, regardless of the graph's structure.

  3. The use of a non-consistent heuristic could result in increased computational time due to unnecessary expansions of nodes.

  4. A non-consistent heuristic guarantees that the solution found will be suboptimal but will always be reachable within a finite number of steps.

1 and 3
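
As a sketch of what consistency means in practice (graph, costs, and h-values below are hypothetical): h is consistent if h(n) ≤ c(n, n') + h(n') holds for every edge, and a quick scan over the edges can flag the violations that cause the re-expansions and potential suboptimality described in statements 1 and 3.

    # Check heuristic consistency: h(n) <= c(n, n') + h(n') for every edge.
    edges = {("A", "B"): 1, ("B", "G"): 4}
    h = {"A": 5, "B": 1, "G": 0}   # admissible, but h(A) > c(A,B) + h(B)

    def is_consistent(edges, h):
        return all(h[u] <= cost + h[v] for (u, v), cost in edges.items())

    print(is_consistent(edges, h))  # False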

48
New cards

what is the solution using A*?

[S,B,A,C,D,G] with cost of 8
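
The figure this card refers to is not reproduced here, so the following is a generic A* sketch on a hypothetical graph (not the one from the original question); it only illustrates the mechanics: a priority queue ordered by f = g + h.

    import heapq

    def a_star(graph, h, start, goal):
        frontier = [(h[start], 0, start, [start])]   # entries are (f, g, node, path)
        best_g = {start: 0}
        while frontier:
            f, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, g
            for nxt, cost in graph[node]:
                g2 = g + cost
                if g2 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g2
                    heapq.heappush(frontier, (g2 + h[nxt], g2, nxt, path + [nxt]))
        return None, float("inf")

    # Hypothetical example graph and heuristic.
    graph = {"S": [("A", 1), ("B", 2)], "A": [("G", 5)], "B": [("G", 3)], "G": []}
    h = {"S": 4, "A": 4, "B": 3, "G": 0}
    print(a_star(graph, h, "S", "G"))  # (['S', 'B', 'G'], 5)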

49
New cards

Use the greedy search algorithm to solve the above problem.

[S,A,G] with the cost of 13.

50
New cards

Which of the following algorithms is not complete?

DFS

51
New cards

A rational agent must always make decisions that are optimal given its current knowledge.

False

52
New cards

Limited Information

Agents may not have access to all relevant information, leading to suboptimal decisions.

53
New cards

Uncertainty

The world is often uncertain, and agents may have to make decisions based on probabilistic models, which can lead to unexpected outcomes.

54
New cards

A partially observable environment is one where the agent has all the necessary information about the state of the environment.

False

This means that even if the agent has access to some of the information, it may still be missing crucial details needed to make fully informed decisions.

55
New cards

A table-driven agent is highly adaptable to new situations.

False

56
New cards

In a semi-dynamic environment, the state of the environment can change while the agent is deliberating.

False

In a semi-dynamic environment, the environment itself does not change; only the performance score is affected by the passage of time during deliberation.

57
New cards

An autonomous agent relies entirely on pre-programmed rules without learning from its environment.

False

58
New cards

Which of the following is a property of a goal-based agent?

Plans actions based on future goals

59
New cards

Which of the following environments is most suitable for a utility-based agent?

Dynamic and partially observable

60
New cards

Which of the following is not an advantage of model-based reflex agents?

They only require current percepts to function

61
New cards

Which of the following is required for a simple reflex agent to operate effectively?

Predefined condition-action rules
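
A tiny sketch of such predefined condition-action rules, using the familiar two-square vacuum world as an assumed illustration (the rule table and names are not from the original card):

    # Simple reflex agent: maps the current percept directly to an action.
    RULES = {
        ("A", "Dirty"): "Suck",
        ("B", "Dirty"): "Suck",
        ("A", "Clean"): "MoveRight",
        ("B", "Clean"): "MoveLeft",
    }

    def simple_reflex_agent(percept):
        """percept = (location, status); no internal state is kept."""
        return RULES[percept]

    print(simple_reflex_agent(("A", "Dirty")))  # Suck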

62
New cards

Which of the following best describes the goal of "thinking like a human"?

Creating models that mimic human thought processes and decision-making

63
New cards

A chess-playing AI that evaluates possible moves and selects the best one based on logic and probability is an example of:

Acting rationally

64
New cards

An AI system that drives a car by observing the environment and making decisions to safely navigate traffic is an example of:

Acting rationally

65
New cards

General Problem Solver (GPS) is best categorized under which AI approach?

Thinking like a human

66
New cards

Which agent type would struggle in a non-deterministic environment?

Simple reflex agent

67
New cards

Which of the following is the primary objective of the Turing Test?

To assess a machine's ability to exhibit intelligent behavior indistinguishable from a human.

68
New cards

Which of the following best describes a situation involving a "known environment" in AI?

An AI agent playing a board game with predefined rules and a fixed board layout.

69
New cards

What is the environment description for playing soccer?

Partially observable, stochastic, sequential, dynamic, continuous, multi-agent.

70
New cards

What is an essential component for passing the Total Turing Test that is not required for the standard Turing Test?

computer vision

71
New cards

A ______ environment requires the agent to account for uncertainty in its decision-making process.

uncertain

72
New cards

An agent's _______ is the abstract mathematical representation of its behavior, while the _________ is the concrete implementation.

agent function, agent program

73
New cards

A(n) ________agent learns from its environment and adapts its behavior over time, while it may still require human guidance.

learning

74
New cards

Observable

The agent's sensors give it access to the complete state of the environment at each point in time, for a fully observable environment. For a partially observable environment, sensors are noisy/inaccurate or parts of the state are missing

75
New cards

Deterministic

The next state of the environment is completely determined by the current state and the action executed by the agent. Otherwise, the environment is stochastic or nondeterministic

76
New cards

Static

The environment does not change while an agent is deliberating. If it can change, it is dynamic. If the environment doesn't change but the score does, it's semidynamic

77
New cards

Episodic

The agent's experience is divided into atomic episodes, where in each episode the agent receives a percept and performs a single action, and the next episode does not depend on actions taken in previous episodes. In contrast, in sequential environments, the current decision affects future decisions

78
New cards

Learning Element

Responsible for making improvements. It modifies the performance element based on feedback from the critic. It can change the agent's knowledge components and learns from what it perceives, such as observing successive states of the environment to learn how the world evolves and the results of its actions. It brings the components into closer agreement with available feedback information, improving overall performance. This aligns with the function of Acquiring new knowledge.

79
New cards

Performance Element

Responsible for selecting external actions. This component is what we previously considered to be the entire agent, taking in percepts and deciding on actions. It "keeps doing the actions that are best, given what it knows," unless directed otherwise by the problem generator. This aligns with the function of Selecting and executing actions

80
New cards

Critic

Tells the learning element how well the agent is doing with respect to a fixed performance standard. It provides feedback to the learning element. The critic is necessary because percepts alone may not indicate the agent's success. It conceptually sits outside the agent so the agent cannot modify the standard to fit its own behavior. This aligns with the function of Evaluating the agent's actions.

81
New cards

Problem Generator

Responsible for suggesting actions that will lead to new and informative experiences. Its job is to suggest exploratory actions, even if they are potentially suboptimal in the short term, to help the agent discover better actions in the long run. This aligns with the function of Suggesting actions or tasks.

82
New cards

Which of the following is true regarding the importance of time complexity vs. space complexity in search algorithms?

83
New cards

Assume that you place n rooks on an n×n board so that they do not attack each other. What is the maximum size of the state space?

n!
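
Why n!: with one rook per row and no two rooks sharing a column, each placement is a permutation of the n columns. A brute-force check for small n (a sketch, not part of the original card):

    from itertools import combinations
    from math import factorial

    def non_attacking_placements(n):
        """Count sets of n cells on an n x n board with all rows and columns distinct."""
        cells = [(r, c) for r in range(n) for c in range(n)]
        count = 0
        for placement in combinations(cells, n):
            rows = {r for r, _ in placement}
            cols = {c for _, c in placement}
            if len(rows) == n and len(cols) == n:
                count += 1
        return count

    for n in range(1, 5):
        print(n, non_attacking_placements(n), factorial(n))  # counts match n!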

84
New cards

Prove/disprove:

If h1(s) and h2(s) are two admissible A* heuristics, then their sum h(s) = h1(s)+h2(s) is also admissible.

An admissible heuristic satisfies h(s) ≤ h*(s), where h*(s) is the true cost of reaching the goal from s.

Counterexample: suppose h*(s) = 5, h1(s) = 3, and h2(s) = 4. Both h1 and h2 are admissible, but h1(s) + h2(s) = 7 > 5 = h*(s), so the sum overestimates the true cost. Therefore the sum of two admissible heuristics is not necessarily admissible; the claim is disproved.

85
New cards

Match every search algorithm with its two characteristics.

Algorithm

1. IDA*

2. RBFS

Characteristics:

A. Uses limited memory by storing only the path currently being explored.

B. Performs an iterative deepening strategy by increasing the path-cost limit on each iteration.

C. Keeps track of the current path and the best alternative path, but discards others.

D. Performs a depth-first strategy guided by path costs or heuristic estimates.

IDA*: B,D

RBFS: A, C
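
A compact IDA* sketch matching characteristics B and D above: depth-first search bounded by an f = g + h limit that is raised to the smallest exceeded value on each iteration, storing only the current path (the graph and heuristic reuse the earlier hypothetical example).

    import math

    def ida_star(graph, h, start, goal):
        def dfs(node, g, bound, path):
            f = g + h[node]
            if f > bound:
                return f, None                 # report the smallest overshoot
            if node == goal:
                return f, path
            minimum = math.inf
            for nxt, cost in graph[node]:
                if nxt in path:                # avoid cycles on the current path
                    continue
                t, found = dfs(nxt, g + cost, bound, path + [nxt])
                if found is not None:
                    return t, found
                minimum = min(minimum, t)
            return minimum, None

        bound = h[start]
        while True:
            bound, found = dfs(start, 0, bound, [start])
            if found is not None:
                return found, bound
            if bound == math.inf:
                return None, math.inf

    graph = {"S": [("A", 1), ("B", 2)], "A": [("G", 5)], "B": [("G", 3)], "G": []}
    h = {"S": 4, "A": 4, "B": 3, "G": 0}
    print(ida_star(graph, h, "S", "G"))  # (['S', 'B', 'G'], 5)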

86
New cards