AI 7000 Final Exam

Description and Tags

Final Exam KSU

77 Terms

1
New cards

How do knowledge-based agents differ from the problem-solving agents discussed in earlier chapters?

They reason over an internal knowledge base to decide actions, rather than having limited, inflexible knowledge of actions and outcomes.

2
New cards

What is the central component of a knowledge-based agent?

Its knowledge base, or KB, which is a set of sentences representing assertions about the world.

3
New cards

In the context of a knowledge base, what does the 'TELL' operation do?

It adds new knowledge (sentences) to the knowledge base.

4
New cards

What is the purpose of the 'ASK' operation in a knowledge-based agent?

It queries the knowledge base to determine what is known, often to decide on an action.

5
New cards

The process of deriving new knowledge from existing knowledge in a KB is called _.

inference

6
New cards

What are the three main steps a knowledge-based agent program performs each time it is called?

1. TELL the KB what the agent perceives, 2. ASK the KB which action to perform, 3. TELL the KB that the chosen action was executed.

7
New cards

What is the 'declarative approach' to building a knowledge-based agent?

It involves starting with an empty knowledge base and TELLing it sentences one by one until it knows how to operate.

8
New cards

In the Wumpus World, what perception is associated with squares adjacent to a pit?

The agent perceives a Breeze.

9
New cards

What perception indicates that gold is in the same square as the agent in Wumpus World?

The agent perceives a Glitter.

10
New cards

What are the properties of the Wumpus World environment in terms of observability, determinism, and static/dynamic nature?

It is partially observable, deterministic, sequential (not episodic), static, discrete, and single-agent.

11
New cards

In logic, what is the 'semantics' of a language?

It defines the 'meaning' of sentences by specifying their truth with respect to each possible world, or model.

12
New cards

What does it mean for a sentence $α$ to logically entail a sentence $β$ (written as $α ⊨ β$)?

It means that in every model in which $α$ is true, $β$ is also true.

13
New cards

An inference algorithm that derives only entailed sentences is called _.

sound or truth-preserving

14
New cards

What property does a 'complete' inference algorithm have?

It can derive any sentence that is logically entailed by the knowledge base.

15
New cards

In propositional logic, what is an 'atomic sentence'?

It consists of a single proposition symbol that can be either true or false.

16
New cards

List the five common logical connectives used in propositional logic.

Negation ($¬$), conjunction ($∧$), disjunction ($∨$), implication ($⇒$), and biconditional ($⇔$).

17
New cards

In propositional logic, if P is 'Today is Wednesday', what does $P ⇒ Q$ mean when P is false?

The implication is true, regardless of the truth value of Q.

18
New cards

What is the primary limitation of propositional logic that first-order logic addresses?

Propositional logic has limited expressive power and cannot generalize facts, such as 'pits cause breezes in adjacent squares', without creating a sentence for each square.

19
New cards

What are the two standard quantifiers used in first-order logic?

The universal quantifier ($∀$, 'for all') and the existential quantifier ($∃$, 'for some').

20
New cards

In first-order logic, what is the purpose of a Predicate?

A Predicate stands for a relation or property of objects, such as Student(x).

21
New cards

How can the entailment $α ⊨ β$ be tested using model checking?

By testing the unsatisfiability of the sentence $α ∧ ¬β$.
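
As a rough illustration of this test, here is a minimal Python sketch (not from the course materials) that decides $α ⊨ β$ by enumerating every model over the proposition symbols and checking that $α ∧ ¬β$ is never satisfied; representing sentences as Python functions of an assignment dict is an assumption made just for the example.

```python
from itertools import product

def entails(alpha, beta, symbols):
    """Return True iff alpha entails beta over the given proposition symbols."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if alpha(model) and not beta(model):  # found a model of alpha AND NOT beta
            return False                      # so alpha does not entail beta
    return True                               # alpha AND NOT beta is unsatisfiable

# Example: (P AND Q) entails P
alpha = lambda m: m["P"] and m["Q"]
beta = lambda m: m["P"]
print(entails(alpha, beta, ["P", "Q"]))  # True
```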

22
New cards

What is the core idea behind theorem proving as an inference method?

It applies rules of inference directly to sentences in the KB to construct a proof without consulting models.

23
New cards

What is the rule of inference known as Modus Ponens?

If the knowledge base contains $α$ and $α ⇒ β$, then $β$ can be inferred.
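
In standard inference-rule notation (added here for reference), this is often written as

$\dfrac{α ⇒ β, \quad α}{β}$

with the premises above the line and the conclusion below.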

24
New cards

What is the And-Elimination rule of inference?

If the knowledge base contains $α ∧ β$, then both $α$ and $β$ can be inferred individually.

25
New cards

Describe the process of forward chaining for inference.

It starts with known facts and adds conclusions of implications to the KB whenever all premises are known, continuing until the query is derived.
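
A minimal Python sketch of this loop for propositional definite clauses; the (premises, conclusion) rule representation and the example facts are assumptions made for illustration, not the textbook's code.

```python
def forward_chain(facts, rules, query):
    """facts: set of known symbols; rules: list of (premise_list, conclusion)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)  # every premise is known, so add the conclusion
                changed = True
                if conclusion == query:
                    return True
    return query in known

rules = [(["A", "B"], "C"), (["C"], "D")]
print(forward_chain({"A", "B"}, rules, "D"))  # True
```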

26
New cards

Forward chaining is an example of what general concept of reasoning?

Data-driven reasoning.

27
New cards

Describe the process of backward chaining for inference.

It works backward from the query, finding implications that conclude the query and then trying to prove their premises.
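
For comparison, a recursive sketch of backward chaining over the same hypothetical (premises, conclusion) rule format; it assumes the rule set is acyclic, since no loop checking is done.

```python
def backward_chain(facts, rules, goal):
    """Prove `goal` by finding a rule that concludes it and proving its premises."""
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(backward_chain(facts, rules, p) for p in premises):
            return True
    return False

rules = [(["A", "B"], "C"), (["C"], "D")]
print(backward_chain({"A", "B"}, rules, "D"))  # True
```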

28
New cards

Backward chaining is a form of _-directed reasoning.

goal

29
New cards

The Davis–Putnam algorithm (DPLL) is an efficient backtracking search for model checking that embodies what three improvements?

Early termination, the pure symbol heuristic, and the unit clause heuristic.

30
New cards

In the DPLL algorithm, what is a 'pure symbol'?

A symbol that appears with only one sign (either always positive or always negative) in all clauses of a sentence.

31
New cards

What is the definition of a 'unit clause' in the context of the DPLL algorithm?

A clause with just one literal, or a clause in which all literals but one have already been assigned false, which forces a specific value assignment for the remaining literal.
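
A rough sketch of how the two heuristics can be detected, assuming clauses are lists of (symbol, sign) literals and the partial model is a dict from symbols to booleans; these data structures are assumptions for the example.

```python
def find_pure_symbol(symbols, clauses):
    """Return (symbol, value) for a symbol appearing with only one sign, else None."""
    for s in symbols:
        signs = {sign for clause in clauses for (sym, sign) in clause if sym == s}
        if len(signs) == 1:
            return s, signs.pop()
    return None

def find_unit_clause(clauses, model):
    """Return the forced (symbol, value) from a clause whose other literals are
    all false under the partial model, leaving exactly one literal unassigned."""
    for clause in clauses:
        unassigned = [(sym, sign) for (sym, sign) in clause if sym not in model]
        satisfied = any(sym in model and model[sym] == sign for (sym, sign) in clause)
        if not satisfied and len(unassigned) == 1:
            return unassigned[0]
    return None

clauses = [[("P", True)], [("P", False), ("Q", True)]]
print(find_unit_clause(clauses, {}))     # ('P', True)
print(find_pure_symbol(["Q"], clauses))  # ('Q', True)
```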

32
New cards

What is the task of 'classical planning'?

Finding a sequence of actions to accomplish a goal in a discrete, deterministic, static, and fully observable environment.

33
New cards

What is PDDL?

Planning Domain Definition Language, a standard language used to describe automated planning problems.

34
New cards

In PDDL, how is a state represented?

As a conjunction of ground atomic fluents.

35
New cards

In PDDL, what is an 'action schema' composed of?

The action name, a list of variables, a precondition, and an effect.
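
One way to picture an action schema is as a record with exactly those four parts. The Python dataclass and the Fly example below are an illustrative rendering, not PDDL syntax itself.

```python
from dataclasses import dataclass

@dataclass
class ActionSchema:
    name: str            # action name
    variables: list      # schema parameters
    precondition: list   # literals that must hold before the action applies
    effect: list         # literals added (positive) or deleted (negative)

fly = ActionSchema(
    name="Fly",
    variables=["?p", "?from", "?to"],
    precondition=["Plane(?p)", "Airport(?from)", "Airport(?to)", "At(?p, ?from)"],
    effect=["not At(?p, ?from)", "At(?p, ?to)"],
)
```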

36
New cards

What two components are added to a PDDL domain definition to specify a particular problem?

An initial state and a goal.

37
New cards

In the context of planning algorithms, what is 'forward search'?

A search that starts from the initial state and moves through the state space by applying applicable actions, looking for a goal state.

38
New cards

How does 'backward search' for planning work?

It starts at the goal and applies actions in reverse, searching for a sequence of steps that reaches the initial state.

39
New cards

The ignore-delete-lists heuristic simplifies a planning problem by removing all _ literals from action effects.

negative

40
New cards

What does the 'ignore-preconditions' heuristic do to simplify a planning problem?

It drops all preconditions from actions, making every action applicable in every state.

41
New cards

What is reinforcement learning (RL)?

A type of learning where an agent interacts with an environment, receiving rewards or punishments, and learns to maximize cumulative rewards.

42
New cards

In a Markov decision process (MDP), what does the transition model $P(s' | a, s)$ represent?

The probability of reaching state s' if action a is taken in state s.

43
New cards

What is a 'policy' ($π$) in reinforcement learning?

A function that specifies what action an agent should take for any given state.

44
New cards

What is an 'optimal policy' ($π*$)?

A policy that yields the highest expected utility.

45
New cards

In reinforcement learning, what is the purpose of the discount factor $γ$?

It describes an agent's preference for current rewards over future rewards, decaying the value of rewards based on how far in the future they are.
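
Concretely, in the standard discounted-reward formulation (stated here for reference), the utility of a state sequence is

$U([s_0, s_1, s_2, …]) = R(s_0) + γR(s_1) + γ^2 R(s_2) + ⋯ = \sum_{t=0}^{∞} γ^t R(s_t)$

so a reward received $t$ steps in the future is scaled down by $γ^t$.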

46
New cards

What is the main difference between model-based and model-free reinforcement learning?

Model-based RL uses a transition model of the environment to make decisions, while model-free RL learns how to behave directly without building a model.

47
New cards

Contrast passive and active reinforcement learning.

In passive RL, the agent executes a fixed policy to evaluate it, while in active RL, the agent decides which actions to take to learn an optimal policy.

48
New cards

What is 'direct utility estimation' in passive reinforcement learning?

A method where the utility of a state is estimated by averaging the observed reward-to-go from that state over all trials.

49
New cards

What is the core idea of temporal-difference (TD) learning?

It uses observed transitions to incrementally adjust the utility estimates of observed states based on the utility of the successor state.
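
The standard TD update for evaluating a fixed policy $π$ (stated here for reference; $α$ is the learning rate) is

$U^π(s) ← U^π(s) + α[R(s) + γU^π(s') − U^π(s)]$

which nudges the current estimate toward the observed reward plus the discounted estimate of the successor state.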

50
New cards

The TD update rule uses the difference between the _ utility and the _ utility.

expected, observed

51
New cards

What is the fundamental tradeoff an active reinforcement learning agent must manage?

The tradeoff between exploitation (using known information to maximize reward) and exploration (trying new actions to gain information).

52
New cards

What does the action-utility function $Q(s, a)$ in Q-learning represent?

The expected total discounted reward if the agent takes action 'a' in state 's' and acts optimally thereafter.

53
New cards

How does a Q-learning agent select an action in a given state 's' once it has learned the Q-function?

It chooses the action 'a' that maximizes $Q(s, a)$.
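
A minimal tabular Q-learning sketch; the ε-greedy exploration, hyperparameter values, and the (state, action) encoding are illustrative assumptions rather than the course's implementation.

```python
import random
from collections import defaultdict

Q = defaultdict(float)           # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def choose_action(state, actions):
    """Epsilon-greedy: usually exploit argmax_a Q(s, a), occasionally explore."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, actions):
    """One Q-learning step: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```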

54
New cards

What is supervised learning?

A learning paradigm where an agent learns a function that maps from input to output by observing a training set of input-output pairs.

55
New cards

What is the most common task in unsupervised learning?

Clustering, which involves detecting potentially useful groups of input examples without explicit labels.

56
New cards

In supervised learning, the function $h$ that a learning algorithm discovers is called a _, and it is drawn from a _ space $H$.

hypothesis, hypothesis

57
New cards

Define 'bias' in the context of machine learning models.

Bias is the systematic error caused by a model making incorrect assumptions about the data.

58
New cards

Define 'variance' in the context of machine learning models.

Variance is the amount a model's prediction changes due to fluctuations in the training data.

59
New cards

What is the bias-variance tradeoff?

A choice between more complex, low-bias models that fit training data well and simpler, low-variance models that may generalize better.

60
New cards

What is the main difference between a regression problem and a classification problem?

Regression predicts a continuous valued output, while classification predicts a discrete valued output.

61
New cards

What is the objective of linear regression?

To find the weights of a linear function that best fits the data, typically by minimizing the sum of squared errors.

62
New cards

What is the purpose of the gradient descent algorithm?

To find the optimal parameters (weights) of a model by iteratively moving in the direction of the steepest descent of the loss function.
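
A toy example of gradient descent fitting a one-variable linear model $h(x) = w_0 + w_1 x$ by minimizing the sum of squared errors; the data points and learning rate are made up for illustration.

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 5.0, 6.9, 9.1]        # roughly y = 1 + 2x
w0, w1, lr = 0.0, 0.0, 0.01

for _ in range(5000):
    # gradient of the sum of squared errors with respect to w0 and w1
    g0 = sum(2 * (w0 + w1 * x - y) for x, y in zip(xs, ys))
    g1 = sum(2 * (w0 + w1 * x - y) * x for x, y in zip(xs, ys))
    w0 -= lr * g0                # step in the direction of steepest descent
    w1 -= lr * g1

print(round(w0, 2), round(w1, 2))  # close to the least-squares fit (about 1.05, 1.99)
```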

63
New cards

What type of function does logistic regression use to model the probability of a discrete outcome?

A logistic function, also known as a sigmoid function.
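
For reference, the logistic (sigmoid) function is $σ(z) = \frac{1}{1 + e^{-z}}$; it squashes any real-valued input into $(0, 1)$, so its output can be interpreted as a probability.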

64
New cards

What is a decision tree in machine learning?

A model that uses a tree-like structure of if-else conditions to map a vector of attribute values to an output value.

65
New cards

The LEARN-DECISION-TREE algorithm uses a greedy strategy that always tests the most _ attribute first.

important

66
New cards

What distinguishes a nonparametric model from a parametric model?

A nonparametric model cannot be characterized by a bounded set of parameters; its complexity can grow with the amount of training data.

67
New cards

How does a K-nearest-neighbors (KNN) model make a prediction for a classification task?

It finds the 'k' nearest examples to the query point in the training data and predicts the most common output value among them.
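
An illustrative k-nearest-neighbors classifier in Python; the toy dataset and the choice of Euclidean distance are assumptions made for the example.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label). Return the majority label among
    the k training examples closest to the query (Euclidean distance)."""
    by_distance = sorted(train, key=lambda ex: math.dist(ex[0], query))
    labels = [label for _, label in by_distance[:k]]
    return Counter(labels).most_common(1)[0][0]

train = [((1, 1), "A"), ((1, 2), "A"), ((5, 5), "B"), ((6, 5), "B"), ((5, 6), "B")]
print(knn_predict(train, (5.5, 5.5), k=3))  # "B"
```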

68
New cards

What is the core principle of a Support Vector Machine (SVM)?

It constructs a maximum margin separator, which is a decision boundary with the largest possible distance to the nearest example points of any class.

69
New cards

What is the basic idea of deep learning that allows it to be more expressive than shallow models?

It uses models with long computation paths (deep circuits), allowing input variables to interact in complex ways.

70
New cards

In a neural network, what is a 'unit'?

A node that calculates the weighted sum of its inputs and then applies a nonlinear activation function to produce an output.
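
A single unit reduced to a few lines of Python; the weights, inputs, and the ReLU/sigmoid activation choices are arbitrary examples.

```python
import math

def unit(inputs, weights, bias, activation=lambda z: max(0.0, z)):  # ReLU by default
    """Weighted sum of inputs plus a bias, passed through a nonlinear activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(z)

print(unit([1.0, 2.0], [0.5, -0.25], bias=0.1))                               # ReLU: 0.1
print(unit([1.0, 2.0], [0.5, -0.25], 0.1, lambda z: 1 / (1 + math.exp(-z))))  # sigmoid
```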

71
New cards

What are the three main types of layers in a typical feedforward neural network?

The input layer, hidden layer(s), and the output layer.

72
New cards

What is the purpose of the back-propagation algorithm in training neural networks?

It computes the gradient of the loss function with respect to each weight in the network, allowing weights to be updated via gradient descent.

73
New cards

What type of data are Convolutional Neural Networks (CNNs) primarily designed to process?

Images (and other grid-like data), by exploiting spatial locality and approximate translation invariance.

74
New cards

In a CNN, what is a 'kernel'?

A pattern of weights (a filter) that is replicated and applied across multiple local regions of an image to extract features.

75
New cards

What is the function of a 'pooling layer' in a CNN?

It summarizes a set of adjacent units from a preceding layer with a single value, performing down-sampling.
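
A plain-Python sketch of the two ideas from this card and the previous one: sliding a small kernel over an image (a stride-1 'valid' convolution) and 2×2 max pooling. The image and kernel values are arbitrary.

```python
def convolve2d(image, kernel):
    """Apply the kernel at every position where it fits, producing a feature map."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(kernel[a][b] * image[i + a][j + b]
                 for a in range(kh) for b in range(kw))
             for j in range(len(image[0]) - kw + 1)]
            for i in range(len(image) - kh + 1)]

def max_pool2x2(fmap):
    """Summarize each non-overlapping 2x2 block with its maximum value."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

image = [[1, 2, 0, 1, 3],
         [4, 1, 1, 0, 2],
         [0, 2, 3, 1, 1],
         [1, 0, 2, 4, 0],
         [2, 1, 0, 1, 1]]
kernel = [[1, 0], [0, -1]]          # a simple 2x2 difference filter
fmap = convolve2d(image, kernel)    # 4x4 feature map
print(max_pool2x2(fmap))            # 2x2 pooled summary
```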

76
New cards

What distinguishes Recurrent Neural Networks (RNNs) from feedforward networks?

RNNs allow cycles in their computation graph, which gives them internal state or memory to process sequential data.

77
New cards

What type of data are Recurrent Neural Networks (RNNs) especially well-suited for?

Language and other sequential data, where information from previous time steps is relevant to the current one.