Intelligent Agents

Description and Tags

CSE 4705

54 Terms

1. Intelligent Agent

Anything that can perceive its environment through sensors and act upon that environment through actuators.

2. Percept

The content that an agent's sensors are perceiving.

3. Percept Sequence

The complete history of everything the agent has ever perceived.

4. Sensors

Devices that detect and take in percepts from the environment.

5. Actuators

Devices through which the agent carries out actions, changing the state of its environment.

6. Agent Function

A mapping from percept sequences (histories) to actions; a rational agent's function is chosen to maximize the likelihood of achieving its goal.

7. Rational Agent

An agent that selects actions expected to maximize its performance measure based on its percept sequence and built-in knowledge.

8. PEAS

Performance measure, Environment, Actuators, Sensors; a framework for specifying the task environment when designing an intelligent agent.

9. Fully Observable vs. Partially Observable

In a fully observable environment, the agent's sensors give it access to the complete state of the environment at each point in time; in a partially observable environment, they do not.

10. Deterministic vs. Nondeterministic (Stochastic)

In a deterministic environment, the next state is completely determined by the current state and the agent's action; in a nondeterministic (stochastic) environment, it is not.

11. Episodic vs. Sequential

In an episodic environment, the agent's experience divides into independent episodes, each decided on its own; in a sequential environment, the current decision can affect all future decisions.

12. Static vs. Dynamic

Static environments do not change while the agent deliberates; dynamic environments can change.

13. Simple Reflex Agent

An agent that makes decisions based only on the current percept, ignoring prior percepts.

14. Model-Based Reflex Agent

An agent that maintains an internal state, updated from the percept history, to keep track of aspects of the world it cannot currently observe.

15. Model-Based Goal-Based Agent

An agent that uses knowledge about its goals to make decisions.

16. Model-Based Utility-Based Agent

An agent that evaluates actions based on their utility relative to multiple goals.

17. Learning Agent

An agent that learns from experience to improve its performance over time.

18. Problem-Solving Agent

A goal-driven agent that evaluates future actions to find a path to a desired goal.

19. Search Problem

Defined by states, initial state, goal state, actions, transition model, and action cost.

20. Search Algorithm

A systematic procedure for exploring nodes that represent states in order to find a path to a goal.

21. Tree Search Algorithm

A search algorithm that organizes exploration as a tree rooted at the initial state, repeatedly expanding nodes to generate their successors; unlike graph search, it does not track previously reached states.

22. Simple Reflex Agent Example

A thermostat that turns on or off based on the temperature reading.
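
As a minimal sketch in Python (the setpoint value, action names, and function name are illustrative assumptions, not course code), such a thermostat agent could look like this:

    SETPOINT = 20.0  # assumed target temperature in degrees Celsius

    def thermostat_agent(percept: float) -> str:
        # Simple reflex rule: the action depends only on the current percept,
        # never on percept history.
        if percept < SETPOINT:
            return "HEAT_ON"
        return "HEAT_OFF"

    print(thermostat_agent(18.5))  # HEAT_ON
    print(thermostat_agent(22.0))  # HEAT_OFF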

23. Model-Based Reflex Agent Example

A smart home system that adjusts the heating based on the current temperature and past preferences.
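
A sketch of how that internal state might be kept, in Python (the class name, smoothing window, and action names are assumptions made for illustration):

    class SmartHeatingAgent:
        # Model-based reflex agent: an internal state (recent readings)
        # lets the decision depend on more than the current percept.
        def __init__(self, setpoint: float = 20.0, window: int = 3):
            self.setpoint = setpoint
            self.window = window
            self.history: list[float] = []  # internal state: percept history

        def step(self, percept: float) -> str:
            self.history.append(percept)
            # Act on a smoothed estimate of the world, not the raw percept.
            recent = self.history[-self.window:]
            estimate = sum(recent) / len(recent)
            return "HEAT_ON" if estimate < self.setpoint else "HEAT_OFF"

    agent = SmartHeatingAgent()
    for reading in (19.0, 19.5, 21.5):
        print(agent.step(reading))  # HEAT_ON, HEAT_ON, HEAT_OFF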

24. Model-Based Goal-Based Agent Example

A navigation app that calculates the best route to a destination considering traffic conditions.

25. Model-Based Utility-Based Agent Example

A self-driving car that evaluates multiple routes based on time, fuel efficiency, and safety.

26. Learning Agent Example

A recommendation system that learns user preferences over time to suggest movies or music.

27. Problem-Solving Agent Example

A chess-playing program that evaluates possible moves to checkmate the opponent.

28. States

The various possible configurations or conditions of the problem space in a search problem.

29. Initial State

The starting point from which the search begins in a search problem.

30. Goal State

The desired configuration that the search problem aims to achieve.

31. Actions

The set of operations that can change the current state to another state in a search problem.

32. Transition Model

The description of how the actions affect the state, showing the relationship between states and actions.

33. Action Cost

The cost associated with taking a particular action, which can affect the overall evaluation of a path in a search problem.
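
Cards 28-33 are the six components of a search problem (card 19). A minimal Python encoding of a toy route-finding problem, where the graph, costs, and function names are all illustrative assumptions:

    # Toy problem: states are places A-D; an action is "move to a neighbor".
    PROBLEM = {
        "initial_state": "A",
        "goal_states": {"D"},
        # Transition model with action costs: state -> {successor: cost}.
        "transitions": {
            "A": {"B": 1, "C": 4},
            "B": {"C": 2, "D": 5},
            "C": {"D": 1},
            "D": {},
        },
    }

    def actions(state):
        return list(PROBLEM["transitions"][state])  # available moves

    def result(state, action):
        return action  # moving to a neighbor lands in that neighbor

    def action_cost(state, action):
        return PROBLEM["transitions"][state][action]

    def is_goal(state):
        return state in PROBLEM["goal_states"]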

34. Frontier

The collection of nodes that have been discovered but not yet explored in a search problem.

35. FIFO Structure

First In, First Out; the queue discipline in which nodes are removed in the order they were added, used for the frontier in breadth-first search.

36. IS-EMPTY(frontier)

A function that returns true if there are no nodes in the frontier.

37. POP(frontier)

A function that removes the top node from the frontier and returns it.

38. TOP(frontier)

A function that returns (but does not remove) the top node of the frontier.

39. ADD(node, frontier)

A function that inserts a node into its proper place in the queue.
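
Cards 36-39 map directly onto a FIFO queue. A plausible Python rendering using collections.deque (the function names mirror the pseudocode above; this is a sketch, not a reference implementation):

    from collections import deque

    def is_empty(frontier):
        return len(frontier) == 0       # IS-EMPTY

    def pop(frontier):
        return frontier.popleft()       # POP: remove and return the front node

    def top(frontier):
        return frontier[0]              # TOP: peek at the front node

    def add(node, frontier):
        frontier.append(node)           # ADD: FIFO inserts at the back

    frontier = deque()
    add("A", frontier)
    add("B", frontier)
    print(pop(frontier))                # A (first in, first out)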

40. Search Strategy

Defines the order of node expansion in a search process.

41. Completeness

A measure of whether a search strategy can always find a solution if one exists.

42. Time Complexity

The number of nodes generated/expanded as a function of the branching factor and depth.

43. Space Complexity

The maximum number of nodes in memory as a function of the branching factor and depth.

44. Optimality

A measure of whether a search strategy always finds a least-cost solution, i.e., one with the lowest total action cost.

45. BFS (Breadth First Search)

A search strategy that explores nodes in order of increasing depth using a FIFO structure.

46. DFS (Depth First Search)

A search strategy that always expands the deepest node in the frontier, using a LIFO (stack) structure.
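
BFS and DFS differ only in the frontier discipline. A self-contained Python sketch over a toy graph (the graph itself is an illustrative assumption) that switches between FIFO and LIFO:

    from collections import deque

    GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

    def search(start, goal, lifo=False):
        # BFS when lifo=False (FIFO frontier); DFS when lifo=True (LIFO).
        frontier = deque([[start]])     # frontier holds whole paths for simplicity
        reached = {start}
        while frontier:
            path = frontier.pop() if lifo else frontier.popleft()
            state = path[-1]
            if state == goal:
                return path
            for succ in GRAPH[state]:
                if succ not in reached:
                    reached.add(succ)
                    frontier.append(path + [succ])
        return None

    print(search("A", "D"))             # BFS: ['A', 'B', 'D']
    print(search("A", "D", lifo=True))  # DFS: ['A', 'C', 'D']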

47. Uniform Cost Search

A search strategy that expands the node with the lowest path cost g(n), using a priority queue ordered by path cost.
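
A minimal uniform cost search sketch using Python's heapq module as the priority queue (the weighted graph is an illustrative assumption):

    import heapq

    GRAPH = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)],
             "C": [("D", 1)], "D": []}   # state -> [(successor, action cost)]

    def uniform_cost_search(start, goal):
        frontier = [(0, start, [start])]  # priority queue ordered by path cost g
        best_cost = {start: 0}
        while frontier:
            g, state, path = heapq.heappop(frontier)  # minimum path cost first
            if state == goal:
                return g, path
            for succ, cost in GRAPH[state]:
                new_g = g + cost
                if new_g < best_cost.get(succ, float("inf")):
                    best_cost[succ] = new_g
                    heapq.heappush(frontier, (new_g, succ, path + [succ]))
        return None

    print(uniform_cost_search("A", "D"))  # (4, ['A', 'B', 'C', 'D'])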

48. Limited Depth First Search (LDFS)

A variant of DFS that restricts exploration to a maximum depth limit L, treating nodes at that depth as if they had no successors.

49. Iterative Deepening Depth First Search (IDS)

A search strategy that runs LDFS repeatedly with increasing depth limits (L = 0, 1, 2, ...) until a solution is found.
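
A sketch of LDFS plus the IDS wrapper around it, in Python (the toy graph and the max_limit cap are illustrative assumptions):

    GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

    def depth_limited_search(state, goal, limit, path=None):
        # LDFS: depth-first, but never expands nodes below the depth limit.
        path = path or [state]
        if state == goal:
            return path
        if limit == 0:
            return None                  # cutoff reached
        for succ in GRAPH[state]:
            found = depth_limited_search(succ, goal, limit - 1, path + [succ])
            if found:
                return found
        return None

    def iterative_deepening_search(start, goal, max_limit=20):
        # IDS: run LDFS with limits L = 0, 1, 2, ... until a solution appears.
        for limit in range(max_limit + 1):
            result = depth_limited_search(start, goal, limit)
            if result:
                return result
        return None

    print(iterative_deepening_search("A", "D"))  # ['A', 'B', 'D']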

50. Properties of BFS (Breadth First Search)

Complete (for a finite branching factor b); optimal if all actions have the same cost; time and space complexity are both O(b^d), where d is the depth of the shallowest solution.
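
For a concrete sense of scale: with branching factor b = 10 and shallowest solution depth d = 5, O(b^d) is on the order of 10^5 = 100,000 nodes; at d = 10 it is on the order of 10^10, which is why BFS's memory requirement usually becomes the binding constraint before its running time does.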

51. Properties of DFS (Depth First Search)

Complete only in finite state spaces; not optimal; time complexity is O(b^m) and space complexity is O(b·m), where m is the maximum depth of the search tree.

52. Properties of Uniform Cost Search

Complete (assuming all action costs are positive) and optimal; time and space complexity are O(b^d) in the worst case.

53. Properties of Limited Depth First Search (LDFS)

Complete if a solution exists within the depth limit L; not optimal; time complexity is O(b^L) and space complexity is O(b·L).

54. Properties of Iterative Deepening Depth First Search (IDS)

Complete; optimal if all actions have the same cost; time complexity is O(b^d) and space complexity is O(b·d), where d is the depth of the shallowest solution.