CSE 4705
Intelligent Agent
Anything that can perceive its environment through sensors and act upon that environment through actuators.
Percept
The content that an agent's sensors are perceiving.
Percept Sequence
The complete history of everything the agent has ever perceived.
Sensors
Devices that detect and take in percepts from the environment.
Actuators
Devices that carry out actions to change the agent’s state in its environment.
Agent Function
Mapping of percept histories to actions, aimed at maximizing the likelihood of achieving a goal.
Rational Agent
An agent that selects actions expected to maximize its performance measure based on its percept sequence and built-in knowledge.
PEAS
Performance measure, Environment, Actuators, Sensors; a framework for specifying the task environment when designing an intelligent agent.
Fully Observable vs. Partially Observable
In a fully observable environment, the agent's sensors give it access to the complete state of the environment at each point in time; in a partially observable environment, they do not.
Deterministic vs. Nondeterministic (Stochastic)
In a deterministic environment, the next state is completely determined by the current state and the agent's action; in a nondeterministic (stochastic) environment, it is not.
Episodic vs. Sequential
In an episodic environment, the agent's experience divides into independent episodes, and the choice of action in each episode does not depend on earlier ones; in a sequential environment, the current action can affect all future decisions.
Static vs. Dynamic
Static environments do not change while the agent deliberates; dynamic environments can change.
Simple Reflex Agent
An agent that makes decisions based only on the current percept, ignoring prior percepts.
Model-Based Reflex Agent
An agent that keeps track of the world based on percept history and maintains an internal state.
Model-Based Goal-Based Agent
An agent that uses knowledge about its goals to make decisions.
Model-Based Utility-Based Agent
An agent that evaluates actions based on their utility relative to multiple goals.
Learning Agent
An agent that learns from experience to improve its performance over time.
Problem-Solving Agent
A goal-driven agent that evaluates future actions to find a path to a desired goal.
Search Problem
Defined by states, initial state, goal state, actions, transition model, and action cost.
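A minimal Python sketch of how these six components might be bundled, using a hypothetical route-finding problem; the class name RouteProblem and its method names are illustrative, not from the course:

```python
class RouteProblem:
    """Search problem: states (implicit in the graph), initial state,
    goal state, actions, transition model, and action cost."""

    def __init__(self, graph, initial, goal):
        self.graph = graph      # state space: city -> {neighbor: cost}
        self.initial = initial  # initial state
        self.goal = goal        # goal state

    def actions(self, state):
        """Actions available in a state: which neighbor to move to."""
        return list(self.graph[state])

    def result(self, state, action):
        """Transition model: moving to a neighbor puts us in that city."""
        return action

    def action_cost(self, state, action, next_state):
        """Cost of taking the action (e.g., road distance)."""
        return self.graph[state][next_state]

    def is_goal(self, state):
        return state == self.goal

# e.g. RouteProblem({"A": {"B": 5, "C": 1}, "B": {"C": 2}, "C": {}}, "A", "C")
```

Later sketches in this deck reuse this interface (initial, is_goal, actions, result, action_cost).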
Search Algorithm
A systematic procedure for exploring nodes representing states in order to find a path to a goal.
Tree Search Algorithm
A search algorithm that superimposes a tree over the state space, rooted at the initial state, and explores states by generating successors; unlike graph search, it does not track previously reached states, so the same state can appear in several nodes.
Simple Reflex Agent Example
A thermostat that turns on or off based on the temperature reading.
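A minimal sketch of that thermostat as a condition-action rule; the temperature thresholds and action names are invented for illustration, and a real thermostat would add hysteresis and hardware I/O:

```python
def thermostat_agent(percept):
    """Simple reflex agent: the action depends only on the current
    temperature reading, never on percept history."""
    temperature = percept  # degrees Celsius, hypothetical sensor value
    if temperature < 19.0:
        return "HEAT_ON"
    elif temperature > 21.0:
        return "HEAT_OFF"
    return "NO_OP"  # within the comfort band: do nothing
```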
Model-Based Reflex Agent Example
A smart home system that adjusts the heating based on the current temperature and past preferences.
Model-Based Goal-Based Agent Example
A navigation app that calculates the best route to a destination considering traffic conditions.
Model-Based Utility-Based Agent Example
A self-driving car that evaluates multiple routes based on time, fuel efficiency, and safety.
Learning Agent Example
A recommendation system that learns user preferences over time to suggest movies or music.
Problem-Solving Agent Example
A chess-playing program that evaluates possible moves to checkmate the opponent.
States
The various possible configurations or conditions of the problem space in a search problem.
Initial State
The starting point from which the search begins in a search problem.
Goal State
The desired configuration that the search problem aims to achieve.
Actions
The set of operations that can change the current state to another state in a search problem.
Transition Model
The description of how the actions affect the state, showing the relationship between states and actions.
Action Cost
The cost associated with taking a particular action, which can affect the overall evaluation of a path in a search problem.
Frontier
The collection of nodes that have been discovered but not yet explored in a search problem.
FIFO Structure
First In, First Out; the queue discipline in which nodes are removed in the order they were added, used for the frontier in breadth-first search.
IS-EMPTY(frontier)
A function that returns true if there are no nodes in the frontier.
POP(frontier)
A function that removes the top node from the frontier and returns it.
TOP(frontier)
A function that returns (but does not remove) the top node of the frontier.
ADD(node, frontier)
A function that inserts a node into its proper place in the queue.
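A sketch of these four frontier operations over a FIFO queue, assuming Python's collections.deque; for a FIFO frontier a node's "proper place" is simply the tail, whereas a priority-queue frontier would instead insert by path cost:

```python
from collections import deque

frontier = deque()  # FIFO frontier, as used by breadth-first search

def is_empty(frontier):
    """IS-EMPTY(frontier): true if there are no nodes in the frontier."""
    return len(frontier) == 0

def pop(frontier):
    """POP(frontier): remove and return the top (front) node."""
    return frontier.popleft()

def top(frontier):
    """TOP(frontier): return the top node without removing it."""
    return frontier[0]

def add(node, frontier):
    """ADD(node, frontier): insert the node in its proper place
    (for a FIFO queue, that is the tail)."""
    frontier.append(node)
```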
Search Strategy
Defines the order of node expansion in a search process.
Completeness
A measure of whether a search strategy can always find a solution if one exists.
Time Complexity
The number of nodes generated/expanded as a function of the branching factor and depth.
Space Complexity
The maximum number of nodes in memory as a function of the branching factor and depth.
Optimality
A measure of whether a search strategy can always find the least action cost solution.
BFS (Breadth First Search)
A search strategy that explores nodes in order of increasing depth using a FIFO structure.
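A compact sketch of BFS over the hypothetical problem interface from the Search Problem card; it returns the goal state rather than the full path, to keep the sketch short:

```python
from collections import deque

def breadth_first_search(problem):
    """Expand nodes in order of increasing depth (FIFO frontier)."""
    if problem.is_goal(problem.initial):
        return problem.initial
    frontier = deque([problem.initial])
    reached = {problem.initial}        # states already discovered
    while frontier:
        state = frontier.popleft()     # shallowest node first
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child in reached:
                continue
            if problem.is_goal(child):
                return child
            reached.add(child)
            frontier.append(child)
    return None
```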
DFS (Depth First Search)
A search strategy that explores nodes in order of decreasing depth using a LIFO structure.
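Swapping the FIFO queue for a LIFO stack turns the same skeleton into DFS; this tree-search sketch keeps no reached set, which is where the small space bound comes from but also means it assumes a finite, cycle-free space:

```python
def depth_first_search(problem):
    """Expand the deepest node first (LIFO frontier / stack)."""
    frontier = [problem.initial]   # Python list used as a stack
    while frontier:
        state = frontier.pop()     # most recently added, i.e. deepest
        if problem.is_goal(state):
            return state
        for action in problem.actions(state):
            frontier.append(problem.result(state, action))
    return None
```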
Uniform Cost Search
A search strategy that always expands the frontier node with the minimum path cost g(n), using a priority queue ordered by cost.
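A sketch of uniform cost search using Python's heapq as the priority queue, ordered by path cost; it assumes states are comparable (e.g., strings) so cost ties can be broken, and reuses the hypothetical problem interface from above:

```python
import heapq

def uniform_cost_search(problem):
    """Expand the frontier node with the smallest path cost first."""
    frontier = [(0, problem.initial)]  # min-heap of (path_cost, state)
    best_cost = {problem.initial: 0}
    while frontier:
        cost, state = heapq.heappop(frontier)
        if problem.is_goal(state):     # goal test at expansion => optimal
            return cost, state
        if cost > best_cost[state]:
            continue                   # stale entry; a cheaper path was found
        for action in problem.actions(state):
            child = problem.result(state, action)
            new_cost = cost + problem.action_cost(state, action, child)
            if new_cost < best_cost.get(child, float("inf")):
                best_cost[child] = new_cost
                heapq.heappush(frontier, (new_cost, child))
    return None
```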
Limited Depth First Search (LDFS)
A variation of DFS that restricts exploration to a maximum depth limit L, treating nodes at that depth as if they had no successors.
Iterative Deepening Depth First Search (IDS)
A search strategy that conducts LDFS iteratively, increasing the maximum depth with each iteration.
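A sketch of LDFS plus the iterative-deepening driver; the max_depth cap is an illustrative simplification (a fuller implementation distinguishes "cutoff" from "failure" so it knows when deepening further is pointless):

```python
def depth_limited_search(problem, state, limit):
    """LDFS: DFS that treats nodes at depth `limit` as having no successors."""
    if problem.is_goal(state):
        return state
    if limit == 0:
        return None                    # cutoff reached
    for action in problem.actions(state):
        found = depth_limited_search(problem, problem.result(state, action),
                                     limit - 1)
        if found is not None:
            return found
    return None

def iterative_deepening_search(problem, max_depth=50):
    """IDS: run LDFS with limits 0, 1, 2, ... until a solution appears."""
    for limit in range(max_depth + 1):
        found = depth_limited_search(problem, problem.initial, limit)
        if found is not None:
            return found
    return None
```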
Properties of BFS (Breadth First Search)
Complete; optimal if all actions have the same cost; time complexity is O(b^d) and space complexity is O(b^d), where b is the branching factor and d is the depth of the shallowest solution.
Properties of DFS (Depth First Search)
Complete only for finite state spaces; not optimal; time complexity is O(b^m) and space complexity is O(b*m), where m is the maximum depth of the search space (which can be much larger than the depth d of the shallowest solution).
Properties of Uniform Cost Search
Complete if every action cost is at least some ε > 0; optimal; time and space complexity are O(b^(1 + ⌊C*/ε⌋)) in the worst case, where C* is the optimal solution cost (roughly O(b^d) when all action costs are equal).
Properties of Limited Depth First Search (LDFS)
Complete if a solution exists within the depth limit L; not optimal; time complexity is O(b^L), space complexity is O(b*L).
Properties of Iterative Deepening Depth First Search (IDS)
Complete; optimal if all actions have the same cost; time complexity is O(b^d), space complexity is O(b*d), where d is the depth of the shallowest solution.
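A quick computation showing why the repeated work in IDS costs little: a node at depth k is regenerated d - k + 1 times, so with illustrative values b = 10 and d = 5, IDS generates only about 11% more nodes than BFS:

```python
b, d = 10, 5  # illustrative branching factor and solution depth

bfs_nodes = sum(b**k for k in range(1, d + 1))
ids_nodes = sum((d - k + 1) * b**k for k in range(1, d + 1))

print(bfs_nodes)  # 111110
print(ids_nodes)  # 123450  (about 11% more work than BFS)
```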