These flashcards contain key terms and definitions related to the concepts discussed in the lecture on Artificial Intelligence, focusing on intelligent agents, their functions, types, and the environments they operate in.
Intelligent Agent
An entity that perceives its environment through sensors, acts upon that environment through actuators, and may interact with other agents.
Sensor
A device that detects aspects of the environment and provides percepts to the agent.
Actuator (Effector)
A component of an agent that acts on the environment based on the agent's decision.
Percept Sequence
The history of percepts that an agent has received over time.
Agent Function
The abstract mapping from percept sequences to actions; in practice it is implemented by the agent program.
Rational Agent
An agent that, for each percept sequence, selects the action expected to maximize its performance measure, given its percept history and built-in knowledge.
Performance Measure
The criterion by which an agent's success is evaluated, defined by the task it undertakes.
Simple Reflex Agent
An agent that selects actions based only on the current percept without considering the past percepts.
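The condition-action behavior of a simple reflex agent can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical two-location vacuum world (locations "A" and "B" and the action names are assumptions, not from the lecture):

```python
# Simple reflex agent sketch for a hypothetical two-location vacuum
# world. It decides using only the CURRENT percept -- no history.

def simple_reflex_vacuum_agent(percept):
    location, status = percept      # e.g. ("A", "Dirty")
    if status == "Dirty":           # condition-action rule 1
        return "Suck"
    if location == "A":             # condition-action rule 2
        return "Right"
    return "Left"                   # condition-action rule 3

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum_agent(("A", "Clean")))  # Right
```

Because the agent ignores past percepts, it can loop forever in a partially observable world, which is exactly the limitation the model-based reflex agent addresses.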
Model-based Reflex Agent
An agent that maintains an internal state to keep track of the environment based on past percepts.
Goal-based Agent
An agent that operates with a specific goal in mind and plans a sequence of actions to achieve that goal.
Utility-based Agent
An agent that evaluates actions based on a utility function representing the desirability of outcomes.
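Action selection by utility can be sketched as "pick the action whose predicted outcome scores highest". A minimal sketch, where the action names, outcomes, and utility function are hypothetical examples rather than anything from the lecture:

```python
# Utility-based choice: evaluate each action's predicted outcome with
# a utility function and pick the action with the highest utility.

def utility_based_choice(actions, outcome_of, utility):
    return max(actions, key=lambda a: utility(outcome_of(a)))

# Hypothetical example: outcomes are speeds, and utility rewards
# being close to a desired speed of 8.
outcome = {"slow": 3, "medium": 7, "fast": 12}
best = utility_based_choice(
    ["slow", "medium", "fast"],
    outcome_of=outcome.get,
    utility=lambda s: -abs(s - 8),
)
print(best)  # medium
```

The utility function lets the agent trade off between goals of differing desirability, rather than treating every goal state as equally good.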
Learning Agent
An agent that improves its performance based on experiences from its interactions with the environment.
Fully Observable Environment
An environment where the agent can perceive all aspects that are relevant to its actions.
Partially Observable Environment
An environment where the agent cannot perceive certain important aspects, limiting its understanding.
Deterministic Environment
An environment where the next state is entirely determined by the current state and the actions of the agent.
Stochastic Environment
An environment where the next state is not fully determined and involves some randomness.
Dynamic Environment
An environment that can change while the agent is deliberating on its actions.
Static Environment
An environment that does not change while the agent is deliberating.
Discrete Environment
An environment where the possible states and actions can be distinctly identified and counted.
Continuous Environment
An environment where states and actions can take on a range of values.
Multi-agent System
A system where multiple agents interact, cooperating or competing with each other.
Skinnerian Learning
A learning approach (operant conditioning) in which agents learn from the consequences of their actions through rewards and punishments.
Agent Program
The concrete algorithm that implements the agent function, mapping the agent's sensor inputs (percepts) to its actuator outputs (actions).
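The relationship between the agent function and the agent program can be made concrete with a table-driven sketch: the agent function is the lookup table from percept sequences to actions, and the agent program is the code that accumulates percepts and consults the table. The table entries here are hypothetical:

```python
# Table-driven agent program: the agent function is represented as an
# explicit table keyed by the full percept sequence seen so far.

def make_table_driven_agent(table):
    percepts = []                       # percept sequence (history)
    def agent_program(percept):
        percepts.append(percept)        # extend the percept sequence
        return table.get(tuple(percepts), "NoOp")
    return agent_program

# Hypothetical table mapping percept sequences to actions.
table = {("A",): "Right", ("A", "B"): "Suck"}
agent = make_table_driven_agent(table)
print(agent("A"))  # Right
print(agent("B"))  # Suck
```

The table-driven approach is only a conceptual device: the table grows exponentially with the length of the percept sequence, which is why practical agent programs (reflex, model-based, goal-based, utility-based, learning) compute actions instead of looking them up.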