AI 100 Fall 2025 - Lecture Notes on Artificial Intelligence

Description and Tags

These flashcards contain key terms and definitions related to the concepts discussed in the lecture on Artificial Intelligence, focusing on intelligent agents, their functions, types, and the environments they operate in.


23 Terms

1. Intelligent Agent

An entity that can perceive its environment, act upon that environment, and interact with other agents.

2. Sensor

A device that detects aspects of the environment and provides percepts to the agent.

3. Actuator (Effector)

A component of an agent that acts on the environment based on the agent's decision.

4. Percept Sequence

The history of percepts that an agent has received over time.

5. Agent Function

The mapping from percept sequences to actions; it is implemented concretely by the agent program.
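The mapping above can be sketched as a table-driven agent: a minimal, hypothetical example in which the agent function is stored as an explicit table from percept sequences to actions (the vacuum-world percepts and actions here are illustrative, not from the lecture).

```python
def make_table_driven_agent(table):
    """Return an agent program implementing the given agent-function table."""
    percepts = []  # the percept sequence accumulates over time

    def agent_program(percept):
        percepts.append(percept)
        # Look up the action for the entire percept history seen so far.
        return table.get(tuple(percepts), "noop")

    return agent_program

# Toy table: keys are percept sequences, values are actions (all assumed).
table = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "right",
    (("A", "clean"), ("B", "dirty")): "suck",
}
agent = make_table_driven_agent(table)
print(agent(("A", "clean")))  # right
print(agent(("B", "dirty")))  # suck
```

Note that the table grows with every possible percept history, which is why real agents use more compact programs than an explicit lookup.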

6. Rational Agent

An agent that, for each percept sequence, selects the action expected to maximize its performance measure, given its percept history and built-in knowledge.

7. Performance Measure

The criteria by which an agent's success is evaluated, defined in terms of the task it is meant to accomplish.

8. Simple Reflex Agent

An agent that selects actions based only on the current percept, ignoring the rest of the percept history.
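A simple reflex agent is just a set of condition-action rules over the current percept. A minimal sketch (the vacuum-world locations and actions are assumptions for illustration):

```python
def simple_reflex_vacuum_agent(percept):
    """Choose an action from the current percept alone; no history is kept."""
    location, status = percept
    if status == "dirty":       # rule: dirty square -> clean it
        return "suck"
    # rule: clean square -> move to the other square
    return "right" if location == "A" else "left"

print(simple_reflex_vacuum_agent(("A", "dirty")))  # suck
print(simple_reflex_vacuum_agent(("B", "clean")))  # left
```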

9. Model-based Reflex Agent

An agent that maintains an internal state to keep track of the environment based on past percepts.
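The internal state makes the difference visible in code. In this hypothetical sketch, the agent remembers the last known status of each square, so it can decide to stop once its model says everything is clean, which a simple reflex agent cannot do:

```python
class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state: last known status of each square (None = unknown).
        self.model = {"A": None, "B": None}

    def act(self, percept):
        location, status = percept
        self.model[location] = status  # update the model from the percept
        if status == "dirty":
            return "suck"
        if all(s == "clean" for s in self.model.values()):
            return "noop"  # the model says both squares are clean: stop
        return "right" if location == "A" else "left"

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "clean")))  # right  (B's status still unknown)
print(agent.act(("B", "clean")))  # noop   (model: both squares clean)
```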

10. Goal-based Agent

An agent that operates with a specific goal in mind and plans a sequence of actions to achieve that goal.

11. Utility-based Agent

An agent that evaluates actions based on a utility function representing the desirability of outcomes.
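Utility-based selection can be sketched in a few lines: each candidate action is mapped to a predicted outcome, and the agent picks the action whose outcome scores highest under the utility function (the outcomes and utility values below are made up for illustration):

```python
def choose_action(actions, predict_outcome, utility):
    """Pick the action whose predicted outcome has the highest utility."""
    return max(actions, key=lambda a: utility(predict_outcome(a)))

# Toy model: each action leads to a labeled outcome with an assumed utility.
outcomes = {"wait": "idle", "recharge": "full_battery", "explore": "lost"}
utilities = {"idle": 0.2, "full_battery": 0.9, "lost": 0.0}

best = choose_action(list(outcomes), outcomes.get, utilities.get)
print(best)  # recharge
```

Unlike a goal-based agent's binary achieved/not-achieved test, the utility function grades how desirable each outcome is, so the agent can trade off imperfect options.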

12. Learning Agent

An agent that improves its performance based on experiences from its interactions with the environment.

13. Fully Observable Environment

An environment where the agent can perceive all aspects that are relevant to its actions.

14. Partially Observable Environment

An environment where the agent cannot perceive certain important aspects, limiting its understanding.

15. Deterministic Environment

An environment where the next state is entirely determined by the current state and the actions of the agent.

16. Stochastic Environment

An environment where the next state is not fully determined and involves some randomness.

17. Dynamic Environment

An environment that can change while the agent is deliberating on its actions.

18. Static Environment

An environment that does not change while the agent is deliberating.

19. Discrete Environment

An environment where the possible states and actions can be distinctly identified and counted.

20. Continuous Environment

An environment where states and actions can take on a range of values.

21. Multi-agent System

A system where multiple agents interact, cooperating or competing with each other.

22. Skinnerian Learning

A learning approach where agents learn from the consequences of their actions through rewards and punishments.
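Learning from rewards and punishments can be sketched with a simple numeric preference per action that is nudged toward the reward actually received (an incremental-average update; the action names, rewards, and step size are all illustrative assumptions):

```python
def update_preferences(prefs, action, reward, step=0.5):
    """Move the preference for an action a fraction of the way toward the reward."""
    prefs[action] += step * (reward - prefs[action])
    return prefs

prefs = {"press_lever": 0.0, "ignore_lever": 0.0}
for _ in range(5):
    update_preferences(prefs, "press_lever", reward=1.0)    # rewarded
    update_preferences(prefs, "ignore_lever", reward=-1.0)  # punished

print(max(prefs, key=prefs.get))  # press_lever
```

After repeated trials the rewarded action dominates, mirroring how consequences shape behavior in this style of learning.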

23. Agent Program

The concrete implementation of the agent function: the algorithm that maps sensor inputs (percepts) to actuator outputs (actions).