Intro to AI - Module 1


224 Terms

1
New cards

AI

The study of systems that

  • think like humans

  • think rationally

  • act like humans

  • act rationally

2
New cards

AI is trying to mimic ____

Human intelligence

  • but some researchers were more concerned with "did the machine follow the steps" than with "did it do it like a human"

  • Human decisions aren't always mathematically correct

3
New cards

Foundations of AI - Philosophy

  • Logic, mind, knowledge

  • Are there rules on how to draw correct conclusions?

  • How does the mind come from the brain, since the mind isn't the brain?

  • Where does knowledge come from, given everyone has different knowledge?

4
New cards

How is Philosophy related to AI?

  • How does a machine think on its own?

  • This has to answer why a human does something then we translate it to the machine

5
New cards

Empiricism Philosophy in relation to AI

The belief that nothing is significant unless it can be sensed; all significant knowledge comes through the human senses.

Machine has to sense

6
New cards

Mathematics in AI

  • Proof, computability, probability

  • What are the rules to draw correct conclusions?

  • What can be computed?

  • How do we deal with uncertainties?

7
New cards

Economics in AI

  • Maximizing payoffs

  • How do we make decisions aligned with our preferences, even when the payoff is only realized in the future?

  • How do you make a machine understand this?

  • Thinking of the utility

8
New cards

Neuroscience in AI

  • Brain and neurons

  • How do brains process information?

9
New cards

Psychology in AI

  • Thought, perception, action

  • How do humans and animals act?

  • Why do I do what I do?

10
New cards

Computer Engineering in AI

  • Developments in hardware

  • How can we build an efficient computer?

11
New cards

Control Theory in AI

  • Stable feedback systems

  • How can artifacts operate under their own control?

  • When do you start doing an action? When do you stop?

12
New cards

Linguistics in AI

  • Knowledge representation

  • Syntax

  • How does language relate to thought?

  • Cultural terms

13
New cards

1943-1956: Inception of AI

  1. Boolean circuit model of the brain

  2. Neural network computers

  3. Turing's ideas on machine intelligence

  4. McCarthy (Dartmouth meeting)

14
New cards

1952-1969: Early enthusiasm, great expectations (Look Ma, no hands!)

  1. Early AI programs

  2. Development of logic theorists and general problem solvers

  3. Geometry theorem prover

  4. Checker program

  5. LISP

  6. Problems on limited domains

15
New cards

1966-1973: A Dose of Reality

  • AI limited by informed introspection (human approaches) and tractability of problems (hard to scale up)

  • Computational complexity

  • Neural network research almost disappears

16
New cards

1969-1986: Expert Systems

  • Early development of knowledge-based systems: expertise in specific subject areas

  • DENDRAL

  • MYCIN

  • R1: DEC (Digital Equipment Corporation)

17
New cards

DENDRAL

  • Expert system

  • Infers molecular structure by being fed entire books

18
New cards

MYCIN

  • Expert system for medical diagnosis of blood infections

  • Knowledge from books and experts

19
New cards

R1: DEC (digital equipment corporation)

  • Configuration of computer systems

  • First commercially successful expert system

20
New cards

Boolean Circuit Model

  • Model of the brain

  • Proposed by McCulloch and Pitts

  • During the inception of AI

21
New cards

SNARC

  • Neural network computer

  • Developed by Minsky and Edmonds

22
New cards

Computing Machinery and Intelligence

  • Turing
  • Inception of AI
  • Computer can think like a human
23
New cards

McCarthy

  • Dartmouth meeting
  • Artificial Intelligence
  • 1943-1956: Inception of AI
24
New cards

Logic theorist and General problem solver

  • Newell and Simon
  • Can think like human
  • Concern: Can it follow steps like a human? Can it solve problems like a human?
  • Early enthusiasm (1952-1969)
25
New cards

Geometry Theorem Prover

  • Gelernter
  • Early enthusiasm (1952-1969)
26
New cards

Checker Program

  • Samuel
  • Early enthusiasm (1952-1969)
27
New cards

LISP

  • A high-level programming language developed for AI
  • Early enthusiasm (1952-1969)
28
New cards

Microworlds

  • Limited domains used in early AI research to test theories and programs
  • Early enthusiasm (1952-1969)
29
New cards

Computational Complexity

  • The complexity of problems limits AI's ability to process data
  • Programs couldn't scale beyond the limited data of microworlds
30
New cards

Neural Network Research

A field of study that faced a decline in interest during the 1966-1973 period (Dose of Reality)

31
New cards

AI History 1980-1988

  • Expert systems industry booms

  • Many promised that they could build these systems and deliver them well

32
New cards

Advancements in AI

Improvements in computing power, memory storage, connectivity, and automation that support AI development.

33
New cards

AI History 1988-1993

  • Expert system industry busts (AI Winter)
  • Many failed to build and deliver what was promised, due to numerous issues
34
New cards

Contemporary AI: 1986 onwards

Neural Networks

35
New cards

Contemporary AI: 1987 onwards

probabilistic reasoning and machine learning

36
New cards

Contemporary AI: 2001 onwards

big data (user info)

37
New cards

Contemporary AI: 2011 onwards

deep learning, computer vision (pixels, lines, colors, position)

38
New cards

Contemporary AI: 2021 onwards

generative AI

39
New cards

Artificial Intelligence Definition

  • Do it like a human

  • Define variables like a human would

  • Hardcoded tech

  • How would a human respond

40
New cards

Machine learning

  • Do it based on past experience

  • Not hardcoded since there are existing models

  • Input data into it then it will predict as close to an actual answer as possible

41
New cards

Deep learning

  • Subset of machine learning

  • Can adjust by itself since it's built in

  • Layered so it can handle really large data

  • Works with nested or layered models adjusting based on the evaluation of its own results

42
New cards

Generative AI

  • Subset of deep learning

  • Not all deep learning is generative

43
New cards

4 Components of AI (TAHO)

  1. Technology
  2. Autonomy
  3. Human
  4. Outputs
44
New cards

4 Components of AI (TAHO): Technology

Branch of computer science concerned with automation of intelligent behavior

45
New cards

4 Components of AI (TAHO): Autonomy

Systems that display intelligent behavior by analyzing their environment and taking actions with some degree of autonomy to achieve specific goals

46
New cards

4 Components of AI (TAHO): Human

Ability to perceive, pursue goals, initiate actions and learn from a feedback loop

47
New cards

4 Components of AI (TAHO): Outputs

Technical scientific field devoted to engineered systems that generate outputs such as content, forecasts, recommendations, or decisions for a given set of human-defined objectives

48
New cards

What can AI do today?

  • Robotic vehicles
  • Autonomous planning and scheduling - spacecraft and rovers, logistics and transportation planning, ride-hailing apps
  • Machine translation
  • Speech recognition
  • Recommender systems - Amazon, Walmart, Netflix, YouTube
  • Game playing - Deep Blue (chess), AlphaGo, Jeopardy, Poker
  • Medicine - disease diagnosis (LYNA for metastatic breast cancer)
  • Automated conversation (chatbots)
49
New cards

Systems Acting like Humans

  • Turing test
  • Natural language
  • Knowledge representation
  • Multilayer perceptron
  • Automated reasoning
50
New cards

Turing test

  • Test for intelligent behavior

  • Interrogator writes questions and receives answers

  • The system passes if the interrogator can't tell whether the answers come from a person or a computer

51
New cards

Example of Turing Test now

CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart), a reverse Turing test in which a computer tests whether the user is human

52
New cards

Automated reasoning

  • Follow steps and reach a certain answer, taking note of the context
  • Thinking with contextualization
  • How can it argue? How can it identify questions (rhetorical, open-ended)?
53
New cards

Systems Thinking like Humans

  • Formulate a theory of mind and brain

  • Express the theory in a computer program

  • 2 approaches (thought vs behavior)

54
New cards

2 Approaches of Systems Thinking like Humans

Cognitive science and psychology
Cognitive neuroscience

55
New cards

Cognitive science and psychology

  • Testing or predicting responses of human subjects

  • Behavior

  • The logic of thought and perception

56
New cards

Cognitive neuroscience

  • Observing neurological data
  • Actual brain, neuroscience
57
New cards

Rational vs Human Intelligence

Rational = ideal intelligence
Human intelligence isn't always correct

58
New cards

Systems Thinking Rationally

  • Rational thinking governed by precise "laws of thought"

  • Systems (in theory) can solve problems using such laws

59
New cards

Laws of Thought

  • Syllogisms (Premise and conclusion)

  • Notation and logic

  • Uncertainty and probability
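
As a concrete illustration (a standard textbook example, not from these cards), Aristotle's classic syllogism can be written in logical notation:

```latex
% Premise 1: all humans are mortal
\forall x\,\big(\mathrm{Human}(x) \rightarrow \mathrm{Mortal}(x)\big)
% Premise 2: Socrates is a human
\mathrm{Human}(\mathrm{Socrates})
% Conclusion: Socrates is mortal
\therefore\ \mathrm{Mortal}(\mathrm{Socrates})
```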

60
New cards

Systems Acting Rationally

  • Rational behavior: acting to achieve the best outcome

  • May or may not involve rational thinking (e.g., reflexes)

  • Subsumes the other approaches

  • Lends itself well to scientific development

  • AI standard model

61
New cards

The standard model often makes unrealistic assumptions

  • Objectives and outcomes are fully specified and understood

  • Specifications align with human values and objectives

  • Possible that the specified objectives are neither complete nor entirely correct

62
New cards

Value alignment problem with beneficial machines

Ensuring agreement between human and agent objectives

EX. You ask ChatGPT to find the best hotel in South Korea, but does it really do it well or properly? There have been news stories of AI agents going through with payment transactions on their own, so make sure your objectives align with the agent's.

63
New cards

Self-driving car example

  • Objective: reach destination safely
  • But what about tradeoffs between trip progress and injury risk? Or interactions between other agents/drivers?
  • Define tradeoffs that you're amenable to
64
New cards

Value alignment problem

Ensuring agreement between human and agent objectives.

65
New cards

AI Definition

Artificial Intelligence is the study of systems that act rationally. It is the study and construction of agents that "do the right thing".

66
New cards

Intelligent Agents

  • Anything that perceives and acts on its environment

  • Whatever the agent does affects the environment

  • What the agent perceives comes from the environment

67
New cards

A rational agent carries out an action with the _______ after _______ .

A rational agent carries out an action with the best outcome after considering past and current percepts

68
New cards

Percepts

Content of what the agent perceives of its environment.

69
New cards

Agent Function

a = F(p)
F maps percepts to actions, F : P → A

p = current percept
P = set of all percepts
a = action carried out
A = set of all actions
F = agent function
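
The mapping above can be sketched in code. A minimal illustration, assuming a hypothetical thermostat agent whose percept is a temperature and whose actions are "heat", "cool", or "idle" (these names are illustrative, not from the cards):

```python
# Agent function F : P -> A for a hypothetical thermostat agent.
# The percept p is the current temperature; the action a is one of
# "heat", "cool", or "idle".

def agent_function(percept: float) -> str:
    """Map the current percept (temperature in degrees C) to an action."""
    if percept < 18.0:
        return "heat"   # too cold
    if percept > 24.0:
        return "cool"   # too hot
    return "idle"       # within the comfortable range
```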

70
New cards

True or False: An action doesn't depend on all percepts observed so far, just the current percept

False
May depend on all percepts observed so far, not just the current percept

71
New cards

Agent Function but considering all percepts

ak = F(p0, p1, p2, …, pk)

(p0, p1, p2, …, pk) = sequence of percepts observed to date
ak = resulting action carried out
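
A sketch of the history-dependent version, assuming a hypothetical smoke-alarm agent that acts only after three consecutive "smoke" percepts, so the action depends on the whole percept sequence rather than just the latest percept:

```python
# Agent function over the percept sequence (p0, p1, ..., pk):
# this hypothetical alarm sounds only after three consecutive
# "smoke" percepts.

def agent_function(percepts: list) -> str:
    """Map the full percept sequence observed to date to an action."""
    if percepts[-3:] == ["smoke", "smoke", "smoke"]:
        return "sound_alarm"
    return "wait"
```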

72
New cards

Structure of Agents

Agent = architecture + program

73
New cards

Architecture

Device with sensors and actuators
EX. robotic car, a camera, a PC.

74
New cards

Program

Implements the agent function on the architecture

75
New cards

How do we specify the task environment?

PEAS
P - Performance Measure
E - Environment
A - Actuators
S - Sensors
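
A PEAS specification can be captured as a simple record. A sketch using the taxi-driver example from the surrounding cards (the dataclass itself is just an illustrative container):

```python
from dataclasses import dataclass

# PEAS record: Performance measure, Environment, Actuators, Sensors.
@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

# Taxi-driver task environment, per the examples in these cards.
taxi = PEAS(
    performance=["safe", "fast", "legal", "comfy trip", "max profit"],
    environment=["roads", "other traffic", "police", "pedestrians",
                 "customers", "weather"],
    actuators=["accelerator", "steering", "brake", "signal", "horn",
               "display", "speech"],
    sensors=["camera", "radar", "speedometer", "GPS", "engine sensors",
             "accelerometer", "microphones", "touchscreen"],
)
```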

76
New cards

Performance Measure

Captures agent's aspiration

EX. for a taxi driver: safe, fast, legal, comfy trip, max profit, min impact on other road users

77
New cards

Environment

Context, restrictions

EX. for a taxi driver: roads, other traffic, police, peds, customer, weather

78
New cards

Actuators

Indicate what the agent can carry out

EX. for a taxi driver: accelerate, steering, brake, signal, horn, display, speech

79
New cards

Sensors

Indicate what the agent can perceive

EX. for a taxi driver: camera, radar, speedometer, GPS, engine sensors, accelerometer, microphones, touchscreen

80
New cards

When specifying actuators and sensors, what distinction can be useful to make?

  • Distinguish the devices from their functions

  • Actions actuators can carry out or the percepts sensors can detect

81
New cards

Part-Picking Robot Actuator and Sensor

Actuator: jointed arm and hand; actions: pick up or drop part
Sensor: camera; percepts: (detect) type and position of a part

82
New cards

Properties of Environments

Fully VS partially observable
Single-agent VS multi-agent
Deterministic VS stochastic
Episodic VS sequential
Static VS dynamic
Discrete VS continuous
Known VS unknown

83
New cards

Fully VS Partially observable

Does the agent have complete information about the environment?
If yes, fully observable; if not, partially observable.

EX. Solitaire is partially observable
EX. Chess is fully observable

84
New cards

Single-agent VS Multi-agent

Do you only have 1 agent tackling the problem or are there more?
And if multiagent, are they competitive or cooperative?


EX. Is there only 1 automated driver or are there more?

85
New cards

Deterministic VS Stochastic

Is the outcome of an action predictable?


Deterministic: the next state is completely determined by the current state and action
Stochastic: there's a random element

EX. TicTacToe is deterministic
EX. Poker is stochastic

86
New cards

Episodic VS Sequential

Episodic: atomic episodes with distinct goals/objectives; actions are independent across episodes
Sequential: actions depend on prior actions and influence future ones

87
New cards

Static VS Dynamic

Before you make the next action, will the environment change?


Static: environment may change only after an agent's action
Dynamic: environment can change independently of the agent, even while the agent is deliberating

EX. Crossword puzzles are static, since the environment changes only through the agent's actions
EX. Driving a taxi is dynamic, since a lot of things can happen and the environment can change even while you're deliberating

88
New cards

Discrete VS Continuous

Measurements of state, time, percepts, and actions


Discrete: Environment has a finite number of clearly defined states, percepts, or actions.
You can differentiate between state 1 and state 2

Continuous: Environment involves measurements that vary smoothly over a range (state, time, percepts, or actions).
No sharp separation between states

EX. driving a taxi is continuous

89
New cards

Known VS Unknown

If all the rules of the environment are known

EX. Solitaire is known

90
New cards

Crossword Puzzle

Fully observable
Single agent
Deterministic
Sequential
Static
Discrete

91
New cards

Image Analysis

Fully observable
Single agent
Deterministic
Episodic
Semidynamic
Continuous

92
New cards

English Tutor

Partially observable
Multi-agent
Stochastic
Sequential
Dynamic
Discrete

93
New cards

Medical Diagnosis

Partially observable
Single agent
Stochastic
Episodic
Dynamic
Continuous

94
New cards

Chess with Clock

Fully observable

Multi agent

Deterministic

Sequential

Semidynamic

Discrete

95
New cards

Vacuum world: What is the performance measure?

clean both rooms, fewest steps

96
New cards
Vacuum world: Properties

Fully observable
Single Agent
Deterministic
Sequential (if room is dirty, move left or right)
Static
Discrete
Known

97
New cards

State Management

The process of determining the next state and action based on the current state and percept.

98
New cards

Types of Agents

Reflex Agent
Reflex Agent with State (Model-based Reflex Agent)
Goal-based Agent
Utility-Based Agent
Learning Agent

99
New cards

Reflex Agent

  • Current state → current condition → act

  • Only if environment is fully observable

  • Function: reflex agent (percept) returns an action

  • Rules are stored and repeatedly checked

100
New cards

Reflex Agent Algorithm

state ← INTERPRET-INPUT(percept)
Convert the raw percept (EX. a sensor reading) into an internal state (EX. "dirty")

rule ← RULE-MATCH(state, rules)
Look up which rule matches this state

action ← rule.ACTION
Take the action specified by the rule

return action
Output the action to the environment
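
The loop above can be made runnable. A minimal sketch, assuming the two-room vacuum world from the earlier cards (rooms "A" and "B"; the action names are illustrative):

```python
# Simple reflex agent for the two-room vacuum world.
# Percept: (location, status), e.g. ("A", "dirty").

RULES = {
    ("A", "dirty"): "suck",
    ("B", "dirty"): "suck",
    ("A", "clean"): "move_right",
    ("B", "clean"): "move_left",
}

def interpret_input(percept):
    """Convert the raw percept into an internal state (here, unchanged)."""
    return percept

def reflex_agent(percept):
    state = interpret_input(percept)  # state <- INTERPRET-INPUT(percept)
    action = RULES[state]             # rule <- RULE-MATCH(state, rules)
    return action                     # return the rule's action
```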