AI – Definitions, Approaches, Rationality & Philosophical Foundations

Definitions & Dimensions of Artificial Intelligence (AI)

  • Figure 1.1 positions 8 classical definitions of AI on two conceptual axes:
    • Vertical axis ⇢ focus of evaluation
      • Top: Internal processes / reasoning ("thinking").
      • Bottom: External behaviour ("acting").
    • Horizontal axis ⇢ benchmark used
      • Left: Fidelity to human performance.
      • Right: Fidelity to an ideal standard called rationality (i.e., doing "the right thing" given what is known).
  • Resulting 4 quadrants (with example quotations):
    • Thinking Humanly
      • Haugeland (1985): making computers really think.
      • Bellman (1978): automating human‐like decision-making, problem solving, learning.
    • Acting Humanly
      • Kurzweil (1990): machines that perform functions requiring intelligence in people.
      • Rich & Knight (1991): getting computers to do things humans currently do better.
    • Thinking Rationally
      • Charniak & McDermott (1985): study mental faculties via computational models.
      • Winston (1992): computations enabling perception, reasoning, action.
    • Acting Rationally
      • Poole et al. (1998): design of intelligent agents.
      • Nilsson (1998): concern with intelligent behaviour in artefacts.

Four Canonical Approaches Explained

1. Acting Humanly – The Turing Test

  • Alan Turing (1950) offered an operational definition: a machine is intelligent if an interrogator, limited to text interaction, cannot distinguish it from a human.
  • Abilities implicitly required to pass:
    • Natural-Language Processing (NLP) – converse fluently.
    • Knowledge Representation (KR) – store & organise facts.
    • Automated Reasoning – derive answers & new conclusions.
    • Machine Learning (ML) – adapt & generalise from data.
  • Total Turing Test (adds physical channel):
    • Requires Computer Vision (CV) for perception.
    • Requires Robotics for manipulation & mobility.
  • Historical note:
    • Research community has focused more on principles than on literally passing the test, much like aeronautical engineers abandoned perfect bird imitation in favour of aerodynamics (Wright brothers metaphor).

2. Thinking Humanly – Cognitive Modelling

  • Goal: Build programs that replicate actual mental processes.
  • Empirical methods for discovering how humans think:
    • Introspection – self-observation of thought.
    • Psychological Experiments – controlled studies of behaviour.
    • Brain Imaging – neural activity measurement.
  • Benchmark: if the program’s input–output trace matches a human’s on the same task, the underlying mechanisms are considered cognitively plausible.
  • Example: GPS (General Problem Solver) by Newell & Simon (1961) compared its step-by-step reasoning trace with those of human subjects solving the same problems.
  • Field synergy → Cognitive Science combines AI models with experimental psychology & neuroscience (e.g., computer vision uses neurophysiological data).
  • Modern practice: carefully separates performance claims from cognitive validity claims, allowing both AI & cognitive science to progress.

3. Thinking Rationally – “Laws of Thought”

  • Rooted in Aristotle’s syllogistic logic (e.g., “Socrates is a man; all men are mortal → Socrates is mortal”).
  • 19th-century logicians produced a precise notation for logical statements; by 1965 there were programs that could, in principle, solve any solvable problem described in logical notation (though they could loop forever if no solution exists).
  • The Logicist Tradition aspires to build intelligence entirely on formal logic.
  • Obstacles:
    • Knowledge Engineering Bottleneck – hard to encode informal, uncertain knowledge formally.
    • Computational Explosion – even hundreds of facts can overwhelm search without effective heuristics.
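
The Socrates syllogism from the logicist tradition can be run as a tiny forward-chaining loop. This is an illustrative sketch only: the fact/rule encoding below is an invented toy representation, not taken from any particular logic library.

```python
# Toy forward chaining over unary predicates: facts are (predicate, subject)
# pairs, and each rule says "premise(x) implies conclusion(x)".
facts = {("man", "Socrates")}          # "Socrates is a man"
rules = [("man", "mortal")]            # "all men are mortal"

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        for pred, subj in list(facts):
            # Fire the rule on every matching fact not yet derived.
            if pred == premise and (conclusion, subj) not in facts:
                facts.add((conclusion, subj))
                changed = True

print(("mortal", "Socrates") in facts)  # True: Socrates is mortal
```

The loop repeats until no rule adds a new fact, which is also why such programs can run forever when the query has no derivation.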

4. Acting Rationally – The Rational-Agent Paradigm

  • Agent: an entity that perceives and acts; expected traits: autonomy, persistence, adaptivity, goal-directedness.
  • Rational Agent: selects action yielding best (expected) outcome given its knowledge.
  • Relation to previous approaches:
    • Correct logical inference is one route to rationality but not the only one (e.g., reflexes like recoiling from heat are fast & effective without explicit reasoning).
  • Advantages:
    • Generality – subsumes logical reasoning but allows any method that maximises performance.
    • Scientific Rigor – rationality has a mathematically precise definition (permitting formal analysis & proofs).
  • Connecting pieces: the same six abilities needed for the Total Turing Test (NLP, KR, Reasoning, ML, CV, Robotics) are instrumental for rational agency.

Practical Limits – “Limited Rationality”

  • Perfect rationality is computationally infeasible in complex, real-time environments.
  • Working hypothesis: assume ideal rationality for foundational study, then relax to bounded/limited rationality (explored explicitly in Chapters 5 & 17).
  • Concept reminder: when time & information permit, agents often choose the action with maximum expected utility, a* = argmax_a E[U | a]; limited-rational agents approximate this.
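
The argmax rule can be sketched in a few lines. Everything below (the weather-themed actions, outcome probabilities, and utility values) is invented for illustration under the assumption of a known outcome model.

```python
# Outcome model: P(outcome | action), plus a utility for each outcome.
# These numbers are made up for the example.
model = {
    "umbrella":    {"dry": 1.0},
    "no_umbrella": {"dry": 0.7, "wet": 0.3},
}
utility = {"dry": 10.0, "wet": -20.0}

def expected_utility(action):
    """E[U | a]: probability-weighted utility over the action's outcomes."""
    return sum(p * utility[o] for o, p in model[action].items())

def rational_choice(actions):
    """A perfectly rational agent picks argmax_a E[U | a]."""
    return max(actions, key=expected_utility)

best = rational_choice(model)
print(best)  # "umbrella": E[U] = 10.0 beats 0.7*10 + 0.3*(-20) = 1.0
```

A limited-rational agent would approximate `expected_utility` (e.g., by sampling or heuristics) rather than evaluating every action exactly.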

Foundations of AI

Philosophy Contributions

  • Core philosophical questions framing AI:
    1. Can formal rules capture valid reasoning?
    2. How does the mind arise from a physical brain? (Mind–body problem)
    3. Origin of knowledge – innate vs learned?
    4. How does knowledge lead to action?

Key Historical Threads & Thinkers

  • Logic & Rationalism
    • Aristotle: syllogisms & goal-action linkage; first "instrument of thought" (Organon).
    • Ramon Lull (~1315): mechanical reasoning idea.
    • Hobbes (1651): thinking = numerical computation; metaphor of body as machine (springs, wheels).
  • Dualism vs Materialism
    • Descartes: separation between immaterial mind & material body; raises free-will concerns.
    • Materialist view: cognition = brain obeying physical laws; free will appears as perceived choice.
  • Empiricism & Induction
    • Bacon, Locke: mind starts as tabula rasa; knowledge from senses.
    • Hume: repeated association ⇒ general rules (principle of induction).
  • Logical Positivism
    • Carnap, Vienna Circle: all knowledge describable by logical theories ultimately grounded in observation sentences.
    • Carnap’s The Logical Structure of the World (1928): explicit computational procedure for building knowledge bases from elementary experiences – arguably first computational mind theory.
  • Knowledge → Action link
    • Aristotle’s reasoning chain in De Motu Animalium:
    • From need → derive intermediate means → conclude actionable step (“I need a cloak → I have to make a cloak”).
    • Implemented millennia later as regression planning (e.g., GPS system; see Chapter 10).
    • Aristotle’s algorithmic description of deliberation in Nicomachean Ethics: regress from goal to first achievable means, abandoning search if impossibility detected.
    • Antoine Arnauld proposed treating decisions quantitatively; Mill’s Utilitarianism broadened this into an ethical calculus: choose the action maximising expected “utility.”
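
Aristotle’s regress from a goal back to the first achievable means can be sketched as a toy goal-regression planner. The goal/means table and the `plan` helper below are hypothetical illustrations, not the GPS implementation.

```python
# means[goal] = (action, subgoal): achieving the subgoal first, then doing the
# action, achieves the goal. A subgoal of None means directly achievable.
means = {
    "have_covering": ("wear_cloak", "have_cloak"),
    "have_cloak":    ("make_cloak", "have_cloth"),
    "have_cloth":    ("buy_cloth", None),
}

def plan(goal, steps=None):
    """Regress from the goal to the first directly achievable means."""
    steps = steps if steps is not None else []
    if goal not in means:
        return None  # impossibility detected: abandon the search
    action, subgoal = means[goal]
    if subgoal is None:
        return [action] + steps
    return plan(subgoal, [action] + steps)

print(plan("have_covering"))  # ['buy_cloth', 'make_cloak', 'wear_cloak']
```

The recursion mirrors the Nicomachean Ethics description: keep regressing until you reach something you can do now, or give up when no means exists.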

Mathematics Contributions (briefly introduced)

  • Need for formal answers to:
    1. What are valid inference rules? (Logic)
    2. What functions are computable? (Computation theory)
    3. How to reason under uncertainty? (Probability theory)
  • Early milestone: George Boole’s algebra of logic (detailed history continues beyond the provided excerpt).

Illustrative Examples & Metaphors Referenced

  • Wright Brothers vs pigeons: progress in flight came from studying aerodynamics, not from perfect bird imitation — analogy for AI researchers focusing on principles over passing Turing Test.
  • Socrates syllogism: canonical example of logically valid conclusion.
  • Hot-stove reflex: shows rational action can bypass explicit inference when speed matters.

Ethical, Philosophical & Practical Implications

  • Free Will Debate: deterministic physical mind vs dualistic soul.
  • Utility Maximisation: foundation for modern AI decision-theory and raises ethical questions about value specification.
  • Bounded Rationality: challenges ideal agent models; motivates research in heuristics, real-time systems, and human-AI alignment.

Connections to Later Chapters (Road-map)

  • Chapter 2: expands the high-level agent design issues introduced here.
  • Chapter 5 & 17: formal treatments of limited rationality / bounded resources.
  • Chapter 10: planning algorithms (including regression planning inspired by Aristotle & implemented in GPS).
  • Chapter 16: quantitative decision theory (links back to Arnauld & Mill).
  • Chapter 26: in-depth examination of the Turing Test – operational details & philosophical debates.