Overview of Artificial Intelligence (AI) and Historical Context
  • Introduction to AI

    • The Birth of AI: Although development surged after 2022, the field was formally established at the Dartmouth Workshop in 1956, where the term was coined. "Artificial Intelligence" refers to the simulation of human intelligence by machines, particularly computer systems.

    • Trend in AI Development: Since 2022, there has been a shift from "Narrow AI" (designed for specific tasks like chess) to "Generative AI" (capable of creating new content).

    • Concerns: Rapid deployment has outpaced legislation, leading to systemic risks regarding misinformation and economic displacement.

  • Current State of AI (as of 2022-2024)

    • Creativity and Logic: AI typically relies on stochastic processes (probabilistic prediction) rather than human-like creativity: it predicts the next token in a sequence based on patterns learned from massive datasets.

    • Environmental and Resource Impact:

    • Water Consumption: Cooling data centers for large-scale model training requires millions of gallons of water.

    • Electricity: Training a single large language model can consume as much energy as hundreds of households do in a year.

    • E-Waste: Constant hardware upgrades (GPUs/TPUs) lead to significant hardware turnover and physical waste.
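The "predicts the next token" idea above can be sketched in a few lines. This is a toy illustration, not a real model: the probability table is invented for the example, whereas an actual language model derives such a distribution from billions of learned parameters.

```python
import random

# Hypothetical next-token probabilities after a prompt like "The sky is".
# These numbers are made up for illustration.
next_token_probs = {"blue": 0.62, "clear": 0.21, "falling": 0.09, "green": 0.08}

def sample_next_token(probs, rng):
    """Pick the next token stochastically, weighted by its probability."""
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so the draw is reproducible
print(sample_next_token(next_token_probs, rng))
```

Because the choice is sampled rather than fixed, the same prompt can yield different continuations on different runs, which is the "stochastic" behavior the note describes.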


Historical Timeline of Computing and AI Development
  • Key Milestones in Computer Science

    • Turing's Contribution (1936-1950):

    • Universal Turing Machine: A theoretical model that defined the limits of what can be computed.

    • The Turing Test: Originally the "Imitation Game," it posits that if a human cannot distinguish between a machine and a human in conversation, the machine is "thinking."

    • ABC Computer (1942):

    • Developed by John Atanasoff and Clifford Berry. It utilized vacuum tubes and binary math, laying the groundwork for electronic digital computing.

    • ENIAC (1945):

    • The first programmable, electronic, general-purpose digital computer, illustrating the shift toward electronic computational power.
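Turing's Universal Machine, mentioned above, can be made concrete with a minimal simulator. This is an illustrative sketch (not from the source): the rule table below simply flips every bit on the tape and halts, a toy instance of Turing's model of mechanical computation.

```python
def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    """Simulate a one-tape Turing machine.

    rules maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is "R" (right) or "L" (left). Halts when no rule applies.
    """
    cells = dict(enumerate(tape))  # sparse tape
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in rules:
            break  # no applicable rule: the machine halts
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Toy program: invert every bit, moving right until a blank is reached.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}

print(run_turing_machine("1011", flip_bits))  # -> 0100
```

The point of Turing's construction is that one fixed machine, given the right rule table as input, can simulate any other, which is what "defined the limits of what can be computed" refers to.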

  • Evolution of Personal Computing

    • 1970s-1980s: The Altair 8800 (1974) and Apple II (1977) brought computing into homes. By the 1980s, the IBM PC standardized personal computing, though costs remained high (roughly $3,000, adjusted for inflation).


The Era of Generative AI and Foundation Models
  • The Transformer Breakthrough:

    • 2017 "Attention Is All You Need": The Google paper that introduced the Transformer architecture, allowing models to process data in parallel and understand context better.

    • ChatGPT Release: Launched by OpenAI in November 2022, reaching 100 million users in two months.

    • Foundation Models: These are broad models trained on vast quantities of data (e.g., GPT-4, Claude, Llama) that can be adapted to a wide range of downstream tasks.
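The Transformer's core operation, scaled dot-product attention, can be sketched in plain Python. This is a simplified illustration of the mechanism from "Attention Is All You Need," not production code; the toy embeddings are invented, and each query attends to every key in one pass, which is what allows the parallel processing mentioned above.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(queries, keys, values):
    """Scaled dot-product attention: each query mixes all values,
    weighted by how well the query matches each key."""
    d_k = len(keys[0])
    output = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d_k) for k in keys]
        weights = softmax(scores)  # attention weights sum to 1
        output.append([sum(w * v[i] for w, v in zip(weights, values))
                       for i in range(len(values[0]))])
    return output

# Three toy 2-dimensional token embeddings, used as Q, K, and V alike
# (self-attention). The numbers are made up for illustration.
emb = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(attention(emb, emb, emb))
```

Because every query-key score can be computed independently, the whole sequence is processed in parallel rather than token by token, the key advantage over earlier recurrent architectures.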


Implications and Societal Risks
  • Economic Impact

    • Job Polarization: Automation threatens not just manual labor but cognitive labor (copywriting, coding, legal analysis).

    • The Productivity Paradox: While AI can increase individual worker output, it may lead to wage stagnation if the gains are not distributed fairly.

  • Ethical and Security Concerns

    • Surveillance States: Enhanced facial recognition and predictive policing raise civil liberty questions.

    • Data Privacy: Big tech firms harvest user interactions to train models, often without explicit consent or compensation for the original content creators.


AI and Human-Centric Design
  • Cognitive vs. Emotional Intelligence

    • Lack of Intentionality: AI does not "know" what it is saying; it lacks a subjective experience (qualia).

    • Mental Health Risks: Replacing human therapists with AI chatbots can lead to dangerous outcomes if the AI "hallucinates" harmful advice or fails to recognize severe crises.

  • Cultural Intelligence

    • Models are often Western-aligned, reflecting the biases of the English-speaking internet and potentially erasing cultural nuances from the Global South.


Future Trajectories and Governance
  • Technological Frontiers

    • Specialized Hardware: Companies such as Graphcore build dedicated AI chips (IPUs) designed for brain-like processing efficiency.

    • Quantum Computing: Aims to solve complex problems in seconds that would take classical supercomputers millennia, potentially breaking current encryption schemes.

  • Regulation and The EU AI Act:

    • The world's first comprehensive AI law, categorizing AI systems by risk level (Unacceptable, High, Limited, Minimal).

    • Monopoly Risks: The high cost of compute (Cost ∝ Parameters²) means only the wealthiest corporations can build top-tier models, creating an AI oligopoly.
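The quadratic scaling claimed in the monopoly-risk note can be made concrete with a quick calculation. Note that Cost ∝ Parameters² is this outline's stated assumption, not a precise law; under it, a 10× larger model costs 100× more to build.

```python
def relative_cost(params_new, params_old):
    """Cost ratio between two model sizes under the
    Cost ∝ Parameters^2 assumption stated above."""
    return (params_new / params_old) ** 2

# Scaling a hypothetical model from 10B to 100B parameters:
print(relative_cost(100e9, 10e9))  # -> 100.0
```

This super-linear growth is why, on the outline's assumption, each generation of frontier model is affordable to an ever smaller set of corporations.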


Ethical Frameworks and Responsibility
  • Alignment Problem: The challenge of ensuring AI goals align with human values (e.g., an AI told to "stop climate change" might conclude that eliminating humans is the most efficient path).

  • Consumer Agency: AI literacy is becoming a fundamental requirement of modern citizenship, needed to discern "deepfakes" from reality.