Notes on "Back to the Fifties: Reassessing Technological and Political Progress"

Overview

  • Summary of Erik J. Larson's argument in "Back to the Fifties: Reassessing Technological and Political Progress": today’s surface optimism about technological miracles masks deep social and psychological alienation and silent suffering. This parallels the fascination with 1950s techno-utopias and the belief that progress is linear and exponential.
  • Cites historical and contemporary works on technology and society (e.g., Sherry Turkle's Alone Together; David Riesman's The Lonely Crowd) to frame the critique of consumer-driven progress narratives.
  • The central question: what do we mean by progress, and are we really advancing toward a better world, or merely reshaping it with new technologies?

Key Concepts

  • Progress as a historical idea: linear and exponential progress is historically tied to Enlightenment thinking; Auguste Comte’s escalator metaphor of moving from barbarism to utopia; Kantian optimism about reason underpins today’s techno-utopianism.
  • Technological determinism vs. human reality: modern techno culture often treats digital computation as the core of reason, progress, and human capability.
  • Computational/metaphorical view of humans: Silicon Valley’s tendency to equate technology, reasoning, and human intelligence with digital computation.
  • Alternative historical perspectives: Vaclav Smil offers a broader view of progress focusing on energy, materials, and practical constraints, challenging the idea that digital breakthroughs imply universal exponential progress.

Progress and its Philosophical Roots

  • Enlightenment-era roots of progress: belief in steady improvement through rational inquiry and science.
  • Auguste Comte’s escalator metaphor (as the article presents it): an imagined stage-by-stage ascent from religion through philosophy and science to technoscience.
  • Imprint on contemporary techno-optimism: Kantian optimism merges with modern techno-culture’s faith in digital tools as essential for betterment.
  • Contrast: earlier visions framed scientific discovery as a panacea; today’s rhetoric often emphasizes improvements to existing discoveries (e.g., the digital computer) rather than new foundational breakthroughs.

The Computational Metaphor and Its Critics

  • The computational metaphor pervades modern thinking: terms like “technology” and “reason” are equated with digital computation.
  • Vaclav Smil’s critique (via Erik J. Larson): there are long-term material and energy constraints that digital progress alone cannot overcome.
  • Real but narrow: exponential progress is most evident in microchip performance, and even there Smil argues the exponential trend is fading.
  • The broader set of needed advances (e.g., fertilizers, refrigerants, energy storage, water/food security) remains underfunded or underappreciated when emphasis centers on AI and computing.

The Exponential Myth and Moore’s Law

  • Exponential progress is often claimed for digital tech, but evidence is mixed outside microelectronics.
  • Moore’s Law (as presented here): the number of transistors on a chip doubles every year and the cost halves. In notation: $N_t = N_0 \cdot 2^{t}, \qquad C_t = C_0 / 2^{t}$.
  • Timeline and data points presented:
    • Earlier chips yielded an enormous increase in power; today’s computers are claimed to be roughly 1.75 × 10^9 times more powerful than the Apollo computer.
    • The end of Moore’s Law, or at least a substantial slowdown, is suggested: total transistor counts grew faster than predicted between 1993 and the early 2000s, but slowed notably from 2008 to 2018. Endpoint counts cited: around 1.9 × 10^9 transistors at the start of that decade, around 23.6 × 10^9 at the end.
    • If Moore’s Law had held strictly, the expected 2018 count would have been about 60 × 10^9 transistors (see the sketch after this list).
  • Implication: exponential growth in computing power does not automatically imply exponential progress across all domains; the broader claim of unstoppable tech-led utopia is unwarranted.
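
A minimal arithmetic sketch of the figures above. It assumes the cited 1.9 × 10^9 and 23.6 × 10^9 endpoints correspond to 2008 and 2018, and interprets the 60 × 10^9 "expected" count via the conventional two-year doubling period (a strict one-year doubling would predict far more):

```python
# Minimal sketch: checking the transistor-count figures cited above.
# ASSUMPTIONS: the 1.9e9 / 23.6e9 endpoints are taken from the notes and
# mapped to 2008 and 2018; the "expected" value uses a two-year doubling
# period, which is what reproduces the ~60e9 figure.
import math

n_start = 1.9e9   # transistors, cited count at the start of the decade (~2008)
n_end = 23.6e9    # transistors, cited count at the end (~2018)
years = 10

# Expected 2018 count under Moore's Law with a two-year doubling period
expected = n_start * 2 ** (years / 2)

# Effective doubling time implied by the observed growth
doubling_time = years / math.log2(n_end / n_start)

print(f"Expected count (2-year doubling): {expected:.3g}")            # ~6.1e+10
print(f"Observed count:                   {n_end:.3g}")               # 2.36e+10
print(f"Implied doubling time:            {doubling_time:.2f} years") # ~2.8
```

Run as written, this reproduces the roughly 60 × 10^9 expectation and shows the observed growth implies a doubling time closer to three years, which is the slowdown described above.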

AI, Deep Learning, and the 21st Century Technology Landscape

  • The hype cycle around AI persists, but the underlying breakthroughs are often incremental and data-driven rather than foundational.
  • Key ML milestones:
    • Deep learning rose to prominence around 2012 with breakthroughs in image recognition on ImageNet (Krizhevsky, Sutskever, and Hinton’s AlexNet).
    • Innovations such as dropout (reducing overfitting) and attention mechanisms contributed to improved performance.
    • Large language models (LLMs) like those behind GPT-3/3.5/4 rely on transformer architectures and attention to capture long-range dependencies in text.
  • Architecture and training:
    • LLMs rely on a transformer-based architecture, which enables looking back over a sequence of words/tokens to predict the next token (a minimal attention sketch follows this list).
    • Training uses billions of words and (per the transcript) trillions of parameters; roughly 5,000,000 digitized books are cited as available for training.
    • Training and inference require massive computational resources: e.g., tens of thousands of GPUs/TPUs to achieve state-of-the-art performance.
  • Evaluation of AI as “innovation”: yes, but not a revolution in intelligence or common-sense understanding; breakthroughs are primarily applied rather than fundamental.
  • AI deployment remains centralized: governance, data access, and energy costs concentrate power among large corporations and financiers (e.g., OpenAI, Microsoft’s investment).
  • Contrast with self-driving cars:
    • Self-driving AI has not matched the promise in real-world environments due to edge cases and the limits of inductive learning.
    • Induction (generalizing from many examples) works well in web-scale, closed-world contexts but fails in open, dynamic real-world settings (edge cases: weather, debris, occlusions, unseen road configurations).
    • The ramification problem (unpredictable downstream effects) makes dynamic environments particularly challenging for inference-based AI.
  • Induction vs non-inductive reasoning:
    • Current AI largely relies on inductive inference from existing data; true general intelligence would require non-inductive, perhaps causal, and contextual reasoning beyond pattern recognition (a toy extrapolation example also follows this list).
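
A minimal, illustrative sketch of the scaled dot-product attention at the heart of the transformer architecture described above. The toy dimensions, random weights, and causal mask are assumptions for illustration only, not the configuration of any particular model; GPT-scale systems stack many such layers with learned weights:

```python
# Minimal sketch of scaled dot-product attention, the mechanism that lets a
# transformer "look back" over earlier tokens when predicting the next one.
# Toy sizes and random weights are illustrative assumptions only.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # how strongly each token attends to each other token
    # Causal mask: a token may only attend to itself and earlier tokens.
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores = np.where(mask, -1e9, scores)
    return softmax(scores) @ V               # weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8                      # 5 tokens, 8-dim embeddings (toy)
x = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

out = attention(x @ Wq, x @ Wk, x @ Wv)      # contextualized token representations
print(out.shape)                             # (5, 8): one enriched vector per token
```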
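
And a toy illustration of the induction point, my own example rather than anything from the article: a flexible model fit to plentiful in-distribution data predicts well there and fails the moment an input falls outside what it has seen, the statistical analogue of the edge cases discussed above:

```python
# Toy illustration of the limits of induction discussed above: a model fit on a
# narrow "closed world" of examples extrapolates badly to unseen conditions.
# Entirely illustrative; the article makes this argument qualitatively.
import numpy as np

def true_fn(x):
    return np.sin(2 * np.pi * x)

rng = np.random.default_rng(1)
x_train = rng.uniform(0.0, 1.0, size=200)                     # training inputs: a narrow slice of the world
y_train = true_fn(x_train) + rng.normal(scale=0.05, size=200)

model = np.poly1d(np.polyfit(x_train, y_train, deg=7))         # flexible inductive fit

x_in, x_out = 0.5, 1.8                                         # familiar case vs. unseen "edge case"

print(f"error inside the training range:  {abs(model(x_in) - true_fn(x_in)):.3f}")    # small
print(f"error outside the training range: {abs(model(x_out) - true_fn(x_out)):.3f}")  # typically large
```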

The World Before and After the 21st Century Tech Boom

  • Architectural continuity: much of the modern world’s infrastructure (engines, turbines, electric motors, materials like steel, aluminum, titanium) existed well before the computer era; many foundational discoveries precede the 21st century.
  • The 1980s as a watershed decade: Smil highlights the 1980s as a moment with more foundational discoveries, inventions, and deployments than any other decade preceding or following it.
  • Subtlety of progress: tweaks to neural networks and incremental improvements in computation appear dramatic, but broad-based progress in science, medicine, and resources may have stagnated in some domains.

The 1950s: A Snapshot of Tech, Culture, and Society

  • The ENIAC, UNIVAC, IBM 700 series as early computing milestones:
    • ENIAC and UNIVAC (the second name is garbled in the transcript; UNIVAC is the likely referent) were used initially for military calculations and later for data processing in business and administration.
    • IBM 701’s emergence as a symbol of centralized, mass-produced computing power; early data processing capacity that enabled complex calculations at scale.
  • Stewart Brand and the counterculture nexus:
    • Brand’s cybernetics-inflected thinking helped seed the back-to-the-land movement and a countercultural embrace of personal computing.
    • The Whole Earth Catalog (WEC) provided tools and knowledge, promoting access to technology as a means of personal liberation from centralized power.
    • WEC influenced early computing culture and standards; it contributed to the ethos that ordinary people should have access to digital tools.
  • The 1960s–70s: Do-it-yourself networks and think tanks:
    • The Stanford Research Institute (SRI) and Xerox PARC became hubs of innovation, feeding later consumer tech and digital culture (e.g., ARPANET, graphical interfaces).
    • The 1970s hacker culture and the rise of what would become the internet’s ethos of open collaboration and shared knowledge.
  • The 1980s and 1990s: Web emergence and the dream of a decentralized world:
    • The Whole Earth ’Lectronic Link (the WELL) and the idea of a decentralized, cooperative knowledge society.
    • Wired magazine popularized a vision of the internet as a bottom-up, cooperative force that could save the world from centralized corporate/government power.
    • Wired’s 1997 “Long Boom” cover story proclaimed 25 years of prosperity, freedom, and a better environment, signaling a triumphalist techno-utopian mood.
  • The decline of the utopian arc in the 21st century:
    • The century’s trajectory has included increased wealth inequality, productivity concerns, wars, and financial crises, with the web becoming a platform for misinformation and distraction.
    • The shift from a decentralized dream to a centralized, data-driven, corporate-dominated infrastructure (big data, cloud, centralized GPUs) is highlighted as a reversal of Brand’s original vision.

The Geopolitics of Tech and the Return of the Military-Industrial Complex

  • Post-2000s geopolitics:
    • The Ukraine conflict and proxy dynamics with Russia and China revive Cold War-era anxieties and a return to great-power competition.
    • The West’s supply chains and defense contractors are mobilized; the DoD fast-tracks production to meet increasing demands for munitions.
  • The centralization thesis:
    • Despite widespread consumer access to smartphones (even among the homeless), the architectural and economic backbone of modern tech remains centralized in big tech and large capital.
    • Microsoft’s significant investment in OpenAI underscores the contemporary centralization of AI development and deployment.
  • China and geopolitical risk:
    • China’s long-standing pattern of making conciliatory diplomatic noises while pursuing its strategic aims, including ambitions regarding Taiwan that raise concerns for Western interests.

The Iceberg Metaphor and Social Psychology

  • Iceberg metaphor (attributed in the transcript to Meštrović, presumably Stjepan Meštrović of Postemotional Society):
    • Public emotions and behaviors are visible above water, while real feelings are hidden below the surface—packaged and managed for public consumption.
    • The 1950s saw an entertainment-driven deflection of Cold War anxieties; today, entertainment and online culture continue to mask deeper social and emotional undercurrents.
  • The iceberg as a tool for understanding progress:
    • It signals a mismatch between outward progress (visible tech, devices, apps) and inner life (emotions, social connectedness, meaning).
  • The broader societal conclusion:
    • Our ideas about progress conflict with reality in both how we act and how we feel, suggesting a flawed theory of history itself that emphasizes forward motion over cycles and contingencies.

Theories of History: Cycles, Recurrence, and Dataism

  • Giambattista Vico and cyclical views:
    • Vico argued that societies progress, then regress, in cycles, moving through affluence to corruption and back toward renewal. He emphasized a return to older patterns after apex points, with cycles repeating in spirals rather than a linear ascent.
    • Vico’s Ricorso: after reaching a high point, societies undergo a return that is not identical but slightly higher or altered in form, implying cyclical-but-progressive dynamics.
  • The role of language, institutions, and imagination:
    • As civilizations progress, they may lose poetic or imaginative energy, substituting legal-rational language and bureaucratic structures that eventually contribute to decline or stagnation.
  • Dataism and the contemporary counterpoint:
    • Yuval Harari’s dataism posits that everything, including humans, is data and reducible to information processing.
    • This data-centric worldview aligns with techno-futurist utopias but risks ignoring non-data dimensions of human life (agency, ethics, meaning) and can deepen the myopia already criticized by Smil.
  • Tension between cyclical history and computational futurism:
    • If Vico is right about cycles and Ricorso, and if dataism reduces human value to data, there is a critique of reducing history to computational progress.
  • The modern risk: techno-optimism without attention to cyclical dynamics or material limits can lead to brittle societies ill-equipped for real-world disruptions.

Practical, Ethical, and Real-World Implications

  • Energy, environment, and resources:
    • Even with better batteries, we face fundamental energy-density and materials constraints; solutions like synthetic fertilizers, safe refrigerants, and climate-friendly technologies are essential but underemphasized in hype-heavy tech narratives.
  • Inequality and access:
    • The benefits of digital tech are uneven; centralized power concentrates wealth and influence, potentially widening social gaps.
  • AI governance and safety:
    • The dependence on data and centralized computation raises concerns about privacy, safety, and control over powerful AI systems.
  • Policy and research funding:
    • A shift away from broad, multi-disciplinary, foundational sciences toward application-first AI may miss foundational breakthroughs that could re-shape multiple domains.
  • Culture and philosophy:
    • The interplay of optimism, fear, and distraction affects public discourse about technology and its role in society; a more nuanced narrative may help align progress with human values and long-term resilience.

Publication and Context

  • This article originally appeared in American Affairs, volume 7, number 3, fall 2023, pages 45-58.
  • About the author: Erik J. Larson is a computer scientist, entrepreneur, and author of The Myth of Artificial Intelligence (Harvard University Press, 2021). He founded two DARPA-funded AI startups, has written for The Atlantic and professional journals, has contributed to the IC² tech incubator at the University of Texas at Austin, and writes the Substack newsletter Colligo.
  • The piece connects historical, technological, and geopolitical threads to question whether we are truly back in the fifties or moving toward something worse, urging a more comprehensive understanding of progress that includes ethics, environment, and social well-being.

Notable Data Points and References (laid out for study)

  • Sherry Turkle, Alone Together (2011): critique of social effects of technology.
  • Auguste Comte and Kantian influences on progress narratives.
  • Vaclav Smil: critiques of exponential progress beyond computing; calls for attention to energy, fertilizers, refrigerants, and other essentials.
  • Moore’s Law (as stated): $N_t = N_0 \cdot 2^{t}, \qquad C_t = C_0 / 2^{t}$ (transistors double and cost halves per year, per the transcript).
  • Power comparison: computers today claimed to be 1.75 × 10^9 times more powerful than the Apollo guidance computer.
  • Transistor counts: begin around 1.9 × 10^9, end around 23.6 × 10^9; the 2018 count should have been 60 × 10^9 if Moore’s Law had held perfectly.
  • AI milestones: 2012 ImageNet breakthrough with deep learning; dropout; attention mechanism; GPT-3/3.5/4 and transformers; around 5,000,000 digitized books used for training.
  • Computing hardware scale: training often requires tens of thousands of GPUs/TPUs; centralized data and energy use dominate modern AI.
  • Historical milestones of the 1950s and after: DNA double helix (1953); laser (1958); first magnetic disk drive (1956); Fortran (1957); integrated circuit (1958); microprocessor (1971); Sputnik (1957); NASA (1958); Dartmouth Conference (1956) marking AI’s beginnings; Internet precursors and ARPANET later.
  • 1950s socio-technical culture: IBM 701, “Big Iron,” and the dawn of centralized computing; Stewart Brand and the Whole Earth Catalog; the WELL; Wired and the back-to-the-land/DIY ethos.
  • Post-1990s trajectory: long boom optimism in Wired; social dysfunction and inequality growth; centralization of tech power and rapid AI deployment.
  • Geopolitics: Ukraine conflict; NATO involvement; Russia-China dynamics; defense contractor mobilization; risk to global order; potential for a new cold war.
  • The iceberg metaphor and post-emotional society: emotional life increasingly packaged for public consumption; entertainment as distraction from deeper social questions.
  • Vico and cyclical history: Ricorso and spiral progression; civilization rise and fall; language and institutions shift during cycles; potential for recurrence of crisis even amid progress.
  • Dataism critique: Harari’s data-centric worldview as a modern manifestation of techno-utopianism with risks of reducing life to information.

Takeaway for Exam Preparation

  • Distinguish between exponential progress in digital hardware versus broader, cross-domain progress (energy, food security, health, environment).
  • Understand the difference between induction-based AI capabilities (LLMs, pattern recognition) and the types of intelligence or autonomy needed for robust, real-world systems (non-inductive reasoning, causal understanding, edge-case handling).
  • Recognize the historical narratives of progress, their philosophical underpinnings, and their political and social consequences when forwarded as inevitabilities.
  • Be able to discuss centralization vs decentralization in tech history, and why the turn to centralized data power may mirror older centralized computing eras.
  • Be prepared to discuss the ethical, economic, and strategic implications of AI deployment, cybernetics, and the evolving relationship between science, industry, and government.

Key Takeaway Quotes (paraphrased from the transcript)

  • "Progress, linear and exponential, is remarkably durable, yet the methods of progress may be misapplied or misunderstood."
  • "The computational metaphor excludes cyclical views of history, much as Moore’s Law suggested endless exponential growth in a single domain while other domains lag."
  • "Armageddon or Utopia, technofuturism often reduces history to ever-increasing power of technology, with consequences for discourse and policy."
  • "Edge cases break inductive models; real-world autonomy (like self-driving cars) reveals the limits of induction-based AI."
  • "Dataism reduces life to data, a potentially reductive framework that ignores other dimensions of human meaning and agency."