1) Transhumanism: Humans as a Stepping Stone
“Transhumanism is the moral position that there is value in exploring posthuman and transhuman modes of being.”
The intended results are new kinds of beings: modified biological humans, cyborgs, androids, or digital simulants, whose lives are considered more valuable than current human lives.
Transhumanists argue that these beings could achieve far longer, richer lives marked by net positive experiences.
Although transitioning to a posthuman stage could of course entail the extinction of Homo sapiens, Bostrom contends that “the permanent foreclosure of any possibility of this kind of transformative change of human biological nature may itself constitute an existential catastrophe”.
“Preventing existential risk [...] are focused on preserving humans so that they may give rise to a post-human species that contains more value.”
2) Total Utilitarianism: future people matter equally / moral goal is to maximize good
“Total utilitarianism maximises the total amount of value in the universe, with as many people coming to exist as possible, each person living an overall happy (i.e., net-positive) life, regardless of where or when these people come into existence.”
3) Strong Longtermism: Long-term good is the priority
“A calculus that numerically favours strong longtermist actions, such as reducing existential risk, rather than saving millions of today’s people [...] We have no principled guidance about when and why a strong longtermist should prioritise living humans of today.”
“The TUA is not representative of what most humans alive now believe. [...] Representativeness itself says nothing about whether its philosophical pillars are wrong or right, but it is risky to rely exclusively on one unrepresentative approach given moral, political and empirical uncertainty.”
1) Total utilitarianism is a minority view:
“There is no consensus among philosophers on moral theory. Utilitarianism is not the most commonly held view. In one of the few surveys in the area, less than a quarter (23.6%) of philosophers identified with consequentialism. An even smaller number will be utilitarian, and a smaller number still will be total utilitarian. The perspective that potential future lives are morally equivalent to existing lives may also be unintuitive to many.”
2) Strong longtermism does not represent the values of most people:
“Techno-utopia offers futures of pleasurable, often virtual experiences, in which commonly valued attributes like purpose, virtue, love, and justice do not play a central role.”
“Both near-term extinction due to nuclear war and a future in which humans persist sustainably and equitably for the next billion years without major technological progress are seen as existential risks: worst-case outcomes for humanity. Treating these outcomes as morally equivalent is likely unintuitive to many.”
3) Transhumanism is not widespread:
“[A survey] that collected 760 responses from members of the World Transhumanist Association (now called Humanity+) in 2007 found that 90% of the respondents were male, with a median age of 30-33 years old.”
All of these definitions of existential risk fixate on maximizing our potential, and the three objections above show that this partisan conception of “potential” is not representative of most people’s beliefs.
Under the earliest TUA definition, a catastrophe that does not jeopardise the attainment of technological maturity is assigned comparatively little moral consideration. On this view, limited resources would be wasted if they were directed towards global catastrophes that did not threaten technological maturity.
Risks about which we have less information are prioritized, since information tends to reduce the perceived danger.
“Imagining pathways to human extinction (kill mechanisms) invariably requires some creativity and speculation. This has meant that some areas of risk (e.g. AI) which are not as empirically constrained are often prioritised above others for which we have far more empirical data (e.g. climate change).”
“A field looking for the one hazard to kill them all will end up writing science fiction. More speculative risks are prioritised because a seemingly more complete story can be told and speculative mechanisms by which AI could kill every human can seemingly not yet be ruled out.”
“A risk perception that depends so strongly on speculation and yet-to-be verified assumptions will inevitably (to varying degrees) be an expression of researchers’ personal preferences, biases, and imagination.”
Extinction would most likely not unfold because of a singular existential threat, but through many risk factors interacting with our defensive measures and vulnerabilities. “The question of ‘is this an existential risk?’ is naive. We should instead ask: in a given world-state (with structure, vulnerabilities, and the capacity for change) how much will a given process or event increase the overall likelihood of human extinction, and what are the plausible pathways for it to contribute to extinction risk?”
Sometimes a softer account of the “potential” at stake in existential risk is offered, as in Ord (2021): “Ord suggests that we should first reduce existential risks to a minimum level to achieve ‘existential security’ before undertaking a ‘Long Reflection’: a patient, collective discussion of what exactly humanity’s future ‘potential’ is.”
The idea is that we will define what is meant by “potential” later; first we must secure humanity. “Potential” is thus left undefined for now. However, this move faces three objections.
“The last chapter of The Precipice expands on a vision of humanity’s potential: transhumanist space expansion receives ample attention and adoration.”
“The definition is seemingly agnostic in the abstract, but in practice there are numerous signals that it expresses the same commitment to total utilitarianism and transhumanism.”
The proposed means of addressing existential risk also betray a particular conception of it: “AI is an existential risk. The policy recommendations for mitigating such risks in The Precipice support R&D into aligned artificial general intelligence (AGI), instead of delaying, stopping or democratically controlling AI research and deployment.”
This amounts to “an implicit advocacy for building advanced technology for instrumental purposes”, in line with transhumanism, total utilitarianism, or strong longtermism.
“We need to define what our potential is before we can identify threats to it. How else would we know which risks to address?
This is an inherent tension within The Precipice since we are supposed to achieve existential security before undertaking the Long Reflection. It is difficult to know if we have achieved existential security if we haven’t defined an existential risk, since we haven’t undertaken the Long Reflection to define our potential.”
Counterargument:
“A reasonable counter could be that, in theory, there are certain futures that almost no one would like to live in (such as nuclear winter), and that there may be certain risks (for instance, an asteroid strike) that would take lots of plausibly good options off the table.
Extinction may indeed be an outcome which we could assume most people would agree we should avoid. Beyond this point of convergence, there may be far more disagreement on what futures are worth protecting.”
1) Ontological technological determinism:
“The TUA often appears to assume an exogenous threat model in which existential hazards naturally and apolitically arise from inevitable and near-autonomous technological progress. The TUA rarely examines the drivers of risk generation. Instead, key texts contend that regulating or stopping technological progress is either deeply difficult, undesirable, or outright impossible.”
2) Military-economic adaptionism:
“Military-economic competition exerts a powerful selection pressure on technological development. This “military-economic adaptionism” constrains sociotechnical change to deterministic paths. Technologies that gift a strong strategic advantage will almost certainly be built.”
3) The tractability argument:
“Many in the related Effective Altruism community disregard controlling technology because of a perceived lack of tractability”: if one country does not build a technology, another one will; or development will simply continue in hiding; or actors will be willing to face the consequences.
“Importantly, assumptions around technological determinism are highly contested. Indeed, technological determinism is largely (for better or worse) derided and dismissed by scholars of science and technology studies.”
Example of past success: “We have historical evidence for collective action and coordination on technological progress and regress. One example is weather modification. Early attempts were made by the US during the Vietnam War to use weather modification technologies to extend the monsoon season and disrupt enemy supply chain. The introduction of the 1976 Convention on the Prohibition of Military or Any Other Hostile Use of Environmental Modification Techniques (ENMOD Convention) seems to have successfully curtailed further research into the area.”
“For example, Bostrom argues that existential risk mitigation should be prioritised over other altruistic acts: if there is just a 1% chance of 10^54 people coming to exist in the future, then ‘the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives’.”
According to Bostrom, “even the tiniest reduction of existential risk has an expected value greater than that of the definitive provision of any “ordinary” good, such as the direct benefit of saving 1 billion lives”.
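A rough reconstruction of the arithmetic behind this comparison, using only the figures quoted above (the bookkeeping in Bostrom's own paper may differ in detail): one billionth of one billionth of one percentage point is a risk reduction of $10^{-20}$, and a 1% chance of $10^{54}$ future people corresponds to $10^{52}$ lives in expectation, so

$$
\underbrace{10^{-9}\cdot 10^{-9}\cdot 10^{-2}}_{\text{risk reduction}}\;\times\;\underbrace{0.01\cdot 10^{54}}_{\text{expected future lives}}\;=\;10^{32}\ \text{lives in expectation},
$$

which exceeds “a hundred billion times a billion lives”, i.e. $10^{11}\times 10^{9}=10^{20}$ lives, by twelve orders of magnitude. The conclusion is carried almost entirely by the astronomical population estimate.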
While expected value (EV) reasoning is a useful theoretical tool in a range of contexts, in practice it is hard to apply rigorously when working with low and highly uncertain probabilities. “The TUA applies expected value theory to the very areas where it faces the most pitfalls, that is, situations of deep uncertainty and low information about probabilities.”
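A purely illustrative sketch of that pitfall (the numbers are invented for the example, not taken from the paper): suppose rival experts’ estimates of how much a speculative intervention reduces extinction risk span six orders of magnitude,

$$
\mathbb{E}[\text{lives saved}] \;=\; \Delta p \times N_{\text{future}}, \qquad \Delta p \in [10^{-15},\,10^{-9}], \quad N_{\text{future}} \approx 10^{52}.
$$

With an astronomical $N_{\text{future}}$, every value in that range still dwarfs any “ordinary” good, so the headline conclusion is fixed in advance by the population estimate, while the ranking among rival interventions swings by six orders of magnitude on an unverifiable guess about $\Delta p$.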
How could studying global risks in a TUA framework contribute to catastrophe?
It can be dangerous for four reasons:
The TUA’s focus on technological maturity never entertains the idea that we could stop developing technologies like AI: since foregoing such technologies would itself count as an existential risk, it seems better to take a huge risk by developing AI than to stop its development.
“Furthermore, pursuing a techno-utopian future is dangerous and may come with considerable cost. It has already been noted that the attempt at colonising space and an expansion of technological capabilities could end in catastrophe if it foments a new arms race and large scale warfare. Upgrading the human body could construct a biological caste system, where an enhanced, genetic elite could oversee a subjugated, unenhanced, “inferior” class.”
Emergency powers: Powers to protect existing legal and political structures in a period of tumult.
“The logic is that drastic times call for drastic measures. Emergency powers allow governments to disregard existing laws and exempt themselves from judicial or democratic restrictions and oversight.”
“Rather than protect, such measures are often abused to erode and transform fundamental political structures; when trying to centralise and extend state powers, fear is a powerful justification.”
The larger the fear, the easier it is to justify more potent emergency powers.
“If the perceived threat is human extinction, then the measures could be extreme.”
“If the world is viewed from the TUA’s lens of existential risk, then we run the risk that almost any action is justified if it is believed to improve our chance of surviving to expand beyond Earth.”
“There is a clear danger in authoritative recommendations based on speculative thought experiments. Scholars using the TUA providing recommendations for surveillance and pre-emptive measures in the name of avoiding catastrophe could contribute to birthing the very dystopias they fear”, e.g., entrenched authoritarianism.
The authors argue that decisions regarding the future of humanity should be made democratically, since everyone’s interests are at stake.
Reason 1: The superiority of democratic reason
“Democracy will improve our judgments in both the governance and the study of existential risks. [...] a diverse group of problem-solving agents will show greater accuracy than a less diverse group, even if the individual members of the diverse group were each less accurate.”
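One way to make this claim concrete is Scott Page’s “diversity prediction theorem” (a standard result about crowd estimates, offered here only as an illustrative sketch; it is not a model from the paper): the squared error of a group’s average estimate equals the average individual squared error minus the variance of the estimates, so a diverse group can beat a more homogeneous one even when its members are individually less accurate. A minimal simulation with invented numbers:

```python
# Illustrative sketch only (invented numbers): Page's diversity prediction theorem,
# crowd error = average individual error - diversity of estimates.
import numpy as np

rng = np.random.default_rng(0)
truth = 10.0  # the quantity the group is trying to estimate

# Homogeneous group: individually quite accurate, but sharing a common bias.
homogeneous = truth + 2.0 + rng.normal(0.0, 0.5, size=25)

# Diverse group: individually noisier, but with uncorrelated, offsetting errors.
diverse = truth + rng.normal(0.0, 3.0, size=25)

def report(name, estimates):
    crowd_error = (estimates.mean() - truth) ** 2              # error of the group's average
    avg_individual_error = ((estimates - truth) ** 2).mean()   # accuracy of members on their own
    diversity = ((estimates - estimates.mean()) ** 2).mean()   # spread of the estimates
    # Identity: crowd_error == avg_individual_error - diversity (up to rounding).
    print(f"{name}: crowd error {crowd_error:.2f}, "
          f"avg individual error {avg_individual_error:.2f}, diversity {diversity:.2f}")

report("Homogeneous group", homogeneous)
report("Diverse group", diverse)
```

With these parameters the homogeneous group’s members are typically closer to the truth individually, yet the diverse group’s average is usually far more accurate, because its errors cancel rather than compound; this is the sense in which “democratic reason” can outperform a narrow expert cluster.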
Reason 2: Democracy avoids extreme ideas
It seems, for example, vanishingly unlikely that a diverse group of thinkers or even ordinary citizens would entertain the idea of sacrificing 1 billion living beings for an infinitesimal improvement in reaching a galactic techno-utopia. In contrast, the TUA could recommend this trade-off.
Tying the study of a topic that fundamentally affects the whole of humanity to a niche belief system championed mainly by an unrepresentative, powerful minority of the world’s population (largely wealthy, white, and male, concentrated in Oxford, Cambridge, and the San Francisco Bay Area, and steeped in Silicon Valley ideology) is undemocratic and philosophically tenuous. A diversity of viewpoints would be better suited to addressing such a complex issue.