
Existential Risk Prevention as Global Priority

Nick Bostrom, University of Oxford
Global Policy, Volume 4, Issue 1, February 2013 (Research Article)

Abstract

Existential risks are those that threaten the entire future of humanity. Many theories of value imply that even relatively small reductions in net existential risk have enormous expected value. Despite their importance, issues surrounding human-extinction risks and related hazards remain poorly understood. In this article, I clarify the concept of existential risk and develop an improved classification scheme. I discuss the relation between existential risks and basic issues in axiology, and show how existential risk reduction (via the maxipok rule) can serve as a strongly action-guiding principle for utilitarian concerns.


I also show how the notion of existential risk suggests a new way of thinking about the ideal of sustainability.

Policy Implications

- Existential risk is a concept that can focus long-term global efforts and sustainability concerns.
- The biggest existential risks are anthropogenic and related to potential future technologies.
- A moral case can be made that existential risk reduction is strictly more important than any other global public good.
- Sustainability should be reconceptualised in dynamic terms, as aiming for a sustainable trajectory rather than a sustainable state.
- Some small existential risks can be mitigated today directly (e.g. asteroids) or indirectly (by building resilience and reserves to increase survivability in a range of extreme scenarios), but it is more important to build capacity to improve humanity's ability to deal with the larger existential risks that will arise later in this century.

- This will require collective wisdom, technology foresight, and the ability when necessary to mobilise a strong global coordinated response to anticipated existential risks.
- Perhaps the most cost-effective way to reduce existential risks today is to fund analysis of a wide range of existential risks and potential mitigation strategies, with a long-term perspective.

1. The maxipok rule

Existential risk and uncertainty

An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development (Bostrom, 2002). Although it is often difficult to assess the probability of existential risks, there are many reasons to suppose that the total such risk confronting humanity over the next few centuries is significant. Estimates of 10–20 per cent total existential risk in this century are fairly typical among those who have examined the issue, though inevitably such estimates rely heavily on subjective judgment. The most reasonable estimate might be substantially higher or lower. But perhaps the strongest reason for judging the total existential risk within the next few centuries to be significant is the extreme magnitude of the values at stake. Even a small probability of existential catastrophe could be highly practically significant (Bostrom, 2003; Matheny, 2007; Posner, 2004; Weitzman, 2009).
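To see the scale of this expected-value argument, here is a back-of-envelope restatement; the symbols and the numbers are illustrative assumptions, not figures given in this passage. If V is the value at stake in an existential catastrophe and Δp is a reduction in its probability, then the expected value of achieving that reduction is

\[
\Delta \mathrm{EV} = \Delta p \cdot V, \qquad \text{e.g.}\; \Delta p = 10^{-8},\ V = 10^{16}\ \text{lives} \;\Rightarrow\; \Delta \mathrm{EV} = 10^{8}\ \text{lives}.
\]

Even a one-in-a-hundred-million reduction in risk is worth a hundred million lives in expectation under these assumed stakes, which is why small reductions in net existential risk can carry enormous expected value.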

Humanity has survived what we might call natural existential risks for hundreds of thousands of years; thus it is prima facie unlikely that any of them will do us in within the next hundred years. This conclusion is buttressed when we analyse specific risks from nature, such as asteroid impacts, supervolcanic eruptions, earthquakes, gamma-ray bursts, and so forth: empirical impact distributions and scientific models suggest that the likelihood of extinction because of these kinds of risk is extremely small on a time scale of a century or so.

In contrast, our species is introducing entirely new kinds of existential risk: threats we have no track record of surviving. Our longevity as a species therefore offers no strong prior grounds for confident optimism. Consideration of specific existential-risk scenarios bears out the suspicion that the great bulk of existential risk in the foreseeable future consists of anthropogenic existential risks, that is, those arising from human activity. In particular, most of the biggest existential risks seem to be linked to potential future technological breakthroughs that may radically expand our ability to manipulate the external world or our own biology.

As our powers expand, so will the scale of their potential consequences, intended and unintended, positive and negative. For example, there appear to be significant existential risks in some of the advanced forms of biotechnology, molecular nanotechnology, and machine intelligence that might be developed in the decades ahead.

The bulk of existential risk over the next century may thus reside in rather speculative scenarios to which we cannot assign precise probabilities through any rigorous statistical or scientific method. But the fact that the probability of some risk is difficult to quantify does not imply that the risk is negligible.

Probability can be understood in different senses. Most relevant here is the epistemic sense, in which probability is construed as (something like) the credence that an ideally reasonable observer should assign to the risk's materialising based on currently available evidence. If something cannot presently be known to be objectively safe, it is risky at least in the subjective sense relevant to decision making. An empty cave is unsafe in just this sense if you cannot tell whether or not it is home to a hungry lion. It would be rational for you to avoid the cave if you reasonably judge that the expected harm of entry outweighs the expected benefit.
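The cave example is an expected-utility comparison, and it can be made mechanical. A minimal sketch in Python, with hypothetical credences and payoffs of my own choosing (nothing below comes from the article):

```python
# Expected-utility sketch of the cave decision, using illustrative numbers.
p_lion = 0.1           # subjective credence that a hungry lion is inside
harm_if_lion = -100.0  # utility of entering when the lion is there
benefit_if_empty = 5.0 # utility of shelter when the cave is empty

# Weigh each outcome of entering by its credence.
eu_enter = p_lion * harm_if_lion + (1 - p_lion) * benefit_if_empty
eu_stay_out = 0.0      # baseline: walk on

# Rational to avoid the cave when expected harm outweighs expected benefit.
decision = "stay out" if eu_enter < eu_stay_out else "enter"
print(f"EU(enter) = {eu_enter:.1f} -> {decision}")
# EU(enter) = 0.1 * (-100) + 0.9 * 5 = -5.5, so: stay out.
```

The point is that the decision turns on subjective credence, not on whether the lion is objectively there.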

The uncertainty and error-proneness of our first-order assessments of risk is itself something we must factor into our all-things-considered probability assignments. This factor often dominates for low-probability, high-consequence risks, especially those involving poorly understood natural phenomena, complex social dynamics, or new technology, or that are difficult to assess for other reasons. Suppose that some scientific analysis A indicates that some catastrophe X has an extremely small probability P(X) of occurring. Then the probability that A has some hidden crucial flaw may easily be much greater than P(X). Furthermore, the conditional probability of X given that A is crucially flawed, P(X | ¬A), may be fairly high. We may then find that most of the risk of X resides in the uncertainty of our scientific assessment that P(X) was small (see Figure 1).

[Figure 1. Meta-level uncertainty. Factoring in the fallibility of our first-order risk assessments can amplify the probability of risks assessed to be extremely small. An initial analysis (left side) gives a small probability of a disaster (black stripe). But the analysis could be wrong; this is represented by the grey area (right side). Most of the all-things-considered risk may lie in the grey area rather than in the black stripe. Source: Ord et al., 2010.]
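The force of this meta-level argument is easiest to see with concrete numbers. The sketch below applies the law of total probability over "analysis sound" versus "analysis flawed"; the variable names and all values are assumed for illustration and do not come from the article or from Ord et al.:

```python
# All-things-considered P(X), factoring in the chance that analysis A,
# which produced the tiny first-order estimate, is itself crucially flawed.
p_x_given_sound = 1e-9  # P(X | A sound): the headline estimate from A
p_flaw = 1e-3           # P(A crucially flawed): analyses do fail sometimes
p_x_given_flaw = 1e-4   # P(X | A flawed): plausibly far larger than 1e-9

# Law of total probability over the two hypotheses about A.
p_x_total = (1 - p_flaw) * p_x_given_sound + p_flaw * p_x_given_flaw
print(f"P(X) all things considered = {p_x_total:.2e}")   # ~1.0e-07

# The flaw term dominates: most of the risk sits in the 'grey area'
# of Figure 1 rather than in the black stripe of the first-order estimate.
share_from_flaw = p_flaw * p_x_given_flaw / p_x_total
print(f"share of risk from possible flaw = {share_from_flaw:.1%}")  # ~99.0%
```

Under these assumptions the all-things-considered probability is about a hundred times the first-order estimate, and nearly all of it is contributed by the possibility that the analysis is wrong.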

What is to be regarded as risky at all depends on an evaluation. Before we can determine the seriousness of a risk, we must specify a standard of evaluation by which the negative value of a particular possible loss scenario is measured. There are several types of such evaluation standard. For example, one could use a utility function that represents some particular agent's preferences over various outcomes. This might be appropriate when one's duty is to give decision support to a particular decision maker. But here we will consider a normative evaluation, an ethically warranted assignment of value to various possible outcomes. This type of evaluation is more relevant when we are inquiring into what our society's (or our own individual) risk-mitigation priorities ought to be.

There are conflicting theories in moral philosophy about which normative evaluations are correct. I will not here attempt to adjudicate any foundational axiological disagreement. Instead, let us consider a simplified version of one important class of normative theories. Let us suppose that the lives of persons usually have some significant positive value and that this value is aggregative (in the sense that the value of two similar lives is twice that of one life). Let us also assume that, holding the quality and duration of a life constant, its value does not depend on when it occurs or on whether it already exists or is yet to be brought into existence as a result of future events and choices. These assumptions could be relaxed and complications could be introduced, but we will confine our discussion to the simplest case.
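The simplified axiology in the preceding paragraph can be written compactly. The notation is my paraphrase of the stated assumptions, not the article's own:

\[
V(L_1, \ldots, L_n) \;=\; \sum_{i=1}^{n} v(L_i), \qquad v(L_i) > 0,
\]

where each L_i is a person's life and v(L_i) depends only on that life's quality and duration, not on when it occurs or on whether it is actual or merely potential. Aggregativity then gives exactly the doubling in the text: two similar lives contribute 2v.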

