The value of existential risk reduction
In his interview on the 80,000 Hours podcast after the publication of The Precipice, Toby Ord claimed that x-risk reduction is worth pursuing regardless of whether x-risk is high or low. The argument runs as follows. When x-risk is low, a small decline in its probability has very large returns: reducing the probability of x-risk increases the expected value of the future, and the value of the future is greatest when x-risk is smallest. On the other hand, when x-risk is high, the return is also high, because x-risk is likely to be neglected and so the marginal impact of working on it is likely to be very large. This post aims to show the cases in which this claim does and doesn't hold.
I'll first lay out intuitively the cases where it does and doesn't hold, then formalise these notions, and then give some numerical examples.
Intuition
There are two parts to the claim: the value-of-the-future part and the neglectedness part. The expected value of the future is the value of humanity per year divided by the probability of x-risk per year, assuming (as I will throughout this post) that the probability of x-risk is constant every year. This has some quite counter-intuitive effects. It means that decreasing the probability of x-risk from 1/10 per year to 1/100 makes the future 10 times more valuable in expectation, whereas the much larger percentage decline from 1/2 per year to 1/10 only increases the expected value of the future 5-fold. In general, a decline from x to y increases the value of the future by a factor of x/y. This does indeed mean that the value of decreasing x-risk is very large when x-risk is very small; in fact, the marginal value of a reduction grows with the inverse square of the risk. Equally, if you think that x-risk is very high, the value of x-risk reduction is proportionally small.
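To make this concrete, here's a minimal sketch in Python of the two comparisons above, assuming a constant per-year risk p and a per-year utility of 1, so that the expected value of the future is 1/p (the function names are mine):

```python
# Expected value of the future with constant per-year extinction risk p
# and utility 1 per year: the future lasts 1/p years in expectation.
def ev(p):
    return 1 / p

# A decline in per-year risk from x to y multiplies the value of the future by x/y:
print(ev(1 / 100) / ev(1 / 10))  # 10.0 (1/10 -> 1/100: ten times more valuable)
print(ev(1 / 10) / ev(1 / 2))    # 5.0  (1/2 -> 1/10: only five times more valuable)

# The value of a small reduction dp is approximately dp / p**2,
# so it grows with the inverse square of the risk level:
dp = 1e-4
for p in [1 / 2, 1 / 10, 1 / 100]:
    print(p, ev(p - dp) - ev(p))  # roughly dp / p**2
```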
There's also a second factor to consider. We can think of the probability of an existential catastrophe in a given year as depending on all of the inputs going into x-risk reduction. This means that when we change one input we get two effects: the change in the likelihood of x-risk in that year, multiplied by everything the effect of that input is proportional to. For instance, the amount of x-risk per year could depend on the product of the amount of research being done and the amount of money being put into the researchers' ideas.
In that case, the effect of a small increase in research is proportional to the product of the amount of money in x-risk reduction and the sensitivity of the likelihood of x-risk at the current level of inputs. The upshot of this consideration is that the effect of increasing the inputs into x-risk reduction is highest when the probability of x-risk is around 50-50. However, at those levels the value of reducing x-risk is very low, because if x-risk per year is anywhere close to 50-50 we are almost certain to die out very quickly regardless.
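Here's a sketch of that effect, under the same illustrative assumption I use in the numerical example below: the per-year risk is the upper tail of a standard normal evaluated at the output of a product-form production function (the functional form and names here are my choices for illustration):

```python
from scipy.stats import norm

# Illustrative hazard: per-year x-risk is the upper tail of a standard
# normal evaluated at output = research * money.
def p_xrisk(research, money):
    return norm.sf(research * money)  # sf(x) = 1 - CDF(x)

# Marginal effect of a little extra research on the per-year risk:
# dp/d(research) = -pdf(research * money) * money, i.e. proportional
# both to the other input and to the normal density at the current output.
def dp_dresearch(research, money):
    return -norm.pdf(research * money) * money

for research in [0.0, 0.5, 1.0, 2.0, 3.0]:
    print(p_xrisk(research, 1.0), dp_dresearch(research, 1.0))
# The risk reduction per unit of research is largest at output 0, exactly
# where the per-year risk is 0.5 -- but there the expected future is only
# 1/p = 2 years, so the value of that reduction is small.
```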
The second part of the claim is that x-risk will be more neglected if it's large. Two conditions need to be met for this to increase the value of working on x-risk. The first is that x-risk must be neglected in a variable that we can actually increase. For instance, if preventing x-risk is proportional to the amount of political capital invested in it, or, worse, requires some minimum amount of political capital, then the fact that very little labour is going into x-risk won't help us unless we can dramatically increase the amount of political capital as well. The second condition is diminishing returns to scale: if all of the inputs into the x-risk-reduction production function are scaled by the same amount, the output is scaled by less than that amount. If diminishing returns to scale holds then, all else equal, the return to putting resources into x-risk reduction is higher the fewer resources are already devoted to it.
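For instance, a Cobb-Douglas production function whose exponents sum to less than one has diminishing returns to scale (the exponents 0.4 and 0.4 here are my choice for illustration): doubling every input less than doubles the output.

```python
# Cobb-Douglas with exponents summing to 0.8 < 1: diminishing returns to scale.
def output(labour, capital, a=0.4, b=0.4):
    return labour**a * capital**b

print(output(2, 2) / output(1, 1))  # 2**0.8 ~ 1.74: doubling both inputs
                                    # scales output by less than 2
```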
But it's not at all obvious that this is the case! It could easily be that we need to reach a threshold of high-quality research before we can start deploying capital in a productive way. In that case, scaling everything up by an amount that left research below the threshold would be of little use, while the scaling that just got us over the threshold would be incredibly valuable!
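As a toy illustration (the threshold value and functional form are invented for the example), suppose capital does nothing until research crosses some threshold:

```python
RESEARCH_THRESHOLD = 10.0  # invented for illustration

# Toy case: money only reduces risk once enough research exists.
def risk_reduction(research, money):
    if research < RESEARCH_THRESHOLD:
        return 0.0
    return money  # past the threshold, capital becomes productive

print(risk_reduction(9.0, 100.0))   # 0.0 -- scaling up money achieved nothing
print(risk_reduction(10.0, 100.0))  # 100.0 -- the scaling that crossed the
                                    # threshold was extremely valuable
```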
I can’t figure out how to embed equations into Substack, so if you want to read the formalisation I’ve put it on the Effective Altruism Forum here:
Numerical example
Assume that the production function is a standard Cobb-Douglas with labour and capital inputs both equal to one half, and both raised to the power of one half. The function which takes the output of the Cobb-Douglas and turns it into the probability of x-risk per year is the upper tail of a standard normal distribution: one minus the CDF of a normal with mean 0 and standard deviation 1, evaluated at the output.
This gives a probability of x-risk of about 0.31 per year. If we set utility per year equal to 1, this gives an expected value of the future of about 3.24.
Doubling the amount of labour going into x-risk reduction increases the expected value of the future to about 4.17.
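These numbers can be reproduced with a few lines of Python (reading the link function as the standard normal's survival function, i.e. one minus its CDF, which matches the figures above):

```python
from scipy.stats import norm

# Per-year x-risk: upper tail of a standard normal evaluated at the
# Cobb-Douglas output labour**0.5 * capital**0.5.
def p_xrisk(labour, capital):
    return norm.sf(labour**0.5 * capital**0.5)

# Expected value of the future with utility 1 per year.
def ev(labour, capital):
    return 1 / p_xrisk(labour, capital)

print(p_xrisk(0.5, 0.5))  # ~0.31
print(ev(0.5, 0.5))       # ~3.24
print(ev(1.0, 0.5))       # ~4.17 after doubling labour
```

The same two functions reproduce the figures in the next paragraph.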
Now, if instead we start with half a unit of capital and 20 units of labour, we get an initial expected value of the future of about 1,268. Increasing the amount of capital to 1 raises the expected value of the future to 255,754. As you can see, the same increase in resources produces a far more dramatic increase in the value of the future. Finally, a caveat: this post has assumed a constant probability of extinction throughout. It's very unclear whether this is the case, and it seems extremely worthwhile to redo the analysis for a variable rate of extinction.