There are two basic types of argument for working on x-risk: ethical and practical. Existential catastrophes are events that permanently curtail humanity's future, thereby forfeiting a great number of good future lives. This post will focus on the normative arguments; a second post will focus on the practical ones.
Is it right to work on x-risk?
The ethical arguments for working on x-risk subdivide into two classes. The first focuses on the longtermist case: an existential catastrophe means that many people who would have lived wonderful lives will never be born. The second focuses on the value of preventing catastrophic events. Most x-risks involve very large numbers of people dying, and this would be very bad. This doesn't cover every x-risk - permanent technological stagnation would also count, because it would mean many fewer humans get to live good lives - but mostly when people talk about x-risk they're worried about things like a pandemic that kills 5 billion people. A pandemic that killed 5 billion would be extraordinarily bad and we should try to stop it. This post will focus on the longtermist reasons to work on x-risk.
There are four normative premises which I think you need to accept to think that you should work on x-risk for longtermist reasons.
Future people matter
Potential people matter
More happy lives are good
The strength of the duty to create more happy lives can be commensurate with the duty to prevent suffering and death
The critical piece of machinery in the longtermist account is that the moral value of x-risk reduction necessarily relies on the value of helping potential people specifically by causing them to exist. One's work to reduce x-risk is only valuable in the specific case where, without it, there would have been an existential catastrophe. In that case, the action of reducing x-risk, compared to the status quo of not reducing it, helps people only by causing people who would not otherwise have existed to exist. This relies on thinking, firstly, that potential people matter. The people who stand to be harmed are the ones who would never have existed (assume that present people are helped equally by the action you take to reduce x-risk and by whatever action you would take otherwise), and they are harmed specifically by their non-existence rather than by a decrease in their quality of life conditional on their coming into existence.
The final premise then comes into play. One must accept that the moral importance of improving people's lives by bringing them into existence can, at least in the aggregate, weigh against other moral claims: preventing people from being brought into existence with very terrible lives, preventing people from being tortured in the future, preventing current people from dying of malaria, or preventing animals from being tortured in factory farms.
I think this is probably where the argument for working on x-risk for longtermist reasons fails. I think - and this will cause me to bite some quite unpleasant bullets - that we have a much stronger responsibility to prevent terrible suffering than to ensure that people live good lives. This is not an argument against longtermism, because it's possible - and I think likely - that most of the suffering I can prevent will be the suffering of future people. It is an argument against working on x-risk on the justification that it will benefit future people by ensuring that they have good lives.
This is a long post and I won’t blame you if you don’t want to read it all. If you are among that number, here is a summary.
Future people do matter. When somebody is born is an arbitrary factor, like race or sex, that shouldn't affect how much we value them
Potential people matter. Whenever we take actions which harm potential people, this cashes out as there being more bad things or fewer good things in the real world, with no additional real people harmed.
When there is greater value in the world and no one has been harmed, this is strictly better than when there is less value in the world and no one harmed, and we must value potential people to get this property
Making more happy people is good
The impossibility theorems of population axiology mean that avoiding the repugnant conclusion requires accepting some other unpleasant conclusion, such as that creating new sad people can be good.
Making more happy people does in fact seem good, for some of the same reasons that extending someone's life can be good - there's a person in the world who gets to experience more of what is valuable in life
I think whether or not other moral responsibilities trump our responsibility to reduce x-risk depends on whether or not we have a duty to prevent terrible suffering even if it means forfeiting a great number of people having wonderful lives
Allowing people to live good lives, as would happen as a result of x-risk reduction, seems like bringing about a positive event rather than avoiding a negative one - no existing person is harmed in worlds where we don't prevent x-risk
There are ways in which we could reduce terrible suffering, either now or in the future, and this is what we trade away when we choose to work on x-risk instead
Would you walk away from Omelas?
Future people matter.
This, I think, is the least controversial premise. It merely says that there's nothing in particular about being in the future which makes people matter less. I think there are two ways to illustrate the truth of this premise: first with a thought experiment and second with a more first-principles argument.
Following everyone else, I'll illustrate it with a thought experiment from Derek Parfit's masterpiece Reasons and Persons.
You’re walking through a forest and find a piece of broken glass on the floor. You know that, in 1000 years, a small child will be walking barefoot through the forest and cut their foot on the piece of broken glass. Do you have a responsibility to pick the piece of glass up?
Clearly yes, clearly you must pick the glass up. But this thought experiment also demonstrates the stronger point that not only should you pick up the glass, it doesn't seem to matter in the slightest that the child will be harmed in 1000 years. Normally we discount the value of things in the future, and for perfectly sensible reasons: perhaps we'll go extinct, or perhaps the resources we devote to some end 1000 years away will be put to some other use that we don't approve of. In the case of the child and the glass, however, none of these epistemic concerns have any force. By the magic of thought experiment I've removed the epistemic problems which normally plague us when we endeavour to help those in the future. With this it becomes clear that there's nothing intrinsically less valuable about people who live in the future merely because they happen to live in the future.
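For concreteness, here is the standard exponential discounting formula (a textbook formalisation of my own, not anything Parfit commits to):

$$PV = \frac{V}{(1+\rho)^{t}}$$

where $V$ is the value of the future harm, $t$ the number of years until it occurs, and $\rho$ the annual discount rate. With $t = 1000$, even a tiny $\rho = 0.01$ shrinks the child's injury to roughly $V/21{,}000$ of its present-day equivalent. The thought experiment is designed to show that, once the epistemic reasons for discounting are stripped away, any remaining pure time preference $\rho > 0$ is unjustified.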
A second version of this argument is more deductive. What sorts of things should lead us to have stronger moral duties to some people than to others? The force of the argument for accepting the moral claims of future people is that it relies on exactly the same intuition that makes us think racism, sexism and xenophobia are bad: we shouldn't let factors which are outside an individual's control, and which have no effect on their experience of the good and bad things in life, affect the strength of our duties towards them. Plausibly we have stronger duties to help those who are exceptionally close to us or to whom we owe a great debt. Think of the young man who must choose between caring for his ailing mother and enlisting to fight Germany in the Second World War. This is a genuinely hard ethical question, and we would be sympathetic to the young man if he stayed to help his mother.
This is an edge case, however - it is not permissible for most people, at least in rich countries, to discriminate on grounds that don't connect to people's experiences or to circumstances within their control. At its heart I think this comes from a core moral principle of equal consideration of interests. We must have very, very good reasons to violate this core moral tenet, and when we are permitted to violate it, it must be for some reason connected to what we consider valuable. If someone has many fewer experiences, for instance because they're comatose, it might be permissible to take their interests less seriously, because one of the things which makes things matter is people's phenomenal experiences. Part of what it means to treat people's interests equally might be that if someone makes a great sacrifice for you, you should privilege them in your moral calculations, because they've sacrificed some of their interests for you. Something is arbitrary when it doesn't relate to any of these core moral concepts - interests, qualia, equality, duty etc. - and this seems to apply to personal characteristics like race, sex and position in time.
You only need to accept this first premise for longtermism, dependent on some empirical questions, to have some force. It just means that the way you should help future people is by making their lives better, or by preventing their lives from being terrible.
Potential people matter
Consider someone who wants to become pregnant but is addicted to smoking. They can become pregnant now, in which case their baby will live to 60 - a good 60 years. Alternatively they can wait the 6 months until they've quit smoking and have the baby then; that baby will live to 80, also a good 80 years. It seems like the person has reason to wait the 6 months, and this seems uncontroversial. It implies, however, that we have to care about potential people.
If the individual becomes pregnant now, who is there to complain? The child can't complain - they live a good life and would prefer existing to not existing. This is the critical point: if the parent were to wait, a different child would be born. That different child, the one who would have been born had the parent waited 6 months, also can't complain, because they don't exist - they're merely a potential person. Therefore, thinking that the parent has reason to wait requires thinking that we should consider the interests of potential people in our ethical decision making.
The first-principles version of this asks: is being a potential person an arbitrary moral distinction, like sex, or a reasonable one, like having conscious experience? On its face it seems a pretty reasonable distinction. Potential people don't have conscious experience, they don't have cognitive states, so in what sense can anything be bad, or good, for them? Maybe we should just reject the conclusion of the previous thought experiment and say that the parent in fact has no good reason to wait until they've stopped smoking.
I think this is the core response: all of the ways in which the benefits to potential people cash out are in fulfilment of interests, or duties, or whatever you think is valuable. While potential people are merely potential it's true that they don't have any interests, but what is at stake is real people actually having good lives or avoiding bad experiences. I think this gets sharper if we consider a variant of the smoking-parent thought experiment. Assume that instead of merely dying 20 years earlier, the child would be subject to an extremely painful degenerative condition for the last 5 years of their life, such that while their life is worth living, it only barely is.
There is a potentially relevant moral difference here: now, by delaying pregnancy, the parent is avoiding terrible suffering that their child would undergo. But if you reject that we should value potential people, iron logic compels you to think that the parent has no reason to stop smoking. The child born to the smoker has a life that, all things considered, they'd rather have lived than not; they would not, in hindsight, want you to have deprived them of it. And we cannot consider the interests of the child who would be born once the smoker has quit, because they're a potential person. I've been struggling to put into words why I have this strong intuition that potential people matter and why I find these thought experiments so forceful. I think the answer is that what is at stake is in fact the quality of life of real people, and that matters tremendously: our normative ethics is wrong whenever it denies people good happy lives without securing, on net, any more good happy lives elsewhere.
The next question is whether anything changes if what the potential person gains is existence itself. In the thought experiments I've used so far, the question has been the quality of life of people in possible worlds. When we think about new people who could come into existence, there's no individual in the possible world who would have a better life as a result of our decision.
I think I basically reject this. Getting to live a good life is a wonderful thing; people getting to live good lives is in fact the only thing that matters. I reject the idea that there's any principled difference between this case and the case in which the child lives to 60 instead of 80. In the 60-versus-80 case, the child's preference not to die is violated, and the benefit they'd be getting is years of good life. This is structurally the same as allowing someone to exist and live a good life: the benefit to someone in the world that will actually come to pass (assuming you prevent the x-risk) is that they get to live many years of good life.
One response to this is that it is distinct from the smoker's-child case, because the thing that is good about living 20 more years is that one gets to advance one's existing interests and projects in those 20 years, whereas when one does not exist one does not miss out on the ability to advance projects, because one has no projects.
I think this is only plausible if one has quite a strange conception of the good life that doesn't include merely enjoyable experiences. More structurally, one's conception of the good life must exclude anything that is not a continuation of some previous project, which is fundamentally implausible. Maybe you think the thing that is worthwhile is taking the right action - in some ways the opposite of enjoyable experiences. Yet this also seems like the sort of thing which one could begin anew in 20 extra years of life: one could easily start some new moral quest at the age of 65.
Creating new happy lives is valuable
It is common to have the intuition of neutrality - that we should be indifferent about creating new happy people. If you hold this intuition then the longtermist case for x-risk reduction falls through.
It’s worth teasing out the distinction here between valuing potential people and valuing creating new happy lives.
Consider the following thought experiment. You have two buttons in front of you. If you press button A you create 100 new people who live for 100 years with utility 100. If you press button B you create 1000 new people who live for 100 years with utility 99. In both cases the people you're benefiting are potential people. Let's grant that you do care about potential people, but that you think we should be indifferent about creating new happy people. In this case you should prefer button A to button B, since the potential people in the world where you press A have better lives than those in the world where you press B.
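To make the arithmetic explicit (a minimal worked sum using the numbers above, reading each person's utility as their lifetime welfare):

$$W_A = 100 \times 100 = 10{,}000 \qquad W_B = 1000 \times 99 = 99{,}000$$

A totalist presses button B, since $W_B > W_A$. The view under discussion - caring about potential people's welfare but being neutral about creation - instead compares per-person welfare, $100 > 99$, and presses button A.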
This is (to me at least) a strange view, but I think it is logically consistent. You would think that the specific action of creating new people has no value: the parent has no reason to have a baby, but given that they are going to have one, they do have reason to ensure it has the best life possible. I think the strangeness comes in rejecting the claim that living a good life can be the same type of good thing as living another 20 years of life.
I hope this elucidates the relationship between believing that potential people matter and believing that it's good to create good new lives. Believing that potential people matter is necessary for believing that it's good to create new future lives - before you take the action to create them, the lives in question are merely potential - but it is not sufficient.
With that discussion out of the way we can now get onto population ethics questions - questions about what to do when deciding between states with different numbers of people. This next section will mostly be on population axiology - thinking about which states are better than others without thinking about what our duties are with respect to bringing about or avoiding different states.
Unfortunately we run into an impossibility theorem. Interested readers can consult the original paper (and I may write a blog post explaining it later); here I will give a more intuitive explanation of why you're forced to bite some unpleasant bullet whatever population axiology you accept. The most famous such bullet is the repugnant conclusion: if your axiology simply sums the total welfare in a population, then a very large number of lives that are good, but only just, can outweigh any number of people living wonderful lives. This is often taken as a knockdown argument against such totalist axiologies, but I think it's in fact the least unpleasant bullet any population axiology is forced to bite.
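To see the repugnant conclusion drop out of totalism, here is a worked instance (the numbers are illustrative, mine rather than the literature's). Under a totalist axiology the value of a population is $W = \sum_i w_i$, the sum of each person's welfare $w_i$. Compare:

$$W_A = 10^{6} \times 100 = 10^{8} \qquad W_Z = 10^{11} \times 1 = 10^{11}$$

World A has a million people living wonderful lives (welfare 100); world Z has a hundred billion people with lives barely worth living (welfare 1). Since $W_Z > W_A$, totalism ranks Z above A - and Z can always be made to win by adding enough barely-good lives.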
There are various things you might want from a population axiology. You might want to say that we should be indifferent towards creating new happy people but not towards creating new sad people.
The bullet you bite here is that, had you been standing in front of a button 10 billion years ago which, if pressed, would create the Earth of today except with none of the bad and all of the good, you'd be indifferent between pressing it and not. It forces you to be indifferent between a universe filled with an uncountable number of people living wonderful lives and a universe that is empty for all eternity. I have the intuition that this is bad.
A more principled objection to indifference about creating new happy lives is the same one which drives my support for the interests of potential people. When we can have more people experiencing all of the things that make life worth living, at no cost to anyone else, it seems like any moral system should count this as good, because there is more of what is good in the world. Any moral system with a conception of the Good baked into it should prefer more of the Good to less, all else equal: the Good is what is valuable, and it seems part of the definition of value that one should want more of it.
Another tack you could try is to say that we should only create people if their lives are sufficiently good - it's not enough that their lives are barely good. This allows you to avoid the repugnant conclusion, but I find the view implausible.
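One standard way of formalising this is the critical-level view (my gloss, with $c$ as the threshold):

$$V = \sum_i (w_i - c), \qquad c > 0$$

Only lives with welfare above the critical level $c$ add value, so the repugnant conclusion's vast population of barely-good lives now scores negatively. Note the cost: lives with $0 < w_i < c$ - worth living, but below the threshold - count as positively bad to create, which is one of the bullets this view bites.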
Think about what, in practice, your critical value corresponds to as a life lived. Maybe it's the life of a low-income person in a rich country today. Now consider someone with a life very, very slightly below that in quality - they get one additional pinprick over their lifetime. It seems implausible to me that we shouldn't bring them into existence if we should bring into existence someone with welfare exactly at the critical level.
More deductively, it seems very strange to set the critical level anywhere above the 0 point - the point at which someone looking back on their life would be indifferent about having lived it. By definition, we personally would want to have existed if our lives were above the 0 level, and it seems strange not to apply our own standards to others. If you think that creating new happy people is valuable at all, it seems very strange to limit this to people living very good lives. What is valuable about bringing new happy people into existence is that they have lives which they're pleased to live, and by definition this applies to anyone above the indifference point.
The strength of the duty to create more happy lives can be commensurate with the duty to prevent suffering and death
This is the premise which I think involves biting the most unpleasant bullets.
If we're normie utilitarians and we accept premises 1-3, then preventing x-risk comes naturally: it's a plausible candidate for maximising the good, and so we should take very seriously the possibility that it's the most important thing we can be doing. In that case, job done, pack up your philosophical tools, it's time to enter the empirical world. But often people aren't normie utilitarians.
There are two basic ways to reject normie utilitarianism that bite for x-risk: you can be an axiological anti-aggregationist, or you can be a deontic anti-aggregationist, at least with respect to bad things happening to people.
To unpack that jargon, there are two things you can reject. Firstly, you can reject that we can aggregate value: you may think that no number of papercuts can outweigh a single death. Secondly, you might accept that we can aggregate value - the badness of some number of papercuts can outweigh the badness of a death - but hold that we nonetheless have a duty to prevent the death. If you hold either of these anti-aggregationist views then this could pose a problem for x-risk reduction.
It seems like what you're doing when you bring new people into existence is doing a good thing for someone - something positive is happening. Unlike for people who already exist, there's no preference against death being violated; the sense in which existence benefits someone is like the sense in which adding years to someone's life benefits them - a pure gain. Another way to see this, which doesn't rely on one's specific views about why death is bad, is that it seems different from preventing terrible suffering. If you wrong potential people by not causing them to exist, it seems clear that you wrong them by withholding a benefit rather than by inflicting a harm. To bring it back to real people living real lives: when you don't bring someone into existence, there's no real person experiencing something terrible. In this way preventing x-risk seems like bringing about a good thing rather than preventing a bad thing.
If you hold one of these views, or some other view that posits a fundamental asymmetry between the Good and the Bad, then it seems like there are other interventions which prevent terrible things much more effectively than x-risk reduction does. According to these anti-aggregationist and asymmetric views, you should therefore do one of those things instead of working to prevent x-risk.
These anti-aggregationist and suffering-focused views have some attractive properties. They allow you to prevent someone from being tortured to death rather than save a million people from papercuts, which seems good. They do, however, have their own bullets to bite. They commit you to helping 1 person being tortured for 100 years rather than preventing 10 people from being tortured, slightly less severely, for 100 years. This seems like a very unpleasant bullet to bite.
One principled reason not to bite this bullet is that we so often make these kinds of tradeoffs in our own lives, trading off brief but substantial amounts of pain for many instances of lesser pleasure. For instance, I enjoy playing football, and when I play I reliably pick up injuries that are very painful for a short period of time. This seems to me a pretty knockdown argument against strong versions of axiological anti-aggregationism and suffering-focused axiologies: clearly I aggregate the Good and the Bad in my daily life and am happy with that choice, so why should I apply a different principle to the lives of others? This objection breaks down, however, when we consider much more intense pain of a kind we don't encounter in our daily lives; perhaps we really wouldn't trade that off against any number of minor pleasures. But I suspect that there is pain a little less bad, or a little shorter-lived, such that we would trade off many instances of this lesser pain against a single instance of the greater or longer pain.
This only deals with the axiological objections to aggregation; the deontic case against aggregation is separate. It accepts that we might not be doing the thing that maximises the Good - deontological theories often accept this. These deontic theories argue that we are compelled, by the joint considerations of the separateness of persons and the equality of interests, to prevent the greatest harm even at the cost of many lesser harms or, less ambitiously perhaps, at the cost of many greater benefits to others.
I'm not especially sympathetic to the deontic anti-aggregationism that would prevent us from aggregating harms. It seems clearly true to me that I should save two people being tortured for 10 minutes over one person being tortured for 10 minutes and 1 second. It's much less clear to me, however, that any number of good lives can justify allowing one person to be tortured for 100 years. In other words, I think I am morally required to walk away from Omelas. This, then, is the core moral question that working on x-risk turns on: is there some number of good things going to disparate people that can outweigh great but finite harm to a much smaller number of individuals?
One might want to try some kind of semi-aggregationist view: maybe 100 years of torture is worse than a million million people getting papercuts, but 1 person being tortured for a year is less bad than 10 people being tortured for slightly less than a year. Unfortunately these semi-aggregationist views leave you open to being money-pumped and force you to reject Pareto improvements.
I've stolen both objections from this paper, which argues for semi-aggregationist views. Consider the following case. First you choose between saving 1 man from a year of torture and saving a million million people from severe migraines; you choose to save the man from torture. Then you choose between a million million people having bad migraines and 10,000 people being tortured for 10 minutes; you choose to save the million million. Finally you choose between saving the 10,000 being tortured for 10 minutes and the one man being tortured for a year, and you save the 10,000. These preferences violate transitivity - the principle that if A is better than B and B is better than C, then A must also be better than C.
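To lay the cycle out explicitly (my notation, not the paper's): let $A$ = preventing one person from a year of torture, $B$ = preventing a million million severe migraines, and $C$ = preventing 10,000 ten-minute tortures. The three choices above give

$$A \succ B, \qquad B \succ C, \qquad C \succ A$$

a strict preference cycle. This is exactly what transitivity forbids, and it is what makes the view money-pumpable: someone could charge you a small fee at each step of the cycle and return you to exactly where you started.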
Rejecting semi-aggregationist views makes the bullet one bites by accepting deontic anti-aggregationism much more unpleasant: it means one must refuse one man suffering a pinprick in exchange for a million million people living wonderful lives. You can make this more palatable if you think that deontic anti-aggregationism only applies beyond some threshold of suffering - one is only required to prevent others' suffering, regardless of the benefits, for sufficiently bad instances of suffering.
On the other hand, it allows you to reject the trade of a thousand people finding a joke mildly funny in exchange for one person being tortured. It's just very unclear to me which of these bullets it's preferable to bite.
Should I work on x-risk?
I think this is still an open question, but I think I've reached some clarity. The question I need to answer is whether I have a responsibility to prevent great harm to a small number of people over ensuring that a much larger number of people experience some great benefit.
I am unsure whether this is just a rephrasing of your crux, but the more intuitive framing for me is simply the question: is the expected value of the future positive?
If severe suffering is lexically prior to happy lives, then even if the future is big and good for almost everyone, it has negative value. Whereas if we are 'normie utilitarians' then the future is (probably) positive.
Is this the same as your conclusion?