
Regarding the claim that working on bio is less valuable than working on AI, my understanding of your argument is as follows:

1) The expected value of the future is of uncertain sign, so pressing a magic button that reduces x-risk and changes nothing else is of dubious value.

2) Eliminating biorisk is akin to pressing that button: it would have a few side-effects, such as generally improving health, but the big-picture future of the universe would look much as it does now.

3) Eliminating AI risk, however, would not only increase the expected size of the future but also raise our confidence that it has positive sign, since aligned AI would make ~everything go a lot better in expectation than it otherwise would.

Conclusion: an equal-sized reduction in x-risk from bio is a lot less valuable than one from AI.

I'm still thinking about what I make of this argument, but want to check I have understood it properly first. It seems big if true.

An initial thought is that this seems to be a general argument against working on x-risk and in favour of trajectory change; it just so happens that trajectory change via making aligned AGI has a side-effect of reducing x-risk.

Also, the paragraph about aliens seems to have a mistake in it. You say that considering aliens both:

* pushes the expected value of x-risk reduction towards 0, and

* increases the variance of the value of the future

This isn't quite a contradiction, but it is close: squishing a distribution towards 0 is precisely what makes its variance decrease.
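To make that concrete (a minimal sketch, assuming "pushing towards 0" means scaling the value of the future, a random variable $X$, by some constant $0 < c < 1$):

$$\mathbb{E}[cX] = c\,\mathbb{E}[X] \quad\text{and}\quad \operatorname{Var}(cX) = c^{2}\operatorname{Var}(X) < \operatorname{Var}(X),$$

so under that reading the expectation moves towards 0 and the variance strictly decreases, which is why the two claims seem to pull in opposite directions.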

Am I missing something here?


> There’s a famous graph

Could someone pinpoint where this is from? It might be famous in global health circles, but not everyone is in that club (yet). Thanks!
