Summary
The fundamental tension is that we don’t know whether the threat Putin made was credible
If we knew the threat was credible we would always give in; if we knew it wasn’t we would never give in
As we don’t know whether the threat is credible, the equilibrium strategy is (probably) to mix with some probability between giving in to threats and not
If we put positive probability on the threat being credible, the only way to ensure that there’s zero probability of nuclear war in equilibrium is to always give in to nuclear threats
How bad you consider nuclear war to be should have no effect on how often you give in to nuclear blackmail, assuming that the equilibrium strategy is to mix
Recently there was a lot of Twitter drama about whether the US should have pushed Ukraine to accept a peace deal after Russia threatened to use tactical nuclear weapons. On the one hand, nuclear war would be terrible. On the other hand, yielding to a threat of the use of tactical nukes might give any old tyrant the ability to blackmail NATO with the threat of nuclear weapons as well as incentivising countries to develop nuclear weapons.
The fundamental problem here is that no one knew if the Russian threat to use nuclear weapons was credible. For a threat to be credible, it has to be worth the threatener carrying it out even if the threatened party doesn’t yield to the threat.
The key difference between these two games is that in the first one it makes sense for the aggressor to carry out the threat if the defender doesn’t give in, whereas in the second one it doesn’t. Therefore in the first game the equilibrium is for the first player to give in immediately, whereas in the second the equilibrium is for the first player not to give in to the threat, and then for the threatener not to carry it out.
In the Russia-Ukraine case, if Russia really would have used nuclear weapons if Ukraine didn’t accept a peace deal then the threat was credible. Maybe Putin really did care about winning the war in Ukraine enough to risk a nuclear war. However, as we’ve seen, the threat wasn’t credible. Putin didn’t use nukes because he values his life more than winning the war in Ukraine.
Fundamentally though, the problem we face is that we didn’t know whether Putin was the sort of person who valued his life (or rather, disvalued the probability of death in a nuclear war) more or less than he valued winning the war in Ukraine. This means we’re playing a game with uncertainty baked in, so we need a different method for finding the strategy we should employ.
This more complicated diagram shows the game we’re actually playing against Putin. If Putin is a maniac who’s willing to gamble on nuclear war if it means winning Ukraine, then his threat to use nuclear weapons is credible. If we knew we were facing this sort of opponent, we should back down. But if we always backed down when threatened with nukes, then non-crazy Putins would have an incentive to fake being crazy by threatening us with nukes, thereby faking that they’re the sort of person willing to risk nuclear war.
The equilibria in this game depend on the exact value of p. In this specific game, with these specific payoffs, the set of equilibria is determined by whether p is greater or less than ⅔. If p is greater than ⅔, the only equilibrium is the one in which Putin threatens to nuke whether he’s a maniac or sane and NATO/Ukraine backs down. The intuition behind this is that there’s a high enough probability that they’re facing a Putin who would be willing to use nuclear weapons if Ukraine didn’t back down. Off the back of this, the sane Putin can fake being crazy and get away with it because the West knows that it’s probably facing a maniac.
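The pooling logic above can be sketched numerically. The payoffs below are illustrative assumptions of mine, not the article’s actual diagram — they’re chosen so that the threshold comes out at ⅔: backing down costs the West 2, nuclear war costs 3, and calling a bluff returns the status quo of 0.

```python
# Illustrative payoffs for the West (assumed for this sketch, not taken
# from the article's game tree); chosen so the threshold is p = 2/3.
BACK_DOWN = -2.0      # concede to the threat
WAR = -3.0            # call a credible threat -> nuclear war
CALLED_BLUFF = 0.0    # call a bluff -> status quo

def west_backs_down(p: float) -> bool:
    """In a pooling equilibrium both types threaten, so the posterior
    probability that the threatener is a maniac equals the prior p.
    The West backs down iff conceding beats the expected value of
    calling the threat."""
    expected_call = p * WAR + (1 - p) * CALLED_BLUFF
    return BACK_DOWN >= expected_call

print(west_backs_down(0.7))   # p above 2/3 -> True, back down
print(west_backs_down(0.5))   # p below 2/3 -> False, call the threat
```

With these assumed numbers, any prior above ⅔ makes conceding the West’s best response, which is exactly what lets the sane type pool with the maniac.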
On the other hand, if p is less than ⅔, which equilibria we get is determined by whether ε is greater than 0. If ε is greater than 0 - perhaps because, once nuclear war is threatened, there’s some probability that it spirals into full-scale nuclear war in a way beyond either actor’s control - then the equilibrium we get is semi-separating. In this case, the maniac type always threatens nuclear war, since it’s better for them regardless of the West’s response - their threat is credible. The sane type, on the other hand, mixes between threatening nuclear war and not. It’s obvious why it’s not an equilibrium for the sane type never to threaten while the maniac does: the sane type could simply threaten nuclear weapon use and get all of the rewards while taking on none of the risk, because the West would think it’s facing a maniac.
On the other hand, always threatening nukes doesn’t work either: in that case the West’s expected-value calculation says it shouldn’t back down, so the sane type ends up worse off than had they never threatened at all. The intuition here is that the West thinks Putin is probably sane, so it feels confident calling his bluff. Therefore the only equilibrium is one in which the sane type mixes between threatening nuclear war and not, while the maniac always threatens. In this equilibrium, the West refuses to back down with probability 15/16 and backs down with probability 1/16 (taking the payoff of a called bluff to be −1.)
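The West’s 1/16 mix falls out of the sane type’s indifference condition. The payoff of −1 for a called bluff is from the text above; the payoff of 15 for a successful threat is an assumption of mine chosen to reproduce the 1/16 figure.

```python
from fractions import Fraction

# Sane type's payoffs. The -1 for a called bluff matches the article;
# the win payoff of 15 is assumed so the mix comes out at 1/16.
WIN = Fraction(15)        # threaten, West backs down
CALLED = Fraction(-1)     # threaten, West calls the bluff
NO_THREAT = Fraction(0)   # never threaten -> status quo

def west_back_down_prob() -> Fraction:
    """Probability s with which the West backs down so that the sane
    type is indifferent between threatening and not:
    s*WIN + (1-s)*CALLED = NO_THREAT."""
    return (NO_THREAT - CALLED) / (WIN - CALLED)

s = west_back_down_prob()
print(s)        # 1/16: West backs down
print(1 - s)    # 15/16: West calls the bluff
```

Note that the West’s mixing probability is set entirely by the sane aggressor’s payoffs - the point the article returns to below.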
If, however, ε is equal to 0 - i.e. threatening nuclear war and then backing down has no cost in and of itself - the only equilibrium is one in which both types threaten nuclear war.
There’s no equilibrium in which neither type threatens nuclear war. Even if the West were 100% certain that anyone deviating from the no-threatening equilibrium was the sane type, and so never backed down, it would still be worth it for the maniac type to deviate and threaten nuclear war, because their threat is credible. They’re willing to carry out their threat even if the West doesn’t back down, so while it would be better for them if their opponent got the message that they’re willing to use nuclear weapons and backed down, it’s still better for them to carry out their threat than not.
What does this mean for real life?
In the example above I made the specific numbers up, but the parameter values matter for which equilibrium we get - for instance, whether both the sane and maniac types threaten. In the real world, it seems like the probability that you’re facing the type who would use nuclear weapons if their bluff was called is low but not 0. Soviet troops had orders to use tactical nuclear weapons against Guantanamo if the US had invaded during the Cuban Missile Crisis, for instance. The US war plan was to use nuclear weapons if the Soviet Union invaded Western Europe. Given the rigidity of those sorts of war plans, it seems foolish to think there was a low probability of nuclear weapons being used had a war started. In 1953, with the Korean War going poorly, Eisenhower planned to use tactical nuclear weapons.
On the other hand, other than at the end of the Second World War, nuclear weapons have never been used. The relevant reference class for the probability of Russia using nuclear weapons in Ukraine seems to be something like a superpower losing a war, which happened many times in the 20th century - Korea, Vietnam, Afghanistan, Afghanistan again, and Iraq are the most prominent examples. It should be noted that the last three were guerrilla wars, unlike Ukraine, so tactical nuclear weapons were much less clearly useful.
It seems reasonable, then, to put the probability that Putin really was willing to use nuclear weapons at somewhere between 1/1000 and 1/10. The range the forecasting site Metaculus put on a nuclear detonation in Ukraine was between 1/100 and 1/10.
The other really key parameter is how bad a nuclear war would be. If player 2 views nuclear war as sufficiently bad, then it’s possible to get a unique equilibrium in which player 2 gives in whenever threatened and player 1 makes nuclear threats regardless of type, even when the probability that player 1’s nuclear threat is credible is very low. For instance, if an offensive nuclear detonation is 1000 times worse for player 2 than giving in to a nuclear threat, then the probability of facing a maniac whose nuclear threats are credible need only be 1/1000 to get an equilibrium where player 2 - the West in our example - always backs down (assuming that the status quo’s payoff is normalised to 0.)
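That 1/1000 figure is just the ratio of the two costs. A minimal sketch, with the status quo normalised to 0 and both costs negative as in the example:

```python
def maniac_prob_threshold(give_in_cost: float, war_cost: float) -> float:
    """Minimum probability q of facing a credible (maniac) type at which
    player 2 prefers always backing down: q * war_cost <= give_in_cost,
    i.e. q >= give_in_cost / war_cost (both costs negative, status quo 0)."""
    return give_in_cost / war_cost

# Nuclear war 1000x worse than giving in -> threshold of 1/1000.
print(maniac_prob_threshold(-1.0, -1000.0))   # 0.001
```

The worse the war outcome relative to conceding, the smaller the credible-threat probability needed to make always conceding the best response.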
Ethical theories that give substantial value to creating new happy lives (and expect the future to be good) consider full-scale nuclear war much, much worse than most people do, because it would mean forfeiting so many happy future lives. This pushes towards giving in to threats as a pure strategy.
Somewhat counter-intuitively, this has no effect on how often we should mix between giving in to threats and not giving in when we’re in a semi-separating equilibrium, since the mixing probability is fully determined by the payoffs of the aggressor type. When the equilibrium strategy is to mix, the mix is pinned down by whatever probability is needed to make the other player indifferent between some set of their strategies. If the other player isn’t indifferent between their strategies, then they’ll always play their best strategy, in which case the first player should just play their best response to that strategy rather than mixing.
For instance, imagine a penalty shootout game. The goalkeeper can dive left or right, and the penalty taker can shoot left or right. If the keeper dives the same way as the shot, they save the penalty; if they dive the wrong way, they don’t. Suppose the goalkeeper dives left with probability ⅓ and right with probability ⅔. Then the striker will always shoot left, in which case the goalkeeper should always dive left - so this isn’t an equilibrium. However, if the goalkeeper dives each way with equal probability, then the penalty taker has no incentive to shoot one way rather than the other.
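The shootout logic can be checked directly - the striker’s scoring probability for each direction, given the keeper’s mix:

```python
def striker_score_probs(p_keeper_left: float):
    """Probability the striker scores for each shot direction, given
    that the keeper saves only when diving the same way as the shot."""
    p_score_left = 1 - p_keeper_left    # scores left iff keeper dives right
    p_score_right = p_keeper_left       # scores right iff keeper dives left
    return p_score_left, p_score_right

# Keeper mixes 1/3 left, 2/3 right: the striker strictly prefers left
# (scores roughly 2/3 of the time), so this can't be an equilibrium.
print(striker_score_probs(1 / 3))

# Keeper mixes 50-50: the striker is indifferent, so mixing is sustainable.
print(striker_score_probs(0.5))   # (0.5, 0.5)
```

The keeper’s equilibrium mix is whatever makes the *striker* indifferent - the same structure as the West’s 1/16 mix being set by the sane aggressor’s payoffs.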
So what should we do?
There’s no particularly nice conclusion to this piece. We’re faced with a pretty sharp tradeoff: either we give in whenever we’re faced with a nuclear threat, or we give in sometimes but rarely and accept that there’s some chance we get a nuclear war. Unfortunately, it’s not clear which of these we should be doing. In the real world, it seems both that we’re very unlikely to be facing an actor willing to use nuclear weapons, and that nuclear war would be extremely bad. If one of these two numbers were different - for instance, if nuclear war were 10,000 times worse than we currently think it is - our choice would be clear (always give in to threats, in that case.) What this model has shed light on, though, is the counter-intuitive conclusion that if you’re going to mix between giving in to nuclear blackmail and not, the mixing probability doesn’t depend at all on how much you disvalue nuclear war.