What about straightforward negative externalities and arms races? These can imply that it would be better to coordinate to restrict a technology even if it would be individually advantageous to use it freely.
(They don't seem very directly mentioned by your article.)
As far as negative externalities, consider a defense-dominant technology which also emits lead fumes. You'd potentially prefer to coordinate so that no one uses this technology.
As far as arms races, imagine a new technology which has military value, but no direct economic usefulness. Further suppose the technology only interacts with itself and undefended targets (e.g. it can defend against other applications of the technology, it can attack undefended targets, and it can (with sufficient resources) take down defenses). In this case the only use of the technology for a defender is defending against an attacker, so we'd prefer to coordinate to avoid proliferation.
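The coordination problem this describes has the structure of a prisoner's dilemma. Here's a toy sketch with purely illustrative payoffs (the numbers and the `payoff` function are my own assumptions, not from any formal model): building the technology costs resources, confers an edge only against a side that hasn't built, and leaves an unbuilt side exposed.

```python
# Toy payoff matrix for two states deciding whether to build a
# military-only technology of the kind described above.
# All payoff values are illustrative assumptions:
#   building costs resources (-1);
#   building against an unbuilt opponent gives an edge (+2);
#   abstaining against a builder leaves you exposed (-2).

BUILD, ABSTAIN = "build", "abstain"

def payoff(mine: str, theirs: str) -> int:
    cost = -1 if mine == BUILD else 0
    edge = 2 if (mine == BUILD and theirs == ABSTAIN) else 0
    exposure = -2 if (mine == ABSTAIN and theirs == BUILD) else 0
    return cost + edge + exposure

# Each side's best response is to build no matter what the other
# does, yet mutual abstention beats mutual building -- hence the
# value of coordinating to avoid proliferation.
assert payoff(BUILD, BUILD) > payoff(ABSTAIN, BUILD)      # -1 > -2
assert payoff(BUILD, ABSTAIN) > payoff(ABSTAIN, ABSTAIN)  #  1 >  0
assert payoff(ABSTAIN, ABSTAIN) > payoff(BUILD, BUILD)    #  0 > -1
```

With these payoffs, "build" is dominant for each side individually, but both sides do better under a mutual-abstention agreement than under mutual building.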
In the case of AI, my main concern is that the potential for rogue/power-seeking AI poses a negative externality from the perspective of humanity.
Externalities are definitely a distinct policy consideration not addressed here. But usually we think the best way to deal with externalities is through Pigouvian taxes.
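The Pigouvian logic can be sketched in a few lines (all numbers and the `adopts` function are illustrative assumptions): a user adopts the technology whenever private benefit exceeds private cost, so a tax set equal to the external harm makes them internalize it.

```python
# Minimal Pigouvian-tax sketch with made-up numbers:
# a user adopts the lead-fume technology when their private
# benefit exceeds their private cost plus any tax.

def adopts(private_benefit: float, private_cost: float, tax: float = 0.0) -> bool:
    return private_benefit > private_cost + tax

external_harm = 5.0          # assumed harm from fumes per use
benefit, private_cost = 4.0, 1.0

# Untaxed, the use happens even though it is socially harmful
# (benefit 4 < private cost 1 + external harm 5):
assert adopts(benefit, private_cost)
# A tax equal to the external harm eliminates the harmful use:
assert not adopts(benefit, private_cost, tax=external_harm)
```

The point is that a correctly sized tax screens out exactly the uses whose total (private plus external) cost exceeds their benefit, without banning the technology outright.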
For the misaligned AGI case, I tend to think that even beginning to apply the offense–defense balance of the AGI itself is fundamentally mistaken, because it assumes (crudely) that defenders (e.g., humans) can wield a copy of the AGI to defend against another copy of the AGI. But of course the whole worry is that the AGI will not "allow" humans to use it to defend against the other copy of itself. It's just not applicable (though the related question of whether/how humans could defend against an AGI is).
For arms races, a classic prediction of the offense–defense balance is that offense-dominant technologies encourage arms racing, but defense-dominant ones do not.