What the breathless AI 'Extinction' warning gets wrong
Calls for regulation from industry leaders, like the one an AI company chief executive made before US senators recently, are an old-school tactic that has nothing to do with the greater good

Sometimes publicity stunts backfire. A case in point may be the one-sentence warning issued this week by the Center for AI Safety: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
The list of signatories is impressive, including many executives and software engineers at leading AI companies. Yet for all the apparent sincerity on display, this statement is more likely to generate backlash than assent.
The first problem is the word "extinction." Whether or not you think the current trajectory of AI systems poses an extinction risk — and I do not — the more you use that term, the more likely the matter will fall under the purview of the national security establishment. And its priority is to defeat foreign adversaries. The bureaucrats who staff the more mundane regulatory agencies will be shoved aside.
US national security experts are properly skeptical about the idea of an international agreement to limit AI systems, as they doubt anyone would be monitoring and sanctioning China, Russia or other states (even the UAE has a potentially powerful system on the way). So the more people say that AI systems can be super-powerful, the more national security advisers will insist that US technology must always be superior. I happen to agree about the need for US dominance — but realize that this is an argument for accelerating AI research, not slowing it down.
A second problem with the statement is that many of the signers are important players in AI development. So a common-sense objection might go like this: If you're so concerned, why don't you just stop working on AI? There is a perfectly legitimate response — you want to stay involved because you fear that if you leave, someone less responsible will be put in charge — but I am under no illusions that this argument would carry the day. As they say in politics, if you are explaining, you are losing.
The geographic distribution of the signatories will also create problems. Many of the best-known signers are on the West Coast, especially in California and Seattle. There is a cluster from Toronto and a few from the UK, but the US Midwest and South are hardly represented. If I were a chief of staff to a member of Congress or a political lobbyist, I would be wondering: Where are the community bankers? Where are the owners of auto dealerships? Why are so few states and House districts represented on the list?
I do not myself see the AI safety movement as a left-wing political project. But if all you knew about it was this document, you might conclude that it is. In short, the petition may be doing more to signal the weakness and narrowness of the movement than its strength.
Then there is the brevity of the statement itself. Perhaps this is a bold move, and it will help stimulate debate and generate ideas. But an alternative view is that the group could not agree on anything more. There is no accompanying white paper or set of policy recommendations. I praise the signers' humility, but not their political instincts.
Again, consider the public as well as the political perception. If some well-known and very smart players in a given area think the world might end but make no recommendations about what to do about it, might you decide just to ignore them altogether? ("Get back to me when you've figured it out!") What if a group of scientists announced that a large asteroid was headed toward Earth? I suspect they would have some very specific recommendations, on such issues as how to deflect the asteroid and prepare defenses.
Finally, the petition errs by comparing AI risk to "other societal-scale risks such as pandemics and nuclear war." Even if you agree with this point, it is now commonly recognized that — even after well over 1 million deaths — the US is not remotely prepared for the next pandemic. That is a huge failure, but still: Why lump your cause in with a political stinker? As for nuclear war, public fears are rising but it is hardly a major political issue. I am not denying that it is important, just questioning whether it will have the desired galvanizing effect.
Given all this, allow me to make a prediction: Existential risk from AI will be packaged and commodified, like so many other ideas under capitalism, such as existential risk from climate change. I expect to soon start seeing slogans about the need for AI safety on T-shirts. Perhaps some of them will have been created by the AIs themselves.
Tyler Cowen is an American economist, columnist and blogger. He is a professor at George Mason University, where he holds the Holbert L. Harris Chair of Economics.
Disclaimer: This article first appeared on Bloomberg, and is published by a special syndication arrangement.