The AI Race
Everyone is talking about the AI Race. Policy speeches invoke it, newspaper headlines frame it, tech CEOs use it to justify speed over caution. The phrase has become so deeply embedded in how we discuss artificial intelligence that it no longer registers as a metaphor at all. It sounds like a description of something that is simply happening.
But the “AI Race” is not a description. It is a framing, and a borrowed one at that. The phrase is a future imaginary inherited from the Cold War, recycled without examining whether the conditions that made it plausible still hold. Sohail Inayatullah calls this pattern a “used future”: a narrative from a previous era applied to a new situation as if it were self-evident.1 The Space Race had a finish line. The Moon was real, and reaching it first was a definable achievement. The Arms Race had quantifiable metrics: you could count warheads. Both were contests between two superpowers with clear objectives. Critical Futures Studies would ask: what assumptions travel with the borrowed framing? What gets imported along with the metaphor?
What, exactly, is the finish line of the AI Race?
A race has three defining characteristics. First, a clearly defined goal. Second, a moment when it is over. Third, a clear winner. These are not incidental features. They are what makes a race a race.
The hidden bet
Applied to AI development, all three characteristics fail. There is no defined goal: Artificial General Intelligence (AGI) remains a moving target with no agreed-upon definition, and current AI progress is neither linear nor aimed at a single endpoint. There is no moment when it is over: unlike a Moon landing, AI development does not have a finish line after which everyone goes home. And there is no clear winner: capabilities are distributed across companies, countries, and research communities in ways that do not map onto a podium.
So the metaphor does not fit. But the more interesting question is: under what condition would it fit?
There is exactly one scenario in which the “AI Race” makes structural sense: if you believe in the Intelligence Explosion, the hypothesis that the first entity to build an AGI will achieve recursive self-improvement, rapidly outpace all competitors, and gain a decisive, permanent advantage. In that scenario, and only in that scenario, there is a finish line (AGI), a moment when it is over (the Singularity), and a winner (whoever gets there first). The metaphor works perfectly. It just requires you to accept one of the most speculative propositions in technology forecasting as a given.2
If AI development is gradual, distributed, and shaped by regulation, economics, and cultural choices rather than by a single breakthrough, then there is no race. There is a slow, contested, multi-actor process of technological change. Not very dramatic. But much closer to how technology actually develops.
This is the hidden bet embedded in the “AI Race” framing. Anyone who uses the phrase, consciously or not, implicitly accepts the Singularity thesis. The metaphor smuggles in a highly contested ideology without naming it. It collapses a speculative hypothesis into common sense. The lineage is traceable: Silicon Valley Rationalism to Singularity theory to Intelligence Explosion to “therefore it is a race.” Each step sounds more neutral than the last, until the endpoint feels like mere observation.
This is what makes it a textbook case of condensation: the narrative has congealed so thoroughly that it is no longer experienced as a narrative choice. It feels like a description of reality. And that is precisely when No future is neutral hits hardest: the moment a future imaginary becomes invisible as a future imaginary.
What the metaphor enforces
Once you accept the race framing, certain patterns of thought follow automatically. George Lakoff and Mark Johnson showed that metaphors do not just describe reality. They structure how we think about it.3 Accept the race, and four consequences follow.
Zero-sum logic. A race has a winner and losers. Cooperation becomes irrational. Any advantage shared is an advantage lost.
Speed over safety. Races reward the fastest, not the most careful. Within the race frame, regulation becomes a handicap. Slowing down is losing.
Bilateral framing. The race is cast as a duel between two runners: the US versus China. The rest of the world disappears. So do civil society actors, smaller nations, and the global research community.
Spectator roles. Races have runners and audiences. Citizens become spectators of a geopolitical contest they did not choose and cannot participate in. The only available role is watching and cheering.
These are not optional interpretations. They are structural consequences of the metaphor itself. Accept the framing, and the thought patterns come with it.
Who benefits
The race framing is not politically neutral, and the beneficiaries are predictable from the structural effects above.
Tech companies gain the most directly. If we are in a race, speed is a virtue and regulation is a competitive disadvantage. Safety concerns become obstacles. Every call for caution becomes, in the logic of the metaphor, an act of self-sabotage.
Deregulation advocates gain the most politically. The race frame turns China into a permanent boogeyman. Every regulation becomes a handicap in a contest where the other side plays by no rules. Tiffany C. Li has documented how this framing drives a technological determinism that forecloses regulatory options that are, in practice, available.4
Geopolitical hardliners gain the most strategically. The race metaphor militarizes a civilian technology. Vladimir Putin’s 2017 statement that “whoever leads in AI will rule the world” is often cited as a warning. It is more accurately read as the moment the metaphor became a self-fulfilling prophecy. Once heads of state frame AI as a geopolitical weapon, it becomes one.5
There is a structural parallel here to what I explore in Exit strategies as collective surrender. Race thinking and exit thinking share the same presupposition: that AGI is a winner-takes-all event. One responds by racing. The other responds by surrendering. Neither questions the premise. Both assume the Intelligence Explosion is real and that the only question is how to position yourself relative to it.
The framing at work
The concrete evidence for how the race framing overrides even stated commitments arrived in March 2026, when Anthropic revised its Responsible Scaling Policy. Anthropic had built its brand on safety leadership. The previous version of the policy included a commitment to pause development if safety measures did not keep pace. The revised version removed that commitment. Chief Science Officer Jared Kaplan explained: “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.” The policy itself states that pausing while others advance “could result in a world that is less safe.”6 A safety commitment, abandoned using race logic. The framing did exactly what the framing does. And Anthropic is the company that was supposed to be different. If even the safety-first lab cannot hold the line against the race metaphor, the metaphor is doing serious structural work.
The invisible action space
Strip away the race framing, and a different picture appears. Not a utopia, not a counter-narrative. Just options that have been there all along but are made invisible by the metaphor.
Regulation becomes design, not drag. Without the race, the “handicap” argument collapses. Regulation is what it always was: a set of choices about what kind of technology we want to live with. The EU’s AI Act is not a concession to China. It is a policy decision.
Cooperation becomes rational. Without zero-sum, international collaboration on AI safety and governance is the obvious move, not an act of treason. Li calls this “Regulatory Collaboration”: nations working together on standards rather than racing each other to the bottom.4
Pace becomes negotiable. Without a race, deploying unsafe systems “because a competitor might” stops making sense. Speed is no longer a moral imperative.
Citizens become participants. Without the spectator frame, democratic engagement with AI policy is not naive. It is the normal way societies make decisions about technologies that affect everyone.
This connects to what Reactance and future narratives describes as the participatory bypass: the shift from receiving a future as a mandate to co-authoring it.
Every one of these options was available before I wrote this. They did not need to be invented. They needed to be made visible. The race metaphor is not a description of reality that we need to accept. It is a choice of framing that we can refuse. And refusing it does not mean ignoring the real challenges of AI development. It means engaging with them on our own terms.
Open questions
Is the race framing deployed strategically, to close action spaces, or recycled thoughtlessly, because it is the only metaphor people have? Where does strategy end and linguistic habit begin?
The “AI Race” dominates US and European discourse. How is AI development framed in other cultural contexts? Does China have a “race” framing, or is this a Western projection onto a competitor whose self-understanding may be entirely different?
If the AI Race is a used future, what other inherited metaphors structure the AI debate without anyone noticing? What would we see if we examined the “AI revolution,” the “AI frontier,” or the “AI ecosystem” with the same scrutiny?
1. Sohail Inayatullah, “Six pillars: futures thinking for transforming,” Foresight 10(1), 2008, pp. 4-21. https://doi.org/10.1108/14636680810855991. Inayatullah defines “used futures” as futures borrowed from another culture, time, or context and applied uncritically to the present.
2. Vernor Vinge, “The Coming Technological Singularity: How to Survive in the Post-Human Era,” presented at the VISION-21 Symposium, sponsored by NASA Lewis Research Center, 1993.
3. George Lakoff and Mark Johnson, Metaphors We Live By (University of Chicago Press, 1980).
4. Tiffany C. Li, “Ending the AI Race,” Villanova Law Review. Li argues that the race framing produces a technological determinism that makes regulation appear futile rather than necessary.
5. Heather Roff, cited in Paul Scharre, “Debunking the AI Arms Race Theory,” Texas National Security Review (2021). Roff describes the “race to the bottom” dynamic in which competitive pressure erodes safety standards. See also New America, “Reframing the US-China AI Arms Race.”
6. Anthropic, “Responsible Scaling Policy v3” (March 2026). Jared Kaplan quoted in Time on the rationale for removing the pause commitment.