OpenAI Frontier Alliances
On February 23, 2026, OpenAI announced multi-year partnerships with Boston Consulting Group, McKinsey, Accenture, and Capgemini. The program is called “Frontier Alliances.” Its stated purpose: helping enterprises deploy OpenAI’s Frontier platform, an agent-building environment that lets companies create what OpenAI calls “AI coworkers” for end-to-end business processes.[1][2]
The partnerships are structured with clear roles. BCG and McKinsey handle strategy: operating model redesign, organizational restructuring, change management, use case identification. Accenture and Capgemini handle implementation: wiring Frontier into CRM, ERP, cloud infrastructure, data pipelines, security.[3][4] OpenAI embeds its own Forward Deployed Engineering teams alongside the consulting teams for joint client work.
Early customers include Intuit, State Farm, Thermo Fisher, and Uber.[5]
What the Signal Tells Us
The interesting part is what OpenAI chose to say about why they did this. Their framing:
“The limiting factor isn’t model intelligence. It’s how agents are built and run inside organisations.”[2]
That sentence is worth sitting with. The company leading the model capability race is telling us that model capability is no longer the bottleneck. The bottleneck is organizational absorption: governance, workflows, data structures, the human architecture that determines whether AI agents actually work in practice or sit in a demo nobody uses after week two.
This is a pattern that repeats across technology waves. SAP needed Accenture and Deloitte to build the implementation ecosystem before ERP could scale beyond early adopters. AWS needed consulting partners to help enterprises migrate from on-premise infrastructure. “Digital transformation” became a $1.5 trillion consulting market because the technology was never the real challenge. Changing how people work was.
What we can read from this: the AI industry has crossed a threshold. The competition is shifting from model capability to organizational adoption. And the organizations that shape how AI enters the workplace (the consulting firms, not the labs) may end up having more influence over what “AI at work” actually looks like in practice.
The Imagination Gap
A 2026 RBC study found that 97% of firms that adopted AI reported benefits, yet most struggled to envision what AI could do for their specific context. That gap between “AI is useful in general” and “here is what AI does for us, specifically” is what the Frontier Alliances are designed to bridge.
The consulting firms are being hired to do the imagining. They translate abstract AI capability into concrete organizational visions: here is your new workflow, here is how your operating model changes, here is what your teams will look like.
That translation work is real and valuable. It also means that the shape of AI adoption in enterprises will be filtered through how consultants think about organizations. The mental models of McKinsey and BCG (efficiency, operating models, restructuring) will imprint on how AI enters the workplace. Different consultants would produce different AI futures.
What This Tells Us About Consulting
Zoe Scaman’s analysis of Palantir describes a structurally identical model: technology companies that embed their people inside client organizations, not for weeks but for months or years, because “they assume the organisation is wrong about what’s actually broken.”[6] The Frontier Alliances formalize this pattern at industry scale.
Consulting firms have long been translators between new technology and existing organizations. What changes with AI is the nature of the translation. Previous waves (cloud, mobile, digital) mostly asked organizations to adopt new tools. AI agents ask something harder: they force organizations to articulate how they actually work.
Igor Schwarzmann describes this as AI’s forcing function.[7] Chris Argyris made a useful distinction between “espoused theories” (what organizations say they do) and “theories in use” (how they actually decide, especially under pressure). Most organizations live in the gap between the two. That gap was manageable when only humans needed to bridge it. Humans read between the lines. They know that “we value innovation” means something different on the third floor than on the seventh. AI agents cannot read between the lines. They need the real rules, not the stated ones.
“You cannot delegate what you cannot articulate.” When Schwarzmann tried to delegate his own analytical work to AI, he discovered that the hard part was not the delegation. It was being forced to spell out his actual method: which frameworks he really used, what “good enough” meant, which trade-offs he accepted without thinking about them. The same thing happens at organizational scale. When a company tries to hand a process to an AI agent, every ambiguity, every undocumented workaround, every decision that lives in someone’s head becomes a failure point.
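Schwarzmann’s point can be made concrete. Here is a deliberately simplified sketch (the rule names and thresholds are invented for illustration, not drawn from any real policy) of what “spelling out the actual method” looks like when a discount-approval decision that used to live in someone’s head is handed to an agent: every tacit judgment (“trusted customer,” “margin still fine,” “use your judgment”) must become an explicit, testable rule, and anything the rules do not cover must fail closed rather than be guessed at.

```python
from dataclasses import dataclass

@dataclass
class DiscountRequest:
    customer_tenure_years: float
    order_margin_pct: float
    discount_pct: float

# Hypothetical, illustrative policy: the tacit rule "give good customers
# a reasonable break" rewritten as rules an agent can actually execute.
def approve_discount(req: DiscountRequest) -> str:
    if req.discount_pct <= 5:
        return "auto-approve"            # was: "nobody questions small discounts"
    if req.customer_tenure_years >= 3 and req.order_margin_pct - req.discount_pct >= 20:
        return "auto-approve"            # was: "trusted customer, margin still fine"
    if req.discount_pct <= 15:
        return "escalate-to-manager"     # was: "use your judgment"
    return "reject"                      # anything undefined fails closed, not silently

print(approve_discount(DiscountRequest(4.0, 30.0, 8.0)))   # auto-approve
print(approve_discount(DiscountRequest(0.5, 12.0, 10.0)))  # escalate-to-manager
```

The writing of those four lines of policy, not the wiring of the agent, is where the undocumented workarounds and someone’s-head decisions surface as failure points.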
That is what McKinsey and BCG are being paid to do: close the gap between how organizations describe themselves and how they actually function. AI makes that gap expensive in a way it was not before.
The “AI Coworkers” Problem
OpenAI and its partners consistently use “AI coworkers” rather than “copilots” or “assistants.” This framing deserves scrutiny, because it is doing more work as marketing than as description.
People who actually work with AI agents know how far removed the reality is from the “coworker” metaphor. What you are dealing with, in practice, is cronjobs, chat interfaces, skills repositories, and an endless stream of security problems. The technology is genuinely interesting. But calling it a “coworker” is like calling a spreadsheet a “colleague.”
The metaphor matters because it shapes organizational expectations. When McKinsey tells a CEO “your organization needs AI coworkers,” the CEO hears something very different from “your organization needs well-configured automation with appropriate governance.” The first framing sells transformation projects. The second sells implementation work. The Frontier Alliances are built on the first framing.
Tech conferences in 2025 were full of slides showing humans and agents side by side on org charts. Once you actually build and run these systems, that vision turns out to be nonsense. The reality is more mundane and more interesting: agents are tools that need structured context and constant supervision. The interesting question is who provides that, and for how long.
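To make the “cronjobs, chat interfaces, skills repositories” point concrete, here is a minimal sketch, with an invented stand-in for the model call, of what an “AI coworker” typically reduces to in practice: a scheduled task that assembles structured context, invokes a model, and routes anything below a confidence floor to a human instead of acting on it.

```python
from dataclasses import dataclass, field

@dataclass
class AgentTask:
    """One 'AI coworker' run: in practice, a cronjob with context and a gate."""
    skill: str                      # pulled from a skills repository
    context: dict = field(default_factory=dict)

def call_model(task: AgentTask) -> tuple[str, float]:
    # Stand-in for a real LLM call, invented for illustration.
    # A real system would call a model API and estimate confidence here.
    return f"[draft reply using skill '{task.skill}']", 0.62

def run_scheduled(task: AgentTask, confidence_floor: float = 0.8) -> str:
    draft, confidence = call_model(task)
    if confidence >= confidence_floor:
        return f"SEND: {draft}"               # acts autonomously
    return f"HUMAN REVIEW NEEDED: {draft}"    # constant supervision, in code

task = AgentTask(skill="refund-triage", context={"ticket_id": "T-1042"})
print(run_scheduled(task))
```

Nothing in that loop resembles a colleague; it resembles infrastructure. The open question the section raises (who provides the structured context and supervision, and for how long) is a question about who owns code like this.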
Open Questions
Whose mental models win? If consulting firms shape how enterprises adopt AI, which firm partners with which AI provider matters. Ben Appleton asks which firm becomes Anthropic’s preferred transformation partner.[8] The answer will influence whether enterprise AI looks like McKinsey’s version of work or someone else’s.
Will organizations learn to imagine for themselves? The imagination gap is real today. The question is whether organizations develop their own capacity to envision AI-enabled work, or whether they remain dependent on consultants for this. The answer determines whether the Frontier Alliances are a bridge or a permanent dependency.
Who maintains the substrate? Consulting firms can help build the initial documentation, the workflows, the explicit knowledge that AI agents need. But organizations keep changing. Knowledge drifts. Documentation decays. Who does the ongoing maintenance? The [[Documentation as Infrastructure#The Prestige Problem|prestige problem]] applies: maintenance is the most important function and the least prestigious.
Sources
1. OpenAI Blog: Frontier Alliance Partners (February 23, 2026)
2. The Decoder: OpenAI partners with major consulting firms
3. Fortune: OpenAI partners with McKinsey, BCG, Accenture, and Capgemini
6. Zoe Scaman, The Palantir Model
7. Igor Schwarzmann, Strategy as Protocol
8. Ben Appleton, LinkedIn Post on Frontier Alliances (February 24, 2026)