Cultural Lag and Cultural Lead
The standard model for how technology and culture interact has a name: Cultural Lag. Technology arrives, society scrambles to catch up. With AI, this model breaks down. The cultural images, the fears, the expectations were all in place decades before the technology could deliver anything close to artificial intelligence. I’ve started calling this reversal Cultural Lead.
Cultural Lag
In 1922, the American sociologist William F. Ogburn published Social Change with Respect to Culture and Original Nature, a book that quietly shaped a century of thinking about technology and society.1 His core idea: technology (what he called “material culture”) advances faster than the norms, laws, and institutions meant to govern it (“adaptive culture”). The gap between the two creates friction, confusion, and harm.
Ogburn saw technology as the primary driver of social change. Later scholars labeled this view soft technological determinism: technology doesn’t dictate outcomes mechanically, but it sets the tempo. Society follows.
Social media is the textbook case. Platforms launched between 2004 and 2008. The Cambridge Analytica scandal broke in 2018. The EU Digital Services Act, the first serious regulatory framework for platforms, took effect in stages from 2023. And as of 2026, the biggest policy debate is whether children should be on these platforms at all. A debate about a technology that is now more than twenty years old. Two decades, and we’re still catching up.
The model feels intuitive because it matches most of our experience with technology. But it carries a significant limitation: it assumes a single direction of causality. Technology leads. Culture follows. Full stop.
Cultural Lead
With AI, the direction reversed.
Long before the technology could deliver anything resembling intelligence, our stories about it were already deeply embedded in culture. The line stretches back to Mary Shelley’s Frankenstein (1818), the first major exploration of artificial life in fiction. From there, the images kept accumulating: Metropolis (1927), HAL 9000 in 2001: A Space Odyssey (1968), the Terminator (1984), Samantha in Her (2013). Each generation added new layers to what “AI” meant in the collective imagination. And those layers coexisted, giving us simultaneously the killer robot, the helpful assistant, and the superintelligence that ends civilization. All still active. All shaping how people respond to the same three letters.
I’ve started calling this pattern Cultural Lead. It’s my own term, not established in academic literature, and I want to be transparent about that. But I think it names something the existing vocabulary misses: a case where culture didn’t merely precede the technology, but actively shaped what was built.
The evidence for this shaping is concrete. Jeff Bezos has stated that Amazon’s Alexa was inspired by the Star Trek computer. Both Apple and Google approached Majel Barrett, the actress who voiced that computer, to voice their AI assistants before her death in 2008. On May 13, 2024, Sam Altman posted a single word on X after OpenAI’s GPT-4o voice demo: “her.” The reference to Spike Jonze’s 2013 film was immediate. OpenAI had approached Scarlett Johansson to voice their AI. She refused. They launched with a voice that sounded strikingly similar. Johansson accused them of replicating her voice without consent.
This is not science fiction vaguely inspiring technology. This is culture steering what gets built, down to the casting. Jens Beckert’s concept of Fictional Expectations offers a useful frame: in capitalist economies, collectively held images of the future coordinate the investment and effort that build it. What we’re looking at with AI might be closer to cultural determinism: the direction of development shaped by narratives that were in place long before the first line of code.
Why it matters
Cultural Lead explains something about the emotional intensity of the AI debate that Cultural Lag cannot.
For most technologies, fear is a reaction to something new appearing in the world. With AI, the fear was already there. Cultivated by decades of Terminator movies and dystopian scenarios, pre-loaded and waiting for a technology to attach itself to. A 2019 study by Germany’s Allensbach Institut found that 76% of respondents cited the Terminator when asked which fictional AI they recognized.2 The emotional charge was in place long before anyone opened ChatGPT.
“Machines will take our jobs” is a wandering narrative, one that attaches to whatever new technology appears. The power loom in the 1810s, automation in the 1960s, the internet in the 1990s. But with AI, this old story hits differently. The fear wasn’t generated by encountering new technology. It was already running in the background, shaped by a century of future imaginaries and the specific AI imaginary that grew from them.
The implication is uncomfortable for organizations. If the resistance your teams show toward AI comes from cultural images they’ve been carrying for decades, not from the software you just deployed, then starting with the technology means starting in the wrong place. The cultural work has to come before, or at least alongside, the technical work. In my experience, most organizations skip this entirely. They build an AI strategy, roll out tools, and then wonder why people react as if they’d been asked to welcome the Terminator into their workflow.
This also reframes what the Imagination Economy is really about. If the AI images running through our culture are this powerful, the scarce resource is the ability to see past the images already in place. Loosening the grip of pre-existing narratives so that new ones become thinkable.
I explored this concept in more detail in issue #2 of my newsletter Futures Lens.3
1. William F. Ogburn, Social Change with Respect to Culture and Original Nature (1922).
2. Allensbach Institut, Terminator und R2-D2: Die bekanntesten KIs in Deutschland (2019).
3. “Cultural Lead: Why AI Is Different,” Futures Lens #2 (2026).