Meaningmaking

Every day, we make dozens of subjective value judgments without noticing. We decide which email deserves a reply first, whether a meeting is worth attending, whether a draft is “good enough.” This activity is so woven into how we function that it becomes invisible. Vaughn Tan calls it meaningmaking and has spent over two years dissecting it in his newsletter The Uncertainty Mindset. His argument: meaningmaking only becomes visible when something appears that cannot do it. That something is AI.

As philosopher Alva Noë puts it: “We are makers of meaning. We can’t help doing this; no computer can do this.”[1]

The Concept

Meaningmaking is the act of making subjective decisions about the relative value of things. It is easy to conflate with sensemaking, but the distinction matters.

Sensemaking identifies. “Those are spoons.” Meaningmaking evaluates. “Those spoons are ugly. The old ones were better.” AI systems have become remarkably good at sensemaking. Meaningmaking remains beyond their reach.

Tan distinguishes four types:[2]

  1. Deciding that something is subjectively good or bad. “Diamonds are beautiful.” “Blood diamonds are morally reprehensible.”
  2. Deciding that something is subjectively worth doing or not. “Going to college is worth the tuition.”
  3. Deciding on subjective value-orderings. “Howard Hodgkin is a better painter than Damien Hirst.”
  4. Deciding to reject existing value decisions. “I used to think this pizza was excellent, but after eating at Pizza Dada, I now think it is pretty mid.”

The defining characteristic across all four types: no objective verification is possible. Two people can arrive at different judgments, and neither is “right.” This connects meaningmaking to judgment, aesthetics, taste, and fashion.

The current discourse around “taste” as the last human competency in the AI era circles around exactly this phenomenon. Sari Azout writes that “all skills will collapse until all that is left is taste.” Marc Andreessen pivoted from “AI makes creative work obsolete” to “AI will never replace jobs requiring taste.”[3] Tan’s framework gives theoretical structure to what the popular discourse calls “taste.”

Tan situates meaningmaking within his broader project on “not-knowings”: the various types of genuine uncertainty that are not what colloquial language calls “risk.” Meaningmaking operates specifically in situations of not-knowing about value. It is the generative response to uncertainty about what is worth doing, having, or being.

Meaningmaking and AI

AI systems cannot do meaningmaking. They produce outputs that resemble it, but every subjective value judgment in the chain comes from humans: encoded in training data, embedded in prompts, applied during evaluation, baked into product design.

Tan illustrates this with a deceptively simple task: asking ChatGPT to summarize a long text in 140 characters.[4] Sounds like AI work. But look at the meaningmaking involved. Deciding which points matter (Types 2 and 3). Judging whether the style fits (Type 1). Choosing John McPhee as the right reference point for tone (Type 3). Assessing whether the third version is good enough (Type 1). Deciding whether to keep prompting or switch to manual editing (Types 2 and 3). All of that was human meaningmaking work. The AI handled the non-meaningmaking parts.
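The division of labor in that example can be sketched as a loop. This is my own illustration, not code from the newsletter: the model call is stubbed out with naive truncations, and the hypothetical `judge` callback stands in for the human editor. Everything routed through `judge` is meaningmaking; everything else is the non-meaningmaking work the AI handles.

```python
def generate_candidates(text: str, n: int = 3) -> list[str]:
    # Non-meaningmaking work: in a real system an LLM would draft these;
    # stubbed with naive truncations so the sketch is runnable.
    return [text[: 140 - i] for i in range(n)]

def summarize_with_human_judgment(text: str, judge, max_rounds: int = 3) -> str:
    """AI drafts; the human `judge` callback makes every value call."""
    best = ""
    for _ in range(max_rounds):
        candidates = generate_candidates(text)  # non-meaningmaking drafting
        best, good_enough = judge(candidates)   # Types 1 and 3: rank and approve
        if good_enough:                         # Type 2: ship, or keep prompting?
            return best
    return best  # from here the human may switch to manual editing

# A trivial stand-in for a human editor's judgment (prefer the shortest draft):
result = summarize_with_human_judgment(
    "A long profile of orange growers in Florida, in the spirit of McPhee. " * 5,
    judge=lambda cs: (min(cs, key=len), True),
)
assert len(result) <= 140
```

The point of the sketch is structural: no matter how good `generate_candidates` gets, the function signature still demands a `judge`, and that argument can only be filled by a human.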

This leads to a persistent illusion. Because AI outputs resemble human outputs, they create what Tan calls a “seductive mirage”: an affordance that appears easy to perceive and use but does not actually exist.[5] Malevich’s Black Square makes this concrete. Two visually identical black squares on canvas. Only one is the product of meaningmaking that launched an entire art movement. The other is just paint.

The practical consequence is what Tan calls unbundling: decomposing work into meaningmaking (stays with humans) and non-meaningmaking (where AI excels). AI outperforms humans in three areas: data management, data analysis, and rule-following. But each of these only works when humans provide the meaningmaking layer on top.[6] Skip the unbundling, and products fail. Tan points to an AI scheduling tool in Chicago that collapsed spectacularly because it tried to automate the meaningmaking work of calendar prioritization: deciding which meetings matter more than others.[7]
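A hypothetical version of that scheduling case makes the unbundling concrete. This is my own construction, not the actual Chicago tool: the machine does the rule-following (checking conflicts and working hours), while `human_priority`, the one subjective value-ordering, is supplied from outside.

```python
from dataclasses import dataclass

@dataclass
class Meeting:
    name: str
    hour: int    # requested start, 24h clock
    length: int  # duration in hours

def schedule(meetings, human_priority, day=(9, 17)):
    """Greedy packing (pure rule-following). `human_priority` maps a Meeting
    to a rank — the one value judgment the machine must not make itself."""
    placed, busy = [], set()
    for m in sorted(meetings, key=human_priority):
        slots = set(range(m.hour, m.hour + m.length))
        if day[0] <= m.hour and m.hour + m.length <= day[1] and not (slots & busy):
            placed.append(m.name)
            busy |= slots
    return placed

meetings = [
    Meeting("status update", 10, 1),
    Meeting("board prep", 10, 2),
    Meeting("1:1 with report", 14, 1),
]
# Two different humans, two defensible schedules — neither is "right":
assert schedule(meetings, lambda m: 0 if m.name == "board prep" else 1) == \
       ["board prep", "1:1 with report"]
assert schedule(meetings, lambda m: 0) == ["status update", "1:1 with report"]
```

The two assertions are the framework in miniature: identical inputs, identical rules, different outputs, and no objective way to verify which human’s priority function is correct.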

Unbundling is not a technical question. It is a question of organizational design: which value judgments are woven into a process, and who makes them after automation?

Connections

Causal Layered Analysis (CLA). CLA examines issues across four layers, from surface events (Litany) down to deep assumptions (Myth/Metaphor). Meaningmaking operates on that fourth layer. Whoever makes subjective value judgments is working with the narratives and archetypes that determine what counts as desirable, good, or right.

Sociotechnical Imaginaries. Jasanoff defines sociotechnical imaginaries as collectively held visions of desirable futures, attainable through science and technology. The word “desirable” is collective meaningmaking: a societal decision about the relative value of different technological futures. Jasanoff herself recommends methods that conduct “inquiries into meaning making.”[8]

Open Questions

Tan treats meaningmaking as something humans simply do, but he does not say whether it is innate, trained, or emergent. If it can be trained, what are the pedagogical consequences? If it is emergent, what conditions bring it about, and which ones suppress it?

AI-generated outputs increasingly dominate the inputs on which humans base their value judgments. What happens to collective meaningmaking when the raw material itself is machine-produced?

Tan consistently writes “(for now)” when he states that AI cannot do meaningmaking. Is that caveat warranted? Or is meaningmaking something that is not merely difficult for machines but categorically non-machinable?

Reading List

  1. What makes us human (for now)? (2023). The foundational piece where Tan first develops the concept of meaningmaking and argues it is what distinguishes humans from machines.
  2. AI’s meaning-making problem (2024). Defines the four types of meaningmaking and shows how each is essential for building and using AI systems.
  3. Where AI wins (2024). The complement: three areas where AI outperforms humans, and why each still depends on human meaningmaking.
  4. AI’s seductive mirage (2024). AI appears to have meaningmaking capabilities it does not actually possess, leading to dangerous product and policy decisions.
  5. Meaningmaking at work (2024). How meaningmaking is woven into every business process, and why unbundling it from non-meaningmaking work is the key to building AI products that actually work.
  6. AI’s missing middle (2024). Applies the meaningmaking framework to a specific market opportunity: industry-specific AI applications for medium-sized firms.
  7. Meaningmaking and not-knowing (2024). A retrospective connecting meaningmaking to Tan’s broader project on not-knowing, including Alva Noë’s argument that humans are irreducible makers of meaning.
Notes

  [1] Alva Noë, quoted in Vaughn Tan, “Meaningmaking and not-knowing,” The Uncertainty Mindset, 2024.
  [2] Vaughn Tan, “AI’s meaning-making problem,” The Uncertainty Mindset, 2024.
  [3] Sari Azout on the Follow the Rabbit podcast, Season 4. Marc Andreessen’s pivot reported in Business Insider, May 2025.
  [4] Vaughn Tan, “AI’s meaning-making problem,” The Uncertainty Mindset, 2024.
  [5] Vaughn Tan, “AI’s seductive mirage,” The Uncertainty Mindset, 2024.
  [6] Vaughn Tan, “Where AI wins,” The Uncertainty Mindset, 2024.
  [7] Vaughn Tan, “Meaningmaking at work,” The Uncertainty Mindset, 2024.
  [8] Sheila Jasanoff, “Future Imperfect: Science, Technology, and the Imaginations of Modernity,” in Dreamscapes of Modernity, 2015.

