Human Direction in an AI Civilization
There is a question underneath all the AI discourse that most people avoid saying out loud:
What do humans actually do when AI can think?
Not “what jobs survive.” That is too narrow. The real question is about the role of human cognition in a world where machine cognition is abundant and cheap.
This is not a new question, by the way. Norbert Wiener asked a version of it in 1950 when he wrote that the new industrial revolution would devalue the human brain, at least in its simpler and more routine decisions [1]. He was right. He was just early. What’s new is that “simpler and more routine decisions” now includes things we spent decades assuming were safe - writing, coding, analysis, design.
From doing to deciding
For most of human history, intelligence was the bottleneck. Analyzing data, writing code, drafting documents, designing systems, diagnosing problems: all of it required human brains. AI is getting capable enough at each of these to fundamentally change the economics of cognitive work. Not perfectly. Not universally. But enough.
What stays expensive and scarce is different:
Deciding what problems matter. Setting the right constraints. Judging whether an output is actually good. Making irreversible decisions under deep uncertainty.
These are acts of direction, not execution. And they require something the models do not have: intrinsic goals, values, and taste.
Stuart Russell makes this point well. He argues that the central challenge of advanced AI is not making systems capable, but making them pursue the right objectives - and that specifying the right objectives is fundamentally a human problem [2]. As AI gets more capable, human goal-setting gets more important, not less.
Why direction is harder than execution
There is a common assumption that the “real work” is execution. Writing the code. Running the experiment. Building the prototype.
Anyone who has led a complex project knows better. Choosing what to build is far harder than building it. You can correct execution iteratively. Direction, once set, determines the entire trajectory.
AI makes execution cheap. That makes direction more valuable and more consequential. A wrong direction executed at AI speed wastes resources faster than ever. A right direction amplified by AI compounds faster than ever.
What people mean when they say “taste”
Taste is the word people reach for when they mean the ability to distinguish good from bad without being able to fully explain why.
In an AI civilization, this becomes a core productive skill. A researcher needs it to decide which AI-generated hypotheses are worth pursuing. A founder needs it to pick a product direction when AI can build any option equally fast. A policymaker needs it to judge which AI applications should be encouraged and which should be constrained.
This is not aesthetic preference. It is deep pattern recognition built from experience, values, and integrative understanding. And so far, it does not scale with parameter count.
What the human mind becomes
If AI handles search, synthesis, and execution, the human mind becomes something like a compass. It provides direction, values, and quality standards for systems that can move at enormous speed.
This is not a diminished role. It is arguably the most important cognitive function there is - the one that determines what gets optimized for.
Every optimization process needs an objective function. In an AI-augmented civilization, humans are the objective function.
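To make the metaphor concrete, here is a minimal sketch in Python of that division of labor: the machine supplies cheap generation and execution, the human supplies the objective. Everything in it is hypothetical scaffolding - `generate_candidates` and `human_score` are illustrative placeholders, not any particular system's API.

```python
from typing import Callable, List, Optional

def optimize(generate_candidates: Callable[[int], List[str]],
             human_score: Callable[[str], float],
             rounds: int = 3,
             beam: int = 5) -> Optional[str]:
    """Toy loop: AI proposes candidates cheaply; a human-provided
    scoring function (the objective) decides what counts as good."""
    best, best_score = None, float("-inf")
    for _ in range(rounds):
        # The machine's side: abundant, fast, cheap generation.
        for candidate in generate_candidates(beam):
            # The human's side: scarce, expensive judgment.
            score = human_score(candidate)
            if score > best_score:
                best, best_score = candidate, score
    return best
```

The point of the sketch is the asymmetry: `generate_candidates` can be made arbitrarily fast and cheap, but the loop goes nowhere good unless `human_score` encodes the right values.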
What follows
Education needs to shift from teaching execution - memorization, routine analysis - to teaching judgment: evaluation, prioritization, ethical reasoning. The skills that are hardest to teach (taste, integrative thinking, comfort with ambiguity) are becoming the most important ones.
Organizations need to restructure around direction-setting instead of execution capacity. The org chart of the future might be very flat: a few people with exceptional judgment directing AI systems that handle everything else.
Governance gets more important, not less. When AI amplifies human decisions, the quality of those decisions - including collective ones about regulation, values, and goals - matters enormously. Bad judgment at scale compounds just as fast as good judgment.
We are not becoming irrelevant. We are becoming the part that matters most: the part that decides what all this intelligence is for.
References
[1] Wiener, N. (1950). The Human Use of Human Beings: Cybernetics and Society. Houghton Mifflin.
[2] Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.