What Remains: The Things AI Structurally Cannot Replace

Essay No. 02 · March 10, 2026 · David Jung

There is a list that people in the artificial intelligence industry like to keep. It is the list of things AI cannot do. Five years ago, the list was long: AI could not write coherently, could not generate images from text, could not compose music that sounded like anything a human would choose to listen to. Three years ago, it could not reason through multi-step problems, could not pass the bar exam, could not diagnose a rare disease from a medical image more accurately than a specialist. Today, it can do all of those things, imperfectly, with caveats, but well enough that the caveats are shrinking faster than most people expected.

I build AI systems for a living. I watch the list shrink in real time. Every week, a customer asks me some version of the same question: what is left? What can my people still do that the system cannot? And increasingly, the honest answer is: less than you think.

But the reason the list keeps shrinking is not that AI has become superhuman. It is that most of what people do, inside organizations, is tool-work. Processing information. Executing procedures. Implementing decisions made elsewhere. The org chart, if you look at it honestly, is a hierarchy of instruments, each layer converting the intent of the layer above into outputs for the layer below. And AI is a better instrument.

What remains, once the tool-work is absorbed, splits into two. A small class of people who direct the tools: who set the goals, make the judgment calls, bear the accountability, and capture the value. And a large class of people who consume what the tools produce. This essay is about that split, what it means, and why it may be the defining structural change of our time.

The first essay in this series asked what we do with our time when AI does all the work.1 This essay asks: who directs the work, who consumes the output, and what happens to everyone in between?


I. The Tool-Work

Most work, if you examine it honestly, is execution. Not in the pejorative sense. In the structural sense.

An organization is a system for converting decisions into outcomes, and the conversion happens through layers of people who receive instructions, process information, and produce outputs that feed into the next layer. The financial analyst builds the model the VP requested. The marketing associate writes the copy the director outlined. The software developer implements the feature the product manager specified. The paralegal researches the precedent the partner identified.

Each of these people brings skill, judgment, and experience to their work. They are not machines. But the function they serve within the organization is instrumental: they are the mechanism through which someone else’s intent becomes reality. This is not a criticism. It is a description of how organizations have always worked. Adam Smith’s pin factory was the first clear articulation of the principle: divide labor into specialized functions, and each worker becomes an instrument optimized for a single operation.2 What followed, from the assembly line to the cubicle farm to the open-plan office, was an elaboration of the same logic. Specialize. Execute. Pass the output to the next station.

What made human tools irreplaceable, until now, was that no other tool could do what they did. A spreadsheet could calculate, but it could not interpret ambiguous data and decide what mattered. A search engine could retrieve information, but it could not synthesize it into a recommendation. A database could store records, but it could not read a customer’s email and draft an appropriate response. Human judgment, adaptability, and language were the irreplaceable components of the tool-chain.

AI changes this. Not because it replicates every human capability. It does not. But it replicates the specific capabilities that most tool-work requires: processing language, recognizing patterns, following complex instructions, generating structured output, and improving with feedback. The list keeps shrinking because the list was always a list of tool-functions. And AI is becoming a better tool.


II. The Better Tool

I deploy AI systems into organizations where they work alongside the people they are, in many cases, beginning to replace. The pattern I see, across industries and company sizes, is consistent.

The AI does not replace the best version of what a person does. It replaces the median version. The average financial analysis, the typical legal memo, the standard marketing copy, the routine code, the ordinary customer service interaction. And in organizations, the median version is what most of the work actually is. The exceptional analyst, the brilliant attorney, the visionary designer: these are rare. The vast majority of organizational output is competent execution of well-understood tasks. That is precisely what AI does well, and it does it faster, at lower cost, around the clock, without salary negotiations or equity packages or sick days.

What happens when you deploy AI in an enterprise is not that the best people become unnecessary. It is that the organization discovers how much of its headcount served tool-functions that a well-configured AI system handles at a fraction of the cost. The 50-person team that processed insurance claims becomes a 5-person team that oversees the AI doing the processing. The 20-analyst research department becomes 3 senior analysts directing AI agents. The headcount does not shrink to zero. But it shrinks dramatically, and the people who remain are not the ones who executed best. They are the ones who directed, decided, and took responsibility.

This is not a future scenario. It is what I watch happen, customer by customer, quarter by quarter. And the people being displaced are not the worst performers or the least skilled. They are the people whose skills, however real, served a tool-function that a better tool now serves. A skilled carpenter is not diminished by the existence of a power saw. But the carpenter who was hired specifically because the company needed someone to saw boards by hand has a problem when a better saw arrives, regardless of how well they sawed.


III. What Remains for the Few

If most work is tool-work, and AI is a better tool, what remains for humans within the productive system?

The answer is narrower than the optimistic version of this story suggests. What remains is the work that, by its nature, cannot be a tool-function. Four things: accountability, judgment under uncertainty, verification, and direction. These are real, structurally irreplaceable, and consequential. But they describe what a small class of people does, not what “remains” for workers in general.

Accountability is the clearest case. On May 6, 2010, the Dow Jones Industrial Average dropped approximately 1,000 points in minutes when algorithmic trading cascaded out of control.3 Two years later, Knight Capital lost $440 million in 45 minutes from a software deployment error.4 In both cases, accountability flowed to people and institutions, because it had nowhere else to go. The philosopher Andreas Matthias calls this the “responsibility gap”: as machines grow more autonomous, the framework for assigning responsibility breaks down, because no single human made the decision that produced the outcome.5 But the gap cannot be filled by the machine. It can only be absorbed by people.

I see this every week in my own work. When a company deploys an AI system that decides which customers to approve for credit, which claims to flag for denial, which applicants to advance, someone must own what the system does. The AI can score and rank at a scale no team of analysts could match. But when the recommendation is wrong, the customer does not call the algorithm. They call the company. And the company needs a person who can explain, reverse, and take responsibility. Every enterprise customer I work with arrives at this realization: the AI is not the product. The AI plus the person who stands behind it is the product.

Judgment under uncertainty is distinct from calculation. AI excels at decisions with clear parameters and sufficient data. The decisions that matter most are the ones where the data is incomplete, the stakes are irreversible, and there may not be a right answer. A surgeon who encounters something the imaging did not show. An executive committing to a market entry on ambiguous signals. Someone must choose, knowing they might be wrong, and live with what follows. AI can generate options and estimate probabilities. But committing when the data does not clearly favor any direction requires a person who will bear the consequences.

Verification presents a paradox that worsens as AI improves. If AI writes the code, who catches the bug? If AI drafts the brief, who spots the hallucination? Expertise is built through doing.6 If juniors never build financial models, draft motions, or work through diagnoses themselves, they will never develop the pattern recognition needed to oversee the AI that replaced their training ground. The better AI gets at the doing, the harder it becomes to produce the humans who can verify what it did.

Direction is the deepest layer. AI optimizes brilliantly toward whatever goal it is given. It cannot set the goal. Harry Frankfurt’s distinction between first-order desires (wanting something) and second-order desires (wanting to want something) captures what is at stake.7 The capacity to evaluate your own goals, to ask not just “how do we maximize engagement?” but “should we be maximizing engagement at all?”, is not a tool-function. It is the function that decides what the tools are for.

These four capacities are genuine. They will not be automated. But notice who exercises them: the executive, the surgeon, the senior analyst, the policy-maker, the founder. The people at the top of the hierarchy who directed the human tools will now direct the AI tools. For them, “what remains” is a more powerful version of what they were already doing. For everyone who served as the tool, the question is different.


IV. The Shrinking Circle

Even within this producer class, the circle of structurally necessary humans is contracting.

Accountability, judgment, verification, and direction are all, in practice, increasingly absorbed by institutions rather than by individuals. Waymo’s autonomous vehicles have driven over 25 million miles on public roads.8 When one is involved in an accident, accountability falls not on a driver but on the corporation, its engineering protocols, its insurance, and the regulatory framework. Germany’s 2017 amendments to its Road Traffic Act placed liability on the vehicle’s “keeper” and manufacturer rather than a human operator.9 Safety improves over time. No individual human driver is needed.

Military applications present the starkest case. In 2020, a Kargu-2 loitering munition in Libya reportedly engaged targets autonomously, without a human operator making the final decision to fire.10 The accountability, to the extent it exists, belongs to the military command, the government, the manufacturer. The institution absorbs the responsibility.

This pattern extends wherever AI operates. Self-checkout replaces the cashier; the corporation handles disputes. AI diagnosis supplements the radiologist; the hospital carries the malpractice exposure. Algorithmic trading replaces the floor trader; the firm absorbs the risk. At each step, the number of humans required shrinks. Not to zero, but toward a number dramatically smaller than today.

And as the circle contracts, wealth concentrates within it. The economic logic is straightforward: if AI replaces the tool-work of fifty people and three people direct the AI, the value those fifty salaries captured now flows to the three, and to the shareholders, and to the AI providers. This is already visible. The most valuable companies in the world tend to be the ones with the highest revenue per employee, precisely because they have replaced the most tool-work with technology. The pattern is not new. What is new is the breadth of tool-work that AI can absorb, which extends far beyond what any previous technology could reach.
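
To make the arithmetic concrete, here is a back-of-envelope sketch. Every figure in it is hypothetical, chosen for illustration rather than drawn from any customer or dataset; the point is only the direction of the flows.

```python
# Back-of-envelope sketch of where the value goes when tool-work is
# automated. All figures are hypothetical, chosen only for illustration.

team_size = 50
avg_salary = 100_000          # value previously captured as fifty wages
directors = 3
director_salary = 300_000     # the people who now direct the AI
ai_cost = 500_000             # assumed annual cost of the AI system

wages_before = team_size * avg_salary            # $5,000,000
wages_after = directors * director_salary        # $900,000
surplus = wages_before - wages_after - ai_cost   # $3,600,000

print(f"Distributed across fifty paychecks before: ${wages_before:,}")
print(f"Paid to the three directors after:         ${wages_after:,}")
print(f"Flowing to shareholders and AI providers:  ${surplus:,}")
```

Under these assumptions, roughly seventy percent of what was once wage income becomes surplus, split between the firm’s owners and its AI vendor. Change the numbers and the proportions shift, but not the direction.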

The result is not a society without human agency. It is a society in which agency concentrates in very few hands. Too few people bearing too much responsibility, shielded by layers of corporate and legal abstraction, with too little oversight from the people those decisions affect. The question shifts from “which tasks require a human?” to something more unsettling: how few humans does a productive system actually need?


V. What Remains for the Rest

For the majority who are no longer needed as organizational tools, what remains?

For generations, the dominant framework for thinking about non-work life has been balance: work-life balance, the implicit acknowledgment that work is the gravitational center and everything else must be protected from its pull. If work recedes, that framework dissolves. There is nothing left to balance against. What replaces it is not leisure, not retirement, not the weekend expanded to fill every day. It is simply the question of how to live. Not how to live after work or around work or despite work. Just: how to live. That is a harder question than it sounds, because most of us have never had to answer it without the structure that work provided.

The honest answer, the one this essay has been building toward, is that for most people, what remains is consumption.

What AI creates, humans consume. The content, the entertainment, the services, the convenience, the companions: all produced at scale by a shrinking class directing AI, all consumed by a growing class with time to fill and no productive function to fill it with. This is not a moral judgment. It is a structural description. When an economy needs fewer people to produce, the remainder become consumers by default, not by choice. The factory worker displaced by the assembly line did not choose to become a service worker. The service worker displaced by AI will not choose to become a full-time consumer. But the logic of the system points in that direction.

And the products designed for this consumer class will be extraordinarily good. Personalized, adaptive, infinite in variety, neurologically optimized. If you think social media is compelling now, consider what happens when AI can generate content tailored not just to your stated preferences but to your real-time behavioral patterns. The experiences will be richer, cheaper, more convenient than anything that has come before. Everything about the consumer side of the equation will improve, except the one thing that consumption, by its nature, cannot provide: the sense that you are the one making something, building something, contributing something that the world would not have without you.


VI. The Path of Least Resistance

The neuroscience of reward explains why the consumption default is so durable.

Wolfram Schultz’s foundational research on dopamine established that the brain’s reward system responds not to pleasure itself but to prediction error: the gap between what was expected and what was received.11 When a reward matches expectation, dopamine neurons are silent. When it exceeds expectation, they fire. Sustained, predictable stimulation narrows the gap. The brain requires escalation.
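
The underlying logic can be written down. The sketch below uses a simple Rescorla-Wagner-style update, a simplification of the temporal-difference model in Schultz’s paper; the function name, the learning rate, and the reward values are illustrative assumptions, not taken from the research.

```python
# A minimal sketch of reward prediction error, in the spirit of a
# Rescorla-Wagner update. This simplifies the temporal-difference model
# in Schultz et al.; learning rate and reward values are illustrative.

def update(expected: float, received: float, learning_rate: float = 0.3):
    """Return the prediction error and the updated expectation."""
    error = received - expected          # the dopamine-like signal
    return error, expected + learning_rate * error

expectation = 0.0
for trial in range(8):
    error, expectation = update(expectation, received=1.0)
    print(f"trial {trial}: prediction error = {error:.3f}")

# The error decays toward zero as the reward becomes predictable: the
# same reward, fully expected, produces no signal. Only a larger reward
# restores a positive error. That is the escalation described above.
```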

Kent Berridge’s work made the picture more troubling. The dopamine system primarily drives wanting (the motivation to pursue a reward) rather than liking (the actual pleasure experienced upon receiving it). These two systems can dissociate entirely.12 You can want something intensely without enjoying it much. This is the neurological signature of compulsion: the scroll that continues long past the point of pleasure, the episode that auto-plays into the early morning, the game that absorbs hours that, in retrospect, contained no satisfaction.

Anna Lembke argues that chronic exposure to high-dopamine stimuli tips the brain’s homeostatic balance toward a deficit state, requiring ever-greater stimulation to reach baseline.13 A world of abundant, AI-optimized entertainment is a world that systematically trains brains to require more and find less.

Rome’s trajectory illustrates the structural version of this dynamic. The games began as occasional religious festivals. Under Claudius, the Roman calendar included 159 public holidays, 93 of them devoted to publicly funded games. By the fourth century, game days exceeded 170.14 The spectacles grew more extreme across centuries: from athletic contests to gladiatorial combat to staged hunts to public executions presented as theater. The escalation was not incidental. It was structural. When a population has time to fill and meaning is not readily available, spectacle expands to fill the vacuum. The threshold rises. What thrilled last year bores this year.

The Roman citizens who filled the Colosseum were not imprisoned. They chose to be there. That is what makes the pattern so durable, and so difficult to interrupt.


VII. The Harder Path

There is an alternative to consumption. It is available to anyone. And it is much harder.

The philosopher Agnes Callard distinguishes between choosing something you already value and aspiring to value something you do not yet fully understand.15 Aspiration is the process of becoming someone who cares about things you cannot yet appreciate. Learning to love difficult music. Learning to be a parent. Learning to care about a community you have just joined. This process cannot be optimized, because the person doing it does not yet know what they are optimizing for. They are becoming someone new, and the destination is not visible from the starting point.

AI cannot aspire on your behalf. It cannot do the work of becoming someone who wants different things than you want today. That work is irreducibly yours.

The same applies to refusal. Knowing what you do not want, what your family does not want, what your community refuses to accept: this is a form of agency, not a failure of optimization. AI is structurally incapable of refusal. It has no commitments to protect, no values to defend, no identity that would be compromised by saying yes to everything.

And there is relationship. Not algorithmic matching or automated check-ins or AI companions that are always available and never inconvenient. The actual, difficult work of knowing another person and being known in return. The conversations that go nowhere productive. The arguments endured rather than resolved. The presence that cannot be delegated because delegation is precisely what would drain it of meaning.

Aspiration, refusal, relationship: these capacities are genuinely human. They are not tool-functions and never were. But they are also not what “remains” in any structural or automatic sense. The economy will not demand them. No employer will pay for them. No institution will require them. They are things people must choose, deliberately, against the pull of a system that makes consumption effortless and meaning-making difficult.

The question is not whether this harder path exists. It does. The question is how many people will find it, and whether our institutions do anything to make it accessible.


VIII. The Split

What remains is not one thing. It is two.

For the few who direct the tools: accountability, judgment, verification, direction. More power than any previous generation of leaders has wielded. More responsibility, too, though whether the structures exist to enforce that responsibility is an open question. The wealth of the productive economy flows here, because this is where the decisions are made and the tools are aimed.

For the many whose tool-functions have been absorbed: time. Abundant, unstructured, potentially liberated time. And the question of what to fill it with. The default is consumption, and the consumption will be very good. Good enough that choosing otherwise will require the kind of deliberate effort that most people, in most periods of history, have not been asked to make.

The founding essay asked what we do with our time when AI does all the work. Here is the sharper version of that question: what do we do with our time when we are no longer the tools, but we are also not the ones holding them?

This is not a question about technology. It is a question about power, about wealth, about the kind of society that emerges when productive capacity detaches from broad human participation. The Roman answer was bread and circuses. The modern answer, if we build nothing better, will be its equivalent: comfort without agency, abundance without purpose, entertainment without end.

The capacity for something better is real. Aspiration, refusal, relationship: these are not fantasies. But they require cultivation, and cultivation requires structures that do not yet exist at the scale the transition demands. What we are building, right now, is the infrastructure for consumption. What we are not building, not yet, is the infrastructure for meaning.

That asymmetry is the thing worth paying attention to.


Notes

  1. David Jung, “What Do We Do With Our Time When AI Does All the Work?” Asymptronix, Essay No. 01 (2026). Available at asymptronix.com/essays/founding-essay.

  2. Adam Smith, An Inquiry into the Nature and Causes of the Wealth of Nations (London: W. Strahan and T. Cadell, 1776), Book I, Chapter 1. Smith’s pin factory example, where dividing manufacture into eighteen distinct operations multiplied output per worker by a factor of hundreds, remains the foundational illustration of how specialization converts workers into instruments optimized for single functions.

  3. The Flash Crash of May 6, 2010, saw the Dow Jones Industrial Average drop approximately 1,000 points (about 9%) in minutes before partially recovering. The SEC and CFTC joint report attributed the crash to a large automated sell order that triggered a cascade of high-frequency trading responses. Navinder Singh Sarao was later charged with spoofing. See U.S. Securities and Exchange Commission and Commodity Futures Trading Commission, “Findings Regarding the Market Events of May 6, 2010,” September 30, 2010.

  4. Knight Capital Group lost approximately $440 million in 45 minutes on August 1, 2012, due to a software deployment error that caused its trading algorithms to execute millions of unintended trades. The firm was acquired by Getco LLC within months. See SEC Administrative Proceeding, File No. 3-15570, October 16, 2013.

  5. Andreas Matthias, “The responsibility gap: Ascribing responsibility for the actions of learning automata,” Ethics and Information Technology 6 (2004): 175–183. Matthias argues that as machine learning systems become more autonomous, the traditional framework for assigning responsibility breaks down, creating a gap that existing legal and ethical concepts cannot fill.

  6. On automation and skill atrophy, see the Federal Aviation Administration, “Safety Alert for Operators (SAFO) 13002: Manual Flight Operations,” January 4, 2013, which urged airlines to promote manual flying to prevent skill degradation. The parallel to AI is direct: if junior professionals never build models, draft motions, or work through diagnoses themselves, they will lack the pattern recognition needed to verify AI output. See also Nadine B. Sarter, David D. Woods, and Charles E. Billings, “Automation Surprises,” in Handbook of Human Factors and Ergonomics, 2nd ed. (New York: Wiley, 1997).

  7. Harry Frankfurt, “Freedom of the Will and the Concept of a Person,” Journal of Philosophy 68, no. 1 (1971): 5–20. Frankfurt’s distinction between first-order desires (wanting something) and second-order desires (wanting to want something) is foundational in philosophy of action. A “wanton,” in Frankfurt’s framework, is a being that has desires but does not care which desires move it to action.

  8. Waymo has operated autonomous vehicles in San Francisco, Phoenix, and Los Angeles, accumulating over 25 million autonomous miles by 2025. Safety data indicates significantly lower crash rates than human drivers in comparable conditions. See Waymo, “Waymo Significantly Outperforms Comparable Human Benchmarks Over 22+ Million Miles of Autonomous Driving,” December 2024.

  9. Germany’s 2017 amendments to the Road Traffic Act (Strassenverkehrsgesetz, StVG sections 1a through 1e) established one of the first legal frameworks for autonomous vehicle accountability, placing liability on the vehicle’s “keeper” and manufacturer rather than a human driver. For a broader discussion, see Ryan Calo, “Robotics and the Lessons of Cyberlaw,” California Law Review 103, no. 3 (2015): 513–563.

  10. In March 2021, a UN Panel of Experts reported that a Kargu-2 loitering munition, manufactured by the Turkish firm STM, was used in Libya and “was programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.” See United Nations Security Council, “Letter dated 8 March 2021 from the Panel of Experts on Libya,” S/2021/229, paragraph 63. Whether the engagement was fully autonomous remains debated.

  11. Wolfram Schultz, Peter Dayan, and P. Read Montague, “A Neural Substrate of Prediction and Reward,” Science 275, no. 5306 (1997): 1593–1599. Schultz’s work established that dopamine neurons encode prediction errors rather than reward per se: they fire when a reward exceeds expectations, remain silent when expectations are met, and decrease firing when expectations are unmet.

  12. Kent C. Berridge and Terry E. Robinson, “Parsing Reward,” Trends in Neurosciences 26, no. 9 (2003): 507–513. Berridge and Robinson demonstrated that the dopamine system primarily drives “wanting” (incentive salience) rather than “liking” (hedonic impact). The two can dissociate: an organism can intensely want something it does not particularly enjoy.

  13. Anna Lembke, Dopamine Nation: Finding Balance in the Age of Indulgence (New York: Dutton, 2021). Lembke argues that chronic exposure to high-dopamine stimuli shifts the brain’s pleasure-pain balance toward a persistent deficit state, requiring increasing stimulation to achieve baseline.

  14. On the escalation of Roman games, see Keith Hopkins, “Murderous Games: Gladiatorial Contests in Ancient Rome,” History Today 33, no. 6 (1983). Under Claudius, the calendar included 159 public holidays, 93 devoted to games. By the fourth century CE, game days exceeded 170. See also Michele Renee Salzman, On Roman Time: The Codex-Calendar of 354 and the Rhythms of Urban Life in Late Antiquity (Berkeley: University of California Press, 1990).

  15. Agnes Callard, Aspiration: The Agency of Becoming (New York: Oxford University Press, 2018). Callard argues that aspiration is a rational process distinct from both desire and decision: the aspirant does not yet fully grasp the value they are working toward, and cannot, because grasping it requires having already become the person who holds it.