You wake up tomorrow and an AI can do your job. Not theoretically — actually. It writes better emails, generates better reports, produces better code. The outputs are indistinguishable from yours, maybe better.

What do you do?

Some people would freeze. Others would pivot within the week. The difference between those two responses isn't intelligence, education, or even adaptability. It's something more fundamental: agency — the capacity to act with intention when the ground shifts beneath you.

This essay is about that capacity. What it is, why machines can simulate it but never possess it, and why developing it might be the most important thing you do in the age of AI.

What Agency Actually Means

Agency isn't just "taking action." Busy people take action all day without exercising agency. Agency is the deliberate capacity to shape your circumstances rather than be shaped by them.

The philosophical roots run deep. Sartre called it radical freedom — the inescapable responsibility of choosing, even when you pretend you have no choice. The Stoics drew the line between what's in your control (your responses) and what isn't (everything else), then insisted you focus entirely on the first category. Self-determination theory identifies autonomy — the sense that your actions originate from yourself — as a fundamental human need, alongside competence and relatedness.

George Mack offers a more visceral definition: a high-agency person is "the person you'd call from a third-world jail." Not the smartest person you know. Not the most connected. The one who would figure it out — who would treat the situation as a problem to solve rather than a fate to accept.

This matters because agency is generative. Intelligence helps you analyze a situation. Knowledge helps you understand it. But agency is what makes you act on that understanding — and action is where reality changes.

The Agency Spectrum

Agency isn't binary, something you either have or don't. It's a spectrum, and most people oscillate along it depending on context, energy, and stakes.

Learned Helplessness

"Nothing I do matters." The belief that outcomes are independent of action. Often the result of repeated experiences where effort genuinely didn't help — but the conclusion overgeneralizes.

Passive Consumption

Scrolling, watching, absorbing. Not suffering exactly, but not directing anything either. The path of least resistance, where algorithms choose what you see and habit dictates how you spend your time.

Reactive Adaptation

Responding competently to whatever arises. Good at solving problems when they appear, but not choosing which problems to solve. Most professionals live here.

Proactive Creation

Choosing which problems matter, then building toward solutions before anyone asks. The leap from "I can handle what comes" to "I decide what comes."

Reality-Shaping

Operating on the environment itself — changing rules, creating institutions, building tools that reshape what's possible for others. Rare, but not reserved for the extraordinary.

Mack describes high agency through the tricycle model: three wheels that must all turn together. Clear thinking (knowing what you actually want), bias to action (defaulting to doing rather than deliberating), and disagreeability (the willingness to be wrong, unpopular, or uncomfortable in pursuit of what matters).

Why AI Amplifies the Gap

AI is a force multiplier. That's the standard line, and it's true — but force multipliers amplify in both directions.

High agency + AI = exponential leverage. A person who knows what they want to build can now prototype in hours what used to take teams and months. One developer with clear vision and AI tools can ship what once required a small company. One researcher with a sharp question can synthesize literature that would have taken a year to read.

Low agency + AI = deeper passivity. More content to consume. More decisions you can delegate. More comfortable numbness as AI curates your feed, drafts your messages, and makes your choices feel pre-made. The path of least resistance becomes frictionless.

The LLM Story explores this through its "agent without agency" chapter — the strange reality of machines that can execute complex plans while having no stake in the outcome. They optimize without caring. They "choose" without choosing. The appearance of agency without the substance.

This is why the gap widens. In a pre-AI world, low agency meant you progressed slowly. In an AI world, low agency means you may not progress at all — because the people who do exercise agency are moving at a fundamentally different speed.

The Defining Skill

Knowledge is being commoditized. Execution is being automated. What remains?

The ability to decide what matters.

An AI can answer any question you ask — but it cannot tell you which questions are worth asking. It can build anything you specify — but it cannot tell you what's worth building. It can optimize any metric you define — but it cannot tell you which metrics reflect what you actually value.

Agency is the meta-skill: the capacity to choose your problems, define your goals, and direct your tools toward ends that matter to you. Everything else — knowledge, skill, even intelligence — serves this capacity.

This isn't abstract. Every time you use an AI tool, you're making an agency decision: are you directing the tool toward something you've chosen, or are you letting the tool's defaults direct you? The question scales from individual productivity to civilizational trajectory.

Agency Is Learnable

The most important thing about agency: it's not an innate trait. It's a practice — one that can be developed, and one that can atrophy.

Five common traps erode agency. Recognizing them is the first step to escaping them:

Vague Thinking

"I want to be successful" is not a goal — it's a feeling. Agency requires specificity. What does success look like on Tuesday? The vaguer your desires, the less your actions can serve them. The What For? exercise exists precisely to cut through this fog.

Overthinking

Analysis as avoidance. Gathering more information, considering more angles, waiting for certainty that never comes. The bias to action in Mack's tricycle model is the antidote: act on 70% information, correct in motion.

Attachment to Outcomes

When your identity is tied to a specific result, failure becomes existential rather than informational. Detaching from outcomes while staying committed to direction is the Stoic insight: control your actions, release the results.

Rumination

Replaying past decisions, rehearsing future scenarios — staying in your head instead of in the world. Agency lives in the present tense. It's what you do next, not what you should have done.

Overwhelm

Too many options, too many inputs, too much to process. The paradox of choice applied to action itself. The escape is narrowing: choose one thing, do it now, evaluate after.

The thought experiments of philosophy of mind provide a fascinating backdrop here. We don't fully understand consciousness. We can't prove free will. The philosophical zombie argument suggests behavior alone doesn't guarantee inner experience. And yet — even in this uncertainty, you can choose. Agency doesn't require resolving the hard problem of consciousness. It just requires acting as if your choices matter, because in practice, they do.

Key Takeaways

  • Agency is the capacity to shape reality through deliberate action, not just react to circumstances
  • It exists on a spectrum from learned helplessness to reality-shaping — most people oscillate
  • AI amplifies the gap: high agency + AI = exponential leverage; low agency + AI = deeper passivity
  • The defining skill in an AI world is deciding what matters — the one thing machines can't do for you
  • The alignment problem is an agency problem: whose goals should AI pursue?
  • Agency is learnable — it's a practice, not a trait, built through deliberate choice
  • Five traps erode it: vague thinking, overthinking, attachment, rumination, overwhelm

© funclosure 2025