The Status Quo is Unsustainable

I keep writing about this because it’s inescapable. AI token consumption is growing at 350%+, AI revenue is growing at 50% per annum, and CoreWeave is booked out for 4 years. Meanwhile, business productivity is contributing roughly +1.3% to GDP and decelerating, with GDP growth going sub-2% in the biggest AI market (the US). We’ve all seen the meme: 95% of generative AI deployments at corporations are not meeting expectations.

This gap between AI growth and productivity exists for several big reasons: 1] AI is mostly making content more addictive (improving social media algorithms) 2] AI is improving the ability to make addictive content and ship apps with better in-game experiences (see massive Roblox billings growth due to AI translation, as a concrete example) 3] the vast majority of growth is due to top-down mandates to use AI for work, which results in AI-generated code that is hard to maintain, increasing technical debt 4] as people become increasingly reliant on AI systems, their general abilities decrease.

This seems right in aggregate. It aligns with the macro numbers. But as an individual, ‘the Doom Thesis’ is obviously not inspiring.

Is the argument to give up?

I hope not.

Wood Chopping Cubed

The response to the Doom Thesis is that “it is possible to become more productive in the AI economy, but your base case is probably very bad.” It’s not that Claude Code, Codex, Cognition etc. don’t work. It’s that you’re using them wrong, and also not handling the pace of change correctly.

I want to break this into an analogy. Old productivity was chopping wood. Your job was to break through your tired fog, get out there, and jam an axe into a tree effectively. Then, once your hands broke down, you’d hire a group of wood choppers to jam axes for you, and manage them. Then you’d transition up a level where you’re managing groups of groups, doing enterprise sales for wood chopping. That is the old economy. Humans are the agents.

In the Agentic Economy the wood chopping paradigm breaks down in three ways: 1] wood chopper versus chainsaw design 2] accelerating levels of abstraction 3] externalities you don’t control (an IP rug pull and technical debt).

Here, AI chops wood for you. So your job is to write prompts and design sprints, much more like a product manager. The relevant type of ‘productivity’ is much closer to “being the type of person who can write a tightly designed, full product specification in one hour,” and, furthermore, being able to leverage AI tools to do that effectively.

Chopping wood basically means having multiple Claude Code tabs open (and maybe some Codex tabs). It’s using a chainsaw effectively, and taking it into your house to repair it, sharpen the blades, and make sure it can cut trees with maximum effect.

The Three Problems

So that’s one problem. The old wood chopper was a sleep-deprived person with a pseudo-military mindset, slamming the axe into the tree with good form. Now you’re basically supposed to have uninterrupted thought streams like a philosopher, and do so on a wood-chopping cadence to actually ship.

The type of person capable of doing this effectively is very unlikely to be the best wood chopper, so it requires a major personality rearchitecture that might be easy for young people but is typically very hard for older people.

Second is accelerating levels of abstraction. Claude Code only really became a thing this year. Agents are only just coming online. “Research Reports” and other types of tool calls are recent developments.

Going from axes to chainsaws is only the first level of abstraction. As AI self-improves on an exponential curve, you potentially don’t just replace the axe; you replace the fleet of workers, the sales cycle, and the enterprise structure behind your wood chopping org. You potentially even replace the underlying currency you were getting paid in for wood chopping services. Then you get to a point where wood can simply be grown, rather than chopped, and you’re fully abstracted.

So you’re not just moving hierarchically upwards in tool use. You’re moving up on a different axis than before, one which has never been part of any productivity framework because previously things didn’t change this fast.

The type of person who can manage this can withstand ongoing existential crises about their last 4 months of work being obsolete, stay on top of new trends effectively, ramp up on them quickly, and not get stuck in old ways. The default mindset is to fear increasing levels of abstraction because it renders your life’s work meaningless. And that mindset is almost certain to fail in this paradigm.

At some point you get one-shotted by a paradigm shift and turn into mental goo.

And third is externalities. These AI products are being sold at a loss for a reason, by companies with $500B+ valuations and relatively small revenue bases. It’s not realistic in the long run that AGI is available via an API call, for basic national security reasons alone. It’s far more realistic that the companies giving you the tools to move up these levels of abstraction are using the same tools themselves, and that you’re simply training data for this to occur.

Mark Zuckerberg isn’t spending 120% of his operating cash flow on capital expenditures for altruistic reasons.

So your work architecting your chainsaw effectively, or building an agentic automated chainsaw workflow set, is simply going to feed a future model upgrade that you won’t have access to, assuming you use the default tools everyone else does. And Anthropic, OpenAI, Microsoft, Tencent, or Meta will end up launching fully functioning proprietary agent workflows.

This story is as old as time. You’re the product, even if you’re paying $200 a month. The enterprise TOS is very clear, and while that’s not the point of this article, companies are well within their rights to make synthetic data or abstractions on top of your tool use and train on that, even if they don’t train on your data directly (which is prohibited).

So the final element is being clear-eyed about the motivations of the other people in the axe chopping contest.

The Transformation

So let’s talk through the three-part transformation required to actually see aggregate productivity accelerate at a personal level (forget the societal level for a second):

1 - Build a strong immune response to the distraction economy (which is getting rapidly stronger due to AI). Literally don’t use Roblox, Imagine, or any AI entertainment product. Become the type of person who can rapidly fire out specification docs and system designs, build tools to generate those docs, then upgrade them. Not the type of person who can just grind out task after task, but the person who can design agentic workflows (LinkedIn slop, sorry).

2 - Accept that ‘researchers’ now getting paid $100M+, or entire company exits, is a real economic indicator. “Research” is far more valuable because you’re constantly moving up levels of abstraction. So research in your personal life, and consumption of cutting-edge tech or academic papers, is much, much more important than it was.

3 - Understand that it’s a zero-sum game. The math doesn’t add up, so you necessarily have to be the product. That $75B Zuck is investing is going to get a return. You’re the product. So as you’re designing systems, moving up levels of abstraction, and researching, assume that everything you’re doing is going to get rugged unless you design it specifically not to be.

I think the reason I’m spelling this out is that the delta between the type of person who succeeds in this paradigm and the type of person who succeeded in the old one is truly massive. It’s really hard to take on massive personal change, but it is likely the only way to make it. At a societal level, the difficulty of this transformation is why the gap between productivity growth and AI consumption is likely to persist. I am trying my hardest to build tools and methods that align with this, and I will share them as soon as they’re ready for public consumption.