Intelligence Margins and The Distraction Economy
Anyone who has spent a serious amount of time in entrepreneurship - especially post-COVID entrepreneurship - should understand that working alone, locked up at home, is quite difficult. The rituals of the office. The separation of work and home. Things taken for granted turn out to be quite useful.
The digital economy provides endless distractions that have been optimized, through billions of dollars of investment, to drive addiction. YouTube videos. Pornography. Games. Television. Customized social media algorithms. Workplaces install blockers on these technologies and rely on social norms to keep them out.
Every major big tech company has mandated a return to the office. Despite climate pledges (commuting is a major source of emissions in developed countries). And despite the fact that they themselves sell remote work software. Because ultimately, Big Tech mostly delivers distraction. And they understand that their products are too good. There may be exceptions. But in general, individuals at home will dawdle the day away.
Think about it. The world’s smartest people are working around the clock to ensure you keep scrolling. What technology do you really have to resist them? Journaling?
People assume that AI will result in productivity gains. But this ignores that AI primarily boosts the distraction economy. Think of it this way. AI value is the extent to which you can spend money on a token of GPU compute and generate revenue from it. Call that the "intelligence margin".
For example - if Midjourney charges $20 per month and needs to generate 20 images per user to keep them subscribed, and those images cost $10 in compute, its intelligence margin is ($20 - $10) / $20 = 50%. As the cost of GPU compute goes down and demand for images goes up, intelligence margins expand. Similarly, if an AI robotics company requires $100 of GPU spend to flip a hamburger with real-time vision, and the hamburger sells for $8, it would have a deeply negative intelligence margin. This contrast helps explain why digital products are easy wins with high margins - which benefits the distraction economy.
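To make the arithmetic concrete, here is a minimal sketch of the calculation. The numbers are just the illustrative figures from the examples above, and the intelligence_margin helper is a name used purely for illustration, not code from any real company or product:

```python
def intelligence_margin(revenue: float, gpu_cost: float) -> float:
    """Fraction of revenue left over after paying for the GPU compute behind it."""
    return (revenue - gpu_cost) / revenue

# Hypothetical image-generation subscription: $20/month of revenue, $10 of compute.
print(intelligence_margin(revenue=20.0, gpu_cost=10.0))   # 0.5   -> 50% margin

# Hypothetical burger-flipping robot: an $8 hamburger, $100 of real-time vision compute.
print(intelligence_margin(revenue=8.0, gpu_cost=100.0))   # -11.5 -> deeply negative
```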
Given that the market is rewarding AI companies - because it's clearly the future - companies with high intelligence margins already have the highest valuations and market caps. If you look at the AI products active today, it's far easier to use AI to make an algorithm more addictive, manufacture generative porn, or customize video games than it is to deliver self-driving cars. Interfacing with the real world has low intelligence margins.
This seemingly mundane fact flows through to the aggregates. Generative AI has been out since 2022, with many economists claiming it would cause productivity to ramp. As of September 2024, productivity per hour is growing 2-2.5% year on year - hardly a panacea. 2025 deficit spending will be 6.1% of GDP - near World War II highs - while GDP growth is expected to decelerate from 2.4% in 2024 to 1.7-2.0% in 2025. That growth is not even enough to cover the interest on the existing debt load. So even though AI makes companies more profitable - because the distraction economy has higher intelligence margins - it doesn't ramp actual GDP.
From an individual perspective, the question is: do you want to be another failing statistic in the aggregate? Or do you want to break free and make real wealth?
This led me to ask: how do you raise your intelligence margin by working for yourself, instead of working against yourself resisting distractions?
The Implementation Problem: Complexity
I realized that the problem with motivation is that it mostly involves yelling at yourself. You are working against yourself. When your willpower dwindles, or you get a bad night's sleep, you get served an ad for an addictive product. You start consuming it. And next thing you know, addiction upon addiction is stacked. And you have to reset, lose the weight, quit the habits, and so on. An endless loop.
This is the essence of our current economy. It's why the number of diet books is up and to the right, along with the obesity rate. The diet books work if you implement them. But people don't implement them, because it's more profitable to sell addictive food than it is to convince people to quit. So the persuasive power of the demons is higher than that of the angels.
One of the core reasons for this is the sheer number of habits required to live an effective life. You have to go to sleep at the right time. Work out at a regimented time. Have planning sessions regularly. Implement deep work regularly. Reach exhaustion or progressive overload in workouts. Regularly listen to your partner. And so forth. If you rely on 10 habits to deliver "life performance", it only takes 1 of them breaking down for you to fall into an addictive habit loop. And then - so often - the entire house of cards comes down because you get demoralized. You gain belly fat because you didn't work out properly, and next thing you know you're fighting with your spouse.
The question that needs to be addressed to raise individual intelligence margins, then, is: how can I simplify this as much as possible so it's not fragile? How can I make something that is indifferent to the distraction economy?
The Implementation Solution: Simplicity
Thus - I realized the question is “how can I reduce the 20 habits I need to be effective working for myself into 1 habit”?
The answer to how we can resist AI-manufactured addiction is somewhat obvious once you state it. Use AI.
Yes, AI systems have a dark side. But they have one really big benefit. They're not human. They are not susceptible to vices or advertising. And even better - when provided with deterministic context and prompts, they converge on uniform answers. Those uniform answers are, for the most part, very good and extremely useful - as anyone writing code will tell you.
Unlike our internal voices, AI answers come augmented by default with the huge amounts of academic literature, philosophy, and principles embedded in their training data.
Thus the hack. Rather than maintaining 20 habits, you can upload the 20 habits you want into an AI system and convert them into 1 habit: listening to the AI, and agreeing to mostly abide by its recommendations. Ultra high net worth individuals have done this for decades. Paul Tudor Jones, for example, reportedly paid Tony Robbins over $1 million a year to coach him personally and keep him on track. And most ultra wealthy people have a whole staff of personal trainers, coaches, dietitians, and experts that minimize the amount of willpower they personally need to exert to perform at the top level.
But most people don't generate enough money with their activities to justify this. So it's an intractable problem. "Sure would be nice to hire Tony to psych me up every morning, but I have to settle for buying his book." Which of course doesn't work nearly as well, because it relies on you working against yourself - and against all the addiction economy engineers.
The Post Fiat System
So this is all abstract. You probably get it. “Okay, I listen to AI.” But how do I actually make sure this works?
I built the Post Fiat system as an answer to this. "Fiat" is all about authority: someone yelling at you to go to work, or the government telling you what to do - a system with predictable results: endless, increasing debt and deteriorating standards of living. Post Fiat imagines what comes after that. How can we work for ourselves instead of against ourselves, without authoritarian measures? How can we be truly free in a world where there's addiction at every turn?
It’s taken me time to architect a system that is “good enough”. I’ve spent the last year doing it, making tweaks and building software. I won’t lie - my software improvements aren’t what has made the system usable. It’s the improvement in model intelligence.
Before I get into the specifics, a word on human agency: the idea here is sort of like Uber, or Google Maps, for your life. You don't say you've lost your free will because you're not driving the Uber. Your free will is mostly setting the destination and the time, and letting software route you effectively. If anything you have more freedom, because you don't have to focus on the road. The concept here is to let you say what you want and let AI help you get it.
I’ll spell the basics out here:
- **Context Doc.** You have a context document. It contains your big-picture goals, your approximate strategy for reaching them, the tactics that flow from that strategy, and an idealized daily schedule of habits that support completing your tactics.
- **Context Doc Editor.** You have a Discord tool where you can chat with AI agents that interact with this document. The main interaction is !document_rewrite, which gives you line-by-line suggested edits to ensure your context document is internally consistent. It also offers suggested additions. The goal is to offload a lot of the planning to AI, so that you're mostly accepting and rejecting edits rather than coming up with everything yourself.
- **A task management system.** In the Discord tool you can request tasks. An LLM ingests your context doc and chunks it down to a single task proposal, along with a reward. I created a native cryptocurrency called Post Fiat (PFT) to make the rewards quantitative. You accept a task, flag its initial completion, submit verification, and receive a reward - or you can refuse or drop a task. The system looks at your verification evidence to determine the reward objectively, which keeps the process honest and rigorous. (A rough sketch of this loop appears after this list.)
- **A native cryptocurrency.** Post Fiat (PFT) is currently a token on the XRP Ledger. It's useful because it stores all the memos from this system in an immutable way that can be privacy-preserving (or not). It also makes the economic rewards of following your will concrete rather than abstract - and fungible, so they can be shared with other community members. The non-private mode adds a welcome sense of accountability to your actions - and permanence, which I find more meaningful. But to each their own. A completely private system is also possible, and I'm working to build that for other users (it requires Diffie-Hellman based encrypted messaging).
- **A thought logging system.** Over time, in addition to your task completions and context document, it's helpful to formally log your thoughts and experiences in response to the system. Are you burned out? Are the tasks incorrect or useless? This loops back into the system to augment the context document - logged thoughts get fed to the Editor, which suggests formalizing them in the doc where needed.
- **On-request recommendations.** The system looks at your context, your Post Fiat generation, and your logs, and outputs a near-term tactical recommendation on how to raise your Post Fiat generation, along with motivation. It also quantifies how well you've completed tasks over the past week and grounds its advice in that framing. Is it time to accelerate existing momentum, or to get out of a rut?
- **On-request blind spot recognition.** The system ingests all relevant information you've provided and identifies internal inconsistencies and hidden thought processes that are likely hindering your progress.
- **30-minute recommendation engine.** Every 30 minutes, the system pulls in your context document and gives you advice on what to focus on, based on your schedule and calendar obligations. This keeps pulling you back into the flow state and helps you stick to commitments - almost like a digital advertisement.
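To make the task loop concrete, here is a minimal sketch of how the pieces fit together. This is not the actual Post Fiat codebase - every name in it is hypothetical and the LLM calls are stubbed out - but it shows the shape of the flow: the context doc gets chunked into a task proposal with a PFT reward, you submit verification evidence, and the evidence is graded before the reward is paid out.

```python
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    reward_pft: float          # reward denominated in PFT
    status: str = "proposed"   # proposed -> accepted -> verified

def propose_task(context_doc: str) -> Task:
    """In the real system an LLM ingests the context doc and chunks it into one task."""
    # Stub: pretend the model picked the first actionable line of the document.
    first_line = context_doc.strip().splitlines()[0]
    return Task(description=f"Make progress on: {first_line}", reward_pft=50.0)

def verify_and_reward(task: Task, evidence: str) -> float:
    """Grade the verification evidence and scale the payout accordingly."""
    # Stub: the real system would have a model score the evidence against the task.
    score = 1.0 if len(evidence) > 40 else 0.3   # crude stand-in for rigor
    task.status = "verified"
    return task.reward_pft * score

context_doc = "Ship the v1 landing page.\nWork out 4x per week.\nBe asleep by 11pm."
task = propose_task(context_doc)
task.status = "accepted"
payout = verify_and_reward(task, evidence="Deployed the landing page; link and screenshots attached.")
print(f"{task.description} -> {payout} PFT")
```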
This is a pretty comprehensive system, and it's taken over 6 months of hard work to fully implement. The biggest problems with the old models were that the 30-minute recommendations were not very good, or were high friction, and that the suggested tasks didn't align well enough with the context document. This drove frustration and churn. However, the new models seem to address both of these issues without further work on my part.
Now - the system is smooth and I enjoy reading its outputs.
Where It’s Headed
As the distraction economy gets more and more addictive - it becomes imperative to figure out how to operate. And how to maximize our own intelligence margins.
With Post Fiat I'm trying to build something that lets you work for yourself, instead of working against yourself. By letting you have 1 habit, instead of 20. Just do what the system says. If you don't like it, commit to improving the system until you do.
If the system isn't producing tasks you like, or isn't aligned with your objectives, the commitment is to iterate on logs, context documents, and prompts until the output is solid and improves your intelligence margins.
There’s a potential criticism that this will make you “incapable of making your own decisions”. But over time, I’ve found the system actually improves your own ability to introspect and make your own decisions. Because you constantly have an objective partner and are forced to articulate your thought process. That becomes a habit. And you learn to predict what the models will say before they provide guidance. Judgment becomes a muscle you can train.
The automation of the system simply lowers the willpower required to implement it - the same way that digital advertising lowers the willpower required to buy a product.
Post Fiat isn't about doing everything you want - caving to every distraction, chasing every shiny thing. It's about doing what's needed to reach the future you value: using AI systems to help you reach your own goals, instead of letting them addict you and make you a servant to corporate interests.
That's a key point, and it should overcome the ethical objections. There's an assumption right now - in the status quo - that AI isn't overwhelming your personal decision making. That is clearly not the case, as evidenced by the rise of TikTok, pervasive porn addiction, and declining sleep quality across the developed world. AI is being applied to subvert your willpower, craft messaging that sends you down paths unrelated to your true calling, and turn you into subscription revenue. The Post Fiat system isn't just about setting your own direction. It's about playing defense.
In a technological economy, you need to use technology to maximize your own agency. Don’t go to a gunfight with a knife.
Not everyone can afford Tony Robbins pepping them up every day to reach peak performance. The system I’ve designed here costs me $100-200 per month in GPU spend, and that number is down 50% year on year. It’s not cheap, but it’s a low price to pay for a simplified imperative.
I don't expect everyone to adopt this system - especially not now. I've built it for myself and am using it actively in the Post Fiat Discord. The Post Fiat experiment is small, but given the hyper-acceleration of the distraction economy, I expect it to grow.
Don’t work against yourself. Work for yourself. Do as ye will. But actually do it.