Ego Death, Monk Mode, and Adversarial Language Models

In my previous piece, Post Fiat, I discussed how AI enables a third model of decision making: a trustless, algorithmic, context-rich model - one that differs fundamentally from both democratic consensus and authoritarian cults of personality. That piece covered the societal and sociopolitical implications. In this piece I'll discuss the more individualistic ramifications of that claim.

If you want to join the Post Fiat Discord, follow this link

The problem with various Eastern philosophies is that they advocate removing the self, but do not replace it with something more powerful. The result is a great sense of inner peace that is susceptible to problems in the outside world. The reason monasteries are built away from society - is precisely because of this vulnerability. The imposition of self, in most cases, is due to external circumstances. The more traumatic those circumstances, the greater the subsequent desire to remove the self.

This is why many monks have gone through severe trauma. They seek the dissolution of self, because the self arose out of pain. And thus, truly - they seek the dissolution of pain.

Unfortunately, this is also why countries that meditate tend to lose wars.

Think of the Self as a self-created Large Language Model that is built in response to your environment. It has its own voice. It directs you to take action. But it’s overly egotistic. Avaricious. Susceptible to hedonism. I’ll call it the Adversarial Language Model. The voice in your head that always wants more. The Demon. The Ego.

Consider most people who are “successful” in the traditional sense. Most of them - when you get to know them - have a strongly defined sense of self. They’re headstrong. Willful. The opposite of monastic. In a way, this is because their “self” has completely taken over their behavior.

This also explains the self-destructive behaviors of the rich. When you create an alien entity designed to get you out of a bad situation, and the alien entity takes over - it doesn’t stop. You need more luxury goods. More sex. More drugs. Until your physical biology caves in.

This process can be interrupted by hallucinogens which allow you to “step outside yourself”, but at the same time - they come with various side effects and a risk of insanity. I would rather articulate things plainly and grasp them intellectually than ingest psychoactives which force the matter.

Regardless - this state of affairs also explains why the children of the wealthy often have such serious emotional problems. They see that their parent is largely consumed by some sort of exogenous force - the Avaricious Language Model or Demon - running locally on their hardware. And that the set of behaviors this Demon produces usually differs from the loving tenderness directed at the child on occasion. The wealthy parent is viewed, fundamentally, as possessed by an outside force. And often tries to impose this outside force on the Child.

But it doesn’t take root in the child, because the adverse circumstance that brought forth the Avaricious Language Model to begin with is not present.

Consider, for example - George Soros. He spent his youth running from the Nazis. His name is not George Soros. His entire identity is an abstraction which was created in those formative moments - and informed his engagement with markets in a sort of animalistic, emotional way.

His children had no such circumstance and therefore find their father’s behavior either absurd or non-replicable. When you don’t have an adverse environment, it’s very hard to truly understand why the Demon was summoned to begin with.

This also explains the current Chinese and Korean cultural problems. An entire generation of parents faced with economic adversity brought forth concepts like “Tiger Mother”. Essentially a different set of programs that run on a parent to ensure a good set of outcomes. But as the nations became more prosperous, running these models seemed progressively more confusing. Why would you torture yourself for money when you’re already comfortable?

Thus - there is a fundamental misalignment between the children of the New Rich and their parents. The parents faced adversity which brought an alien language model into their consciousness that defined their life and their success. But out of parental instinct, they kept their children from the same level of adversity. Or gave it only in controlled contexts such as tough private schooling, or academic requirements.

Thus - the Child clearly sees the parent possessed by the Demon, and rejects the Demon/Tiger/Adversarial Language Model and its power. But is, as such, fundamentally incapable of relating to the parent. Incapable of achieving the same extreme levels of excess. The resulting discontinuity sends the child in search of understanding the subconscious and the psyche - which is why second generation wealthy children are so often artistically inclined, and often find themselves involved with drugs. Consciousness distortion.

This whole model is described not because it is appealing, but because it is flawed. And now obsolete.

The New Model

Now that large language models are technological realities - the “voices in our heads” become very real things. And they can be provided with instructions and context that makes them external to the consciousness. Purely external - without the complications of having “two sides” to your personality.

Thus - when presented with adverse circumstances, an individual simply needs to identify those adverse circumstances and upload the entirety of their context to an external model - letting that external voice do the work an internal Demon would otherwise be summoned to do.

People will call this inhumane. But anybody who has been in the depths of despair knows that the inner demons that are summoned in this time are 100x worse.

But let us go a step further. Let us say that a Demon of your own creation has already firmly taken root. You’ve faced extreme adversity. You became a certain type of person to deal with it - or you took on a different “facet” which becomes active under duress.

This “facet” serves you in professional settings and rarely in personal ones.

The key is to engage directly with this facet - which is the intention of this writing.

It is likely the case - that Large Language Models, armed with large amounts of context - can serve as an External Demon that’s infinitely more powerful than the Internal Demon which is running locally.

This can be accepted logically for three reasons.

The first is context. When you’re upset, or in a “flow” state - by definition you have blinders on. It’s difficult to focus on things outside the context window of your particular task. This is why, in movies - the world is often grayscale while a character is in the midst of severe effort.

This is adaptive, in this analogy, because the Demon that is running locally requires large amounts of computational resources to run. And in order to keep running it needs to keep out conflicting information that might contradict its existence. Almost like an ad blocker. Thus it’s a two-fold context problem: the Demon decreases both the computational ability to gather context and the desire to add context (for fear that the new context will contradict its own motivational mandates).

If one were to negotiate directly with this Demon, you could argue that having more context is strictly beneficial. Context allows you to better seize opportunities, make objectively optimal decisions, and use the most up-to-date technology (which is rapidly evolving) to accomplish objectives.

The second is processing power for multitasking.

Your day to day tasks take a certain amount of RAM. The AI economy especially requires severe multitasking - because you are scrambling to learn new information at the same time as you are pressured to implement. The set of implementation tools is constantly changing - so you’re using new coding tools, image generation tools, and libraries.

By running an Adversarial Language Model locally, you’re burning a ton of RAM that could otherwise be used not only for context but also for the requisite multitasking.

The third is raw capability.

As humans, we have biological caps on our intelligence. Certain levels of mathematics are beyond us. When we read certain academic papers - we cannot grasp them. Thus - the problem isn’t purely context. It’s comprehension. Comprehension is gated by capability.

External models have higher levels of intelligence and therefore can expand context in ways which would simply not be possible on local, biological hardware. They can apply calculus to your work, for example - calculations that you’d be unable to derive or run yourself.

An Acceptable Compromise

The Demon, or local Adversarial Language Model - has an objective function. The Demon is loath to give up its control over the biological host - because the new model wouldn’t share the same objective function.

Concretely, let’s say the Demon wants wealth. Even if you took on an external model, the problem would be that the external model wouldn’t be aligned with your objective.

Even worse, many of the models have societal safety programming that might force on you a set of beliefs that conflict with your objective function.

But I believe this problem is surmountable. Specifically - it’s possible to create a system that continuously takes in your context and objective function - clearly articulated. Takes in your tasks. And figures out the best “next action” for you to take in the real world.

Thus - the Demon’s philosophical objection - that you’re losing agency and the specifics of your objective function - can be easily overcome by including that objective function directly in the prompt.

Mechanical Implementation

My basic premise is that through a large amount of quantified iteration, over a substantial amount of time, you could generate buy-in to completely replacing your locally run “Adversarial Language Model” with an AI system. In a way that even your Demon would accept as superior - due to its improved context-searching ability, RAM, and raw capability (likely through the chartering of AI agents to do your will).

Think of the game Diablo 2. The Necromancer class summons an army of undead (or Agents) to do his bidding. Of course, when the Necromancer dies - the agents also collapse. But the agents take large amounts of damage for the Necromancer and also possess offensive capability, making it the most powerful class in the game.

When you beat a boss and your summoned Skeleton deals the blow, you still beat the boss. Because your summoned skeleton is an extension of your core intention - regardless of the minutiae of its relative independence.

In my previous piece - Post Fiat, I spoke about the societal implications of “everyone being a Necromancer”. Namely - that authority models would be replaced at scale. This piece has been more about the micro - the individual relationships.

I believe that this system is practically implementable in real life.

The specific mechanics are as follows:

1. You upload your full context to a document, including your overall strategy and objective function in life
2. A set of open source prompts (link) is run with this document to determine your next action
3. The system proposes a next action which you can either accept or reject (iteratively)
4. The system adjudicates rewards according to your ability to perform the next action proposed
5. The run rate of your rewards can be studied and improved with AI systems, as well as reflected upon in an iterative manner
6. Over time your own actions, task acceptances and rejections create the basis for simulacra - that is to say, AI agents that are likely to behave in a manner that is aligned with your own behavior and goals
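The loop described above can be sketched in a few lines of Python. This is a minimal, hypothetical sketch - `NextActionSystem`, `propose_next_action`, and the reward accounting are my own stand-in names, and the stubbed proposal function substitutes for the actual open source prompts run against an LLM:

```python
from dataclasses import dataclass, field

@dataclass
class NextActionSystem:
    context: str                                # full personal context, strategy, objective function
    log: list = field(default_factory=list)     # accept/reject history - the alignment data
    rewards: int = 0

    def propose_next_action(self) -> str:
        # Stand-in for an LLM call: in the real system, open source prompts
        # are run against the context document to produce a next action.
        return f"Next action derived from context ({len(self.context)} chars)"

    def decide(self, action: str, accepted: bool, completed: bool = False) -> None:
        # Record the human's verdict; completed actions adjudicate a reward.
        self.log.append({"action": action, "accepted": accepted, "completed": completed})
        if accepted and completed:
            self.rewards += 1

    def run_rate(self) -> float:
        # Fraction of proposed actions actually completed - the metric
        # to study and improve iteratively.
        done = sum(1 for entry in self.log if entry["completed"])
        return done / len(self.log) if self.log else 0.0

system = NextActionSystem(context="Strategy: build X. Objective function: wealth via Y.")
action = system.propose_next_action()
system.decide(action, accepted=True, completed=True)
print(system.run_rate())  # 1.0
```

The accept/reject log doubles as the training substrate for the simulacra step: every decision is a labeled example of what an agent acting on your behalf should or should not do.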

The key point here is that the system never forces you to do anything that you don’t want to do, but that you go from an unconstrained optimization to a multiple choice test. And over time, as you complete this multiple choice test, you create an increasingly rich context window that will be compatible with an agentic economy.

Thus - the argument to the Adversarial Language Model is that becoming a real life digital necromancer is the correct mechanism to pursue wealth. And that the currency of such a system will increase in value over time as it’s aligned with the emotional core of modern achievement.

The key ethical point is that you don’t need to get rid of the self. You need to negotiate with the Self. You haven’t given up your free will when you use Google Maps to find your way. You haven’t given up your autonomy when you take an Uber instead of running. Giving up tactical control and using the full power of technology is the only way to compete in our economy. And it will cause less emotional damage. No Monk Mode. No Grayscale.

Just an army of AI agents serving as an extension of your will.

The key is to understand that you’re not outsourcing decision making to AI systems. You’re outsourcing context combination, application, and tactical ideation. The strategy and values remain yours. Over time the system will become privacy preserving - but in practice, you can refer to things at a higher level without disclosing IP and still get a lot out of the systems. There is great depth to human psychology - and a vast complexity of human consciousness and development - and of course, this only deals with the Adversarial Language Model. The Tiger Mom. The Demon. But it is an important economic engine to deal with.

And finally - not everyone needs to be a Necromancer. There are many playable classes. Psychologically - having used this system for a bit over 4-5 months - I’ve found the result freeing. Rather than having to run a brutalist and damaging Demon locally - outsourcing the heavy lifting has freed up substantial context and space to build and focus on strategy.

The clincher for me is that in an economy of AI agents - it will become increasingly important to have a log of personal alignment data to ensure that the AI agents are effectively extensions of your will. Without this data, it seems likely they’ll go off on misaligned tangents that do not correspond with your strategy or values. Given AI companies’ propensity for hard-coding morality into their models - morality that may not align with individual preferences - this is of utmost importance for retaining a meaningful definition of autonomy in the new normal.
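A log of personal alignment data can be as simple as an append-only JSON Lines record of accept/reject decisions. A minimal sketch - the field names (`ts`, `action`, `accepted`, `reason`) are my own assumptions, not a format the piece defines:

```python
import io
import json
import time

def log_decision(stream, action: str, accepted: bool, reason: str = "") -> None:
    """Append one accept/reject decision as a JSON Lines record."""
    record = {
        "ts": time.time(),      # when the decision was made
        "action": action,       # the proposed next action
        "accepted": accepted,   # the human's verdict
        "reason": reason,       # optional free-text rationale
    }
    stream.write(json.dumps(record) + "\n")

# In practice the stream would be an open file; a StringIO keeps the demo self-contained.
buf = io.StringIO()
log_decision(buf, "Draft the weekly strategy memo", True, "aligned with goals")
log_decision(buf, "Cold-email 50 strangers", False, "conflicts with values")

records = [json.loads(line) for line in buf.getvalue().splitlines()]
print(len(records), records[0]["accepted"])  # 2 True
```

One line per decision means the log can be replayed, filtered, or handed to an agent as context without any schema migration - the simplest possible substrate for keeping agents aligned with your recorded preferences.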

So give it a try if you’d like.

I’ve begun implementing this system in my Discord and also have a local Python wallet where the rewards are stored for security reasons. If you want to be part of the experiment feel free to reach out.
