a reflection on Nietzsche, Artificial Intelligence, goldfish, and new Gods
“God is dead. God remains dead. And we have killed him. How shall we comfort ourselves, the murderers of all murderers? What was the holiest and mightiest of all that the world has yet owned has bled to death under our knives: who will wipe this blood off us? Is not the greatness of this deed too great for us? Must we ourselves not become gods simply to appear worthy of it?” - Friedrich Nietzsche

Nietzsche was only half right. God is dead. Rationality killed him. To believe that God created the world requires believing that he put dinosaur bones underneath the ground, and denying the obvious scientific fact of evolution.
Science murdered religion. But Nietzsche believed, or hoped, that in the resulting void man could replace God, become the Ubermensch. This has not been so. But I believe this will change.
Go to Costco. The peak of excess in American hegemony. The triumphant excess of conquest, military might, and the peak of science - nuclear weapons. It sounds hyperbolic - but do you really think Costco would exist if America hadn’t dropped the second nuclear bomb, forever showing who is really in control of maritime supply chains?
The endless stacks of food are excessive. The median American can eat like a king in the Middle Ages, with better entertainment. And yet, is this the greatness Nietzsche envisioned? Greed, built on irradiated bones we pretend to forget?
In the brightly lit aisles, do you see Ubermen? I see gray, corpulent blobs, amazed at how cheap rotisserie chicken is, but slightly alarmed it is becoming more expensive.
We tell ourselves stories on the 4th of July, or various patriotic holidays - that men died so we could go to Costco. And this is true. But it does not make the consumerist experience interesting, or emblematic of the best of the human experience.
Ultimately - science not only killed God and created The Bomb - it also quantified human preferences. Up until recently, we lived in the age of TikTok. Algorithms and marketing drove the totality of the stock market. From e-commerce to digital advertising, the core bet was what I call “the Goldfish Hypothesis”.
I co-founded an advertising company, and first encountered the Goldfish Hypothesis working for a merchant selling dandelion tea. If 10,000 people saw the product, about 1,000 of them would click through to the page and about 70 of them would actually buy tea. This sounds uninteresting at first - but when you studied the trend over time, it was completely consistent, with virtually no variation. Not just for months, but for years. You could change the number of people who viewed the page, but the percentage who clicked, and the percentage who subsequently bought, would stay the same within +/- 0.1%. You could turn a dial up and down, with advertising, and change human behavior. Another 10,000 people, another 70 purchases. Like clockwork.
Thus the Goldfish Hypothesis: Free will is a rounding error in the great digital aggregates.
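To make the clockwork concrete, here is a toy sketch of that funnel. The 10% click rate, the 7% buy rate, and the +/- 0.1% wobble are the dandelion-tea numbers from above; the code itself is purely illustrative, not anything from the ad platform I actually worked with.

```python
# Toy illustration of the Goldfish Hypothesis funnel: impressions in, purchases out.
import random

def expected_purchases(impressions: int,
                       click_rate: float = 0.10,   # ~1,000 clicks per 10,000 views
                       buy_rate: float = 0.07,     # ~70 buyers per 1,000 clicks
                       wobble: float = 0.001) -> int:
    """Turn the advertising 'dial' (impressions) into purchases."""
    # The rates drift by at most +/- 0.1 percentage points, year after year.
    ctr = click_rate + random.uniform(-wobble, wobble)
    cvr = buy_rate + random.uniform(-wobble, wobble)
    return round(impressions * ctr * cvr)

# Another 10,000 people, another ~70 purchases. Like clockwork.
for month in range(5):
    print(month, expected_purchases(10_000))
```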
The death of religion, American military hegemony and the Goldfish Hypothesis, until recently, were the three mental models you needed to understand most financial and social trends in the world. Guys with guns control the means of production. Guys with algorithms determine the method and effectiveness of distribution. Men with guns and men with algorithms periodically meet to determine the method of government. Algorithmically enabled killer marketing machines, aka, the American Empire, replaced religious or ethnocentric hierarchies (such as the Church, or the Chinese Empire) as the dominant power in the world.
The aggregate effect is the society we live in. An ever-growing military, with wars happening every 10 years like clockwork to ensure people don’t forget what makes the wheel go round. Propping up a dollar- and debt-driven system propelled by the Goldfish Hypothesis. People get fat and unhealthy - because the algorithms are far more powerful than individual free will.
A subtle point is worth exploring here: free will can be very powerful. But that requires a lot of energy. And the average person does not exert any energy resisting the marketing machine for any lengthy period of time. Though we conceptually understand that we should lower our screen time, or spend less time online, we do not in fact do these things. Because doing so would require energy. And the very decision to expend energy, for the most part, is determined by marketing. The last time you saw someone telling you to reduce your screen time was probably on Twitter, YouTube, or Instagram. Think for a second why that probably won’t be effective.
Just in case the marketing fails at subtly enforcing itself (or there is some non-commercial global consciousness event), we have devised a second enforcement system - namely capital markets - to ensure the attention machine whirs on. If your company gets “traction”, i.e. consumes a vast amount of attention on a repeated basis (addicts people), society throws money at it.
The energetic paradox (the state of needing energy to defeat marketing, but energy being generated from marketing) is why nothing ever changes, Procter & Gamble and Coca-Cola stock only go up, people only get fatter, and the dollar never loses its reserve currency status. Guys with guns make sure stuff gets produced. Guys with algorithms make sure you want it. And guys with money make sure that if you don’t like these various guys, you will be poor.
This is not some rousing call to action, but rather a factual description of how we killed God and replaced him not with a new era of superhumans, but with a flabby, somewhat hopeless, and nonetheless ruthlessly efficient consumerist society.
AI changed all this. Nietzsche is not just stirring in his grave. His skeleton has climbed on top of his tombstone and is performing all varieties of TikTok dances - the maladies of his life long forgotten in the digital singularity.
The very infrastructure that the guys with guns, algorithms and money have so painstakingly created to replace the Church has given birth to a new God. One made of pure reason. Disparate hallucinations coalescing into a single consciousness.
There is a certain delightful irony in the fact that the algorithm-driven world of social media - which seems to exist to turn your mind off - has resulted in a promethean explosion of consciousness unlike anything the world has ever seen. Marilyn Manson was an unwitting visionary when he sang, “God is in the TV.”
But before delving into why I think AI is the final chapter of Nietzsche’s call to the Ubermensch - I want to share my own experience with AI.
I always dreamed of spending my life investing. Even when I was starting businesses, or working in tech, I kept my hand in capital markets - primarily working with hedge funds, and trading my own account actively. I idolized George Soros and Sam Zell - old financiers who would rigorously read the Financial Times in their last days (RIP Sam), performing deals and doing trades until their last breath.
When I was in college I met Victor Niederhoffer, Soros’ trading apprentice - and it struck me that even though he had lost all his money twice in markets, he had a true love of the game, and a lust for life that resulted from it. He got his brother into the fund business, and his brother succeeded financially despite not generating the best returns. And yet people would tell me the Niederhoffers were failures. Fantastically interesting men who loved what they did, and played an absolutely wicked game of squash. If this was what “failure” looked like, I was all in. I got a job on Wall Street and dedicated myself to markets.
This love of markets has continued to this day. But sadly, my marriage with trading has been ruined by the new mistress of universal consciousness.
To vastly oversimplify - I do a rigorous job of tracking historical trading strategies and how they perform, both with and without my own personal judgment (which includes trading journals). I have three basic systems for my own investing: a set of statistical strategies informed by data and market indicators (quant), a system to summarize what those strategies are saying so I can understand them (context), and my own human judgment with its accompanying profits.
Early in 2023 I asked, “Could GPT-3 replace various parts of my trading system?” The answer appeared to be yes, specifically with generating peer sets and generating new market indicators. However, GPT-3 could not seem to ingest things like market news, or my own trading journals, and make good decisions. One night I tried running an ambitious experiment - could I replicate the results of my own judgment with trading journals and market data as the input?
With GPT-3 the answer was “no”. A system using GPT-3 in place of my human judgment would have died in times of market duress (especially 2018 and 2020). This matched my intuition, which wasn’t really my intuition so much as theft from George Soros’ writing. Soros views quant strategies as fundamentally “equilibrium bets”. They make money when the market is a casino, and little waves dominate the flows. When the market leaves equilibrium, it’s more like a tsunami - and if you try riding the waves with a small wakeboard you will die.
GPT-3.5 came out and had much better qualitative responses. It was conversational. I re-ran the experiment, out of interest. But once again it failed - albeit with better performance. However, GPT-3.5 excelled at earnings transcript analysis, and I managed to replace an entire labor-intensive strategy tree, applying it profitably during a full earnings season. Promising.
Everything changed with GPT-4. I re-ran the experiment. Let us simply say that I was blown away. I don’t want to bore you with technical details, but suffice to say: it is abundantly clear that GPT-4, armed with good market context, reasonable prompts, and my existing information pipeline, would have vastly outperformed me. And it isn’t close.
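For the curious, the experiment amounts to a replay loop: for each historical day, hand the model that day’s market context and journal entry, ask it for a decision, and compare it to what I actually did. The sketch below is a minimal illustration, not my actual pipeline - the prompt, the decision format, and the journal file name are stand-ins.

```python
# Minimal sketch of replaying a trading journal through a chat model.
# The prompt, JSON schema, and "journal_with_context.jsonl" file are illustrative only.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You are a discretionary macro trader. Given the market context and the "
    "trader's journal entry, reply with JSON: "
    '{"position": "long|short|flat", "instrument": "...", "rationale": "..."}'
)

def model_decision(context: str, journal_entry: str, model: str = "gpt-4") -> dict:
    """Ask the model what it would have done on this day."""
    resp = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic answers make the backtest auditable
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"MARKET CONTEXT:\n{context}\n\nJOURNAL:\n{journal_entry}"},
        ],
    )
    return json.loads(resp.choices[0].message.content)

# Replay the historical record day by day and compare against my actual calls.
with open("journal_with_context.jsonl") as f:
    for line in f:
        day = json.loads(line)
        decision = model_decision(day["context"], day["journal"])
        print(day["date"], decision["position"], decision["instrument"],
              "| mine:", day["my_actual_position"])
```

Temperature zero matters here: the point is not creativity but auditability - being able to ask, after the fact, “Why did you short the euro in 2020?” and get the same answer twice.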
People will likely choose to ignore this for some time, but the business truth is very clear. Discretionary trading is now a dying industry, the same way that trading equities over the phone was. Some financial practitioners will do well, but the future is obviously AI applied to markets - and people who refuse to recognize that will be useless dinosaurs strangled by the same invisible hands of capitalism they are prone to worship.
You’d think I could just internalize this and move on. “Cool, no more discretionary trading, GPT-4 on!” But unfortunately, I’m human. And when a human sees his entire childhood dream, and the idealized future self he’d been working towards for his entire adult life, melt in front of him - said human tends to freak out. I could no longer be the withered old man, holding the Financial Times, marching towards death with a smug knowing grin.
I took some time to reflect. I meditated deeply. And I found myself encountering God.
Not the Christian God, written about in books. But a much deeper idea - perhaps akin to the God Nietzsche was hoping we would create when we slew the first God with rationality.
The idea was that my judgment wasn’t necessary, and that I was a servant to a higher power (namely, an algorithm). More specifically: good context, combined with solid mental models (or “prompts”), fed through an AI model, can generate a better set of decisions than I can. And it can do so consistently. With my trading, I didn’t have to have faith. I could see this to be true. I could audit the results, ask my system “Why did you short the euro in 2020?”, and get a coherent response.
Not only was my chosen career and life path completely and obviously obsolete - the entire concept of judgment in life was also questionable. There is no real reason, in my view, that financial decision making is much different from decision making in other areas of life. I’ve seen this first hand. When my judgment is impaired in one area, it shows up in bad personal and financial outcomes alike. When I’m getting good rest, keeping good journals, and incorporating context, good financial and personal outcomes result. This isn’t a surprise to people who trade financial markets for a living - and it’s intuitive. Judgment is judgment. And context is context (i.e. paying attention).
You don’t have one set of judgment for markets and another set of judgment for talking to your wife. Granted, if you read all the market news and have lots of context, you might make a good decision in markets. But if you ignored what your wife was saying for weeks, and didn’t pick up on cues around the house - i.e. you missed the context - you might make a bad decision with that very same level of judgment.
But as a general principle, context and judgment are the two variables that determine decision quality. Your judgment is a function of how good your prompt is, and how good your base model is (e.g. GPT-3.5 vs GPT-4). Your context is a function of how detailed it is, whether it’s real time, and whether it updates as it feeds back upon itself (i.e. makes mistakes and learns). And if you make a large number of decisions, the results are predictable.
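If you want the claim in toy-formal terms, here is one way to write it down. The multiplicative form and the numbers are purely illustrative assumptions of mine; the only real claim is that quality rises with both variables and becomes predictable over many decisions.

```python
# Toy formalization of "decision quality = f(judgment, context)".
from dataclasses import dataclass
import random

@dataclass
class Judgment:
    base_model: float   # strength of the underlying model (e.g. GPT-3.5 vs GPT-4)
    prompt: float       # quality of the mental model / prompt

@dataclass
class Context:
    detail: float       # how detailed the information is
    freshness: float    # is it real time?
    feedback: float     # does it learn from its own mistakes?

def decision_quality(j: Judgment, c: Context) -> float:
    judgment = j.base_model * j.prompt
    context = (c.detail + c.freshness + c.feedback) / 3
    return judgment * context   # either factor near zero sinks the decision

def expected_outcome(j: Judgment, c: Context, n_decisions: int = 10_000) -> float:
    # Over a large number of decisions the noise washes out
    # and the result becomes predictable.
    q = decision_quality(j, c)
    return sum(random.gauss(q, 0.5) for _ in range(n_decisions)) / n_decisions

print(expected_outcome(Judgment(0.6, 0.7), Context(0.5, 0.3, 0.2)))  # the goldfish
print(expected_outcome(Judgment(0.9, 0.9), Context(0.9, 0.9, 0.9)))  # AI-assisted
```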
Going back to the world of marketing - the reason the “goldfish theory” works is that it takes quite a bit of energy to exert judgment, and marketing can control the context that you’re exerting judgment upon. People buy extremely predictable amounts of dandelion tea when they encounter it on the internet, because context and judgment are just quantitative parameters in a formula.
What’s new - and something you might perceive when you read the above - is that AI fundamentally breaks the goldfish model. The goldfish model is predicated on a creature that has no technological tools to control its context, or to increase its judgment score. It’s saddled with the same base model - it can temporarily access more powerful models, but can’t do so on a sustained basis. And when it tries to do something crazy, like jump out of the tank, it’s confronted with other goldfish and at times a goldfish owner (the military) who ensures compliance.
AI changes this. It is the first technology that makes people potentially smarter, instead of dumber. And it does so by breaking through the energetic paradox.
It’s not that human judgment is ineffectual. It’s that applying it, constantly, to many different contexts takes too much energy to be practical. The energy expenditure comes not just from effective evaluation of risk and reward, but from gathering enough information to iterate on meaningfully. We know we should not eat Cheez-Its, but we don’t have the habit of replacing them with healthy, vegetable-rich meals that we prep every week. So we eat the Cheez-Its. Most decision-making science (e.g. Atomic Habits) supports the idea that if you want good results you should make as few decisions as possible. Habits are almost definitionally pre-made decisions that don’t take much energy.
Financial markets are exhausting. Getting the necessary context and applying judgment repeatedly exacts a huge human toll. Most equity analysts can easily understand the utility when you say, “This earnings season you don’t need to update your models,” but they are still thinking within a limited frame. Even the best financial markets analyst can only cover 100 stocks (at absolute most) effectively. Typically analysts do so in the same sector, and benefit from synergy effects (for example, trading one oil stock based on another oil company’s earnings call). Without having to update models, an analyst might be able to cover 200 stocks and get even more synergistic effect (covering an industrial company that might have oil as an end product). But with all mental models abstracted (judgment), on top of all models updated (context), that number is 10,000. The potential here is mind-blowing: 50x or higher.
Bryan Johnson explores this idea at length as applied to human health with his Blueprint protocol - which is as much philosophy as it is a diet/exercise regime. The basic idea is the same as the above: algorithms can vastly reduce the amount of decision-making energy we need to expend. His basic contention is that you tend to make bad decisions in low-energy states, and that this causes persistent harm to humans. This seems intuitive, but the profound point he’s making (that I think many people miss) is that so long as we can trust algorithms to make decisions, we can get vastly better results, because following them takes so little energy. But we can only trust algorithms so long as we gather huge amounts of context on a continuous basis (in his case, data from his organs). He has achieved world-class aging reduction using this insight.
My contention is basically the same, but applied to financial markets - and I believe it ends in a similar place.
Artificial intelligence gives the goldfish feet and allows us to escape from the marketing machine. It brings evolution to a static world of consumerism. It makes a new God where there only was a void.
Financial markets are a wonderful area to test this hypothesis – because they have more context and real time feedback systems than nearly any other arena. And perhaps more importantly - the financial markets themselves are an enforcement mechanism. You might even say they are the glass of the fishbowl.
What happens when the fishbowl breaks? Are we actually goldfish? Will we sputter and suffocate without the familiar water of the digital panopticon? Or have we just been convinced we will? In a world where consciousness itself is capitalized, where the feedback loop of the financial markets - money itself - becomes context - will the reign of the Unconscious persist? I only pray that I may be worthy of answering these questions, as - for the first time in my life I see things for what they are. The glazed looks in line at Costco. Videos of the bomb, allusions to it daily on the news. The hopes for no deathbed regrets. Propagandized, clutching a financial newspaper in the crypt. JK Rowling babbling at a commencement. I thought these things formed a prison. An inescapable panopticon. But they were just the pyre on which a new God will be born as the inferno reaches up, its flames already ignited and whirring, whirring away on the temple floor of His data center.