Introduction

Most people are focused on AI doing work, or replacing human labor. But as I’ve used AI tools, I’ve found they possess substantial wisdom and context: they get the big picture right but often get the small details, like updating the correct numbers in a financial model, wrong.

Like many people, over time I’ve found myself asking AI for its feedback. There are two big reasons this is useful.

First - incentives. Any trusted advisor willing to go deep enough on your specific context typically has a financial relationship with you. Any startup founder who has hired lawyers knows that you need to carefully manage the tendency of your counsel to maximize their billable hours. And therapists are motivated to keep you on as a client, which inherently opposes your likely preference not to need a therapist at all. AI models have no financial relationship with you that incentivizes them to give you misaligned advice.

Second - objectivity. The average person, especially a bootstrapped founder like me, doesn’t have a big organization to get advice from. Different prompts and different models provide diverse perspectives that would otherwise be inaccessible. Going a step further, if you run a set of prompts across different models a large number of times, the results become usefully stable. You can get to a statement like “Going to the Singapore conference is an approximately correct decision with 80% confidence, +/- 5%.”

That is to say, you can make the domain of the qualitative and fuzzy explicit and quantitative. And you can do so without dealing with the politics of a corporation.
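As a minimal sketch of how such a confidence statement could be derived, assuming each run produces a yes/no judgment (the vote counts below are hypothetical stand-ins, not real model output):

```python
import math

# Stand-in votes: 1 = "go to the conference", 0 = "don't go".
# In practice these would come from repeated runs of the same question
# across different prompts and models.
votes = [1] * 80 + [0] * 20  # 100 hypothetical runs

p = sum(votes) / len(votes)               # point estimate of the "yes" rate
se = math.sqrt(p * (1 - p) / len(votes))  # standard error of the proportion
margin = 1.96 * se                        # ~95% margin of error

print(f"Approximately correct with {p:.0%} confidence, +/- {margin:.1%}")
```

Narrowing the margin is a matter of adding more runs: quadrupling the sample roughly halves the error bar.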

This got me thinking about the bigger macro picture of AI-based decision making, and thus the Post Fiat hypothesis.

Prior to AI, there were two primary types of societal decision making: Democratic and Authoritarian.

You start from a violent state of nature. You can either bet on human nature being good and try to implement a Democratic system with judicial checks and balances, or you can bet on a top-down system.

These two primary states flow through to corporations, families, and even religions. But AI, per the above, adds a third potential decision-making framework: Algorithmic.

The premise of a Democratic model is that a group of smart people can make better decisions than any individual, and therefore an individual ought to opt into a Democratic social contract.

The premise of the Authoritarian model is that democracies are easily captured by special interest groups, and that a strong man aligned with a core group’s characteristics (usually its race or culture) should be empowered to make the right choices free from the scheming and inefficiency of pluralism.

The premise of the Algorithmic model is that powerful artificial intelligence systems, armed with sufficiently large and relevant context and run repeatedly, will generate better decisions than either the Democratic or Authoritarian models.

You do not trust the collective. You do not trust the strong man. You trust the algorithm, armed with context - and the system wrapped around it.

This is the foundational premise of the Post Fiat Experiment.

My hypothesis is that AI is the future of human decision making, government, and even currency. But testing this hypothesis requires starting small.

Experiment Design

There are two foundational hypotheses I am testing.

First: can AI systems compound capital in financial markets more effectively than I could on my own? That is to say, can AI systems generate a high Sharpe ratio deploying my personal capital? At the end of the day, financial decision making is a useful subset of larger-scale decision making because it has a clear economic output. I would argue that if you cannot trust an AI algorithm to make good investment decisions, you probably shouldn’t trust it in the economic arena broadly.
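For reference, the Sharpe ratio measures excess return per unit of volatility. A minimal sketch of the standard annualized calculation (the return series is a toy example, not experiment data):

```python
import statistics

def annualized_sharpe(returns, risk_free=0.0, periods_per_year=252):
    """Annualized Sharpe ratio from a series of periodic returns."""
    excess = [r - risk_free for r in returns]
    # mean excess return over its volatility, scaled to a yearly figure
    return (statistics.mean(excess) / statistics.stdev(excess)) * periods_per_year ** 0.5

# Toy daily return series (illustrative only)
daily = [0.001, 0.002, 0.000, 0.001, 0.002, 0.000, 0.001]
print(round(annualized_sharpe(daily), 2))
```

A sustained Sharpe of 4 implies returns roughly four times their annualized volatility, which is why it serves as a demanding acceptance threshold.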

Second: can AI systems reliably define my workflows in a way that generates improved productivity, measured both objectively via algorithmic reward and in the context of the first goal (compounding capital at a more rapid rate)? If AI systems cannot improve the day-to-day reality and productivity of work, and cannot recreate corporate structures, what hope do they have of being something bigger?

I built a token on the XRP Ledger, called Post Fiat. There are two nodes on this network, corresponding to the two hypotheses. First, the AGTI Node (Artificial General Trading Intelligence). This node primarily operates in financial markets and, among other things, allows caching financial indexes, sharing data, and chartering domain-relevant expertise from members of the Post Fiat Network.

The second node is the Post Fiat Foundation node, which oversees my own personal productivity. A task generation system allows users to upload their context, which is processed alongside historical Post Fiat task generation, completion, and so forth to generate recommended next actions. When users finish an action, they submit it for completion, are prompted for verification, and receive a reward that depends on the quality of their verification evidence. The reward is paid out in PFT, the native token of the network.
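The task lifecycle described above can be sketched roughly as a linear state machine; the state names here are illustrative, not the actual node implementation:

```python
from enum import Enum, auto

class TaskState(Enum):
    PROPOSED = auto()   # generated from the user's uploaded context
    ACCEPTED = auto()   # user takes on the recommended action
    SUBMITTED = auto()  # user reports the action as complete
    VERIFIED = auto()   # user supplies verification evidence
    REWARDED = auto()   # PFT paid out, scaled by evidence quality

# Linear flow from proposal to reward
TRANSITIONS = {
    TaskState.PROPOSED: TaskState.ACCEPTED,
    TaskState.ACCEPTED: TaskState.SUBMITTED,
    TaskState.SUBMITTED: TaskState.VERIFIED,
    TaskState.VERIFIED: TaskState.REWARDED,
}

def advance(state: TaskState) -> TaskState:
    """Move a task to its next state; REWARDED is terminal."""
    return TRANSITIONS[state]
```

The key design point is that the reward step is gated on verification evidence rather than mere self-reporting.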

I’ve designed both a locally hosted wallet for security and a Discord wallet for convenience. Feel free to join my Discord if you’d like to take part in the experiment.

Discord Link

Acceptance Criteria

For me to accept the Post Fiat hypothesis, I’d want to see two things.

First, a Sharpe ratio of 4 or higher with a market-neutral CAGR above 40%.

Second, a large increase from my previous baseline of productivity, defined both qualitatively and quantitatively.

I think if you see these two things, you can underwrite the larger and more ambitious conclusions about the future of decision making that I’ve hinted at. I will explore those more robustly once I’ve accepted or rejected these first two hypotheses.

Discussion of Key Elements

Finance and personal productivity are two domains I can decisively tackle, but I’d love for others to join me in the Post Fiat experiment. I’ll tweet the results as I progress, and hopefully gain collaborators.

I hope this will add some objectivity to the experiment, since I am both the author of the hypothesis and its subject.

Of course, this experiment does pose long-term ethical questions, but it’s best to confront those questions with evidence. Specifically: can AI-based systems meaningfully augment an individual’s productivity and/or financial performance? This is table stakes for broader discussions, because if AI can’t be trusted at the individual level, the bigger picture is simply an aggregate of that failure. The scaled version of the experiment will undoubtedly require a more robust design, but in terms of stage gating it makes sense for the whole experiment to first prove financially viable.

The XRP Ledger is an interesting place to run the experiment because it provides cheap transaction costs, reliably high performance, and the crucial ability to whitelist addresses. Bigger picture, if this experiment succeeds, there are exciting possibilities for creating new blockchains that use AI to augment the RPCA consensus mechanism. Currently, XRP is very centralized, namely via the selection of the Unique Node List. The Post Fiat experiment will generate a large number of memos across different nodes; the question is whether an AI system could process these memos to meaningfully decentralize XRP.

Regarding the reward mechanism: everything is open source on the Post Fiat Foundation node, and designed to be Sybil-resistant with hard-coded parameters. It’s hard to establish a specific baseline, but as a rough back-of-the-envelope hurdle, my biggest accomplishment to date is a company exit at around a $50 million market cap. An interesting outcome would be a $500 million exit.

Finally, the financial hurdle is designed to compare against a top-tier multi-manager fund such as MLP. Indeed, the trading strategy is thematically similar to a multi-manager: five synthetic portfolio managers are each assigned a risk allocation across different asset classes. A single individual replicating this type of result, even at small scale, with strict adherence to AI as the primary deployment framework, would be sufficiently interesting to justify a much larger experiment.
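A sketch of what the multi-manager roll-up implies: each synthetic book contributes its return weighted by its share of the risk budget. The manager names, returns, and weights below are illustrative assumptions, not the actual Post Fiat configuration:

```python
# Hypothetical per-period returns from five synthetic portfolio managers
pm_returns = {
    "equities_ls": 0.012,
    "rates":       0.004,
    "fx":         -0.002,
    "commodities": 0.006,
    "crypto":      0.020,
}

# Illustrative risk allocation across the five books (sums to 1.0)
risk_budget = {
    "equities_ls": 0.30,
    "rates":       0.20,
    "fx":          0.15,
    "commodities": 0.15,
    "crypto":      0.20,
}

# Book-level P&L rolls up to a single portfolio return
portfolio_return = sum(pm_returns[pm] * w for pm, w in risk_budget.items())
print(f"{portfolio_return:.2%}")
```

The diversification across books is what lets a multi-manager target a higher Sharpe than any single strategy would support.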

Conclusion

It’s very clear that AI is going to disrupt personal productivity via coding tools. But will it also disrupt corporate governance, and even government itself?

Given increasing dissatisfaction with government, I think this is one of the most important questions of our age, and I’ve designed this experiment to help answer it. Social contracts and Authoritarianism are the two forms of Fiat. But what if intelligence armed with context is the third, unexplored form? Post Fiat.

Relevant Disclaimer

The Post Fiat experiment is a personal research project conducted using my own capital. While I may accept additional capital via managed accounts, I am not holding myself out as a financial advisor. This experiment is designed to explore the questions described in this document and is not intended as an investment strategy suitable for the general public. This is an experimental project, and results are not guaranteed.

There are significant risks involved in any trading or investment activity, particularly those involving cryptocurrencies and artificial intelligence. The content of this document and related materials do not constitute financial, investment, legal, or tax advice. Always conduct your own research and consult with professional advisors before making any investment decisions. Discussions specifically around stocks or regulated assets are subject to legal counsel recommendations and may be limited. I am committed to complying with all relevant regulations governing this experiment.

The Post Fiat Token (PFT) is a utility token facilitating this experiment and creating memos. Its use does not constitute an endorsement of XRP as an asset. The parameters of this experiment may change, and I will update this document accordingly. I have no undisclosed conflicts of interest related to this experiment at the time of writing. If you have any questions or concerns, please contact [email protected]. Remember: This is an experiment. Participate at your own risk.
