Thoughts on AI, v0.1
I want to put down some thoughts on my current AI workflow and how it helps me, along with a few predictions on the trajectory of AI.
Premise 1: The Adversarial Workflow
Here is how I currently operate. I have an idea, I open my CLI (maybe type opencode), and start brainstorming with a plan agent. We throw around a few stupid ideas until one sticks. Then I can ask the plan agent to summon multiple subagents to gather information. For example, if I hit a wall with a bug (“hey, why is this tool not working?”), the plan agent reads the codebase, delegates specific forensic tasks to its subagents, and those subagents report back. The plan agent then synthesizes a solution. If I agree, I switch to the build agent to implement it. The build agent can, of course, delegate further to its own subagents.
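To make the delegation pattern concrete, here is a minimal sketch in Python. Everything in it is assumed for illustration: the Agent class, its run() method, and the prompts are my own stand-ins, not opencode’s actual internals.

```python
# Minimal sketch of the plan -> subagents -> build delegation pattern.
# The Agent class, its run() method, and every prompt below are
# illustrative assumptions, not opencode's real API.

from dataclasses import dataclass


@dataclass
class Agent:
    role: str       # e.g. "plan", "build", "forensics"
    scaffold: str   # the system prompt that specializes this agent

    def run(self, task: str, context: list[str] | None = None) -> str:
        # Placeholder: in a real harness this would be an LLM call.
        return f"[{self.role}] report on: {task}"


plan = Agent("plan", "You decompose problems and synthesize reports.")
forensics = [
    Agent("forensics", "Trace one suspect code path and report back."),
    Agent("forensics", "Check tool configs and environment issues."),
]

# The plan agent fans the question out, then synthesizes the reports.
question = "hey, why is this tool not working?"
reports = [sub.run(question) for sub in forensics]
proposal = plan.run("Synthesize a fix from these reports.", context=reports)

# Only after the human signs off does the build agent implement it.
if input(f"{proposal}\nImplement? [y/n] ") == "y":
    build = Agent("build", "Implement approved plans; delegate as needed.")
    build.run(proposal)
```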
But what truly surprises me is the adversarial workflow (quite similar to the one suggested by Pedro Sant’Anna). Suppose I ask, “hey, what do you think of the identifying variation for this particular problem and this specific parameter?” I can summon one or even multiple econometrician agents, IO economist agents, and theorist agents, all running on my own custom harnesses and scaffolds, to attack the problem from multiple angles. Then comes the critical part, the adversarial roles: I summon multiple identification auditors, IO auditors, and theory auditors to ruthlessly audit the thinking of the previous subagents. I keep iterating on this loop until both the main agents (plan or build) and I are fully satisfied.
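The loop itself is simple enough to sketch, reusing the hypothetical Agent stub from the previous snippet. MAX_ROUNDS and the satisfied() check are stand-ins for my (or the plan agent’s) judgment; none of this is a real framework’s API.

```python
# Sketch of the adversarial loop: specialists propose, auditors attack,
# specialists revise, and a human (or the plan agent) decides when to stop.

MAX_ROUNDS = 3

specialists = [
    Agent("econometrician", "Assess the identifying variation."),
    Agent("IO economist", "Assess the market and institutional setting."),
    Agent("theorist", "Check the internal consistency of the model."),
]
auditors = [
    Agent("identification auditor", "Attack the identification claims."),
    Agent("IO auditor", "Attack the institutional assumptions."),
    Agent("theory auditor", "Attack the theoretical reasoning."),
]


def satisfied() -> bool:
    # Stand-in for the stopping rule: here the human always decides.
    return input("Satisfied with this round? [y/n] ") == "y"


problem = "What pins down this parameter, and where is the identifying variation?"
proposals = [s.run(problem) for s in specialists]

for _ in range(MAX_ROUNDS):
    # Auditors ruthlessly critique the current proposals...
    critiques = [a.run(problem, context=proposals) for a in auditors]
    # ...and the specialists revise in light of the critiques.
    proposals = [
        s.run("Revise your answer given these critiques.", context=critiques)
        for s in specialists
    ]
    if satisfied():
        break
```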
The result is essentially a council of smart machines serving a single human: me.
Consequently, my role shifts entirely. Whereas in the past my role mostly involved brute-forcing code or math by myself, I now find myself managing more (which is hard to believe, as someone who actually dislikes managing things). I think more about the meta-structure of the problem. I have to think deeply about how to hone the skills of my agents, which tools they should use and how to use them properly, and how to precisely formalize the way I, as a human, perform a literature review, so that these AI agents can automate the process to my exact standards. Ironically, this makes me both slower and faster at the same time: slower on the cognitive and managerial side, vastly faster in execution and output.
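To show what “precisely formalize” can mean in practice, here is one made-up example: a machine-readable spec for a literature review, written as a plain Python structure. The schema and every field in it are my own invention for illustration, not any framework’s requirement.

```python
# A made-up, machine-readable spec of how I do a literature review.
# Field names and steps are illustrative, not any framework's schema.

LIT_REVIEW_SPEC = {
    "scope": "identification of demand parameters in IO, 2010-present",
    "steps": [
        "search: query field journals and working-paper series",
        "screen: keep papers whose identifying variation fits the setting",
        "extract: record data, instruments, estimator, headline estimates",
        "synthesize: group by identification strategy, flag contradictions",
    ],
    "standards": {
        "citations": "author-year; note working-paper versions",
        "must_flag": ["weak instruments", "unstated exclusion restrictions"],
    },
}

# The point: an agent is held to explicit steps and standards,
# not trusted with "do a literature review" as one opaque instruction.
for step in LIT_REVIEW_SPEC["steps"]:
    print(f"delegate -> {step}")
```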
This dynamic is not limited to coding. It applies equally to writing papers or brainstorming research ideas. A human worker now faces an inherent choice: blindly trust the AI agents, or actually manage them, with the guts to retain ownership and agency.
If a human chooses to deliberately and blindly delegate 100% of everything to an AI, then the AI will almost certainly become a substitute for that specific human. It won’t just replace isolated tasks; the entire bundle of tasks that constitutes that person’s job will be substituted out of the market. There is no marginal value in a human rubber stamp. However, if a human chooses to brainstorm, think from first principles, delegate strategically, and, most importantly, own the final output, then not every task in the bundle gets substituted. The AI undeniably becomes a powerful complement.
Premise 2: Intelligence as Capital
Substitute, independent, or complement. These are very standard economic classifications. But here is the profound shift: intelligence, which was historically supposed to be an enhancement to labor (i.e., augmented human capital), is rapidly becoming capital in the plain, ordinary sense. Pure, deployable capital.
If you understand this, the implications are quite stark. If you do not own the capital, cannot afford the capital, or simply lack access to the capital, you are going to lose in the marketplace. Period.
Furthermore, the sooner you realize that intelligence has been financialized into capital, the faster you can accumulate it. This first-mover advantage is massive and will likely last a remarkably long time. Those who learn to act on this and secure access to frontier AI models are accumulating insurmountable leverage over those still trying to sell raw, un-augmented labor.
Inequality, under whatever definition of the term you subscribe to, will almost inevitably worsen. When productivity is no longer constrained by the biological limits of human workers but is bounded only by the compute you can deploy to your subagents, the distribution of returns will become intensely skewed.
What is the solution to this? Well, well. Some might clamor for top-down interventions or subsidized compute. But speaking frankly, I don’t know of a simple market solution that can “fix” this, because it isn’t necessarily a market failure. It is a paradigm shift in how value is produced. As always, the market will reprice labor and capital accordingly. The question is simply which side of the trade you want to be on.