Agents are the new users.
Investing capital in a machine-native world.
A note before this week’s piece.
I started in AI in 1992. After three decades working in distributed systems and IP markets, and a decade in crypto, I came back to it in 2023 because I believed the new models were genuinely different: human-enhancing rather than human-extractive. No longer exclusively for mining data or capturing attention. Something that actually makes people more capable.
The thesis has been building for three years. But the last few months have changed what agents can actually do, and that changes everything that follows from it.
This is where I have landed. Here is the thesis.
For half a century, software has been built to serve humans. Sessions began when users opened an application and ended when they closed it. Humans needed intuitive interfaces. Discovery happened through search, recommendation, and advertising. Fortunes were made optimising for these principles, spawning the most successful companies in history.
The assumption that every piece of software has a human at the end of it is no longer safe to make.
Agents are programs instructed by humans to act through other software, and a rapidly growing share of software is now being written for them. What agents want is clean APIs, persistent memory, cheap tokens, and frictionless coordination at scale. A polished user interface, the obsession of a generation of product managers, is worthless to a program calling an API.
A new kind of user, not a new kind of client
The instinct is to frame this as client-server done differently. It is not. A client fetches resources a human will consume. An agent consumes the resources itself, decides what to do with them, and acts. The human is one or more steps removed from the transaction, sometimes absent entirely. That is a different relationship to the software, and it produces different requirements at every layer.
eBay could not have existed before the internet. It was a categorically different thing from a classified ad, made possible by a new medium of exchange. Uber could not have existed before GPS-enabled smartphones, always-on location, mobile payments, and two-sided real-time matching existed simultaneously. Neither company was an incumbent adapting to a new platform. Both were founded because a new kind of user made a new category of product possible.
The agent is that kind of user.
Why now
Until late 2025, language models could write paragraphs and answer questions but could not be trusted to chain tool calls, hold multi-step context, or act on behalf of a user long enough to complete anything useful. Once they could, a class of applications became viable that had not previously existed. The most popular among them accumulated hundreds of thousands of users within months.
Inference costs fell by an order of magnitude in eighteen months. Open-source models have substantially narrowed the capability gap with frontier labs for practical workloads. Within twelve to eighteen months, frontier-quality inference will run locally on consumer hardware, within reach of any high-agency developer.
None of these conditions existed two years ago. None of them is reversing.
The stack is not ready
Software is built on what came before. Some layers will evolve, some will be replaced, but the stack as it stands is not sufficient for an agent at the end of it.
Compute must be summoned and released by code, on demand, addressable by a running program rather than provisioned by a human sizing a server. Memory must persist across sessions and travel between agents without losing fidelity. Today’s vector databases and context windows are early approximations, not solutions.
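What "summoned and released by code" means in practice can be sketched as an interface. This is a hypothetical illustration, not any real provider's API; the names (`ComputeMarket`, `lease`) and prices are invented. The point is the shape of the interaction: a running program acquires capacity, uses it, and lets it go, with no human sizing a server.

```python
# Hypothetical sketch: compute as a primitive a running program can
# summon and release on demand. All names and prices are illustrative.
from contextlib import contextmanager
from dataclasses import dataclass

@dataclass
class Lease:
    instance_id: str
    gpu: str
    price_per_hour: float

class ComputeMarket:
    """Stand-in for an agent-addressable compute market."""
    def __init__(self):
        self._next_id = 0

    def acquire(self, gpu: str, max_price: float) -> Lease:
        # A real market would match against live supply and spot prices;
        # here we mint a lease to show the contract, not the matching.
        self._next_id += 1
        return Lease(f"inst-{self._next_id}", gpu, min(max_price, 1.20))

    def release(self, lease: Lease) -> None:
        pass  # billing stops the moment the program lets go

@contextmanager
def lease(market: ComputeMarket, gpu: str, max_price: float):
    """Scope the lease to the work: release happens even on failure."""
    held = market.acquire(gpu, max_price)
    try:
        yield held
    finally:
        market.release(held)

market = ComputeMarket()
with lease(market, gpu="A100", max_price=2.00) as held:
    print(held.instance_id, held.gpu)
```

The context manager is the essential design choice: acquisition and release are bound to the lifetime of a block of code, not to a human remembering to shut an instance down.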
Coordination must become a first-class primitive, because software now brokers work between thousands of autonomous programs sharing context and negotiating handoffs in real time. Identity is the most underestimated piece. OAuth scopes, SAML assertions, and IAM role chains exist, but none were designed for autonomous delegation without a human in the loop. That is the new surface area, and it is mostly empty.
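One property any agent-native identity layer will likely need is scope attenuation: each hop in a delegation chain can narrow, but never broaden, what the delegate may do. The sketch below is hypothetical, not a real protocol; the `Grant` type and scope strings are invented to illustrate the invariant.

```python
# Hypothetical sketch of scope-narrowing delegation. Each delegation
# may only attenuate the parent's authority, and the chain of
# principals is carried along for auditability.
from dataclasses import dataclass

@dataclass(frozen=True)
class Grant:
    principal: str
    scopes: frozenset
    chain: tuple = ()

    def delegate(self, to: str, scopes: set) -> "Grant":
        requested = frozenset(scopes)
        # The invariant: a delegate can hold at most what its delegator holds.
        if not requested <= self.scopes:
            raise PermissionError(f"{to} asked for more than {self.principal} holds")
        return Grant(to, requested, self.chain + (self.principal,))

# A human grants an agent a subset of their authority; the agent
# hands a further subset to a sub-agent, with no human in the loop.
root = Grant("alice", frozenset({"read:inbox", "send:email", "pay:usd<=50"}))
agent = root.delegate("travel-agent", {"read:inbox", "pay:usd<=50"})
sub = agent.delegate("booking-subagent", {"pay:usd<=50"})
print(sub.chain)  # ('alice', 'travel-agent')
```

OAuth and IAM can express the scopes; what they do not natively express is this chain of autonomous, human-absent sub-delegations with the attenuation invariant enforced at every hop.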
Tooling has to be built for machine consumption: structured outputs, predictable schemas, clean error handling, because no human will ever see them. The harness that decides when and how a model is called has come to matter as much as the model itself.
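A minimal sketch of what machine-first tooling looks like, under an invented schema: success and failure share one fixed envelope, and errors carry typed fields an agent can branch on rather than prose a human would read. The envelope shape and error codes here are illustrative, not a standard.

```python
# Sketch of a machine-consumable tool response: one predictable
# envelope for every call, with a typed, branchable error object.
# The schema is an assumption for illustration, not an existing spec.
import json

def tool_response(ok: bool, data=None, code=None, retryable=False):
    return {
        "ok": ok,
        "data": data,
        "error": None if ok else {"code": code, "retryable": retryable},
    }

success = tool_response(True, data={"rows": 3})
failure = tool_response(False, code="RATE_LIMITED", retryable=True)

# The calling agent decides what to do next from fields alone:
for r in (success, failure):
    if r["ok"]:
        print("use", r["data"])
    elif r["error"]["retryable"]:
        print("retry after backoff:", r["error"]["code"])

print(json.dumps(failure))
```

The discipline is that the consumer never parses natural language to recover state: every decision an agent needs to make is a field in the schema.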
None of these layers exists in mature, agent-native form. Architecture that serves agents well rarely emerges from architecture built for someone else.
What most people are getting wrong
Every incumbent is taking existing infrastructure and adding an agent layer on top: a database with an agent API, a cloud provider with agentic orchestration, an identity system with delegation hooks. These are the right ideas built the wrong way around.
Vast.ai is purpose-built for agent workloads and has over 20,000 GPUs online. It serves a class of customer AWS was not designed to serve: agents procuring compute dynamically on their own. Prime Intellect is building the training surface for autonomous agents themselves. Their Environments Hub hosts hundreds of open RL environments where agents learn to act: the substrate frontier labs now spend hundreds of millions of dollars on, and a category of software that did not need to exist when the entity being trained was a human. Dimensional, an OS bridging LLM reasoning to robot hardware, could not exist before agents could reason well enough to direct physical systems.
These companies exist because of the agent. They could not have been built for any other user.
Why I believe this
I wrote my PhD at Cambridge in 1997 on Mixtures of Experts applied to speech recognition, the architecture that now powers almost every frontier model. I backed Circle at Series A in 2013 when digital payment infrastructure was an abstraction most people found unconvincing.
The pattern is the same each time. Infrastructure precedes applications. The best time to invest in it is when the application layer is still being imagined. That is exactly what we are doing now, across fourteen companies building the compute, memory, identity, and verification layers agents need to operate at scale.

