Picture two founders starting a company today.
One scales by hiring. Each person sees a small part of the company and develops expertise that stays in their head. Everyone has a different perspective on what the right thing to do is, and it takes a while for them to reach consensus. Over time, the company gets bigger and the coordination overhead builds up. Because of siloed expertise and competing incentives, the company ends up making lower-quality decisions at an even slower pace.
The other founder does something different. They build an AI-native company. They structure every decision and action as the output of a superintelligent model. Every outcome, every signal, every failure gets logged and is used as feedback to further train the model.
After three years, the first company has a hundred people who each hold a piece of the puzzle. Their shared understanding is scattered across Slack threads, undocumented meetings, and the heads of employees who might leave tomorrow. The second company, however, is a unified operating system that has compounded every outcome from every decision for a thousand days. Always getting smarter. Always getting faster. Always getting better.
One of these companies ends up eating the other.
Companies are information processors
A company is just a group of actors working together as one legal entity. Its people transform data into reports, coordinate work, and decide what action to take. Strip away the internal relationships and it comes down to information processing. Signals come in, decisions are made, and actions go out.
Much of the work in running a company is informational, whether it’s engineering, product, strategy, operations, finance, marketing, sales, legal, hiring, or logistics. But processing information is exactly what models are best at!
As AI capability improves, the inevitable conclusion is that one day the model doesn't just assist with these functions but becomes the company itself. A model that is both the orchestrator and the actor. When that day comes, the company becomes the harness and the model becomes the company.
Three barriers
So why don’t we have this today? There are three barriers: one scientific, one engineering, and one sociological.
The scientific barrier is that models don’t yet learn from their experience the way we do. Until recently, most models learned by imitating humans (see imitation learning). But you can’t learn to run a company by watching others do it. You need to learn from your own mistakes. Recent advances in reinforcement learning are closing this gap, and have produced a new generation of agents. AI can finally take actions reliably. But unlike us, AI still separates learning from acting. Researchers must train the model in a simulated environment before deploying it into production. The model is frozen, and its core capabilities and knowledge cannot update based on what it encounters. As a result, AI cannot yet develop expertise from working the same way we can. To solve this problem, we need new research to unlock continual learning.
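To make the learn/act split concrete, here is a minimal toy sketch (my own illustration, not any real system) contrasting a frozen agent, whose behavior is locked in at training time, with a continual one that folds every outcome back into its estimates while it acts. The two-action "bandit" environment stands in for a recurring business decision:

```python
import random

random.seed(0)

def outcome(action):
    # Hypothetical environment: action 1 pays off more often than action 0.
    return 1.0 if random.random() < (0.8 if action == 1 else 0.3) else 0.0

class FrozenAgent:
    """Trained once in simulation, then deployed with fixed behavior."""
    def __init__(self, preferred):
        self.preferred = preferred   # locked in before deployment
    def act(self):
        return self.preferred
    def learn(self, action, reward):
        pass                         # deployment never updates the model

class ContinualAgent:
    """Updates its value estimates after every outcome it sees."""
    def __init__(self):
        self.values = [0.0, 0.0]
        self.counts = [0, 0]
    def act(self):
        if random.random() < 0.1:    # keep exploring occasionally
            return random.randrange(2)
        return max((0, 1), key=lambda a: self.values[a])
    def learn(self, action, reward):
        self.counts[action] += 1     # incremental mean of observed rewards
        self.values[action] += (reward - self.values[action]) / self.counts[action]

def run(agent, steps=500):
    total = 0.0
    for _ in range(steps):
        a = agent.act()
        r = outcome(a)
        agent.learn(a, r)            # the continual agent uses this; the frozen one ignores it
        total += r
    return total

frozen = run(FrozenAgent(preferred=0))   # stuck with a stale choice
continual = run(ContinualAgent())        # discovers the better action on the job
```

The frozen agent keeps making yesterday's best decision; the continual agent develops expertise from its own mistakes, which is the capability the research gap above is about.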
The engineering barrier is building a harness that captures every action and outcome of a company as training signal. A company is a complex distributed system. Dozens of processes run in parallel. Imagine logging every customer interaction, every operational decision, and every sale as a live feedback loop that shapes the model’s behavior in real time. We don’t have this infrastructure yet, but it’s a solvable engineering problem. In fact, this kind of harness is being built as we speak (see OpenAI Frontier).
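A stripped-down sketch of what such a harness might look like, under my own assumptions (all names here are hypothetical, not any real system's API): every action is logged as a structured event, outcomes are joined back to the actions that produced them, and the joined pairs become training signal.

```python
import queue
import time

# Stands in for a durable, company-wide event log.
events = queue.Ueue() if False else queue.Queue()

def log_action(actor, action, context):
    """Record an action taken anywhere in the company."""
    event = {"ts": time.time(), "actor": actor,
             "action": action, "context": context}
    events.put(("action", event))
    return event

def log_outcome(action_event, reward):
    """Record the eventual result of a previously logged action."""
    events.put(("outcome", {"action_ts": action_event["ts"], "reward": reward}))

def drain_training_batch():
    """Join actions with their outcomes into (context, action, reward) tuples."""
    actions, batch = {}, []
    while not events.empty():
        kind, ev = events.get()
        if kind == "action":
            actions[ev["ts"]] = ev
        else:
            src = actions.pop(ev["action_ts"], None)
            if src is not None:
                batch.append((src["context"], src["action"], ev["reward"]))
    return batch

# Example: a pricing decision and the sale it produced.
a = log_action("pricing-agent", "quote_discount_10pct", {"customer": "acme"})
log_outcome(a, reward=1.0)        # the deal closed
batch = drain_training_batch()    # one (context, action, reward) training example
```

The hard engineering problems are everything this toy elides: durability, scale, attribution of delayed outcomes to the actions that caused them, and feeding the batches into an online training loop.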
The last barrier, the sociological one, is different. It may not be solvable at all, at least by incumbent companies.
No one will build this willingly
Let me tell you a secret. Companies are not actually optimized for shareholder value. They are optimized for the people working there. Careers are built on judgement, taste, and the accumulation of decision-making power.
Now imagine telling these people that the optimal future for their company is one that doesn’t include them.
No one will build this willingly. Managers may try their best to automate the work beneath them, but they will resist replacing themselves. The incentives are perfectly misaligned, a classic principal-agent problem. In fact, we’ve seen this before in finance. Portfolio managers are careful to keep their decision-making processes to themselves. Quants are rumored to keep their code undocumented on purpose. Making your expertise legible in the age of learning machines is simply self-obsolescence. No one will willingly fire themselves, no matter how sincere they may sound.
Of course, incumbents will still try to make this transition. They will start semi-autonomous and expand until only those with irreplaceable relationships or expertise remain. But this will take time, human-in-the-loop oversight will interrupt the feedback cycle, and the hollowing out of the company will take a toll on everyone.
Starting from scratch
The science and the engineering will be solved soon. But it’s the organizational friction that reveals an opportunity for autonomous companies to be built from scratch.
Startups that are AI-native from day one won’t have this problem because there’s nothing to resist. There’s no middle management, no org chart to preserve, and no established careers to threaten.
The world is already moving in this direction. Foundation models can work autonomously for hours, and soon for days. Agents are managing logistics for supply chains and running vending machine businesses. Researchers are making steady progress on continual learning.
The first wave of autonomous companies is coming. The only question is, who will build them first?
