AI should be Free Software
Perhaps surprisingly, the rise of powerful AI means that "free software" is actually one of the most important things anyone can work on today.
Allow me to explain.
As I've pointed out earlier, one of the core problems of our time is the increasing centralization of power: as we get increasingly powerful AI agents (and we will), it will really start to matter who controls those AI systems. If we're not careful about it, and we allow a small number of for-profit companies to control access to the power of AI agents, the results probably won't be pretty.
But what are "AI agents"? When you break it down, they're just software.¹
All we're really saying by "AI is becoming powerful" is "[a certain type of software] is becoming powerful".
Which is why a very specific type of "free software" matters so much: it's not that we care about Open Source versions of Photoshop or Windows, but that we need competitive, open versions of the specific type of software that matters in the future: AI agents (and their underlying models and infrastructure).
Ensuring that the best AI agents are open software (i.e., free or Open Source software) is pretty much the only way I've seen that has a chance of avoiding the dystopian AI futures so many people are concerned about, and of shifting the default outcome closer to the rosier, more optimistic visions of what could be possible with AI.
At its core, it's a question of power: when AI agents are owned and controlled by a huge company, individuals become disempowered. They cannot change those agents, they cannot understand those agents, and critically, they cannot ensure that those agents are working towards their goals.
The importance of this point cannot be overstated: if a company is running an AI agent for you, they will subvert the goals of that AI agent to benefit themselves.
It will simply be too tempting. Imagine that you run OpenAI. Some company offers you $1M to have ChatGPT suggest a particular toaster whenever people mention that they're thinking about buying a toaster. Many of your users are using ChatGPT for free! They expect to be advertised to! That's just the default business model for Google, so what harm is there in copying it?
OpenAI is literally building this right now. They've said as much, and the client code was recently updated to include references to ad-related features.
And perhaps there's nothing wrong with advertising to users on the free plan. But it doesn't stop there: the long-term game plan is to control not just product placements, but the ways in which agents do work for you. In the future, when you ask one of these commercial AI agents to create a report for you, or to write some code, or to do some travel research for you, it will be optimizing more for the company's goals than for yours. It's this mismatch, this conflict between "their goals" and "your goals", that is the fundamental problem--it's a classic example of what economists call a "principal-agent problem".
The trouble with principal-agent problems is that when there is a huge power asymmetry--like the one between an individual consumer and a $100B company--guess whose goals end up getting prioritized?
If your goals conflict with theirs, they can go as far as shutting off your access entirely. This isn't a hypothetical concern--if you read the Terms of Service, you will see very clearly that you are not allowed to "compete with [their] offerings". This is a nonsensical term to include in products that are literally meant to be general intelligence--what business user would not, in the long term, be competing with one of these providers? It seems crazy that AI providers can cut off users who are competitive or unprofitable, but that's the world we live in now, and they certainly want to keep it that way.
AI companies will make all sorts of attempts to justify this crazy level of control. "It's safer," they'll claim, or they'll say "It's more efficient!", or "It's for security".
But these are fictions made up to justify centralization--they're not actually true. It's widely known that open source systems often have fewer bugs than proprietary systems (thanks in part to Linus's Law: "given enough eyeballs, all bugs are shallow"), and it's clearly safer to have transparency into agents than to treat them as black boxes. Similarly, there's no fundamental efficiency reason why agents must be closed--it's merely more convenient (and profitable) for companies to keep them that way.
Even Anthropic, which produces a lot of valuable research on AI systems, hides the chain of thought of its current agents. Their documentation used to claim this was for "safety" reasons, and while that justification has since been removed, the reason is (and always was) clearly commercial: they don't want other companies copying their models.
Much of the hand-wringing about "AI safety" serves precisely this kind of capture and centralization of power: by framing AI models as potential "existential risks" or as "important for national security", these companies are positioning themselves to be the gatekeepers of what promises to be one of the most transformative technologies in all of human history.
When VCs and companies are pouring hundreds of billions of dollars into something, you have to ask yourself: are they all crazy, or do they actually see a plausible path to exploiting the world so thoroughly in the future that they'll not only recoup that money, but make even more? The literal definition of investment is spending money now in order to make a profit later, so you have to wonder: how are AI companies planning to make enough profit to justify spending upwards of a trillion dollars? The Stargate Project alone is estimated at about $500B. If we assume that represents half of the spend over the next decade, and that investors want to 2x their money, they would need to bring in about as much money as every person in Australia spends in an entire year (Australia is roughly the 15th largest economy, with a GDP of about $2T). That's an insane amount of money.
It's simply not possible to make that kind of money by taking a small fraction of the total spend--these companies are not looking to make a few percent, like an infrastructure or utility company (power companies, for example, run at around a 10% margin). These companies are trying to make a 90% margin--they want to sell you something for $10 that costs them $1. So when you ask an AI agent to do something in the future, it will probably cost you $10 for work that cost OpenAI $1 to do. That's literally their plan: make great margins, and expand to do a huge fraction of the world's knowledge work. And now you see why VCs and companies are so excited.
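If you want to sanity-check that back-of-envelope math, here's a tiny sketch in Python using the round numbers above (the spend, return multiple, and GDP figures are the rough assumptions from this post, not forecasts):

```python
# Back-of-envelope numbers from this post -- rough assumptions, not forecasts.
stargate_spend = 500e9                 # ~$500B estimated for the Stargate Project
total_ai_spend = 2 * stargate_spend    # assume Stargate is half of the decade's AI spend
target_multiple = 2                    # assume investors want to at least double their money
revenue_needed = target_multiple * total_ai_spend

australia_gdp = 2e12                   # roughly $2T per year
print(f"Revenue needed: ${revenue_needed / 1e12:.1f}T "
      f"(~{revenue_needed / australia_gdp:.1f}x Australia's annual GDP)")

# Margins: a utility-style business vs. the software-style margins described above.
cost_of_work = 1.00                            # what the work costs the provider
utility_price = cost_of_work / (1 - 0.10)      # ~10% margin
software_price = cost_of_work / (1 - 0.90)     # ~90% margin: charge $10 for $1 of cost
print(f"Price at utility margins:  ${utility_price:.2f}")
print(f"Price at software margins: ${software_price:.2f}")
```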
But it doesn't have to be this way.
We don't have to be stuck paying 10x above the fundamental cost of all knowledge work in the future.
If we can make free software and open alternatives to these closed AI systems, then these closed AI model providers won't have any real leverage or pricing power. If people are free to switch to their own open models at any time, or to run agents on whatever provider they want--including even their own local hardware--then these large companies will be forced to play the (appropriate) role of infrastructure. If they really do want to make "intelligence too cheap to meter", then we want them making money like utilities (10% margins)--not like software businesses (90% margins).
Thankfully, open models are already a surprisingly good alternative! Models like Kimi K2 are extremely close to the best closed models on most benchmarks and leaderboards, despite being trained for a tiny fraction of the cost of the closed models. And open models have been improving at a rate that exceeds even the closed models (which are themselves improving quite rapidly). Unlike a year or two ago, when Meta's Llama models were pretty much the only game in town, there is now real competition at the model layer to provide the best open weight models, with companies like Moonshot AI, DeepSeek AI, and Reflection all hard at work to create the next best, most useful open models and make them freely available to the world.
These open models are already having a powerful effect on the market--in just a short time, they've turned LLM inference from a business where companies had hoped to make healthy margins into one where even many of the closed AI companies are effectively forced to sell access to their models at a loss in order to compete. Because open models exist, there's always the alternative of buying your own hardware (or renting from one of many cloud providers in cutthroat competition with one another) and running an open model yourself, which puts strong downward pressure on the price of AI inference.
But the future of AI is about more than just open weight models and inference: we need viable open alternatives for the entire stack, all the way from pre-training to the final systems for AI agents, and for usefully coordinating those agents.
This is why I'm focused on "free software" rather than focusing too narrowly on just, say, AI models or AI agents. Certainly, AI agents and their supporting infrastructure seem like one of the next most critical places where we need compelling open alternatives (which is why we've built Sculptor, and are planning to make its source available in the future). But it's more important that we create an entire, thriving ecosystem of open, free software for the entire lifecycle of training and deploying AI, lest some part of that chain become captured by a small number of for-profit companies.
One of the nice things about this effort is that it is gradual: it's not a binary outcome where either 100% of AI systems are closed or 100% are open. Rather, each little library we release, each task we make more transparent, and each new open source project contributes to making it easier to freely access AI technologies. Collectively, these efforts towards open and free software add up to shift the overall balance from closed to open.
If we can manage to keep free and open AI software components competitive with the closed alternatives for each part of the stack, or better yet, make them unarguably better (which I think is possible!), then I think we invite far better futures for all of us.
Rather than a world where these AI systems are black boxes, we can have a world where anyone can open them up and understand how they work (and improve them!).
Rather than a world where we are overcharged for access to the critical tools and technologies of the future, we can have a competitive market that provides useful AI tools efficiently and cheaply.
Rather than a world where money and power continually accumulates to a small number of the largest companies, we can have a world where anyone is free to run AI agents to build their own ideas and bring their own visions to life, and even make a living doing so.
Instead of a world with a small number of huge datacenters that pose national security and ecological risks, perhaps we can distribute the compute for AI so that it is owned and controlled by each individual, which also promotes a more stable geopolitical climate.
Instead of a world with a monoculture of models, we can have a diversity of local and personal models that are better suited to the individuals and communities that use them.
Sure, in these worlds, some of the initial investors might not make the huge returns they were hoping for. But the rest of us will be far better off, and it's not our responsibility to guarantee their returns on such risky investments.
So let's ruthlessly copy and duplicate and commodify these closed AI systems. Let's share data, and collaborate on building open systems that benefit everyone in the world instead of creating a brittle world where our creativity and productivity are constrained to just whatever is boring and safe and profitable for the largest companies. Let's create an explosion of diverse ways of thinking and creating and building with software, where our future AI tools are directly under our own personal control, and where we can spend time working on the things that we care about without worrying about losing access to critical infrastructure because some company finds our work displeasing or unprofitable.
We can make good futures that have AI.
To do so, we simply need to make the software that is AI part of the public commons rather than something locked behind a private paywall.
So go forth, and make (AI-related) software free.
¹ AI agents are literally just software, in the sense that they are instructions that run on a computer. Now, some of you might complain "oh but what about the weights!?" or "but that computer needs a GPU!!", but those are not particularly valid complaints -- I didn't see you complaining that something wasn't software when a small embedding model was downloaded, and nothing needs a GPU (it's just a lot faster with one). Yes, software that is largely a bunch of pre-trained weights is of a very different type than most software (because, for example, it is harder to understand and edit), but lots of software is hard to understand and edit -- that's just generally called "badly written software". The whole point of this article is that, hey, if AI agents are software, maybe we should do a good job writing that software, and that means we probably want it to be open. ↩︎
