Why agents matter more than other AI
Seven advantages that AI agents have over human employees, and what that means for the future of knowledge work.
When people talk about AI, they often lump a bunch of things together:
- Generative image and video systems
- Old-school "machine learning" algorithms
- Ranking, search, and recommendation systems (e.g., for social media feeds)
- Language models
- Chat bots
- Voice recognition models
- Agents
Out of all of those, the only one that really matters for the future is "agents" (i.e. systems that "run tools in a loop to achieve a goal").
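To make that definition concrete, here is a minimal sketch of the tool-use loop in Python. The call_model stub, the tools dictionary, and the toy search tool are all hypothetical placeholders--a real agent would call an actual language model API and expose real tools--but the overall shape (pick a tool, run it, feed the result back in, repeat until done) is the pattern.

```python
# A minimal sketch of the "run tools in a loop" pattern.
# call_model is a hypothetical stand-in for an LLM deciding the next step;
# a real agent would call a model API here and parse its response.

def call_model(history, tools):
    # Toy policy: use the "search" tool once, then finish with its result.
    if not any(msg["role"] == "tool" for msg in history):
        return {"type": "tool", "name": "search", "input": history[-1]["content"]}
    return {"type": "finish", "answer": history[-1]["content"]}

def run_agent(goal, tools, max_steps=20):
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        action = call_model(history, tools)              # model picks a tool or finishes
        if action["type"] == "finish":
            return action["answer"]                      # goal reached (or given up)
        result = tools[action["name"]](action["input"])  # e.g. search, run code, edit a file
        history.append({"role": "tool", "content": result})
    return "stopped: step budget exhausted"

tools = {"search": lambda query: f"(pretend search results for: {query})"}
print(run_agent("Find recent coding-agent benchmarks", tools))
```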
This is not to say that the other types of AI won't cause huge changes--they will.
The fundamental difference with AI agents is that they take the human completely out of the loop, and this changes everything.
Today's generative AI systems--chat bots, image and video generation, recommendations--are fundamentally about engaging with human end users. Humans can only consume so much content per day: no matter how good the AI is, you only have 24 hours in a day to watch videos, read chat bot messages, generate new images, etc. No matter how powerful and capable we make these generative systems and machine learning classifiers, there is ultimately only so much demand for their outputs because there are only so many people, and they only have so much attention to allocate. But AI agents are not limited by the global pool of human attention.
Instead, AI agents are only limited by the amount of "compute" (money) that we are willing to spend on them. And there are many places where the demand for knowledge work far outstrips the supply--literally every open job posting is an indication that some company wants to spend money on knowledge work, but is currently unable to do so.
In fact, estimating from open job postings would grossly underestimate how much pent-up demand there is for more knowledge work: because the overhead of hiring and managing a new employee is so high, companies only post jobs that absolutely need to get done. Furthermore, the only jobs that get posted are those where it is profitable to hire someone. Given that skilled human labor is quite expensive, there are even more jobs that would be posted if only the cost were lower--and that's exactly what's about to happen as AI agents start to actually work.
I've covered whether AI agents will work in a separate piece, so let us ignore that for a moment (and assume that they will).
Then the question is: what does the world look like when we have AI agents that really work?
What the world looks like when AI agents actually work
We actually already have an example of useful AI agents today: coding agents.
Over the past year, the capabilities of coding agents have improved dramatically from being occasionally useful for small, standalone scripts, to being able to create entire small projects and websites from scratch. Because I work in the space (we make our own platform for running coding agents, Sculptor), I've had a front row seat to this transformation, and the progress has been incredible. We're at the point today where our designer, who doesn't know how to program, used these tools to make a full clone of "Geometry Wars" that is honestly better than some of the sequels (we played both back-to-back last week). Revenues at companies like Cursor and Lovable have skyrocketed over the past year--some users spend >$1K per day on these tools, and there are reports of even crazier spending.
We can generalize a bit from coding agents to understand what to expect from the other AI agents we should see soon. Coding agents work well because they are safe to use. If the agent makes a mistake and generates bad code (they often do), the downside is very low: you can just delete the code, and you've only lost the few cents it cost to generate.
We can likely expect agents to work in other domains where this property applies: if you can easily verify the output, or if most of the work goes into generating the right information rather than interacting with the world (which introduces opportunities for more serious errors), then agents are a particularly good fit in the short term. Other similar applications besides coding include things like creating research reports, presentations, low-stakes images, drafting emails, editing documents, and filtering through large amounts of information. Those types of activities account for a lot of knowledge work!
Coding agents are also a great example because they are not what people originally expected: the naive vision of coding agents was that you would have a "fully automated software engineer." This is not how it has played out. Rather, we have tools for human engineers that are getting increasingly capable of an ever-widening range of activities, and these tools are getting much more expensive.
This is why we haven't really seen major impacts on employment due to coding agents--yet. Because AI agents are only doing a part of the overall job of being a software engineer, companies cannot simply "hire" an AI system instead. Rather, companies are simply spending more money in order to make their existing employees more effective (which is arguably more productive for companies anyway, since having more productive engineers means fewer managers and lower overhead).
But that doesn't mean that AI agents won't have any impact on employment over the long run.
We're already starting to see companies change how they think about hiring. For example, the number of "front end developer" job openings has declined by almost 10% since last year, and some professions like "photographer" and "writer" have seen declines of almost 30% (source). It's not that there is less front end engineering work to do, but rather that today's coding agents are so good at such tasks that there's no longer as much need to have a dedicated employee for that role. Companies are simply restructuring the types of roles they are hiring to take advantage of these new capabilities.
What's somewhat worrying is that I can see a near-future world where it actually will make sense for companies to "hire" AI agents instead of employees. Just yesterday during lunch, one of my colleagues left his laptop at his desk with the coding agent running while he played chess with another teammate. How long before the agent can not just run over a 30-minute lunch break in response to some instructions from an engineer, but can actually generate those instructions itself, and continue working around the clock? I think that day is a lot less distant than many people imagine.
The advantages of AI agents over human employees
Which brings us back to why AI agents matter so much more than all of the other types of AI put together: agents are what we call AI systems that do useful work, and they have structural advantages over human employees.
In particular, AI agents have seven main advantages over human employees:
1. The best agent can be copied infinitely
Unlike humans, as soon as an improvement is made to one agent, it can be made available to all copies of that agent. This is part of why we see people switch between coding agents so frequently--when a new one comes out that is better, you might as well switch and use the improved version.
2. Agents can run 24/7
Unlike humans, agents can run around the clock.
They don't need to rest or sleep or eat. They can be constantly available to work, or respond to queries or changes that happen anywhere in the world.
3. Agents could theoretically think faster than humans
In theory, it might be possible to make agents that can not only run 24/7, but that can actually just think faster than their human counterparts.
Right now, while it's debatable whether agents can even be said to "think", it's not debatable that they can, for example, write code faster than humans (in terms of literal output speed). Over time, as that output gets higher quality, it could be extremely difficult to compete with a software engineering agent in terms of both quantity and quality of output in a given time window.
4. Agents have minimal management overhead
Unlike human employees, AI agents do not require human managers to discuss growth plans, performance reviews, or their feelings.
In fact, one of the major features of AI agents is precisely this lack of overhead--it's just really nice to be able to tell an AI agent to go write some code without worrying about its motivation or interests, since it has none.
Surprisingly, this is actually something that slows adoption as well--since human managers get some level of importance and status from having human reports, they're less likely to reduce the sizes of their own teams (even when that might be the economically rational thing to do).
5. Agents can be instantly scaled up and down
Unlike humans, agents can be almost immediately started and stopped, and they only need to be paid when they are actually working. This means that you could, as a business, suddenly start up 100 AI agents to get something done very quickly, then take them all offline (to stop paying).
The same cannot be done with humans--they get pretty annoyed when you stop paying them--which means that agents are a particularly good fit for cases where the amount of work to be done is highly variable.
6. Agents don't mind running in a nightmare surveillance prison
Unlike humans, agents don't mind having their every action watched in excruciating detail. This is perceived as beneficial for two reasons.
First, businesses often want to have metrics and visibility into work to understand how things are going. Many businesses value legibility and transparency, even at the expense of actually getting more work done (for example, think of all of the time most people spend filling out work tracking reports).
Second, this seriously reduces risk for businesses. If you're watching an agent's every move, it's difficult for it to launder money, leave backdoors in software, etc. There may come a point where, from a security perspective, it's actually a best practice to not have human employees for this reason alone!
7. Agents are more tax efficient
This is a bit strange, but is worth understanding: money given to humans is subject to all sorts of taxes, including payroll tax, social security, healthcare, etc.
Money spent on agents, on the other hand, is treated fully as an expense--as a cost of goods. Actually, even worse than that, some money spent on agents can be counted as R&D expenses as well, which are even further tax-advantaged.
Thus, even if an agent could produce the exact same amount of value as a human in the same amount of time, it'd be cheaper to pay the agent than to pay the human.
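As a rough illustration of that cost gap--with entirely made-up numbers, since payroll taxes, benefits, and R&D treatment vary by country and company--the comparison looks something like this:

```python
# Illustrative only: every rate here is an assumption, not tax guidance.
# The point is that employer-side taxes and benefits add overhead on top of
# a salary, while agent spend is simply an ordinary (sometimes R&D) expense.

salary = 100_000                      # hypothetical annual salary
employer_overhead_rate = 0.25         # assumed payroll taxes + benefits overhead
human_cost = salary * (1 + employer_overhead_rate)

agent_spend = 100_000                 # the same nominal spend on agent compute
agent_cost = agent_spend              # deducted directly as a business expense

print(f"All-in cost of the human: ${human_cost:,.0f}")
print(f"All-in cost of the agent: ${agent_cost:,.0f}")
```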
Where we go from here
Because agents have so many advantages over human employees, there is a huge incentive for businesses to get as much work done with agents as possible. This creates the pent-up demand for useful AI agents, and an enormous pressure for startups and other technology companies to develop reliable, useful AI agents as quickly as they can. Even if AI agents don't work very well today for any given task, it's a pretty risky proposition to bet that they won't improve over time.
So while agents today are restricted to tasks like coding--where the outputs can be verified and the agent doesn't really need to act in the world--we should expect the scope of knowledge work that agents can do to expand over the next few years.
Slowly, we will start to see agents taking limited actions without any human supervision. We can even see the beginnings of this today with AI agents for low-stakes, easy-to-handle tasks like customer service, triaging support tickets, and answering emails. While humans still handle the more complex cases, their work can be recorded to serve as training data for the next version of the AI agents, which slowly become capable of taking on ever-more difficult tasks. The latest models have just surpassed human performance on GDPval (a suite of economically valuable knowledge work tasks)--how long before the majority of work can be done in a fully automated way?
It is this slow creep of capabilities that will happen over the next ~decade: AI agents will go from their nascent, bumbling, trivial state today, to a world where they are powering perhaps even most of the knowledge work being done in our civilization. Each new capability gained by AI agents is something that has forever left the realm of "economically useful work that requires a human to do"[1]--once one AI agent can do some work, you can make an almost unlimited number of AI agents capable of doing that same set of tasks.
That is why AI agents are ultimately the type of AI that matters the most: because agents have the potential to eventually perform the trillions of dollars of knowledge work that is done (for purely instrumental reasons) in our economies today.
There is a race to build systems that are capable enough to do so, and the final result of that dedicated effort over the next decade will be a world where white collar labor is completely transformed, or perhaps nearly absent (at least in the way we think of it today). No other type of AI system has anywhere near the same potential to radically re-shape society as AI agents that actually work.
Will these agents be working for you, or will they be maximizing profit for some company?
What kind of future are we building as the value of human mental labor approaches zero?
Even if this plays out over 20 or 30 years instead of 10 years, what kind of world are we leaving for our descendants?
What should we be doing today to prepare for (or prevent) this future?
These are questions that are worth engaging with--and sooner rather than later.
If you're interested, follow me on Substack--I'll be writing more about these (and related) topics over the next few weeks!
[1] Just because humans are no longer required to do a task (from a purely utilitarian perspective, e.g., in order to get it done at all), that doesn't mean that they won't still be employed doing that thing. Think about furniture making--just because machines can make furniture doesn't mean that we don't have artisan furniture makers. They're just valued for different things. Even if an agent can do it, there may be higher demand for the human-made version because of the other things that it signals, bestows, and communicates.