2026-01-20

I, Robot Manager

Historically, I've had a tendency to hold onto ideas until they feel "finished", slaving over a giant blog post about once a year. But in 2026, one of my many personal goals is to get more raw thoughts out and worry less about everything being perfectly polished, stress-tested, and bulletproof.

So, with that in mind, what follows is a work in progress. It's a snapshot of how I am wrestling with the world right now, incomplete thoughts, warts, and all.

---

These days, I find myself in a lot of conversations about the future of work - often through the lens of education (given my workplace) - as well as through the typical leadership lenses of modern team composition, skill needs, org charts, and the like.

Typically, I find that these conversations revolve mainly around vague, anxious questions about which specific skills will survive the coming apocalypse. But I've increasingly come to see this as the wrong question. In my mind, the real question isn't which jobs will disappear, but who is going to manage what's left.

Simply put, I suspect we're about to see a hard and violent shift away from people like me managing teams of humans. Instead, I believe that in short order, people like me will be far more likely to act as a department of one and "manage" a team of machines, agents, and systems—automated collaborators that work exactly like human teams once did, just faster and without the quarterly off-sites involving a pizza party and an escape room at the edge of town.

And the people best suited for this new robot management role won't be today's hyper-specialists. The future, as it often has, will belong to the generalists.

The Team of One

In my current role, I'm effectively a team of one. I'm responsible for building and launching strategic marketing programs to help my organization grow. And while I have nearly 50,000 talented and wonderful colleagues around me, when it comes to executing the bulk of my work—strategy, copy, design, HTML coding, deployment—my hands are typically the only ones over the keyboard.

I have no writers. I have no designers. I have no marketing production team. It's just me.

So, out of necessity (and a healthy dose of desperation), I did what any enterprising marketer would do in 2026: I built a "staff" out of machines and software. I (along with my pal Anders J.) created a simple (but powerful) system of interconnected AI agents that takes campaigns from a raw idea to deployment-ready assets at remarkable speed and with a surprising level of quality. Copy written, designs structured, HTML coded, and all of it loaded into our marketing automation platform, ready to go.

Campaigns that once took 3-4 people a week or more to develop and build now take just a few clicks and even fewer minutes.
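The system described above can be pictured as a simple chain of agents, each handing its output to the next. The sketch below is purely illustrative: the function names (`copy_agent`, `design_agent`, `html_agent`) and the string placeholders are my own stand-ins for what would, in a real system, be calls to AI models and a marketing automation platform.

```python
# Hypothetical sketch of a chained "agent" pipeline for campaign assets.
# Each stage stands in for a call to an AI model; here they are simple
# placeholder functions so the flow of the system is visible.

def copy_agent(brief: str) -> str:
    """Stand-in for an agent that drafts campaign copy from a raw brief."""
    return f"COPY[{brief}]"

def design_agent(copy: str) -> str:
    """Stand-in for an agent that structures the copy into a layout."""
    return f"LAYOUT[{copy}]"

def html_agent(layout: str) -> str:
    """Stand-in for an agent that codes the layout into deployable HTML."""
    return f"<html>{layout}</html>"

def run_campaign_pipeline(brief: str) -> str:
    """Chain the agents: raw idea in, deployment-ready asset out."""
    return html_agent(design_agent(copy_agent(brief)))

asset = run_campaign_pipeline("Spring enrollment push")
print(asset)  # <html>LAYOUT[COPY[Spring enrollment push]]</html>
```

The point of the shape, not the placeholders: once the stages are wired together, generating one campaign or one hundred is the same few clicks.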

Just last week, I was able to generate 100+ emails for a series of communication streams in one afternoon, where the sum total of my hands-on work was probably 30 minutes or less.

And when I show this little system to my colleagues, they almost never ask how it works. They almost immediately ask, "What do you think this will mean for how we structure teams in the future?" They realize in that moment that a tool like this devours the hours of work that currently justify the existence of multiple roles across their current marketing departments.

"Gas Town"

And to be clear, I'm not breaking new ground here or discovering something others haven't already thought deeply about and acted on. In fact, if you think this is just me tinkering in isolation, look at what's happening in the actual software engineering space. There's a new concept gaining traction among elite developers called "Gas Town," which takes this to a whole new, far more refined level, beyond anything I've been capable of.

Created by Steve Yegge, the idea of "Gas Town" is essentially to create a literal org chart for AI, organizing work into a functioning "town" of agents, each with a specific job:

  • The Polecats: Specialist agents that spin up to do a single task, like write a function or fix a bug.
  • The Refinery: A specialized agent whose only job is to check for conflicts and merge the finished work.
  • The Mayor: An AI chief-of-staff that takes high-level instructions and decides which agents need to do what.
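The division of labor above can be sketched as a tiny orchestration loop. To be clear, none of the names or signatures below come from the actual Gas Town library; this is only my hypothetical rendering of the roles it describes, with simple strings standing in for real agent work.

```python
# Hypothetical sketch of the "Gas Town" division of labor: the Mayor
# fans tasks out to Polecats, and the Refinery merges the results.
# All names here are placeholders, not the real library's API.

def polecat(task: str) -> str:
    """A specialist agent that spins up to handle exactly one task."""
    return f"done:{task}"

def refinery(results: list[str]) -> str:
    """An agent whose only job is to check the pieces and merge them."""
    return " + ".join(sorted(results))

def mayor(blueprint: list[str]) -> str:
    """The AI chief-of-staff: decides which agents do what, then merges."""
    results = [polecat(task) for task in blueprint]
    return refinery(results)

# The Overseer's only job: hand over a blueprint and watch the town work.
merged = mayor(["write function", "fix bug"])
print(merged)  # done:fix bug + done:write function
```

Note where the human sits in this sketch: outside every function, supplying only the blueprint.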

In the documentation, the human isn't called the "Developer" or the "Engineer." The human is called "The Overseer."

The Overseer doesn't lay the bricks. The Overseer sketches the blueprint, hands it to the robot Mayor, and then sits back and watches the town get to work.

This is the role I'm describing and the role that will own the future. And it's no longer science fiction; it's a downloadable software library that you can grab right here on GitHub.

The Math of Inefficiency

At this point, there's simply no use tiptoeing around the most obvious truth when it comes to the near-term future of industry and employment. We're just going to need fewer humans to do this kind of work, period. This isn't something that will happen. It's something that's already happening, and it isn't slowing down. Structured, repeatable execution simply is not a growth area for human beings anymore.

And as these systems only get better, faster, and cheaper, the value of labor will continue to shift violently upward—away from execution and toward orchestration, judgment, and architecture.

My own current mental bar for AI marketing output is pretty basic (and totally made up): 75% of human quality at 100x the speed. And that trade-off works 95% of the time for most of the things I am working on. But looking at the trajectory of AI model quality improvements to this point, it seems more than reasonable that we will likely hit 100% human quality at 500x the speed in no time.

I also know that there is still a big part of the world that thinks true "creative" work is safe from the robots. But this idea that humans will cede only the rote work to machines while indefinitely holding the line on "creative" or "strategic" work strikes me as (unfortunately) a false hope. I truly and deeply wish this were not the case, but the robots are coming for your Creative Director job much faster than you think.

The Creative Moat?

This is the sticky part that I'm wrestling with both practically and philosophically. It's where I feel the most internal strife and unresolved tension in my own head.

I've personally been holding onto the idea that true, breakthrough creative work is safe. In part, this is because up until recently, I've found AI to do a pretty piss-poor job at this kind of "thinking" when prompted. But if I'm being honest, I've mainly held onto this view because I don't want the machines to get good at it.

So far, I've been generally fine with the robots knocking out the menial and trivial tasks well suited to a pile of computer chips to grind through. But once the line is crossed into strategic, creative work, and they start drifting into my lane... well, it gets uncomfortable and a bit too close to home.

One standard argument that I've generally clung to is that AI is, by definition, backward-looking. It scrapes the past to assemble a "new" image. So the logic follows that if breakthrough work requires seeing a future that the past alone cannot predict, the human mind must therefore hold a distinct advantage.

But lately, I've been questioning that safety net more and more.

Aren't we humans the same way? Is our ability to imagine the future not also bound by our contextual understanding of the past? When a "visionary" creative director pitches a campaign, aren't they just synthesizing their own training data—decades of pop culture, conversations, and experiences—into a new output?

So if the difference between human creativity and machine creativity is just the size of the dataset and the "soul" in the synthesis, then the gap is not only unsafe, it's closing alarmingly fast.

The Rise of the "Clever and Lazy"

Back, then, to my initial question: who will own this future? To me, it's clear that the future will not belong to the specialist engineer, the specialist copywriter, or the specialist anything, for that matter.

I suspect the near future will belong firmly to the clever generalists.

It will belong to people who see in systems and architecture rather than silos. People who understand how strategy, creative, data, and execution fit together, even if they aren't the best in the world at any single one of them.

And it will belong to people with a healthy lazy streak.

There's a loosely translated quote often attributed to Kurt von Hammerstein-Equord that I've always loved, and that feels apt for this moment:

"I divide my officers into four classes: the clever, the industrious, the lazy, and the stupid... The man who is clever and lazy qualifies for the highest leadership posts. He has the requisite mental clarity for difficult decisions."

Clever and lazy.

Not careless or slothlike, but simply unwilling to do unnecessary work. Viscerally opposed to inefficiency, and relentlessly curious about how to assemble modern tooling in order to get more out of less.

These are the basic and vital characteristics of the Robot Managers that will rule the next several years.

Because the glaring truth in front of us is that the human role is moving rapidly away from doing the work and equally rapidly toward architecting the work.

And to survive and thrive in the coming future, we will no longer be managers of people; we will be architects of systems. Broad generalists. Leaders of robots.

Of course, this argument is admittedly self-serving, as I have always been a generalist myself. A Jack of all trades and a master of none.

But self-serving as this may be, that doesn't make it wrong. The future is spreading quietly, unevenly, and quickly. And in my view, it belongs to those clever and lazy generalists who see easily how the pieces all fit together, and who are comfortable telling the robots what to do.