People talk about AI and automation as though they have a single, universal meaning. “Will it replace people?” “Will it destroy jobs?” “Should workers be afraid of making themselves more efficient?” These are treated as general questions, like asking whether fire is good or bad.

But the answer depends a lot on where you’re standing.

If you’re in a giant company, automating your own work can feel vaguely self-destructive. Suppose you find a way to do in two hours what used to take two days. Congratulations: you have increased the amount of useful work the system can get from one employee. This is, from the perspective of civilization, a good thing. It is less obvious that it is good from the perspective of the specific employee who just demonstrated that the old staffing assumptions were maybe a little generous.

This is not paranoia. Large organizations are, among other things, machines for turning exceptional effort into standardized process. If Alice has a clever workflow, the institution would ideally like to make that workflow not just available to Alice, but legible, repeatable, and transferable to everyone else. That is part of what large organizations are for. They absorb local improvements into the wider machine.

So if you are in that environment, there is a real sense in which automating your own work can mean reducing the degree to which the system needs you specifically. The better your work becomes as process, the less it remains as personal advantage.

A startup lives under almost the opposite logic.

The usual problem in a startup is not “we have discovered that one person can do the work of three, let us now update the org chart accordingly.” The usual problem is more like “we are nine people, we are being chased by twelve deadlines, fifteen competitors, three infrastructure failures, and a payroll date, and if we do not somehow become preposterously effective in the next six months we may cease to exist.”

This changes the meaning of automation rather a lot.

In a small, focused startup, the binding constraint is rarely excess labor. It is usually insufficient leverage. There is too much work relative to the number of competent hands available to do it. There are too many opportunities to pursue and too little time to pursue them. There are things that need to be built, fixed, sold, designed, tested, explained, integrated, supported, and rethought, usually all at once, usually by the same small cast of characters.

In that world, automation is less like “proving the human is unnecessary” and more like “giving the human a longer lever.”

If an engineer automates a bunch of repetitive work, that usually does not cause the startup to say “Excellent, now we need fewer engineers.” It causes the startup to say “Great, now maybe we can actually ship the other five things that were impossible last week.” If a designer gets much faster, that does not usually lead to “we can eliminate design.” It leads to “we can finally explore the ten directions we previously had to collapse into two.” If leadership gets better tools, it does not mean leadership becomes optional. It means there is now some faint hope of coordinating the rest of the machine without dying of context switching.

This is why I think people sometimes import the wrong fear from the big-company world into the startup world.

In a startup, the main risk is usually not that you automate yourself away. The main risk is that the startup fails.

That is the background fact against which everything else should be evaluated. If the company does not find product-market fit, or cannot deliver, or cannot move fast enough, or cannot absorb the opportunities in front of it, then the whole discussion about whether automation made particular roles more or less legible becomes somewhat academic. You are not being made redundant by your own efficiency. You are being ejected into the general labor market by the failure of the enterprise.

And the general labor market, notably, is the very place where everyone agrees the bar is rising.

Which creates a nice symmetry.

Upskilling and automation inside a startup produce two different kinds of return.

The first is internal. They increase the odds that the startup itself succeeds. A small team with better tools can punch above its weight. It can take on opportunities that would otherwise be too risky. It can survive complexity that would otherwise crush it. It can move from “in principle this is a good idea” to “in practice we actually shipped the thing and customers want it.”

The second is external. Even if the startup fails, the people inside it become stronger. They learn how to use AI tools well, how to build leverage into their workflows, how to redesign systems instead of merely inhabiting them, how to get more output from the same hours, how to coordinate humans and tools together. These are not trivial skills. These are increasingly close to the core skills of modern knowledge work.

So even in the bad case — the startup dies, the team disperses, everyone is thrown back into the larger ecosystem — the people who spent that time getting more leveraged are in better shape than the people who spent that time carefully preserving their old workflow in amber.

Or, put slightly more starkly: if the startup succeeds, leverage creates upside. If the startup fails, leverage comes with you.

That seems like a pretty good trade.

None of this means every startup should mindlessly automate everything. There are many stupid forms of automation. Some automations are brittle, some destroy quality, some merely move work around and call it progress, and some are the corporate equivalent of buying an industrial robot arm to butter toast. Judgment still matters. In fact, judgment probably matters more, because the faster your tools become, the more damage they can do when pointed in a dumb direction.

But the broad distinction still holds.

In a giant company, automation can feel threatening because it makes the institution less dependent on any one person.

In a startup, automation is often exactly what allows a small number of people to become important enough to matter at all.

That is why the emotional valence is so different. In one context, becoming more legible and efficient may make you easier to replace. In the other, becoming more leveraged may be the thing that keeps the entire project alive.

And if it doesn’t keep the project alive, you still get to keep the leverage.

Which, in a world like this one, is not the worst consolation prize.


Image: Forging the Shaft (1874–77), John Ferguson Weir. Courtesy of The Metropolitan Museum of Art, New York. Public domain, via Wikimedia Commons.