There is a common mistake in periods of rapid capability growth. People assume that because execution has become easier, judgment has become less important. This is backwards. Cheap execution does not remove the need for judgment. It exposes whether there was any judgment there to begin with.

That is the tension between Richard Hamming's The Art of Doing Science and Engineering and the world described by The Scaling Era. Hamming insists that careers are shaped by the quality of the problems one decides to care about. The scaling view, by contrast, emphasizes a world in which progress compounds through compute, capital, data, and organizational scale. One sounds almost moral, the other almost industrial. But the conflict is less severe than it first appears.

The synthesis is this: scaling increases leverage; leverage increases the cost of choosing badly. If a team, a founder, or an operator can now move faster than before, then the decision about what to move toward matters proportionally more.

Working Thesis

A world with cheap cognition does not reduce the value of taste. It reprices it upward.

What scaling actually changes

The scaling era changes the production function. More of the stack becomes programmable, composable, and available on demand. Research assistance, coding support, content generation, translation, analysis, design exploration, and orchestration all become cheaper at the margin. The supply of competent execution rises.

When the supply of execution rises, scarcity migrates. It moves from downstream labor to upstream selection. Which user problem is worth instrumenting? Which workflow deserves automation rather than documentation? Which data asset becomes more valuable as models improve? Which product gets stronger when inference costs fall by another order of magnitude? Those are not implementation questions. They are questions of taste, framing, and strategic attention.

This is why many AI products feel simultaneously impressive and trivial. They demonstrate capability, but not necessarily judgment. A demo can show that something is now possible. It says much less about whether it is worth building into a durable system.

Hamming's question still bites

Hamming kept returning to a severe but useful standard: are you working on the important problems of your field? The point was not to romanticize grandiosity. It was to reject aimless busyness. Many intelligent people stay permanently occupied with local optimizations because important questions feel risky, vague, or difficult to begin.

That pattern has not disappeared. It has become easier to hide inside it. Modern tools make it possible to look productive while remaining strategically inert. One can ship prototypes, wire together agents, summarize documents, and automate fragments of work without ever confronting the larger question of whether the target matters enough.

Hamming's advice matters more under these conditions because the opportunity cost of dabbling has increased. If one person can now produce what once required a small team, then the set of forgone alternatives grows larger. Better tools widen the penalty for weak selection.

Why leverage pushes value upstream

In ordinary environments, weak problem selection is often hidden by friction. Everything is slow, so bad choices do not reveal themselves quickly. In high-leverage environments, they do. A team can spend six weeks building the wrong thing with extraordinary efficiency. This is progress only in the bureaucratic sense that movement occurred. The status update will still call it momentum, which is part of the problem.

The real gain from leverage is not that it lets everyone do more. It lets the people with strong taste, domain context, and strategic patience do disproportionately more. That is why the best use of AI is rarely replacing all human judgment. It is concentrating human judgment on the few decisions that compound.

For a software business, those compounding decisions often sit in a small set of places: which customer pain is chronic rather than cosmetic, which process has enough volume to justify automation, which interface becomes more valuable as the underlying model improves, and which workflow creates proprietary data or trust rather than a thin wrapper.

The same logic holds for research. Once tools can accelerate literature review, experimentation, coding, and drafting, the differentiator shifts toward the choice of inquiry. What should be measured? Which hypothesis, if true, changes what comes next? Which question gets more interesting as capability rises? Hamming's instincts were well tuned for precisely this kind of environment.

A practical filter for choosing work

If the bottleneck has moved upstream, then the operating question becomes: how does one pick better? No filter will remove uncertainty, but a good filter can eliminate large classes of unserious work.

1. Prefer problems that worsen if ignored

Recurring operational pain is usually more valuable than theatrical novelty. If a workflow breaks every week, costs real money, or delays decisions, it is a stronger candidate than a clever feature looking for a use case.

2. Prefer work that compounds with usage

Good AI systems improve with accumulated examples, edge cases, process knowledge, and user trust. If the system does not get stronger with repetition, it may remain a service forever. Sometimes that is fine, but one should know the difference.

3. Prefer problems where you have asymmetric context

Hamming's advice can sound universal, but in practice important problems are not equally legible to everyone. The useful question is not only whether a problem matters, but whether you are unusually positioned to see it clearly. In my case, healthcare, nonprofits, and operational software are better terrain than generic consumer apps because the context is real rather than borrowed.

4. Prefer problems that survive model improvement

A fragile AI business depends on current model limitations. A stronger one gets better as models get cheaper, faster, and more capable. The right target is often not a single magical feature, but a workflow whose economics improve every time the underlying models do.

5. Prefer work that clarifies the organization

Many automation efforts fail because the process being automated is itself incoherent. Good problem selection often begins by locating where ambiguity, responsibility, and data quality are already constraining the system. If a project forces an organization to become legible to itself, it has second-order value.
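The five filters above amount to a screening procedure, and a rough sketch may make the shape of that procedure concrete. The field names, the example candidates, and the idea of simply counting passed filters are illustrative assumptions of this sketch, not a formula from the argument; in practice these judgments are not boolean.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A candidate problem, judged against the five filters above."""
    worsens_if_ignored: bool          # 1. chronic, recurring pain
    compounds_with_usage: bool        # 2. gets stronger with repetition
    asymmetric_context: bool          # 3. you are unusually positioned to see it
    survives_model_improvement: bool  # 4. economics improve as models do
    clarifies_the_org: bool           # 5. forces the organization to become legible

def filters_passed(c: Candidate) -> int:
    """Count how many of the five filters a candidate passes."""
    return sum([
        c.worsens_if_ignored,
        c.compounds_with_usage,
        c.asymmetric_context,
        c.survives_model_improvement,
        c.clarifies_the_org,
    ])

# Hypothetical comparison: a clever demo versus chronic operational pain.
demo = Candidate(False, False, False, False, False)
ops_pain = Candidate(True, True, True, True, False)
assert filters_passed(demo) < filters_passed(ops_pain)
```

The point of the sketch is not the arithmetic but the ordering it enforces: every filter is evaluated before any execution question is asked.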

Decision Rule

If a project becomes less interesting as models improve, it is probably a feature. If it becomes more interesting, it may be a platform.

What follows from this

The practical conclusion is unfashionable. The right response to the scaling era is not maximal activity. It is selective ambition. More prototypes should be killed early. More attention should be spent on identifying structural pain, durable asymmetries, and workflows that compound under better models.

Hamming's standard remains useful because it keeps the pressure on the first decision. Before asking whether something can be built, ask whether it deserves to be built. Before optimizing execution, ask whether the direction is worth acceleration. The hardest part of technical work was never motion. It was orientation.

The scaling era rewards people who can combine both books' instincts at once: Hamming's insistence on important problems, and the scaling worldview's appreciation for systems that magnify the consequences of good choices. Put differently, leverage is only impressive when attached to a target worth hitting.