On dramatic results and incremental changes (Spilling Ink #8)
Spilling Ink is the monthly newsletter from Curious Squid, Dan Brown’s IA and UX design agency based in Washington, DC.

“We should work toward dramatic results, not incremental improvements.” (Empire of AI, page 143.)
This quote from Sam Altman, as reported by Karen Hao in Empire of AI, jumped out at me. In the litany of misconceptions and toxic ideology spewing from the billionaire class, this one seems relatively innocuous.
But it’s still a problem.
It epitomizes a series of statements and actions that betray a fundamental misunderstanding of how we humans build things.
“Incremental improvement” is one of the best definitions of progress. This may be hard to hear, flying in the face of a couple decades of digital engineering methodology, but turning ideas into meaningful outcomes takes time. And not just time.
No shortcuts
Dramatic jumps in progress are sometimes how we experience the world. There were no smartphones, and then smartphones were everywhere. One of the myths Scott Berkun catalogs in The Myths of Innovation is the myth of epiphany: that the greatest inventions spring miraculously from someone’s mind. This myth persists because we often don’t get to see the work behind the outcome.
Behind this perception of progress is the assumption that quick wins can be big wins, and that no shortcut is too risky or too costly. When someone says that they want to focus exclusively on dramatic results, it suggests they’re willing to do anything to make every win a big win. It suggests that taking time is wasting time. It suggests that their ideology does not permit them to celebrate new insights or small contributions or even well-founded course-corrections.
The behaviors of OpenAI founders documented by Hao corroborate this belief. There is perhaps no better illustration than the relentless pursuit of training data, for which these companies scraped data from every corner of the internet. Every. Corner.
No sharing
When you work only toward dramatic results, you hide your efforts from your peers, lest they build on your efforts to achieve their own dramatic results. OpenAI started as a non-profit organization whose purpose was to share its findings and inventions. This openness came from the fear that working on AI behind closed doors would make it dangerous. But leadership lost the plot – whether from external pressure or greed or ignorance – and “OpenAI” became doublespeak: a label for a tech organization established to be open that was not open at all.
Hao reports on the increasing in-fighting within OpenAI, secretiveness between leaders, and competition between different ideological “clans” within the organization. This, too, is a logical outcome of focusing exclusively on “dramatic results.” When you need to make a splash, pursuing competing objectives merely dilutes your impact.
But gone are the days when innovations come from a single person working alone: technology is too complex to make progress without collaboration. In fact, we’ve long understood that the best ideas emerge from environments where ideas flow freely, what Steven Johnson calls “the adjacent possible.” Like the ecosystems of ancient Earth that gave rise to life, information ecosystems in which ideas are free to interact give rise to novel inventions.
No friction
When you work only toward dramatic results, your instinct is to pour as much money into the effort as possible. Greater investment surely yields greater results, no? You aim to reduce the friction, to give your team as much leeway as possible, to make sure nothing stands in their way. You can’t get big results if you have to conform. This means no constraints: no tight budgets, no resource limits, even no ethical quandaries. In this view of innovation, considering right from wrong is just another constraint on creativity.
Here again is a fundamental misunderstanding of creativity – that to innovate requires a friction-free environment. In fact, the best innovations come not from total freedom but from well-understood constraints. By establishing boundaries, the real world imposes its will on unfettered imaginations, giving creators a framework for applying their sense of quality and utility and value. In a previous essay, I’ve referred to those constraints as enablers: not holding back creative endeavors, but instead giving them purpose and fit.
To remove all perceived friction is to remove the very things that make your endeavor sustainable and humane, and that give it meaning.
You’re doing it wrong
If your business strategy prioritizes dramatic results over incremental progress, you’re doing it wrong. It’s not an either-or: Dramatic results come from incremental progress.
Marinating ideas in environments that incorporate constraints, that promote exploration, and that encourage sharing gives those ideas the best chance to become meaningful inventions. To think that scientists and technologists can simply jump to a dramatic result – especially while encouraging secrets and shortcuts – seriously misconstrues how humans have made any progress at all.
Before telling the team to focus on dramatic results, leadership had identified building scale as the primary strategy to achieve artificial general intelligence, their central objective. The problem is that no one really had a good definition of artificial general intelligence, so no one had a clear idea of the finish line. Many modern inventions didn’t start out with a clear objective, of course. But they started with an itch that needed scratching, and as they proceeded their objective came more clearly into focus.
For OpenAI, in the absence of a well-defined vision or a clear articulation of the problem they were trying to solve, the company’s vision was simply “scale”. This quote from a strategy document following the departure of several key personnel to form Anthropic captures the mindset: “more scale was its ‘most reliable way of achieving new capabilities’” (Empire of AI, page 176). They were not particularly clear on what those capabilities were. Or to what end. Or for whom.

