Words by Adam Hu, SVP, Product
Mar 18 2026
5 mins

I sadly couldn’t make GDC this year but still ended up with a ton of questions about AI and Product Strategy, so I’ll share some of my thoughts.
There’s a split in the Artificial Intelligence (AI) industry playing out right now in real time. What I call the “Stargate” position, the $500 billion announced at the White House, bets that Artificial General Intelligence (AGI) is what you get when you scale compute far enough: the Large Language Model (LLM) era simply is the AGI era. Just keep adding Graphics Processing Units (GPUs) and High Bandwidth Memory (HBM), and keep driving.
Maybe they’re right. But we can’t build products on maybe, and we can’t build our roadmaps on unknown timelines.
A second group, headed by scientists like Yann LeCun, Demis Hassabis, and Zhu Songchun, believes that LLMs are a genuine breakthrough, but they have an inherent ceiling. The models hallucinate not because the engineering is immature, but because the math guarantees it. They don’t get reliably better by getting bigger. This isn’t a flaw to be patched in the next release. It’s a property of the architecture.
We don’t do AI research at Coda. So if we accept that LLMs have limits, like every machine learning tool before them, the choice is: wait for the God Model that may never arrive, or get obsessively good at extracting value from the tools that exist right now.
I’ve taken to calling the second path LLM-maxxing. I’m an LLM-maxxer, and it’s how we’re building every AI product at Coda.
If you have a model that is 99% accurate per step, a 100-step task will fail 63% of the time. A 1,000-step task will fail 99.99% of the time. Hallucination isn’t an engineering flaw to be patched: OpenAI’s own researchers showed it’s an irreducible mathematical property of the architecture. Current state-of-the-art models hallucinate at least 1% of the time even on simple tasks, and their newer “reasoning” models actually hallucinate more, up to 33% on some tasks, not less.
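The compounding is easy to verify yourself; a few lines of Python reproduce the failure rates above:

```python
# Per-step accuracy compounds multiplicatively across a task:
# even a 99%-accurate step fails most long chains.
def task_success(per_step_accuracy: float, steps: int) -> float:
    """Probability that every step in the chain succeeds."""
    return per_step_accuracy ** steps

print(f"100 steps:   {1 - task_success(0.99, 100):.1%} failure rate")
print(f"1,000 steps: {1 - task_success(0.99, 1000):.3%} failure rate")
```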
But this isn’t a dead end. The MAKER paper (maximal agentic decomposition) showed that instead of building a smarter model, you build a smarter system: decompose tasks into atomic steps, let swarms of cheap agents iterate and vote, and achieve flawless completion across a million steps. Better reasoning by reasoning less per step, at lower cost.
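The mechanism is simple enough to sketch. This toy version (not the MAKER implementation, just the voting idea, with made-up error rates) gives each atomic step to several unreliable agents and keeps the majority answer:

```python
import random
from collections import Counter

random.seed(42)

def cheap_agent(x: int) -> int:
    """One atomic step (an increment), performed by an agent that
    errs 2% of the time by skipping the step."""
    return x + 1 if random.random() < 0.98 else x

def voted_step(x: int, n_agents: int = 5) -> int:
    """Run the same atomic step through a small swarm and keep the
    majority answer, so individual errors get outvoted."""
    votes = Counter(cheap_agent(x) for _ in range(n_agents))
    return votes.most_common(1)[0][0]

# A long task decomposed into atomic steps, with voting at each one.
value = 0
for _ in range(1_000):
    value = voted_step(value)

# Typically lands on 1000; a lone unvoted 2%-error agent would be
# expected to lose roughly 20 of the 1,000 increments.
print(value)
```

The point is the system-level property: per-step error shrinks roughly cubically with a five-agent majority, so reliability comes from structure, not from a smarter model.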
The pattern: LLM intelligence is hitting a scaling plateau, and inference is commoditizing. What matters is the human intelligence around it.
The most underappreciated development in AI right now is DeepSeek’s Engram architecture, published in January 2026. The core finding: when you inject knowledge directly instead of making the model reason its way there, the model doesn’t just get cheaper, it actually thinks better, because it stops wasting computation on remembering and spends it on reasoning. Claude’s recent step-change with model-level Skills is a convergent validation of this exact mechanism.
But we also can’t wait for DeepSeek V4, and we can’t wait for them to open up engram injection at the organizational level rather than just the model level. Instead, we’re already building on the same principle: encoding domain knowledge into structured lattices and injecting it into cheap, fast agents, rather than letting expensive models figure everything out from scratch every time. When engrams ship at scale later this year, the performance gains will validate what LLM-maxxers have been doing all along.
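You can approximate the principle today with ordinary retrieval. A minimal sketch, where the lattice, its keys, and the payment facts are all invented for illustration (this is not our production schema):

```python
# Hypothetical structured lattice of domain facts, keyed by (domain, market).
DOMAIN_LATTICE = {
    ("payments", "ID"): {"currency": "IDR", "tax_rule": "PPN 11%"},
    ("payments", "TH"): {"currency": "THB", "tax_rule": "VAT 7%"},
}

def build_prompt(task: str, market: str) -> str:
    """Inject known facts directly into a cheap agent's context, so its
    compute goes to reasoning about the task rather than recalling facts."""
    facts = DOMAIN_LATTICE.get(("payments", market), {})
    injected = "; ".join(f"{key}={value}" for key, value in facts.items())
    return f"Known facts: {injected}\nTask: {task}"

print(build_prompt("configure checkout for this merchant", "ID"))
```

The model never has to remember which market uses which tax rule; the lattice hands it the answer, and the prompt budget goes to the task itself.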
This translates into three principles:
At Coda, launching a merchant site was a massive chore — manual merchant IDs, payment tokens, and localized configurations across 70 stores and 30+ languages. Hours of humans clicking buttons after we already knew exactly what to launch. The perfect problem for LLM agents.
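One way to frame that chore for agents, sketched with invented store names, locales, and action labels (not our actual launch tooling):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LaunchStep:
    """One atomic, independently verifiable unit of launch work."""
    store: str
    locale: str
    action: str

def plan_launch(stores: list[str], locales: list[str]) -> list[LaunchStep]:
    """Decompose a launch into one atomic step per store/locale pair,
    so each can be executed, checked, and retried by a cheap agent."""
    return [
        LaunchStep(store, locale, "configure_payment_token")
        for store in stores
        for locale in locales
    ]

steps = plan_launch(["store-sg", "store-id"], ["en", "id", "th"])
print(len(steps))  # 2 stores x 3 locales = 6 atomic steps
```

At 70 stores and 30+ languages that grid is over 2,000 atomic steps, which is exactly the shape of work where humans clicking buttons lose and decomposed agents win.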
All three layers can be edited by humans. Only the first two can be altered by agents. The hierarchy of intelligence is enforced by design.
We aren’t building for the day the God Model arrives and solves everything. We’re building the machine that makes it irrelevant if it never does.
We’re not waiting for perfect models; we’re building systems that work now.
If you want to explore what that looks like for your business, talk to us: coda.co/contact.