TWO WEEKS AGO… A four-foot-tall robot walked the night ward of a Japanese hospital, as part of the nation’s first humanoid clinical deployment. It guided patients to blood collection during the day, ferried lab specimens between floors, and patrolled corridors after outpatient hours. The three-day trial at University of Tsukuba Hospital ran on a Unitree G1 — which starts at $16,000 in the base configuration — and an OS called Omakase — Japanese for “I’ll leave it to you,” the phrase you use when you trust the chef. The hospital director watched the final demo personally, and staff praised the smoothness of its movement: the robot didn’t fall, didn’t collide with anyone, and nurses didn’t have to adjust a thing.

A brief history of generalist-vs.-specialist
We’ve all asked ourselves at some point: What do I want to be when I grow up? That question has likely involved some version of this one: Do I go deep or wide? Domain expert or T-shaped? This same dynamic runs through every major technology transition of the past 40 years. In silicon, ASICs dominated compute for decades before a programmable processor (the Nvidia GPU, originally a specialist built for graphics but accidentally well-suited to the matrix multiplication machine learning demands) came along to eat all of the epoch’s most important workloads. In AI, it used to be that you’d build separate, narrow models for every task — translation, summarization, sentiment analysis, Q&A — until the transformer arrived and a single architecture (e.g., GPT-3) absorbed the jobs of dozens of purpose-built systems.
Even as the generalist wave crests, the cycle turns back to specialists.
The real history is messier than this clean relay-race retelling. Systems coexist and compete at every stage. ASICs never went away, and FPGAs occupy their own layer. And the specialists were regrouping before the generalist wave had even crested.
Google began designing TPUs back in 2015, in the early innings of the GPU’s ascent and a full decade before Nvidia’s world domination. (It’s now reaping the rewards of this early, prescient call.) Amazon, Meta, and Microsoft followed suit with their own custom silicon programs tuned to their stacks and workloads.
In AI, fine-tuned and domain-adapted models are reclaiming the long tail of tasks where precision earns a premium over versatility. The frontier labs ship specialist model lines alongside their flagships, optimized for high-volume tasks where speed and cost beat raw capability: frontier inference is becoming prohibitively expensive for narrow tasks, which makes fine-tuned specialists the rational choice. Meanwhile, the most valuable application-layer AI companies roll their own software on top of foundation models, wrapped around deep domain expertise (as they’d tell you, specialization is the moat).
Robotics, the latest (and last?) arena
As we wrote a few weeks back:
We’re seeing something like an intellectual holy war emerge in robotics — humanoids vs. specialized bots — with each camp hell-bent on proving their approach is the right one, and Team ‘Specialized Bots’ watching with dismay as humanoids inflect their way up the robotics hype cycle. Let this Gecko/Navy contract serve as a useful reminder that it’s unlikely to resolve cleanly in either direction. Yes, the built world is designed for humans. Yes, many automatable jobs are not human-shaped. Both can be true!
The humanoid is the end-all, be-all generalist’s bet: one body + brain for any task in any environment. Human spaces are designed for human bodies, so a machine in human form should be able to navigate and actuate any of them without redesigning the world around it. And to our earlier point, humanoid bulls would say: narrow deployment is the strategy. You deploy on a single station today to generate the sensorimotor training data that eventually enables true generalism tomorrow. Hospital corridors now, general-purpose manipulation later…pre-training in meatspace!
To this, Team Specialist would say: 60 years of mechanization history cautions otherwise! From the combine harvester to the excavator to the spot welder on a factory line, every successful piece of industrial equipment was purpose-built around the physics of a single task. Not because engineers lacked imagination, but because the physical world punishes generality in a way that software does not (every joint, actuator, and degree of freedom that isn’t needed for the task at hand is weight, cost, failure surface, and wasted energy).
Plus, every example of the generalist winning comes from IT, where bits are cheap to rearrange. The entire history of mechanical engineering (tractors, power tools, factory equipment) suggests the physical world doesn’t follow that pattern; it shows specialists permanently dominating physical domains.
A case study
To take an under-penetrated automation domain, look to construction. It has, to use scientific terms, a ginormous TAM (~$16T globally), and U.S. construction productivity has stagnated since the 1960s (by some measures it has actually declined). Offsite prefab has been attempted since the Gold Rush era of the 1800s and has failed each time, because the logistics of moving large, heavy, site-specific assemblies have defeated the dividends of factory economics. (Here’s a good argument from Nick Durham, an investor at Shadow Ventures, on why specialized > humanoids for large-scale construction.)
Hospital wards, warehouse aisles, and factory floors each have their own physics, edge cases, and reasons to punish a machine that tries to do everything. And it’s not as though these specialist machines are waiting to be invented — industrial robotics is already a $60B+ market, with incumbents like ABB, Fanuc, and Kuka that have been operating for decades.
Lessons to be had
- The specialist wins first. In silicon, AI, and now robotics, purpose-built beats general-purpose until the generalist crosses a cost/capability threshold that makes “good enough,” well, good enough.
- The generalist doesn’t win by being better. It wins by making better matter less — winning on flexibility and cost, with a marginal tradeoff for precision.
- The cycle turns back, but doesn’t resolve cleanly. The specialist arrives first, the generalist conquers the market, and the specialist reclaims the economics. In practice they coexist at every stage (e.g., the real winners in silicon are vertically integrated stacks that blur both categories). Robotics may fragment permanently into specialist/generalist coexistence across different domains.
- The physical world extends the specialist’s advantage. Software doesn’t penalize generality the way physics does. Today, in the real world, the domains where automation is most widely deployed are the ones where the machine was designed for the trade, not the other way around.
- A crossover point is looming. Every deployed humanoid today is doing fairly narrow work, and purpose-built machines seem destined to hold the physical domain longer than specialists held silicon or AI. But generalist platforms are following the same arc as GPUs and GPTs. And when a $16,000 humanoid can do 80% of what a $200,000 purpose-built system does…what do you think happens next? (A rough back-of-the-envelope version of that math follows below.)
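
To put rough numbers on that last bullet, here’s a minimal back-of-the-envelope sketch in Python. The $16,000, $200,000, and 80% figures are the illustrative ones from the list above, not measured data.

```python
# Back-of-the-envelope crossover math for the generalist-vs.-specialist bet.
# All figures are the illustrative numbers from the bullet above, not measurements.

def cost_per_capability(price_usd: float, capability_fraction: float) -> float:
    """Dollars paid per unit of task capability delivered."""
    return price_usd / capability_fraction

humanoid = cost_per_capability(16_000, 0.80)      # generalist: $20,000 per capability unit
specialist = cost_per_capability(200_000, 1.00)   # specialist: $200,000 per capability unit

print(f"Humanoid:   ${humanoid:,.0f} per capability unit")
print(f"Specialist: ${specialist:,.0f} per capability unit")

# On raw economics, the generalist wins once its cost per capability unit
# drops below the specialist's (pegged here at 100% capability), i.e. at any
# capability fraction above 16,000 / 200,000 = 8% in this toy comparison.
print(f"Break-even capability fraction: {16_000 / 200_000:.0%}")
```

The sketch deliberately ignores throughput, reliability, and integration cost, which are exactly the dimensions Team Specialist would argue dominate in the physical world.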
To return to where we started — to the hospital in Japan — a crossover is taking shape. Humanoids are cheap and getting cheaper (at the entry level, from China). The brains are getting smarter, though there doesn’t yet appear to be a clear path to embodied learning at the level true generalism demands; generalizing from narrow tasks to arbitrary physical manipulation remains an open problem. Dexterity, supply chain, manufacturing scale (the gaps we mapped last year in The Last Hardware Problem) form the cost-times-capability matrix standing between here and there, and they won’t improve on a neat, predictable schedule like Moore’s Law. We wouldn’t bet against the triumphal arrival of a truly generally capable humanoid, nor would we bet on this resolving cleanly, like a Western gunfight with one side walking away with the whole market. Maybe, this time, the least entertaining outcome is the most likely…