A Song of Silicon and Light

A decade ago, the House of Google made a strategic bet. Across the Kingdom — Search, Maps, Photos, Translate, and early neural nets — leaders saw two things: (1) torrents of matrix math that conventional chips struggled to handle, and (2) a shift from general-purpose compute toward something heavier and spikier. The lords convened in Mountain View and decided that the time had come to forge their own custom silicon. The Tensor Processing Unit (TPU) program launched without much fanfare, with TPUv1 chips entering production circa 2015.

A decade later, all eyes are now on the TPU. The latest generation, Ironwood (TPUv7), is designed for frontier inference, boasts major performance and efficiency gains, introduces a new 3D topology, sizes up the superpod, and, for the first time, is being marketed and sold to other realms. With Gemini 3 trained on Ironwood, $GOOG at all-time highs, and a defensive House of Nvidia issuing cagey, cringey social media posts, the AI Compute War has a new front.


But something’s missing from the story: how are all of these chips linked?

Doing anything useful with AI today requires delicate orchestration. Thousands of chips must work in concert, passing data back and forth constantly. As scaling laws demand larger clusters, chips stack into racks, racks scale into superpods, and superpods stretch across the datacenter. What’s often lost in chip-level discourse is that moving data (communication) matters as much as doing math (compute).

Copper wiring has been the default interconnect (communication layer) in datacenters for decades. It is cheap, reliable, and effective across short distances. But as clusters grow, copper starts fighting you: signals degrade, heat builds, and power draw climbs. At frontier scale, you can end up spending as much on moving data as on the math itself.

Photonics is seen as the heir apparent. Where copper carries electrical signals, optical systems encode data in photons that travel through fiber or bounce off tiny mirrors. These pulses of light move with very low loss, minimal heat, and stable timing over distance: precisely the properties that copper lacks at scale.

Critically, the House of Google paired its TPU bet with a second wager: Mission Apollo, a massive, multi-billion-dollar investment in the dark magic of using light to move data. Since then, the hyperscaler has deployed tens of thousands of custom optical circuit switches (OCS) across its datacenters. Copper still handles local links, but the long roads increasingly run on light.


The parlay paid out.

Google claims 30% higher throughput, 40% lower power consumption, and 70% lower capex for OCS vs. electrical switching. Both Houses (Google and Nvidia) recognize optical as essential.

  • But Google architected its entire fabric around optical switching, creating a flatter network that directly connects 9,216 TPUs in a single superpod — and up to 147,000 TPUs at the datacenter level.
  • Nvidia is optimizing its existing multi-layer architecture with advanced optics, which adds cost and complexity compared to Google’s unified approach.
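The flat-vs-layered distinction can be made concrete with a toy model. The sketch below is illustrative only: the radix-64 switch size and the simple fat-tree reach formula are assumptions for the sake of arithmetic, not either company's actual topology. It counts how many tiers of electrical switching a conventional fat-tree would need to span the superpod and datacenter sizes cited above; a flat OCS fabric collapses those tiers.

```python
def fat_tree_tiers(endpoints: int, radix: int = 64) -> int:
    """Tiers of switches needed under a toy fat-tree model:
    one radix-R switch reaches R endpoints, and each added tier
    multiplies reach by R/2 (half the ports face up, half down)."""
    tiers, reach = 1, radix
    while reach < endpoints:
        reach *= radix // 2
        tiers += 1
    return tiers

# Cluster sizes are from the article; radix 64 is an assumed switch size.
print(fat_tree_tiers(9_216))    # superpod scale -> 3 tiers
print(fat_tree_tiers(147_000))  # datacenter scale -> 4 tiers
```

Every extra tier means more switches to buy, power, and cool, and more hops for every packet, which is exactly where the cost-and-complexity argument bites.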

SemiAnalysis recently estimated Ironwood’s total cost of ownership (TCO) at 44% lower than Nvidia’s comparable system (TCO covers silicon, networking, power, cooling, and so on). Even after Google adds margin to sell TPUs externally, customers could see 30-40% lower TCO.
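The arithmetic behind that customer figure is worth making explicit. The sketch below takes the 44% internal advantage from the SemiAnalysis estimate above and layers on a range of hypothetical markups (the 10-30% margins are assumptions, not reported numbers) to show how the 30-40% customer savings falls out.

```python
# Normalize Nvidia's comparable system TCO to 1.0.
nvidia_tco = 1.00
# SemiAnalysis estimate (per the article): Ironwood is 44% cheaper internally.
google_internal = nvidia_tco - 0.44  # -> 0.56

# Hypothetical margins Google might charge external customers (assumptions).
for margin in (0.10, 0.20, 0.30):
    customer_cost = google_internal * (1 + margin)
    savings = 1 - customer_cost / nvidia_tco
    print(f"margin {margin:.0%}: customer TCO savings {savings:.1%}")
```

At a 10-20% markup, customer savings land around 33-38%, squarely inside the 30-40% range; even a 30% markup still leaves roughly 27% on the table.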

A Tale of Two Kingdoms: Coatue’s chart contrasts Google-exposed names with OpenAI-ecosystem players. But this is a mere moment in time, and these curves can and likely will cross again, many times, given how quickly the AI narrative shapeshifts. Case in point: Coatue didn’t even include Google in its “Fantastic 40” AI companies a few months ago!

Other fiefdoms are taking notice and lining up at Google’s gates. The Dynasty of Dario (Anthropic) has pledged tens of billions of dollars for up to 1M TPUs (and 1+ GW of capacity in 2026), while the Monarchy of Mark Zuckerberg is also reportedly in alliance talks.

The House of Google, once a search empire with a stable of side projects, has lived to see the day when many of its Other Bets are maturing into indispensable parts of the armory. From custom silicon to optical interconnects, a decade of vertical integration is compounding into a formidable full-stack advantage.