America’s AI Future

From Washington to Silicon Valley to NYC

Three tomes landed within a single news cycle last week: the White House’s AI Action Plan; Anthropic’s Build AI in America; and Goldman Sachs’ Powering the AI Era report. Each emerged from a distinct enclave of American power, but all three narratives ultimately converged on a shared, deceptively simple reality: Electrons determine America’s AI future.


I. Anthropic’s Energy Reckoning

Anthropic, maker of Claude and the careful, often cautious voice among frontier model labs, crystallized this reality with a headline takeaway: by 2028, the U.S. will require at least 50 gigawatts (GW) of additional electrical capacity solely to sustain its current trajectory in AI leadership. (To put that in perspective: 50 GW is roughly one and a half times New York State’s peak power demand, the equivalent of building over 30 full-scale nuclear reactors, or more than half the output of the entire U.S. nuclear fleet. OpenAI and Oracle’s Stargate project alone aims to consume nearly a tenth of that total, underscoring just how real and immediate this energy buildout is.)
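
For readers who want to redo the napkin math themselves, here’s a minimal sketch. The reactor size, fleet capacity, and Stargate target below are rough public estimates we’ve assumed, not figures pulled from the reports:

```python
# Back-of-envelope check on the 50 GW figure. All reference values are
# rough public estimates (assumptions), not numbers from the reports.
ADDITIONAL_AI_CAPACITY_GW = 50
REACTOR_CAPACITY_GW = 1.1    # assumed output of a full-scale reactor
US_NUCLEAR_FLEET_GW = 97     # approximate combined U.S. fleet capacity
STARGATE_TARGET_GW = 5       # assumed multi-GW target for the project

print(f"reactor-equivalents: {ADDITIONAL_AI_CAPACITY_GW / REACTOR_CAPACITY_GW:.0f}")
print(f"share of the U.S. nuclear fleet: {ADDITIONAL_AI_CAPACITY_GW / US_NUCLEAR_FLEET_GW:.0%}")
print(f"Stargate's slice of the 50 GW: {STARGATE_TARGET_GW / ADDITIONAL_AI_CAPACITY_GW:.0%}")
```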

AI workloads are split into two critical categories:

  • Frontier training runs: Developing new AI models, particularly large foundation models, is exceptionally energy-intensive. Training a single state-of-the-art (SOTA) model by 2028 could require a datacenter with 5 GW of capacity.
  • AI inference: Operating trained models demands a broader network of smaller datacenters distributed nationally to maintain low latency and responsiveness. (A rough sketch of the contrast follows this list.)
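
To make that contrast concrete, here’s a rough sketch; the utilization figure and site sizes are illustrative choices of ours, not numbers from the report:

```python
# Same aggregate power, two very different siting problems.
# All inputs below are illustrative assumptions.
HOURS_PER_YEAR = 8760
UTILIZATION = 0.9            # assume hardware runs near flat-out

# Frontier training: one 5 GW campus, one siting decision.
training_twh = 5.0 * UTILIZATION * HOURS_PER_YEAR / 1000

# Inference: the same 5 GW spread over 100 metro-edge sites of 50 MW each.
inference_twh = 100 * (50 / 1000) * UTILIZATION * HOURS_PER_YEAR / 1000

print(f"training campus: {training_twh:.0f} TWh/yr behind a single substation")
print(f"inference fleet: {inference_twh:.0f} TWh/yr across 100 local grids")
```

Same energy, radically different grid problem: one workload needs a power plant next door, the other needs a hundred urban interconnections.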

To visualize this critical distinction, consider the existing footprint of American datacenter infrastructure. This dense web of datacenters, fiber optic lines, and high-capacity transmission lines blankets the country, concentrated primarily in urban cores and strategic network intersections.

While this dense, interconnected network works exceptionally well for AI inference tasks (serving rapid, low-latency responses directly to end users), it’s not optimized for the monumental power demands of frontier AI training runs. Inference thrives in proximity to consumers, benefiting from existing fiber backbones and population-dense areas. Training, by contrast, demands far fewer but dramatically larger facilities, often sited away from urban constraints, close to abundant and affordable power sources. (Image source: NREL)

The existing infrastructure reveals a potential mismatch: urban datacenters are constrained by limited space, strained electrical grids, and higher costs. Training at frontier scales (requiring multi-GW capacity) demands a fundamentally different approach. So…where, exactly, should America site its future AI training clusters? We ran a back-of-the-napkin modeling exercise for the U.S., looking at industrial electricity prices, available grid capacity, interconnection speeds, and queue times across all 50 states.
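
A minimal sketch of the kind of weighted scoring such a pass can use, assuming simple min-max normalization. The weights and the three sample states’ inputs are illustrative placeholders, not the dataset behind our heatmap:

```python
# Toy siting score: cheaper power, more grid headroom, and shorter
# interconnection queues all score higher. Sample inputs and weights
# are illustrative assumptions, not the data behind our exercise.
STATE_DATA = {
    # state: (industrial price, cents/kWh; spare capacity, GW; queue wait, yrs)
    "Wyoming":  (6.1, 4.0, 2.0),
    "Texas":    (6.8, 9.0, 1.5),
    "Virginia": (8.9, 2.5, 4.0),
}
WEIGHTS = {"price": 0.4, "capacity": 0.35, "queue": 0.25}

def normalize(values, invert=False):
    """Min-max scale to [0, 1]; invert when lower raw values are better."""
    lo, hi = min(values), max(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return [1 - s for s in scaled] if invert else scaled

prices, capacities, queues = zip(*STATE_DATA.values())
scores = [
    WEIGHTS["price"] * p + WEIGHTS["capacity"] * c + WEIGHTS["queue"] * q
    for p, c, q in zip(
        normalize(prices, invert=True),     # cheaper power is better
        normalize(capacities),              # more headroom is better
        normalize(queues, invert=True),     # shorter queues are better
    )
]
for state, score in sorted(zip(STATE_DATA, scores), key=lambda x: -x[1]):
    print(f"{state:10s} {score:.2f}")
```

Swap in real industrial price tables and ISO queue statistics, and the same scaffold extends to all 50 states.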

The heatmap reveals compelling opportunities in the Midwest, Mountain West, and select Southern states that offer abundant, affordable electricity; permissive regulation; and physical space. But increasingly, “behind-the-meter” solutions are changing the siting game entirely: co-locating datacenters with small modular reactors (SMRs), dedicated solar-plus-storage arrays, or other on-site energy sources like natural gas. As full-stack AI companies and datacenter developers blend large-scale grid power with novel behind-the-meter schemes, America’s energy-rich hinterlands could soon help anchor our AI-driven future.


II. Diplomatic and Industrial Urgency

The White House’s newly released 90-point AI Action Plan outlines a muscular, full-spectrum push for “winning the AI race,” with three pillars:

  • Pillar I: Innovation (“Cut the Red Tape”): Slash regulatory barriers, back open-weight models, and launch and fund national AI innovation hubs.
  • Pillar II: Infrastructure (“Build, Baby, Build!”): Fast-track datacenters, chip fabs, SMRs, and aggressive grid modernization (via permitting fast lanes and NEPA shortcuts).
  • Pillar III: Diplomacy & Security (“Dominate Globally, Secure Locally”): Lock down strategic technology, harden supply chains, and export American-made AI stacks with tight controls. Prioritize defense AI deployment, guarantee Pentagon access to compute, and turn military colleges into elite AI talent incubators.

The message is clear: America will build faster, regulate smarter, and defend harder. But there’s an implicit admission here: none of this works without power. No amount of deregulation or policy orchestration can overcome a grid bottleneck.


III. Goldman Sachs: Financing America’s AI Energy Boom

Electrons don’t flow from policy papers or executive actions alone. Enter Goldman Sachs, which outlines the capital mechanics behind AI’s power appetite:

  • $1T in private hyperscaler investment by 2027
  • 160% growth in datacenter power consumption this decade
  • $5T total in required energy and compute infrastructure by 2030

So, what’s a nation to do? Goldman has thoughts — and, of course, capital pathways. The investment bank maps the emerging capital stack:

  • Private credit vehicles: Just this past week, we’ve seen reports of Elon Musk’s xAI moving to raise $12B through a private credit structure led by Valor Equity. The plan: purchase a massive tranche of Nvidia GPUs, lease them back to xAI, and collateralize the deal with chips and Grok’s core IP.
  • Long-term PPAs: Meta, Microsoft, Amazon, and Google are directly underwriting advanced nuclear (SMRs and, eventually, fusion) and renewable energy projects. The nuclear agreements, in particular, guarantee stable baseload electrons, which are critical for energy-hungry frontier AI workloads.
  • Structured financial instruments: Green bonds, yield-cos, sovereign wealth fund syndicates…financial structures once reserved for power plants and ports are being retooled for AI. The scale, urgency, and depreciation curve of modern datacenters demand capital that moves fast, compounds slowly, and carries conviction through volatility.

And a fourth, we’d add, is vertical carveouts.

  • Vertical carveouts: JVs like OpenAI and Oracle’s Stargate project signal a shift toward bespoke, vertically integrated infrastructure. Rather than relying solely on traditional hyperscalers or neoclouds, labs are increasingly building their own AI datacenter stacks from the ground up.

IV. America’s Boldest Bet Yet?

From the railroad surge of the late 19th century to the highway building boom after World War II and the fiber-driven growth of the dotcom era, U.S. infrastructure waves have set the stage for step-changes in productivity and global influence. The chart below offers a stylized comparison…not a strict historical ledger, but a harmonized look at how America’s boldest bets have stacked up in capital terms across eras.

While training clusters grab headlines, the real story will eventually play out elsewhere. As models are embedded into everything, AI inference (not training) will become a dominant driver of power consumption. It’s the rollout, not just the build, that will strain the grid. And this story is just getting started.

In dollars-and-cents terms, this buildout is unlike anything America has attempted. If current projections hold, we may be standing at the base of the largest peacetime infrastructure investment wave in American history. Which inevitably raises hard but necessary questions:

  • Can permitting systems move at the speed of capital?
    Right now, it can take 5–7 years to approve a substation or transmission line. That’s a nonstarter in an era where GPU clusters double every quarter and datacenters rise in under 120 days.
  • Can the grid scale as fast as compute?
    Training one frontier model can now require the output of an entire power plant. Inference is going exponential. Do our transmission networks, load forecasts, and interconnection processes even begin to match that curve?
  • Can Wall Street underwrite assets that depreciate faster than they amortize?
    GPUs shed much of their value within 18 months. Datacenters go obsolete in years. Can the capital stack adapt to an infrastructure cycle that churns faster than the financing models used to build it? (A stylized sketch of this mismatch follows this list.)
  • What gets left behind?
    As trillions in dry powder pour into AI infrastructure, what happens to sectors that aren’t plugged into the boom? Will industrials, life sciences, and public infrastructure benefit, or will they be starved of capital?
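
On the depreciation question above, here’s a stylized sketch of the mismatch. The 18-month half-life and five-year straight-line loan are assumptions for illustration, not terms from any actual deal:

```python
# Stylized collateral-gap model: resale value of a GPU fleet vs the
# outstanding balance on a straight-line loan. Both curves rest on
# assumed parameters, not terms from any real financing.
INITIAL_COST = 100.0      # normalize the fleet's purchase price to 100
HALF_LIFE_MONTHS = 18     # assume resale value halves every 18 months
LOAN_TERM_MONTHS = 60     # assume a 5-year straight-line amortization

for month in range(0, LOAN_TERM_MONTHS + 1, 6):
    resale = INITIAL_COST * 0.5 ** (month / HALF_LIFE_MONTHS)
    owed = INITIAL_COST * (1 - month / LOAN_TERM_MONTHS)
    gap = "  <- collateral worth less than the balance" if resale < owed else ""
    print(f"month {month:2d}: value {resale:5.1f} vs owed {owed:5.1f}{gap}")
```

Under these assumptions, the gap opens within the first year, which may be one reason structures like the xAI deal described above reach for additional collateral (core IP) beyond the chips themselves.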

These numbers are wild, but no longer theoretical.

When a single training run can draw as much power as an entire city, ideas once dismissed as fringe start entering the Overton window. We saw where this was heading early, and published full-length Antimemos on two of the most consequential possible shifts: 1) orbital AI datacenters beyond terrestrial limits, and 2) light-speed architectures as a post-electron paradigm.