
Good morning, it’s Ryan Duffy. This past week has been all about American AI.
Three major institutions weighed in. Different arenas, same conclusion: the future of AI hinges less on models than on energy. This is bigger than tech. It’s industrial policy. It’s national security. It’s the next great infrastructure race…measured not in chips, but in electrons.
We break it all down below, and what it could mean for our collective work ahead across tech, finance, and policy. Would love to hear your thoughts once you’ve read through.
And, as always, thank you for driving the Great American Renaissance 🇺🇸🇺🇸🇺🇸.
P.S. Were you forwarded this email? Subscribe here.



From Washington to Silicon Valley to NYC, three tomes landed within a single news cycle last week: the White House’s AI Action Plan; Anthropic’s Build AI in America; and Goldman Sachs’ Powering the AI Era report. Each emerged from a distinct enclave of American power, but all three narratives ultimately converged on a shared, deceptively simple reality: Electrons determine America’s AI future.
I. Anthropic’s energy reckoning
Anthropic, maker of Claude and the careful, often cautious voice among frontier model labs, crystallized this reality with a headline takeaway: by 2028, the U.S. will require at least 50 gigawatts (GW) of additional electrical capacity solely to sustain its current trajectory in AI leadership. (To put that in perspective: 50 GW is nearly double New York State’s peak power demand, the equivalent of building over 30 full-scale nuclear reactors, or more than half the output of the entire U.S. nuclear fleet. OpenAI and Oracle’s Stargate project alone aims to consume nearly a tenth of that total, underscoring just how real and immediate this energy buildout is.)
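For the spreadsheet-inclined, the comparisons above check out on the back of an envelope. The reference figures below are rough public estimates we plugged in for illustration (NY peak demand of roughly 30 GW, ~1.4 GW per large reactor, ~97 GW of total U.S. nuclear capacity), not numbers from Anthropic’s report:

```python
# Sanity-checking the 50 GW comparisons with rough public reference figures.
AI_BUILDOUT_GW = 50        # Anthropic's 2028 additional-capacity estimate
NY_PEAK_GW = 30            # NY State record peak demand, roughly ~30 GW
REACTOR_GW = 1.4           # a large U.S. reactor unit, roughly ~1.4 GW
US_NUCLEAR_FLEET_GW = 97   # total U.S. nuclear capacity, roughly ~97 GW

print(f"vs. NY peak demand:  {AI_BUILDOUT_GW / NY_PEAK_GW:.1f}x")
print(f"reactors needed:     {AI_BUILDOUT_GW / REACTOR_GW:.0f}")
print(f"share of U.S. fleet: {AI_BUILDOUT_GW / US_NUCLEAR_FLEET_GW:.0%}")
```

Run it and you land right where the report’s framing does: nearly double New York’s peak, north of 30 reactors, and more than half the nuclear fleet’s output.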
AI workloads are split into two critical categories:
Frontier training runs: developing new AI models, particularly large foundation models, is exceptionally energy-intensive. Training a single SOTA model by 2028 could require a datacenter with 5 GW capacity.
AI inference: Operating trained models demands a broader network of smaller datacenters distributed nationally to maintain low latency and responsiveness.
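To put the training side in energy terms: a quick sketch of what a 5 GW facility implies for a single run. The run length and average utilization below are our illustrative assumptions, not figures from the report:

```python
# Rough energy scale of one frontier training run at the cited 5 GW
# facility size. Run length and utilization are illustrative assumptions.
CAPACITY_GW = 5
RUN_DAYS = 90          # assumed ~3-month training run
UTILIZATION = 0.8      # assumed average draw vs. nameplate capacity

# GW * hours = GWh; divide by 1,000 for TWh
energy_twh = CAPACITY_GW * 24 * RUN_DAYS * UTILIZATION / 1000
print(f"~{energy_twh:.1f} TWh for one run")
```

Under those assumptions, one run consumes on the order of 8–9 TWh, comparable to the annual electricity use of a mid-sized American city, compressed into a single quarter.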
To visualize this critical distinction, consider the existing footprint of American datacenter infrastructure. This dense web of datacenters, fiber optic lines, and high-capacity transmission lines blankets the country, concentrated primarily in urban cores and strategic network intersections.

While this dense, interconnected network works exceptionally well for AI inference tasks (serving rapid, low-latency responses directly to end users), it's not optimized for the monumental power demands of frontier AI training runs. Inference thrives in proximity to consumers, benefiting from existing fiber backbones and population-dense areas. Training, by contrast, demands far fewer but dramatically larger facilities, often sited away from urban constraints, close to abundant and affordable power sources. (Image source: NREL)
The existing infrastructure reveals a potential mismatch: urban datacenters are constrained by limited space, strained electrical grids, and higher costs. Training at frontier scales (requiring multi-GW capacity) demands a fundamentally different approach. So…where, exactly, should America site its future AI training clusters? We ran a back-of-the-napkin modeling exercise for the U.S., looking at industrial electricity prices, available grid capacity, interconnection speeds, and queue times across all 50 states.

The heatmap reveals compelling opportunities in the Midwest, Mountain West, and select Southern states that offer abundant, affordable electricity; permissive regulation; and physical space. But increasingly, "behind-the-meter" solutions are changing the siting game entirely (co-locating datacenters with small modular reactors (SMRs), dedicated solar-plus-storage arrays, or other on-site energy sources, like natural gas). As full-stack AI companies and datacenter developers blend large-scale grid power with novel behind-the-meter schemes, America’s energy-rich hinterlands could soon help anchor our AI-driven future.
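To give a flavor of the exercise: the heatmap boils down to a weighted score per state. The sketch below is a minimal stand-in for that kind of model; the weights, the scoring function, and the three sample states’ figures are illustrative placeholders, not the inputs behind our actual heatmap:

```python
# A minimal sketch of a back-of-the-napkin siting score.
# All figures and weights below are illustrative placeholders.

# state -> (industrial price $/kWh, grid headroom GW, interconnection queue yrs)
states = {
    "Wyoming":  (0.06, 4.0, 1.5),
    "Texas":    (0.07, 9.0, 2.0),
    "Virginia": (0.09, 2.0, 4.0),
}

def siting_score(price, headroom_gw, queue_years):
    """Higher is better: cheap power, spare grid capacity, short queues."""
    return (1 / price) * 0.5 + headroom_gw * 0.3 - queue_years * 2.0

ranked = sorted(states, key=lambda s: siting_score(*states[s]), reverse=True)
for s in ranked:
    print(s, round(siting_score(*states[s]), 1))
```

The real version uses state-level data across all 50 states, but the shape is the same: power price dominates, queue time punishes, and the energy-rich interior rises to the top.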
II. Diplomatic and Industrial Urgency
The White House’s newly released 90-point AI Action Plan outlines a muscular, full-spectrum push for “winning the AI race,” with three pillars:
Pillar I: Innovation (Cut the Red Tape) Slash regulatory barriers, back open-weight models, and launch/fund national AI innovation hubs.
Pillar II: Infrastructure ("Build, Baby, Build!") Fast-track datacenters, chip fabs, SMRs, and aggressive grid modernization (via permitting fast lanes and NEPA shortcuts).
Pillar III: Diplomacy & Security (Dominate Globally, Secure Locally) Lock down strategic technology, harden supply chains, and export American-made AI stacks with tight controls. Prioritize defense AI deployment, guarantee Pentagon access to compute, and turn military colleges into elite AI talent incubators.
The message is clear: America will build faster, regulate smarter, and defend harder. But there’s an implicit admission here: none of this works without power. No amount of deregulation or policy orchestration can overcome a grid bottleneck.
III. Goldman Sachs: Financing America’s AI Energy Boom
Electrons don’t flow from policy papers or executive actions alone. Enter Goldman Sachs, which outlines the capital mechanics behind AI’s power appetite:
$1T in private hyperscaler investment by 2027
160% growth in datacenter power consumption this decade
$5T total in required energy and compute infrastructure by 2030
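One way to read that 160% figure: if we assume it’s cumulative growth over ten years (our assumption, not Goldman’s stated methodology), the implied annual growth rate is surprisingly tame on paper and relentless in practice:

```python
# Implied annual growth rate behind "160% growth this decade,"
# assuming the 160% is cumulative over 10 years (illustrative assumption).
total_growth = 1.60   # +160% cumulative
years = 10

cagr = (1 + total_growth) ** (1 / years) - 1
print(f"implied annual growth ≈ {cagr:.1%}")
```

Roughly 10% a year, every year, for a decade, on a grid that historically grew load in the low single digits.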

So, what’s a nation to do? Goldman has thoughts — and, of course, capital pathways. The investment bank maps the emerging capital stack:
Private credit vehicles: Just this past week, we’ve seen reports of Elon Musk’s xAI moving to raise $12B through a private credit structure led by Valor Equity. The plan: purchase a massive tranche of Nvidia GPUs, lease them back to xAI, and collateralize the deal with chips and Grok’s core IP.
Long-term PPAs: Meta, Microsoft, Amazon, and Google are directly underwriting advanced nuclear (SMR, and eventually, fusion) and renewable energy projects. These agreements guarantee stable baseload electrons, which are critical for energy-hungry frontier AI workloads.
Structured Financial Instruments: Green bonds, yield-cos, sovereign wealth fund syndicates…financial structures once reserved for power plants and ports are being retooled for AI. The scale, urgency, and depreciation curve of modern datacenters demand capital that moves fast, compounds slowly, and carries conviction through volatility.
And a fourth, we’d add, is vertical carveouts.
Vertical carveouts: JVs like OpenAI and Oracle’s Stargate project signal a shift toward bespoke, vertically integrated infrastructure. Rather than relying solely on traditional hyperscalers or just neoclouds, labs are increasingly building their own AI datacenter stacks from the ground up.
America’s Boldest Bet Yet?
From the railroad surge of the late 19th century to the highway building boom after World War II and the fiber-driven growth of the dotcom era, U.S. infrastructure waves have set the stage for step-changes in productivity and global influence. The chart below offers a stylized comparison…not a strict historical ledger, but a harmonized look at how America’s boldest bets have stacked up in capital terms across eras.

While training clusters grab headlines, the real story will eventually play out elsewhere. As models are embedded into everything, AI inference (not training) will become a dominant driver of power consumption. It’s the rollout, not just the build, that will strain the grid. And this story is just getting started.
In dollars-and-cents terms, this buildout is unlike anything America has attempted. If current projections hold, we may be standing at the base of the largest peacetime infrastructure investment wave in American history. Which inevitably raises hard but necessary questions:
Can permitting systems move at the speed of capital?
Right now, it can take 5–7 years to approve a substation or transmission line. That’s a nonstarter in an era where GPU clusters double every quarter and datacenters rise in under 120 days.
Can the grid scale as fast as compute?
Training one frontier model can now require the output of an entire power plant. Inference is going exponential. Do our transmission networks, load forecasts, and interconnect protocols even begin to match that curve?
Can Wall Street underwrite assets that depreciate faster than they amortize?
GPUs lose value every 18 months. Datacenters are obsolete in years. Can the capital stack adapt to an infrastructure cycle that churns faster than the financing models used to build it?
What gets left behind?
As trillions in dry powder pour into AI infrastructure, what happens to sectors that aren’t plugged into the boom? Will industrials, life sciences, and public infrastructure benefit, or will they be starved of capital?
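The depreciation-vs-amortization question above is worth making concrete. The sketch below is illustrative only: we assume a GPU fleet losing roughly half its value every 18 months, financed by a loan paid down straight-line over seven years (both numbers are our assumptions for the example):

```python
# Illustrative mismatch between GPU asset value and loan balance.
# Assumptions: value halves every 18 months; straight-line 7-year loan.
principal = 100.0        # $100M of GPUs, say
halving_months = 18      # assumed GPU value half-life
loan_years = 7

for year in (1, 3, 5):
    asset_value = principal * 0.5 ** (year * 12 / halving_months)
    loan_balance = principal * (1 - year / loan_years)  # straight-line paydown
    print(f"year {year}: asset ${asset_value:.0f}M vs. debt ${loan_balance:.0f}M")
```

By year three the fleet is worth roughly $25M against about $57M of remaining debt. That gap, multiplied across trillions of dollars of infrastructure, is the underwriting problem in a nutshell.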
These numbers are wild, but no longer theoretical.
When a single training run can draw as much power as an entire city, ideas once dismissed as fringe start entering the Overton window. We saw where this was heading early, and published full-length Antimemos on two of the most consequential possible shifts: 1) orbital AI datacenters beyond terrestrial limits, and 2) light-speed architectures as a post-electron paradigm.

While others build a piece of the communications stack, CesiumAstro engineers the entire architecture. Its systems span antennas, radios, SATCOM, and satellites, all designed to work together or on their own. Every product is developed in-house, built vertically, and made to scale. Since 2017, CesiumAstro has tackled the hardest problems in space connectivity. Its phased arrays, processors, and flight-ready spacecraft already support commercial and government missions across air, ground, and orbit. What sets CesiumAstro apart isn’t just what it builds, but how: modular design, commercial components, and a delivery-first culture that skips traditional defense timelines. At CesiumAstro, the stack is the product, the playbook, and the path forward. And it’s shipping now.

Hey, it’s Jeff Crusey, your Resident Investor at Per Aspera.
“HARD TECH =/= HARD TO FUND”
For years, “hard tech” was shorthand for hard to fund. Now it’s where the money is flowing. In Q2 2025, hard tech startups claimed six of the ten largest venture deals worldwide, pulling in $8.7B. This includes companies developing next-gen satellites, energy systems, and defense platforms: big, mission-driven bets aimed not just at market share but at national resilience and security. AI has been the hot topic and remains hot, but my point is that it’s no longer the only star on stage.
And here’s the kicker: defense tech companies are now commanding higher revenue multiples than AI firms.
I see this as conviction capital flowing toward foundational technologies and existential challenges (defense, infrastructure, energy, and compute). If we don’t outbuild China, then as Dan Goldin has warned, we’ll be mining coal in China. It seems that for the first time, the U.S. is mobilizing private capital to match state-level ambition.



A Shadow Sovereign Fund? President Trump has secured a $550B commitment from the Japanese government (from Tokyo’s coffers, not corporate pockets) to help rebuild America’s strategic backbone (semis, ships, energy, and so on). This hybrid investment vehicle, operating under executive fiat and sidestepping Congress, would pump equity, loans, and guarantees into industrial projects, with the U.S. pocketing 90% of the upside. Hot on its heels, the U.S. announced a similar EU pact today, dodging a trade war with 15% tariffs on most European exports (down from the threatened 30%, but up from 4.8%), while unlocking $600B in U.S. investments for infrastructure, plus $750B in LNG buys over three years.
PA Take: While the deal signals tight alignment with the Japanese and Europeans against China’s state-capitalist playbook, questions linger: Is the money there? Can it survive domestic blowback, e.g., French grumbles of “submission”? And will this endure as an institution or be a one-cycle anomaly? This sovereign-style war chest would be a seismic shift in U.S. industrial policy, mirroring the state-directed strategies that powered Asia’s techno-industrial boom over recent decades (spawning beasts like TSMC via targeted subsidies and alliances). And, in the context of America’s industrial renaissance… 👇

Tesla 🤝 Samsung. Fresh off the wire today, Elon Musk has confirmed a $16.5B chip deal between Tesla and Samsung Electronics. Their new(ish) Taylor, TX fab will produce Tesla’s AI6 chips, with Musk vowing to personally oversee and accelerate output. The deal runs through 2033 and has upside potential to exceed the stated value. Context: Samsung handles Tesla's AI4 chips now, while rival TSMC handles AI5 duties. Analysts peg an AI6 rollout in 2027/2028.
PA Take: The obvious read is that Tesla’s move here locks in tighter supply chain control amid rising global risks. And when we say tight, we mean tight: the Taylor fab is a ~30-mile drive from Tesla HQ, as opposed to TSMC’s facilities an ocean over. (And, while we’re here, TSMC’s accelerated U.S. fab strategy highlights a clear, urgent shift toward semiconductor sovereignty as fundamental as energy independence once was.)
This is also a key win for Samsung’s foundry division, stung by more than $3.6B in losses in the first half of 2025 as it reels from customer defections and underutilized fabs. The South Korean conglomerate now has critical ammo as it aims to claw back market share from a dominant TSMC, which commands a 67% share of cutting-edge nodes. All the while, U.S. industrial champion Intel keeps slipping further, as it weighs a spinoff of its struggling foundry arm, after repeated restructuring, and trims its workforce by 15% (shedding 24,500 jobs by year’s end). More broadly, this move reinforces the rise of “fab nationalism,” with AI supply chains increasingly dictated by proximity, not just cost or performance. And it’s another big win for the Texas Triangle, an economic powerhouse and emerging megaindustrial zone worth keeping tabs on.

Revere Rides Again. Paul Revere’s 224-year-old copper company, Revere Copper Products (based in Rome, NY), is back in growth mode. After decades of decline, the company, led by a husband-and-wife duo, has tripled its output of copper bars, a critical component for electrical-system and datacenter builds. A $25M expansion has returned the firm to profitability, with $850M in annual sales.
PA Take: For me (Ryan speaking, I was born in Boston) this hits with extra weight. Revere was a silversmith, a manufacturer, and a midnight rider. He built things that last. The fact that his company is helping to supply the electrical backbone? That’s badass. And it’s the kind of industrial energy we’re trying to champion here.
What’d we miss? Have something others participating in the Renaissance should know? Hit reply and drop us a line at [email protected].


