Power Plants of AI
The Energy Infrastructure Behind Artificial Intelligence
Artificial intelligence is often described as a software revolution driven by algorithms and chips. In reality, its most important constraint may be far more physical: electricity. The next phase of the AI boom will be shaped less by models and more by power plants, grids, cooling systems, and the geopolitics of energy supply. This week, three events made that argument undeniable: the White House extracted a pledge from Big Tech to pay for its own power, China unveiled its 15th Five-Year Plan with AI-plus-manufacturing at its core, and Nvidia invested $30 billion in OpenAI while signaling it may never need to make an investment like it again.
The Illusion of the Software Revolution
The public debate about artificial intelligence is almost entirely about software. Which model performs better. Which benchmark was surpassed. Which company shipped the most capable system. The competition is framed as a race between algorithms.
But beneath that software narrative lies a much heavier physical infrastructure — one that is beginning to determine who wins and who waits.
Training and operating frontier AI models requires massive clusters of GPUs housed in hyperscale data centers. These facilities consume extraordinary amounts of electricity, measured not in megawatts but in gigawatts. Training GPT-3 consumed an estimated 1.3 gigawatt-hours of electricity. Next-generation frontier models are projected to require facilities drawing 150 megawatts or more on a continuous basis, the equivalent of powering a mid-sized city. At that scale, the constraint is no longer silicon. It is power.
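To put those figures on a single scale, here is a back-of-the-envelope sketch. The inputs are assumptions for illustration: the 1.3 GWh training estimate above and an average U.S. household consumption of roughly 10,500 kWh per year.

```python
# Rough scale comparison; all inputs are public estimates, not measurements.
GPT3_TRAINING_GWH = 1.3          # widely cited estimate for the GPT-3 training run
FACILITY_DRAW_MW = 150           # projected continuous draw of a frontier facility
HOUSEHOLD_KWH_PER_YEAR = 10_500  # approximate average U.S. household consumption
HOURS_PER_YEAR = 8_760

# Energy a 150 MW facility consumes in one year of continuous operation (GWh)
facility_gwh_per_year = FACILITY_DRAW_MW * HOURS_PER_YEAR / 1_000  # ~1,314 GWh

# Express both quantities as household-years of electricity
training_households = GPT3_TRAINING_GWH * 1e6 / HOUSEHOLD_KWH_PER_YEAR
facility_households = facility_gwh_per_year * 1e6 / HOUSEHOLD_KWH_PER_YEAR

print(f"GPT-3 training: ~{training_households:,.0f} household-years")          # ~124
print(f"150 MW facility: ~{facility_households:,.0f} households, year-round")  # ~125,000
```

One training run equals the annual electricity of roughly a hundred homes; one frontier facility, running continuously, equals a city of more than a hundred thousand.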
The Political Signal: Washington Concedes the Problem
On March 4, 2026 — the same day the Senate voted on war powers for Iran — seven of the largest technology companies in the world gathered at the White House to sign what the Trump administration called the Ratepayer Protection Pledge.
Amazon, Google, Meta, Microsoft, OpenAI, Oracle, and xAI committed to building, bringing, or buying new generation resources and to covering the full cost of the power-delivery infrastructure their data centers require, so that those costs are not passed on to American households. The pledge is voluntary and carries no federal enforcement mechanism. Energy experts doubted that voluntary promises can slow fast-rising electricity prices, noting that electricity is regulated mostly at the state level and managed by regional operators under market structures that vary across the country.
But the structural signal matters more than the legal mechanism. For the first time, the White House and the hyperscalers publicly conceded what energy analysts had been saying for years: large-scale data centers raise everyone's power bills, and someone has to bear those costs. The pledge doesn't resolve the problem. It acknowledges that the problem is real, and that it is now politically unavoidable.
When a president announces at the State of the Union that Big Tech must pay for its own electricity, the software narrative of AI is officially over. What replaces it is an infrastructure problem, an energy problem, and a governance problem, all at once.
Electricity Becomes the Bottleneck
Across the United States, developers of new data centers are encountering a simple and increasingly costly problem: the grid cannot connect them fast enough.
Utilities in key computing hubs now quote multi-year wait times for grid interconnections. In some regions, the timeline for energizing new facilities stretches three to seven years. Virginia — which hosts more data center capacity than anywhere else on earth — has watched its grid operator, PJM Interconnection, accumulate a queue of over 2,600 projects seeking connection, representing nearly 700 gigawatts of requested capacity. Most will wait years. Some will never connect.
This mismatch creates a structural tension with no obvious resolution. Compute capacity can scale in months. Electricity infrastructure operates on the timeline of regulated utilities, environmental permitting, and physical construction. A data center can be built and equipped before a single new transmission line is approved.
Microsoft understood this clearly enough that in 2024 it signed an agreement to restart Unit 1 of Three Mile Island, the plant whose Unit 2 meltdown made the site synonymous with nuclear risk in 1979, specifically to power its AI infrastructure in Pennsylvania. Google followed with contracts to support new nuclear capacity through Kairos Power. The signal was unambiguous: the hyperscalers had stopped waiting for the grid to catch up and started securing generation at the source.
The Architecture of Deregulation: Washington's Bet
While physical infrastructure expands, the political infrastructure around AI is being simultaneously dismantled — at least at the federal level.
In December 2025, President Trump signed an executive order titled "Ensuring a National Policy Framework for Artificial Intelligence," directing the Justice Department to establish an AI Litigation Task Force empowered to challenge state-level AI regulations in federal court. The order also instructed the Commerce Department to withhold federal broadband funding — $42 billion allocated under the BEAD program — from states deemed to have "onerous" AI laws. The explicit logic: a patchwork of fifty different regulatory regimes creates friction that China, operating under a single federal authority, does not face.
The order is legally contested. Florida's governor argued that an executive order cannot preempt state legislation; only Congress can. Colorado, whose algorithmic-discrimination law was specifically cited in the order, announced plans to challenge it in court. A bipartisan coalition of 36 state attorneys general had already come out against federal AI preemption months earlier.
What the order reveals structurally is less about regulation than about sequencing. The Trump administration has made a deliberate calculation: the speed of deployment matters more than the architecture of accountability. Whether that calculation holds — legally, politically, and in terms of public trust — is one of the open questions of the next four years.
Every Watt Becomes Heat
Electricity is only half of the physical problem. The other half is thermodynamics.
Almost every watt consumed by a processor ultimately becomes heat. A large AI cluster therefore generates enormous thermal loads that must be dissipated continuously — not occasionally, but every second of every hour of operation. A gigawatt-scale facility must remove roughly a gigawatt of heat. The cooling system is not secondary infrastructure. It is as critical as the compute itself.
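How hard is that in practice? A minimal sketch of the energy balance, assuming the heat is carried away by water with a 10-degree temperature rise (both the coolant and the temperature rise are illustrative assumptions):

```python
# Continuous heat removal follows the energy balance Q = m_dot * c_p * dT.
# Inputs are illustrative; real facilities mix evaporative, liquid, and air cooling.
HEAT_LOAD_W = 1e9   # 1 GW facility; nearly all electrical input ends up as heat
WATER_CP = 4186.0   # specific heat of water, J/(kg*K)
DELTA_T_K = 10.0    # assumed coolant temperature rise

mass_flow_kg_s = HEAT_LOAD_W / (WATER_CP * DELTA_T_K)  # ~23,900 kg/s
volume_flow_m3_s = mass_flow_kg_s / 1000.0             # water density ~1,000 kg/m^3

print(f"Coolant flow: ~{mass_flow_kg_s:,.0f} kg/s (~{volume_flow_m3_s:.0f} m^3/s)")
```

Roughly 24 cubic meters of water per second, continuously: the flow of a small river, dedicated to nothing but moving heat.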
This is why geography matters in ways that seem almost pre-industrial. Data centers are being sited not just near power plants but near rivers, aquifers, and cold climates. Iceland, Scandinavia, and northern Canada have attracted investment partly because ambient temperatures reduce cooling costs. Water availability has become a site-selection criterion alongside land prices and fiber connectivity.
The physics of heat removal also sets a hard limit on certain speculative architectures. Proposals for orbital data centers, occasionally floated as a way to escape terrestrial constraints, run directly into this wall. In space, there is no atmosphere or water to carry away heat. The only available mechanism is thermal radiation, which requires radiator surfaces of extraordinary size. The physics do not bend.
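The scale of that wall can be computed directly from the Stefan-Boltzmann law, which governs radiative heat rejection. The emissivity and radiator temperature below are illustrative assumptions:

```python
# Radiator sizing in vacuum: P = eps * sigma * A * T^4, solved for the area A.
SIGMA = 5.67e-8       # Stefan-Boltzmann constant, W/(m^2 * K^4)
EPSILON = 0.9         # assumed radiator emissivity (a near-ideal coating)
T_RADIATOR_K = 300.0  # assumed radiator temperature, near room temperature
HEAT_LOAD_W = 1e9     # 1 GW of waste heat to reject

area_m2 = HEAT_LOAD_W / (EPSILON * SIGMA * T_RADIATOR_K**4)
print(f"Radiator area: ~{area_m2 / 1e6:.1f} square kilometers")  # ~2.4 km^2
```

A gigawatt-class orbital facility would need on the order of 2.4 square kilometers of radiator surface, before accounting for sunlight, structural mass, or the plumbing to reach it. "Extraordinary size" is not rhetoric; it is arithmetic.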
The Rise of Energy-Anchored Compute
As energy becomes the limiting resource, a new model is beginning to emerge — one that inverts the traditional sequence of infrastructure development.
Instead of building data centers first and connecting them to the grid later, developers are increasingly looking for the opposite arrangement: locating compute directly adjacent to major sources of generation. The logic is straightforward. If the grid cannot deliver power fast enough, go to where the power already is.
This model — energy-anchored compute — changes the geopolitical map of AI infrastructure. The relevant question is no longer where fiber is cheapest or where engineers are most concentrated. It is where electricity is abundant, stable, and available at scale. Hydropower basins in the Pacific Northwest and Quebec. Nuclear facilities with available capacity. Large gas fields in the Permian Basin and the Gulf Coast. Massive solar installations in the American Southwest and the Middle East.
China understood this calculus early. Its buildout of solar capacity, now roughly 40 percent of global installed solar, is not only an energy transition. It is preparation for compute dominance. Cheap, controlled electricity generation at scale is the foundation on which the next generation of AI infrastructure will be built. Beijing is constructing that foundation deliberately.
The Nvidia Equation: Supplying the Fuel
This week, Nvidia's position in the AI ecosystem became structurally clearer — and more powerful.
In September 2025, Nvidia and OpenAI announced a letter of intent for a $100 billion infrastructure partnership to deploy 10 gigawatts of Nvidia systems. Five months later, no contract had been signed. On March 4, 2026, Jensen Huang told a Morgan Stanley conference that the $100 billion figure was "probably not in the cards" — because OpenAI is preparing to go public, likely by the end of the year.
What did happen: Nvidia participated in OpenAI's $110 billion funding round with a $30 billion investment — alongside $50 billion from Amazon and $30 billion from SoftBank, at a $730 billion valuation. Huang called it potentially the largest single investment Nvidia has ever made, and suggested it may be the last. He said the same about Nvidia's $10 billion investment in Anthropic.
The logic is precise. Nvidia doesn't need to own equity in the companies that depend on its chips. When you supply the fuel, you don't need to own the engine. OpenAI, Anthropic, and xAI all run on Nvidia GPUs, and even Google, which trains its frontier models on its own TPUs, buys Nvidia hardware at scale for its cloud. The competition between the labs is itself a guarantee of Nvidia's dominance. The chipmaker has positioned itself not as a participant in the AI race but as the infrastructure layer beneath it. That is a structurally more durable position than any single bet on a single model company.
What this week's moves reveal is that the consolidation phase of AI infrastructure has begun. Nvidia is closing its investment cycle. OpenAI is preparing for public markets. The frontier is becoming industrial — and industrial infrastructure tends toward concentration, not proliferation.
Beijing's Blueprint: The 15th Five-Year Plan
This week, as the U.S. Senate voted on war powers and Iranian drones struck data centers in the Gulf, China's National People's Congress convened in Beijing to approve its 15th Five-Year Plan — the country's economic and technological roadmap for 2026 through 2030.
The document places AI at the center of a broader "AI-plus-manufacturing" strategy: deploying artificial intelligence across the industrial sector at scale, through state-owned enterprises and major investment platforms. The framing is explicitly structural. AI is not a consumer technology; it is an industrial instrument of national modernization.
The Bank of China pledged $137 billion over five years to strengthen the AI supply chain, while provincial governments created computing-voucher subsidies designed to offset the impact of U.S. chip export controls. The controls, intended to slow China's AI development, appear instead to have accelerated its open-source strategy. DeepSeek's R1 model, released in January 2025, matched the performance of OpenAI's frontier reasoning models at a fraction of the cost; its underlying base model was trained for approximately $5.6 million on older chips that U.S. restrictions had left accessible.
The 15th Five-Year Plan also commits significant resources to semiconductor self-reliance, quantum computing, robotics, and the energy infrastructure that underlies all of it. China's 40 percent share of global installed solar capacity is not incidental to this strategy — it is its foundation. Cheap, state-controlled electricity at scale is the precondition for the compute dominance Beijing is building toward.
The contrast with Washington is architectural. The United States is deregulating to accelerate private investment. China is directing state capital toward the physical infrastructure — energy, compute, chips — that determines who can sustain frontier AI development over decades.
The Race Beyond Two Powers
The United States and China dominate this competition, but they do not define its entire geometry.
The European Union has chosen a different path — regulatory architecture first, infrastructure investment second. The EU AI Act, already partially in force, establishes a risk-tiered framework for AI deployment. Brussels has also committed to building sovereign compute infrastructure through the EuroHPC Joint Undertaking. The bet is that regulatory clarity will attract long-term investment while protecting democratic values. The risk is that the compliance burden slows deployment relative to less regulated competitors.
The Gulf states represent another model: energy-anchored compute as geopolitical positioning. Saudi Arabia's sovereign wealth fund commitments to AI infrastructure, the UAE's establishment of the world's first dedicated Ministry of Artificial Intelligence, and Qatar's data center investments all reflect a calculated recognition: abundant cheap energy plus strategic location creates leverage in the AI era. This week's Iranian drone strikes on AWS facilities in the UAE demonstrated the exposure that comes with that positioning, and the degree to which digital infrastructure has become a front line in physical conflict.
Brazil occupies a more complex position. President Lula's administration launched a $4 billion AI investment plan titled "AI for the Good of All," committing resources to develop a Portuguese-language large language model and to push the Santos Dumont supercomputer toward the global top five. In late 2025, ByteDance announced a $38 billion investment for a data center in Ceará, while a U.S. consortium including BlackRock, Microsoft, and xAI invested $40 billion in Brazilian data center infrastructure. Brazil's structural advantage is real: abundant hydropower, a large domestic market, and a renewable grid that makes its data centers among the lowest-carbon in the world. Its structural constraint is equally real: only 40 percent of Brazilian data is processed within the country. The rest flows abroad, outside the reach of Brazilian law, a dependency that no investment plan can resolve on its own without regulatory and political conditions that keep infrastructure investment competitive.
India represents yet another model, and perhaps the fastest-moving of all. In less than twelve months, the country has become the most contested market for AI infrastructure in the world. Microsoft, Google, Amazon, and OpenAI have committed tens of billions to data centers; Reliance is planning the world's largest facility in Jamnagar, Gujarat, with 3 gigawatts of capacity; the Adani Group announced $100 billion in renewable-powered infrastructure by 2035. India's structural advantage is not only energy. It is talent at scale, the scarcest asset in the AI ecosystem: hosting the world's second-largest developer community creates a form of leverage that no petrodollar can buy. But the constraints are proportional to the ambition. Installed data center capacity currently sits below 2 gigawatts, water stress affects the majority of existing facilities, and the electrical grid still operates below what projected demand requires. New Delhi is betting that aggressive incentives, including two-decade tax exemptions, subsidized land, and deliberately light regulation, can make the country the preferred destination for infrastructure capital as other markets close themselves off. It is a race between execution capacity and demand velocity. For now, demand is winning.
In East Asia, the pattern is different. Japan is not only building domestic infrastructure; it is financing America's. A $550 billion commitment covers data centers, power plants, and manufacturing on American soil, positioning Tokyo as an industrial co-leader with its ally rather than a competitor. At home, Japan faces the same paradox as the United States in aggravated form: data center energy demand is set to triple by 2034, but generation projects take twice as long to build as the timelines hyperscalers can accept. The response was the "Watt-Bit Collaboration," a policy of co-locating electricity generation with compute that is, in essence, the most honest plan any country has produced about the real problem. South Korea holds a different piece of the board: SK Hynix alone controls roughly half of the global HBM memory market, the critical component no frontier GPU can do without. Without Korean chips, there is no American model. That position, irreplaceable supplier of an input that everyone needs but no one wants to admit depending on, is structurally more powerful than any data center. Further south, Southeast Asia is absorbing global capital at accelerating speed but risks becoming rental infrastructure: hosting compute that serves strategies defined elsewhere.
The pattern that emerges across these distinct models is a division of roles that no country chose deliberately but that is now consolidating: who controls generation, who controls hardware, who controls talent, who controls regulation, and who simply hosts someone else's compute.
The Hidden Industrial System
The language surrounding artificial intelligence consistently suggests a purely digital transformation — clean, weightless, post-industrial. The reality is the opposite.
Power plants feed electricity into transmission networks. Substations route energy toward hyperscale facilities. Cooling towers dissipate the heat generated by thousands of processors operating simultaneously. Concrete, steel, copper, water — the material inputs to AI infrastructure are the same material inputs that built the industrial economy.
Artificial intelligence is not replacing industrial infrastructure. It is becoming one of its heaviest consumers. And this week, when the President of the United States gathered the CEOs of Amazon, Google, Meta, Microsoft, OpenAI, Oracle, and xAI at the White House to sign a pledge about electricity bills, that reality became impossible to ignore.
The Next Constraint
For the last decade, the central bottleneck in AI was compute — specifically, the availability of advanced semiconductors. Chip shortages were the story. TSMC was the chokepoint. Export controls on Nvidia GPUs were the policy lever.
For the next decade, the constraint is shifting. Chips are becoming more available. Power is becoming the scarce resource. And wherever electricity is scarce, the growth of AI will slow with it — regardless of how many models are trained, how many chips are manufactured, or how much capital is deployed.
The future of artificial intelligence will not be determined only in data centers and research labs. It will be decided in power plants, transmission corridors, permitting offices, and energy markets. It will be shaped by the countries and companies that understood early that the most advanced technology of the digital age runs on the same infrastructure as a steel mill.
Nvidia understood it. Beijing understood it. Washington, this week, was finally forced to admit it.
The intelligence is new. The foundation it requires is not.
— Global Drafts