
Over the past several months, the AI infrastructure boom has been a powerful driver of both equity and fixed income markets. While enthusiasm for the technology’s potential has sparked market rallies, concern that the U.S. is headed toward a severe overbuild of data center capacity by the end of this decade has had the opposite effect. This bearish view, however, relies largely on top‑down comparisons between aggregate utility interconnection backlogs and third‑party forecasts of AI-related power demand.
We think this indirect approach – and thus its conclusion – is directionally flawed. The apparent AI infrastructure overbuild causing intermittent market ructions is largely an accounting artifact. When the focus shifts from paper capacity to physically deliverable capacity, the dominant risk is not excess supply but constrained execution.
To arrive at what we consider a more accurate assessment of the infrastructure landscape, we applied a bottom‑up approach focused on projects with a realistic path to energization by 2030. This methodology indicates that only about 85 gigawatts (GW) of new data center power capacity is likely to come online this decade. Importantly, that represents roughly 60% of currently signed or committed utility capacity slated for AI infrastructure – and, more worryingly, only about 10% of the total utility pipeline quoted in the financial media. Consequently, the risk that should vex investors is not surplus power but planned generating capacity shrinking – or never arriving at all.
This perspective has investment implications well beyond power utilities. It reshapes how to think about AI infrastructure economics, scarcity pricing, capital expenditure (CapEx) efficiency, and the durability of returns across a broad swath of the AI and energy value chain.
Why top‑down overbuild math breaks down
The standard overbuild argument compares two large numbers:
- Third‑party forecasts of AI and data center power demand in 2030
- Utility‑disclosed interconnection backlogs, adjusted for power usage effectiveness (PUE)
On paper, utilities appear set to connect nearly twice the capacity that consensus demand forecasts imply, suggesting supply would overwhelm demand. Interconnection backlogs, however, are not a measure of deliverable capacity; they are a measure of intent. Queue position does not build substations, transformers, pipelines, or cooling infrastructure.
Furthermore, our analysis indicates that even the “signed” portion of the backlog overstates reality: signed does not necessarily mean financed, constructed, or physically feasible within the stated timeline. What matters for investors is not what is requested but what can be operationalized.
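To see why this comparison flatters supply, consider a stripped-down version of the top-down arithmetic. The sketch below uses an illustrative, hypothetical demand forecast (not our estimate or any specific forecaster’s); it merely restates the division that produces the “nearly twice” headline.

```python
# Illustrative top-down overbuild arithmetic (placeholder demand figure, not an actual forecast).
demand_forecast_it_gw = 65.0        # hypothetical 2030 AI power demand forecast (IT load)
pue = 1.2                           # power usage effectiveness: total facility load / IT load
interconnection_backlog_gw = 157.4  # aggregate utility queue (nameplate, per Exhibit 1)

# Convert IT-load demand to facility-level demand before comparing to the queue.
facility_demand_gw = demand_forecast_it_gw * pue

apparent_overbuild = interconnection_backlog_gw / facility_demand_gw
print(f"Queue is {apparent_overbuild:.1f}x forecast demand")  # -> ~2.0x, the apparent "overbuild"

# The flaw: queue gigawatts measure intent, not deliverability. Nothing in this
# calculation accounts for transformers, substations, fuel, water, or labor.
```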
Back to reality: A bottom‑up view of deliverable capacity
To pressure‑test the overbuild thesis, we constructed a bottom‑up capacity ledger that deliberately strips out speculative projects and focuses only on those with the highest probability of coming online. We created this by leveraging AI models to identify potential projects, scouring corporate reports, government and utility filings, and other publicly available records. Inclusion criteria were strict:
- Signed interconnection service agreements or equivalent commitments
- Verified behind‑the‑meter generation or municipal power contracts
- Physical evidence of progress such as site work, equipment orders, or permitting
This filter narrowed the universe dramatically. Of the roughly 157 GW of nameplate capacity spread across 259 discrete projects, we estimate that only about 85 GW will be delivered by 2030 – a paltry 54% conversion rate for the announced projects driving the overcapacity narrative.
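The screen lends itself to a simple programmatic expression. Below is a minimal sketch of how such a ledger filter works; the project records, field names, and figures are hypothetical stand-ins, not our actual dataset.

```python
# Hypothetical sketch of the bottom-up ledger filter (illustrative records, not our data).
projects = [
    {"name": "Project A", "nameplate_gw": 1.2, "signed_isa": True,
     "btm_or_muni": False, "physical_progress": True},   # site work underway
    {"name": "Project B", "nameplate_gw": 2.0, "signed_isa": False,
     "btm_or_muni": False, "physical_progress": False},  # queue position only
    {"name": "Project C", "nameplate_gw": 0.8, "signed_isa": False,
     "btm_or_muni": True,  "physical_progress": True},   # on-site generation contracted
]

def passes_inclusion_criteria(p: dict) -> bool:
    """Strict screen: a commitment (signed interconnection service agreement, or a
    behind-the-meter/municipal contract) plus physical evidence of progress."""
    committed = p["signed_isa"] or p["btm_or_muni"]
    return committed and p["physical_progress"]

deliverable = [p for p in projects if passes_inclusion_criteria(p)]
total_gw = sum(p["nameplate_gw"] for p in deliverable)
print(f"{len(deliverable)} of {len(projects)} projects pass; {total_gw:.1f} GW deliverable")
# -> 2 of 3 projects pass; 2.0 GW deliverable
```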
Exhibit 1: The capacity ledger
When filtering the announced AI-related power generation projects to identify those that have a feasible chance of becoming operational by 2030, we conclude that only about 54% of “nameplate” gigawatts will be online by that date.
| Metric | Value |
| --- | --- |
| Unique projects tracked | 259 |
| Total nameplate capacity | 157.4 GW |
| Total expected (PoE-weighted) | 112.1 GW |
| Deliverable capacity (2030) | 84.7 GW |
| Conversion rate (paper to physical) | 54% |
| Geographic coverage | Continental U.S. |
| Temporal horizon | Through 2030 |
Source: Janus Henderson Investors, as of 31 January 2026. Note: Probability of energization (PoE)-weighted expected capacity measures the portion of announced project capacity that will realistically be deployed. This can be further filtered by year of deployment.
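The PoE weighting in the note reduces to a simple expected-value calculation. The sketch below uses assumed probabilities and delivery years for illustration; our actual ledger assigns these project by project.

```python
# Sketch of PoE weighting per the Exhibit 1 note (assumed probabilities, illustrative only).
ledger = [
    # (nameplate GW, probability of energization, expected delivery year)
    (1.2, 0.90, 2027),
    (2.0, 0.35, 2029),
    (0.8, 0.75, 2030),
]

# Expected capacity = sum of nameplate weighted by each project's PoE.
expected_gw = sum(gw * poe for gw, poe, _ in ledger)

# The same ledger can be filtered by year of deployment, per the note.
expected_by_2028_gw = sum(gw * poe for gw, poe, year in ledger if year <= 2028)

print(f"PoE-weighted expected capacity: {expected_gw:.2f} GW")        # -> 2.38 GW
print(f"Of which expected by 2028: {expected_by_2028_gw:.2f} GW")     # -> 1.08 GW
```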
The real bottleneck is physical, not procedural
Our analysis finds that physical infrastructure – rather than the more surmountable obstacle of queue position – is the limiting factor behind this undershoot. We see three constraints consistently dominating outcomes (a stylized sketch follows the list):
- Substations and transformers: Large power transformers now carry lead times measured in years, not months. Many projects with excellent queue positions are stalled simply because the equipment does not yet exist or is being allocated elsewhere.
- Fuel, water, and cooling access: Gas, water permits, and thermal discharge rules increasingly determine which projects can proceed at scale. These are highly local, non‑fungible constraints that top‑down models cannot capture.
- Skilled labor and engineering, procurement, and construction (EPC) capacity: High‑voltage electricians, commissioning crews, and specialized EPCs represent a significant bottleneck, creating a throughput ceiling even in regions with ample fuel and land.
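Because energization requires every input at once, a region delivers only as fast as its scarcest constraint allows. The minimal sketch below, with made-up regional throughput figures, illustrates that ceiling:

```python
# Stylized binding-constraint model (made-up throughput figures, illustration only).
# Annual deliverable GW in a region is capped by its scarcest physical input,
# regardless of how much capacity sits in the interconnection queue.
region = {
    "queued_gw": 12.0,      # paper capacity in the queue
    "transformer_gw": 4.0,  # large power transformers allocatable this year
    "fuel_water_gw": 6.5,   # gas, water, and thermal-discharge headroom
    "labor_epc_gw": 5.0,    # high-voltage electrician and EPC crew throughput
}

physical_cap = min(region["transformer_gw"],
                   region["fuel_water_gw"],
                   region["labor_epc_gw"])

deliverable = min(region["queued_gw"], physical_cap)
print(f"Queue: {region['queued_gw']} GW; deliverable this year: {deliverable} GW")
# -> Queue: 12.0 GW; deliverable this year: 4.0 GW (transformers bind)
```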
Incorporating these constraints into our ledger yields a roughly 40% reduction to signed capacity. This is not a bear-case scenario, but a realistic one.
Why 2027 could be the peak delivery year
We estimate that 2027 could be the peak year for new capacity delivery, with roughly 25 GW likely coming online, followed by lower annual throughput thereafter.
Forces pulling capacity forward include regulatory fast‑tracking, co‑location clarity, and the rise of behind‑the‑meter generation that bypasses grid queues.
Behind‑the‑meter power alone accounts for more than 10 GW of 2027 delivery – nearly half of the total. These projects avoid traditional interconnection bottlenecks by pairing data centers directly with on‑site gas, nuclear, or hybrid generation. The result is a front‑loaded construction cycle followed by a period of scarcity.
Exhibit 2: Forecasted GW delivery
A series of forces (e.g., regulatory fast-tracking and behind-the-meter generation) will likely pull forward several announced projects into 2027, with subsequent years seeing fewer GWs coming online.
| Year | Utility GW | BTM GW | Total GW | Cumulative GW |
| --- | --- | --- | --- | --- |
| 2026 | 9.4 | 7.3 | 16.7 | 16.7 |
| 2027 | 14.2 | 10.7 | 24.9 | 41.6 |
| 2028 | 11.0 | 3.7 | 14.7 | 56.3 |
| 2029 | 12.9 | 2.2 | 15.1 | 71.4 |
| 2030 | 7.2 | 6.1 | 13.3 | 84.7 |
Source: Janus Henderson Investors, as of 31 January 2026. Note: BTM, or “behind the meter,” indicates projects that directly pair a power source with a customer, thus bypassing typical public utilities.
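For readers who want to trace the arithmetic, the short sketch below recomputes Exhibit 2’s totals, cumulative path, and behind-the-meter share directly from the utility and BTM columns.

```python
# Recompute Exhibit 2's totals from the utility and behind-the-meter (BTM) columns.
schedule = {  # year: (utility GW, BTM GW)
    2026: (9.4, 7.3),
    2027: (14.2, 10.7),
    2028: (11.0, 3.7),
    2029: (12.9, 2.2),
    2030: (7.2, 6.1),
}

cumulative = 0.0
for year, (utility, btm) in schedule.items():
    total = utility + btm
    cumulative += total
    btm_share = btm / total
    print(f"{year}: {total:4.1f} GW (BTM {btm_share:.0%}), cumulative {cumulative:.1f} GW")

# 2027 delivers 24.9 GW, the peak year, with BTM at ~43% ("nearly half" of the total);
# cumulative delivery reaches 84.7 GW by 2030.
```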
Not your parents’ utilities
Our analysis reveals the changes AI enablement has wrought upon the utilities landscape, with implications that extend well beyond the sector itself. As supply and demand dynamics shift – in most cases dramatically – traditional utility sector analysis, in our view, proves insufficient to capture these changes. The most important takeaways we see include:
- Power remains a scarce input, not a commoditized one. Scarcity pricing for generation, capacity contracts, and infrastructure access is more likely than rate compression. Assets that control physical power delivery should, in our view, retain pricing leverage.
- AI CapEx efficiency matters more than gross expenditure. If power is gated, hyperscalers with better locations, cooling, and power strategies should out‑execute peers. Returns would, therefore, converge around execution quality, not budget size.
- Bottlenecks shift up the stack. As graphics processing units (GPUs) and power are constrained, pressure migrates to memory, advanced packaging, cooling systems, transformers, and power semiconductors. We see these as natural second‑order beneficiaries.
- Optionality shrinks as commitments harden. Regulatory regimes are moving toward longer‑term take‑or‑pay structures and direct infrastructure contributions. This could reduce speculative optionality and concentrate value in committed assets.
- Replacement versus growth is under‑discussed. Gross builds mask meaningful replacement of older, inefficient computing capacity. From a grid and fuel perspective, net additions matter more than headline announcements.
Bottom line
We believe the narrative of data center overbuild rests on a false equivalence between requested capacity and deliverable power. When physical constraints are layered in, the apparent surplus largely disappears, in our view.
We view the dominant risk through the end of this decade not as an excess of data center capacity, but rather as under‑delivery relative to demand. Rather than being distracted by inflated backlog numbers, we believe investors across the utilities and technology sectors should focus on the bottom-up fundamentals of scarcity, execution, and control of real assets.