OpenAI and Oracle halt Texas Stargate expansion as AI power race shifts

Last updated: 03/07/2026
  • OpenAI and Oracle have shelved plans for a 600 MW expansion of the Stargate campus in Abilene, Texas, after prolonged financing and operational disagreements.
  • The existing Abilene site, designed for up to 1.2 GW and 450,000 Nvidia GB200 Blackwell GPUs, remains under construction and is expected to be completed by mid‑2026.
  • OpenAI is pivoting new capacity toward Nvidia’s next‑generation Vera Rubin platform across other Stargate locations with a target of 10 GW of AI compute.
  • Texas aims to lead the global data center market by 2030, but pressure on power, water and grid reliability is intensifying as AI infrastructure scales.


The plan by OpenAI and Oracle to expand their flagship Stargate campus in Abilene, Texas, has been put on ice after months of complex talks over money, infrastructure and evolving technical needs. The move affects a massive 600‑megawatt build‑out that was supposed to turn the site into one of the most powerful artificial intelligence hubs in the United States.

While the existing facilities in Abilene remain in place and under active construction, the additional expansion has been formally dropped from the immediate roadmap, according to people familiar with the matter and details first reported by Bloomberg. The decision illustrates how quickly demand forecasts, financing conditions and power constraints can shift around hyperscale AI projects, even for headline‑grabbing initiatives like Stargate.

Texas Stargate expansion plan stalls after tough negotiations

The cancelled portion of the project revolved around a planned 600 MW increase in capacity at the Abilene campus, originally announced in September 2025 as part of a larger rollout backed by Oracle, OpenAI and SoftBank. That add‑on would have pushed Abilene further toward its intended role as a central pillar in the Stargate program.

According to the reporting and people involved, financing structure, operational adjustments and shifting AI needs became sticking points during negotiations. Talks dragged on for months without a final agreement, leading the partners to halt the proposed expansion rather than lock themselves into a configuration that no longer fit OpenAI’s updated roadmap.

Bloomberg described OpenAI’s stance as shaped by a “constantly evolving demand forecast” for its AI services. In practice, that meant the company became less convinced that pouring additional capital into more Blackwell‑based capacity in Abilene was the best way to meet its next wave of computing requirements.

Despite the pause, the broader Stargate initiative has not been scrapped. The Abilene site continues as a major construction project, and OpenAI, Oracle and their partners are pressing ahead with other locations where power, financing and timelines may line up more cleanly.

The abandoned extension is being seen in the industry as more of a strategic reset than a total retreat from large‑scale AI infrastructure in Texas. It reflects the reality that multi‑billion‑dollar data center bets are now highly sensitive to small shifts in technology roadmaps, energy markets and capital costs.

What stays in Abilene: 1.2 GW campus and 450,000 Blackwell GPUs

For now, the core Abilene campus remains intact. The existing design envisions up to 1.2 gigawatts of electrical capacity, putting it among the largest single loads on the Texas grid. Within that footprint, the site is slated to host as many as 450,000 Nvidia GB200 Blackwell GPUs, distributed across eight data center buildings.

Construction has already advanced considerably. Developer Crusoe was scheduled to complete the final building in November 2025, with the overall campus on track to be fully built out by around mid‑2026 under the current plan. That timeline would still give Oracle and OpenAI a vast amount of AI compute, even without the extra 600 MW module that has now been shelved.

The project has not been without growing pains. Earlier this year, several buildings at Abilene experienced multi‑day outages after winter weather disrupted parts of the site’s liquid cooling infrastructure. The incident highlighted the challenges of operating densely packed AI hardware in harsh or variable climates, and it raised questions about the resilience of such large‑scale installations.

Those reliability issues, combined with the sheer scale of the power draw, have put pressure on the collaboration between OpenAI, Oracle and their partners. Ensuring stable operations at 1.2 GW is difficult enough; layering on another 600 MW before the existing systems are fully proven would have added further complexity and risk.

At the same time, industry observers note that the physical footprint already underway at Abilene remains attractive to other hyperscale players. Meta Platforms is reportedly exploring the option of leasing part of Crusoe’s capacity at the campus, with Nvidia said to be helping facilitate conversations, which could reshape how the facility is ultimately used and by whom.

Other Stargate sites move forward as planned

Importantly for the overall program, the Stargate project outside Abilene is still moving ahead. The freeze applies specifically to the proposed Abilene expansion and does not touch the rest of the multi‑state portfolio that Oracle, OpenAI and SoftBank have been assembling.

Projects in Shackelford County, Texas; Doña Ana County, New Mexico; Milam County, Texas; Lordstown, Ohio; and sites in Wisconsin remain on track, according to planning documents and people familiar with the initiative. Together with Abilene’s existing footprint, those locations make up the current backbone of Stargate’s infrastructure strategy in North America.

Across all locations combined, Stargate is targeting nearly 7 GW of capacity in its initial wave, representing more than 400 billion dollars in investment over roughly three years. The consortium sifted through over 300 site proposals from more than 30 U.S. states before narrowing the list down to the current set of campuses.

The partners have indicated that the portfolio is not closed. As the project advances toward an eventual goal of around 10 GW, additional sites could be added in regions that offer the right mix of affordable power, available land, favorable regulation and access to talent. In that sense, the setback in Abilene is just one chapter in a much larger, still‑expanding plan.

From a strategic vantage point, the decision to cancel the 600 MW extension is being interpreted as a reallocation of future capacity rather than a simple cut. OpenAI is redirecting its next generation of compute deployments to other Stargate sites and to partners capable of supplying power and capital on timelines that match its product pipeline.

Vera Rubin takes center stage over Blackwell at new locations

A key driver of OpenAI’s change in direction is its pivot toward Nvidia’s forthcoming Vera Rubin platform. Instead of layering more Blackwell‑based racks on top of Abilene’s existing configuration, the company is now steering new capacity toward data centers that can launch directly with Rubin‑class hardware.

On September 22, 2025, OpenAI and Nvidia signed a letter of intent to deploy at least 10 GW of Nvidia systems tied to the Vera Rubin architecture. Under that framework, Nvidia is expected to invest up to 100 billion dollars into OpenAI over time, with funding ramping as each gigawatt of capacity is brought online and activated for AI workloads.

The first 1 GW of Vera Rubin capacity is scheduled for the second half of 2026. Each Rubin deployment is designed around superchips connected via sixth‑generation NVLink, offering bandwidth of up to 260 terabytes per second per NVL72 rack. That represents a substantial jump over the Blackwell systems already being installed in Abilene.

Major cloud providers including AWS, Google Cloud, Microsoft Azure and Oracle Cloud are among the early adopters planning to bring Rubin‑based instances to market. For OpenAI, leaning into that ecosystem means its future models can run on what is expected to be one of the most advanced AI compute platforms available.

As OpenAI CEO Sam Altman has framed it, “everything starts with compute.” In presenting the partnership with Nvidia, he argued that the computational backbone now being built will underpin “the economy of the future,” with OpenAI aiming to use that infrastructure to unlock new generations of AI capabilities and make them available broadly to individuals and businesses.

Why shifting new capacity away from Abilene made sense

Within that context, the choice to prioritize Vera Rubin at other sites instead of expanding Blackwell capacity in Abilene starts to look less like a retreat and more like a timing decision. Launching Rubin from the ground up in locations where power and financing are already lined up can be faster and less risky than re‑negotiating a big expansion at a campus still working through reliability questions.

By routing its next phase of compute build‑out to sites aligned with Rubin, OpenAI can run its next‑generation AI models on the most advanced hardware from day one. That may shorten development cycles and improve performance per watt, a crucial metric as energy prices rise and regulators watch the sector more closely.

It also reduces the pressure to upgrade or retrofit Abilene in the near term. Rather than trying to force the campus into a configuration that serves both Blackwell and Rubin at hyperscale, the company can treat it as a primarily Blackwell‑based hub and introduce newer architectures at separate, purpose‑built sites.

For Oracle, which operates a global cloud platform, the recalibration provides an opportunity to spread AI compute more evenly across multiple regions. Tying too much of its future AI capacity to a single Texas campus would concentrate risk in one geography, one grid and one set of local constraints.

What remains uncertain is whether the 600 MW expansion is permanently dead or simply deferred. People close to the project say a revival would depend on how quickly Abilene’s power, cooling and financing landscape can scale to match the original ambition without repeating earlier reliability problems.

Power, money and timelines: the bottlenecks behind AI megaprojects

The difficulties in Abilene echo a broader challenge facing the AI industry: data center ambitions are running into the limits of power supply, capital and construction schedules. Building multi‑gigawatt campuses is not just a question of pouring concrete and ordering chips; it requires utilities, regulators, investors and local communities to move in sync.

In Texas, lawmakers and grid planners have voiced concerns that massive new data centers are driving load forecasts higher than utilities can comfortably meet with new generation and transmission. A single site like Abilene, at 1.2 GW, already ranks among the largest individual loads on the state’s grid.

The full Stargate program, if executed to its envisioned 10 GW scale, would consume enough electricity to power roughly 7.5 million homes. That comparison has become a talking point in energy debates, as policymakers weigh the economic upside of attracting AI infrastructure against long‑term pressure on grids and climate commitments.
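The homes comparison follows from straightforward arithmetic. As a rough sanity check, a sketch in Python, where the assumed average continuous household load of about 1.3 kW is an illustrative figure, not one taken from the reporting:

```python
# Rough sanity check of the "roughly 7.5 million homes" comparison.
# ASSUMPTION (not from the article): an average US household draws
# about 1.3 kW on a continuous basis (~11,400 kWh per year).
STARGATE_TARGET_GW = 10       # Stargate's envisioned full scale
AVG_HOME_KW = 1.3             # assumed average continuous household load

# Convert gigawatts to kilowatts, then divide by per-home load.
homes_powered = (STARGATE_TARGET_GW * 1_000_000) / AVG_HOME_KW
print(f"~{homes_powered / 1e6:.1f} million homes")
```

With these assumptions the result lands in the high-7-million range, consistent with the article's figure; the exact number depends heavily on the household-load assumption used.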

OpenAI has responded by diversifying its compute supply chain. Beyond Oracle and Nvidia, the company has struck agreements with Cerebras to secure about 750 MW of low‑latency AI capacity through 2028, and it continues to work closely with Microsoft, SoftBank, CoreWeave and other partners to spread workloads across different hardware platforms and regions.

The scramble for power is not unique to OpenAI. Microsoft, Google and Meta are all racing to lock in energy deals to support their own data center expansions, from dedicated renewable projects to long‑term contracts with utilities. The pace of AI adoption has turned electricity into a central strategic asset for technology companies, not just a cost item.

Meta circles the site as Texas eyes global data center leadership

One immediate consequence of the paused expansion is that Abilene’s unused capacity may not sit idle for long. Bloomberg’s reporting indicates that Meta Platforms is evaluating a potential deal with Crusoe, the site’s developer, to lease land or capacity originally earmarked for the OpenAI‑Oracle add‑on.

Crusoe, known for its work on high‑performance computing and innovative energy solutions, has emerged as a key player in repurposing stranded or underutilized power for data centers. A lease to Meta could give the social media giant a foothold in a campus already wired for large‑scale AI workloads.

Such a move would fit within a wider trend in which Texas is positioning itself to become the world’s largest data center market by 2030, potentially overtaking long‑dominant Northern Virginia. Industry analyses cite abundant land, comparatively flexible regulation and access to various energy sources as factors drawing hyperscale operators to the state.

However, the boom comes with trade‑offs that communities and environmental groups are increasingly vocal about. Large data centers typically require significant amounts of electricity and cooling water, raising questions about long‑term water stress and emissions in a state already dealing with drought cycles and grid strain during extreme weather.

Local debates around projects like Abilene illustrate how public scrutiny of AI infrastructure is sharpening. While many welcome the jobs and tax base that come with megaprojects, others worry that the benefits are unevenly distributed compared with the environmental and grid risks borne by residents.

For developers, this means that securing community support and clearly articulating benefits—along with credible mitigation plans for water and energy use—are becoming as critical as obtaining zoning approvals or power purchase agreements.

Against that backdrop, the decision by OpenAI and Oracle to pause their Abilene expansion looks less like an isolated setback and more like a snapshot of a sector in flux, where capital, power, technology and local politics all have to line up before the next concrete slab is poured.

As the Stargate portfolio continues to grow in other states and OpenAI shifts incoming capacity to Nvidia’s Vera Rubin platform, the abandoned extension in Texas stands as a reminder that even the most ambitious AI infrastructure plans remain subject to fast‑moving financial realities, technological leaps and the physical limits of the grids and communities that host them.
