Top 10 Liquid Cooling Projects: Meta’s $65B Overhaul, SoftBank’s 10,000 MW Plan, and Major Deployments (2024-2026)
The transition to data center liquid cooling is no longer a future trend; it has become the mandatory baseline for new artificial intelligence infrastructure. Driven by the thermal demands of next-generation GPUs, the market has moved decisively past the pilot phase into full-scale commercial deployment. As rack power densities surge past 50 kW and approach 100 kW, traditional air cooling becomes technically and economically unviable. This has triggered a complete redesign of data center thermal management, with the AI data center liquid cooling market projected to grow from $6.6 billion in 2025 to $61.8 billion by 2034. The dominant theme for 2025 and 2026 is the shift to “liquid-first” designs, where advanced cooling is a day-one requirement rather than an afterthought, as hyperscalers and AI specialists race to build out capacity.
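The physics behind the air-cooling limit can be sketched with the sensible-heat relation Q = ṁ · cp · ΔT. The rack power and temperature-rise figures below are illustrative assumptions (not from the projects in this list); the specific heats are standard values for air and water.

```python
# Illustrative: coolant mass flow needed to remove rack heat, Q = m_dot * cp * dT.
# Rack power and temperature rises are assumptions chosen for illustration.

def mass_flow_kg_s(heat_w: float, cp_j_per_kg_k: float, delta_t_k: float) -> float:
    """Mass flow rate (kg/s) required to carry away heat_w watts."""
    return heat_w / (cp_j_per_kg_k * delta_t_k)

RACK_W = 100_000                     # assumed 100 kW AI rack
CP_AIR, CP_WATER = 1005.0, 4186.0    # specific heats, J/(kg*K)
DT_AIR, DT_WATER = 15.0, 10.0        # assumed coolant temperature rise, K

air_kg_s = mass_flow_kg_s(RACK_W, CP_AIR, DT_AIR)
water_kg_s = mass_flow_kg_s(RACK_W, CP_WATER, DT_WATER)

# Air at ~1.2 kg/m^3 implies a volumetric flow; water at ~1 kg/L converts to L/min.
print(f"air:   {air_kg_s:.2f} kg/s (~{air_kg_s / 1.2:.1f} m^3/s of airflow)")
print(f"water: {water_kg_s:.2f} kg/s (~{water_kg_s * 60:.0f} L/min)")
```

Under these assumptions, a 100 kW rack needs several cubic metres of air per second but only a couple of kilograms of water per second, which is why density growth pushes designs toward liquid.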
1. SoftBank Hyperscale AI Data Center, Ohio, USA
Company: SoftBank Group
Installation Capacity: 10,000 MW (10 GW) planned
Applications: Large-scale AI computing, requiring advanced liquid-to-liquid thermal management for an estimated initial phase of over 1,000 MW.
Source: Planned 10-gigawatt Softbank data center in Ohio might be the …
2. G42 / Oasis AI Data Center Campus, UAE
Company: G42, Oasis
Installation Capacity: 5,000 MW (5 GW) planned
Applications: A dedicated AI campus where the entire 5,000 MW IT load is expected to be liquid-cooled, establishing a major global AI hub.
Source: [PDF] Omdia Analyst Summit: Where is AI really headed? – Data Center Asia
3. Meta Prineville Data Center, Oregon, USA
Company: Meta Platforms, Inc.
Installation Capacity: Estimated 300-500 MW of liquid-cooled capacity within the 1,289 MW campus.
Applications: AI model processing (e.g., Llama) using custom Open Rack v3 designs with integrated direct-to-chip liquid cooling.
Source: Meta’s Infrastructure Evolution and the Advent of AI
4. Firmus Project Southgate, Australia
Company: Firmus
Installation Capacity: Over 200 MW of initial liquid-cooled capacity, part of a 1,600 MW campus.
Applications: High-performance computing (HPC) and AI utilizing full immersion cooling technology.
Source: Five Startups Reducing Data Center Water Consumption
5. Meta Fort Worth Data Center, Texas, USA
Company: Meta Platforms, Inc.
Installation Capacity: Estimated 200-300 MW of liquid-cooled capacity within the 729 MW campus.
Applications: AI expansion featuring facility-level liquid cooling infrastructure to support high-TDP processors.
Source: Data Center & Large Load Center Siting Guide – Enverus
6. Nebius AI Factory, Lappeenranta, Finland
Company: Nebius
Installation Capacity: 310 MW
Applications: Purpose-built AI factory where the entire 310 MW IT load is designed for direct-to-chip liquid cooling from the ground up.
Source: Nebius to construct 310 MW AI factory in Finland
7. Meta Mesa Data Center, Arizona, USA
Company: Meta Platforms, Inc.
Installation Capacity: Estimated 150-250 MW of liquid-cooled capacity within the 701 MW campus.
Applications: AI-driven expansion using a closed-loop liquid cooling system to reduce water consumption in a hot climate.
Source: [PDF] Artificial Intelligence Data Centers and United States … – JScholarship
8. Vantage Data Centers “Frontier” Campus, Texas, USA
Company: Vantage Data Centers
Installation Capacity: At least 100 MW of liquid-cooled capacity in initial phases of a campus expected to exceed 300 MW.
Applications: Next-generation hyperscale campus designed to offer flexible liquid cooling solutions (rear-door, direct-to-chip) to AI and cloud clients.
Source: the top 10 questions on data centers answered – Wood Mackenzie
9. Yotta Data Center Park, India
Company: Yotta
Installation Capacity: Initial 50-100 MW of liquid-cooled capacity in a campus scalable to 1,000 MW.
Applications: High-density AI workloads supported by “intelligent water cooling systems” in one of Asia’s largest data center parks.
Source: Yotta Data Center India | Hyperscale, Secure & Scalable Infrastructure
10. Foxconn AI Data Center, Kaohsiung, Taiwan
Company: Foxconn
Installation Capacity: 40 MW
Applications: A high-density supercomputing center for AI development, pairing liquid cooling with advanced 800 VDC power distribution.
Source: NVIDIA, Partners Drive Next-Gen Efficient Gigawatt AI Factories in …
Table: Top 10 Data Center Liquid Cooling Deployments 2024-2026
| Company | Installation Capacity | Applications | Source |
|---|---|---|---|
| SoftBank Group | 10,000 MW (planned) | Large-scale AI computing | Tom’s Hardware |
| G42, Oasis | 5,000 MW (planned) | Dedicated AI campus | Data Center Asia |
| Meta Platforms, Inc. | 300-500 MW (Prineville) | AI model processing | Meta Engineering |
| Firmus | 200+ MW (initial) | HPC and AI (immersion) | Net-Zero Insights |
| Meta Platforms, Inc. | 200-300 MW (Fort Worth) | AI expansion | Enverus |
| Nebius | 310 MW | Purpose-built AI factory | Nebius |
| Meta Platforms, Inc. | 150-250 MW (Mesa) | AI in hot climate | JScholarship |
| Vantage Data Centers | 100+ MW (initial) | Hyperscale AI/cloud | Wood Mackenzie |
| Yotta | 50-100 MW (initial) | High-density AI workloads | Yotta |
| Foxconn | 40 MW | AI supercomputing | NVIDIA Blogs |
Data Center Liquid Cooling, Meta’s $65B AI Overhaul Signals Market Shift
The adoption of liquid cooling is no longer confined to niche high-performance computing (HPC); it is now the standard for hyperscale AI. Meta’s global $65 billion investment to overhaul its data centers in Prineville, Fort Worth, and Mesa for AI workloads is the clearest signal of this industry-wide pivot. These projects involve retrofitting existing facilities and constructing new data halls with facility-wide liquid distribution to support custom Open Rack v3 hardware. This move by a market leader forces the entire supply chain to adapt. Beyond hyperscalers, purpose-built “AI factories” like the 310 MW Nebius facility in Finland are being designed with liquid cooling as a core, non-negotiable component from day one, demonstrating that greenfield projects will be 100% liquid-cooled. The diversity of applications, from massive social media model training to specialized supercomputing at Foxconn, indicates that any organization deploying dense GPU clusters is now a potential liquid cooling customer.
USA and UAE Lead with Gigawatt-Scale AI Cooling Projects from SoftBank, Meta
Geographically, the United States is the epicenter of large-scale liquid cooling deployment, driven by massive domestic investment in AI infrastructure. Projects like SoftBank’s planned 10 GW campus in Ohio and Vantage Data Centers’ “Frontier” campus in Texas represent a new class of gigawatt-scale development where liquid cooling is essential for the business case. Meta’s multi-state expansion further solidifies U.S. leadership. However, strategic national initiatives are creating new global hubs. The UAE’s planned 5 GW campus by G42 and Oasis aims to establish the region as a primary AI power, leveraging liquid cooling to overcome the hot climate. Similarly, Nebius’s choice of Finland highlights a strategy of pairing liquid cooling with cold climates for maximum efficiency. The emergence of major projects in India (Yotta) and Australia (Firmus) shows that this trend is global, with deployments following the demand for sovereign AI compute capacity.
$61.8B Market Forecast, Vertiv and CoolIT Lead Data Center Liquid Cooling Supply
These large-scale deployments confirm that data center liquid cooling technology is commercially mature and scaling rapidly. The market has moved beyond demonstrations into multi-megawatt contracts and gigawatt-scale planning. Direct-to-chip (DTC) cooling appears to be the dominant approach for hyperscalers, as seen in Meta’s OCP-based designs. Simultaneously, immersion cooling is proving commercially viable at scale with Firmus’s Project Southgate, which secured $327 million in funding for a full-scale build-out. The supply chain is robust but consolidating around key leaders. Vertiv and its subsidiary CoolIT Systems are frequently cited as the go-to suppliers, with products like high-capacity Coolant Distribution Units (CDUs) capable of supporting multi-megawatt loads. The fact that project announcements now focus on total capacity and investment, rather than the novelty of the cooling technology itself, indicates liquid cooling has become a standard, mature component of the data center technology stack.
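A multi-megawatt CDU duty translates directly into pumping requirements. As a rough sizing sketch, assuming a 2 MW secondary-loop load and a 10 K supply/return temperature rise (both figures are illustrative assumptions, not vendor specifications):

```python
# Rough CDU secondary-loop sizing from Q = m_dot * cp * dT.
# Load and temperature-rise values are assumptions for illustration only.

CP_WATER = 4186.0      # specific heat of water, J/(kg*K)
LOAD_W = 2_000_000     # assumed 2 MW CDU duty
DELTA_T_K = 10.0       # assumed supply/return temperature rise

flow_kg_s = LOAD_W / (CP_WATER * DELTA_T_K)   # required coolant mass flow
flow_l_min = flow_kg_s * 60                    # water at ~1 kg per litre

print(f"required flow: {flow_kg_s:.1f} kg/s (~{flow_l_min:.0f} L/min)")
```

Flows on the order of thousands of litres per minute per CDU are one reason pumps, piping, and quick-disconnect fittings become the supply-chain pinch points discussed below.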
SoftBank’s 10 GW Plan, Liquid Cooling Supply Chain Bottlenecks in 2026
The single most critical factor for the data center industry in the coming year will be the supply chain’s ability to deliver liquid cooling hardware at the unprecedented scale required by AI build-outs. If massive projects like SoftBank’s 10 GW campus move from planning to procurement, it will place immense strain on suppliers of specialized components like CDUs, cold plates, and high-flow quick-disconnect fittings. This will likely create a significant bottleneck for deploying AI capacity. Watch for the following signals:
- Strategic Partnerships and Acquisitions: Expect hyperscalers and large data center operators to forge long-term, high-volume procurement agreements directly with cooling vendors like Vertiv to secure their supply, potentially locking out smaller players. Further consolidation in the cooling market is also likely.
- Standardization Acceleration: The pressure to deliver at scale will accelerate the adoption of standards like those from the Open Compute Project (OCP). Meta’s influence here is paramount; as they scale their liquid-cooled designs, their specifications will become de facto industry standards, forcing component manufacturers to align or be left behind.
- Integration of Power and Cooling: Look for tighter integration between power and thermal systems. The Foxconn project’s use of 800 VDC power alongside liquid cooling is an early indicator. This holistic approach is necessary to maximize efficiency at the rack and facility level.
- Shift in Vendor Focus: Cooling vendors will increasingly prioritize high-volume, standardized solutions for hyperscale AI customers over custom, smaller-scale enterprise projects, potentially leading to longer lead times for the latter.
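The efficiency logic behind pairing high-voltage DC distribution with dense racks, noted in the Foxconn example, can be sketched: for a fixed delivered power, doubling the bus voltage halves the current and quarters the resistive loss. The power and bus-resistance figures below are illustrative assumptions, not project numbers.

```python
# Why 800 VDC distribution helps: loss is I^2 * R, and for fixed delivered
# power P the bus current is I = P / V, so doubling V quarters the loss.
# Power and resistance values are illustrative assumptions.

def bus_loss_w(power_w: float, volts: float, bus_ohms: float) -> float:
    current_a = power_w / volts        # current drawn at this voltage
    return current_a ** 2 * bus_ohms   # I^2 * R heating in the busway

POWER_W = 100_000    # assumed 100 kW of IT load on one feed
BUS_OHMS = 0.01      # assumed end-to-end conductor resistance

for volts in (400, 800):
    loss = bus_loss_w(POWER_W, volts, BUS_OHMS)
    print(f"{volts} VDC: {loss:.0f} W lost in distribution")
```

Less heat dissipated in power distribution also means less load on the very cooling systems these facilities are built around, which is why power and thermal design are converging.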

