The Rise of Liquid Cooling in Data Centres: A Guide to High-Density Thermal Management

Data centres are vital to the digital economy, enabling services such as online banking and e-commerce. Their energy consumption is rising rapidly: demand reached 7.4 gigawatts in 2023, accounting for an estimated 4% of global usage, and European demand is expected to more than double by 2030 as AI and digitisation accelerate. Continuous power and cooling are essential to prevent failures and data loss.

Optimising HVAC Systems for Data Centre Sustainability

Heating, Ventilation, and Air Conditioning (HVAC) systems play a pivotal role in ensuring that data centres operate reliably. However, cooling alone is estimated to account for somewhere between 20% and 50% of a data centre’s total energy use.

With such a significant share of power allocated to thermal management, today’s HVAC solutions must do more than simply control temperature. They must be highly efficient, scalable, and capable of supporting increasingly power-dense IT environments, all while aligning with carbon reduction goals and sustainability mandates.

As the global appetite for digital infrastructure accelerates, data centres are under increasing pressure to deliver more computing power with less environmental impact. One advancement supporting this evolution is liquid cooling – a technology that, while not new, is now gaining serious traction as a practical and scalable option.

The Benefits of Liquid Cooling Technology

Liquid cooling’s emergence is a timely one. Although the technology has existed for years, it is now entering a period of rapid adoption, driven by the limitations of air cooling and the escalating demands of high-performance computing (HPC), machine learning, and hyperscale data operations.

Traditional air-based systems are beginning to hit their limits, physically and economically. Server racks drawing over 30kW are becoming increasingly common, yet the upper threshold of air cooling lies close to that figure. Liquid cooling, on the other hand, enables far more efficient heat transfer, supporting rack densities of 50, 80, or even 100kW without requiring massive increases in infrastructure or power usage.
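
To see why, consider the basic heat-transfer relationship Q = ṁ × cp × ΔT. The short sketch below compares the coolant flow needed to remove a 30kW rack load with air versus water; the fluid properties are standard textbook values and the 10°C temperature rise is an illustrative assumption, not a design figure.

```python
# Compare the coolant flow needed to remove a fixed rack load with air
# versus water, using Q = m_dot * cp * dT rearranged for mass flow.
RACK_LOAD_W = 30_000   # 30kW rack, near the practical limit of air cooling
DELTA_T_K = 10         # assumed coolant temperature rise across the rack

# Approximate fluid properties at ~20 degC: (cp in J/(kg*K), density in kg/m^3)
FLUIDS = {"air": (1005, 1.2), "water": (4186, 998)}

for name, (cp, density) in FLUIDS.items():
    mass_flow = RACK_LOAD_W / (cp * DELTA_T_K)       # kg/s
    vol_flow_m3_h = mass_flow / density * 3600       # m^3/h
    print(f"{name:>5}: {mass_flow:5.2f} kg/s  (~{vol_flow_m3_h:,.0f} m^3/h)")

# air  :  2.99 kg/s (~8,955 m^3/h)  -> enormous airflow per rack
# water:  0.72 kg/s (~3 m^3/h)      -> a modest pumped flow
```

Water's far higher heat capacity and density mean the same load is carried by a flow roughly three thousand times smaller by volume, which is why liquid loops scale where air cannot.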

Cooling Distribution Units (CDUs) circulate a glycol-water blend through closed-loop networks, absorbing heat directly from the most critical and thermally intensive parts of the server, such as chipsets, via cold plates and manifolds.
Some CDU systems can offer up to four times the cooling capacity of air-based alternatives. Air can typically cool rack densities of 25-30kW at acceptable temperatures, but racks are now pushing past 100kW, so CDUs are designed to operate at these densities.
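
As a back-of-envelope illustration, the sketch below estimates the pumped flow a closed glycol-water loop would need at the rack densities mentioned above. The blend properties and the 8°C loop temperature rise are assumptions for illustration, not vendor specifications.

```python
# Rough CDU loop sizing: pumped flow needed for a given rack load in a
# closed glycol-water loop, from Q = m_dot * cp * dT.
CP_J_PER_KG_K = 3900     # approx. specific heat of a ~25% glycol-water mix
DENSITY_KG_M3 = 1020     # approx. density of the same mix
DELTA_T_K = 8            # assumed supply-to-return temperature rise

def cdu_flow_l_per_min(rack_load_kw: float) -> float:
    """Volumetric flow (L/min) to absorb rack_load_kw at the assumed dT."""
    mass_flow = rack_load_kw * 1000 / (CP_J_PER_KG_K * DELTA_T_K)  # kg/s
    return mass_flow / DENSITY_KG_M3 * 1000 * 60                   # L/min

for kw in (50, 80, 100):
    print(f"{kw:3d} kW rack -> ~{cdu_flow_l_per_min(kw):4.0f} L/min")
# 50 kW -> ~94 L/min, 80 kW -> ~151 L/min, 100 kW -> ~188 L/min
```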

Air Cooling vs. Liquid Cooling: Complementary, not Competitive

While liquid cooling represents a transformative leap in thermal management, it is not yet a wholesale replacement for air-based systems. Instead, a hybrid approach is becoming more common. Many data centre operators are deploying a mix of cooling strategies based on workload demands, with liquid cooling handling high-density or GPU-intensive racks and air cooling supporting more conventional equipment.

These integrated systems can be dynamically controlled via smart building management systems and software platforms, enabling real-time adjustment of CDU flow rates, chiller output, and airflow. This level of orchestration improves efficiency and can help operators reduce their Power Usage Effectiveness (PUE) to around 1.2, with some aiming for even more ambitious figures.
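
PUE is simply total facility power divided by IT power, so trimming cooling overhead is the main lever for approaching 1.2. A minimal worked example, using hypothetical loads:

```python
# Power Usage Effectiveness (PUE): total facility power over IT power.
# The loads below are illustrative, not measurements from any facility.

def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """PUE = (IT + cooling + other overhead) / IT."""
    return (it_kw + cooling_kw + other_kw) / it_kw

print(pue(it_kw=1000, cooling_kw=150, other_kw=50))   # 1.2
print(pue(it_kw=1000, cooling_kw=350, other_kw=50))   # 1.4, less efficient cooling
```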

Heat Recovery and Reuse: Turning Waste into Energy

One of the unique advantages of liquid cooling is its potential for heat recovery and reuse. The warm liquid exiting server chipsets, typically at around 30°C, can be passed through a heat pump to raise its temperature to a level suitable for district heating networks or industrial applications; a rough estimate follows the list below.

This reuse can help to:

  1. reduce overall energy waste
  2. improve PUE
  3. generate additional value from what was once considered waste
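
For a sense of scale, the sketch below applies the standard Carnot relationship to a heat pump lifting 30°C return water to a district-heating supply temperature. The 70°C target, the 1MW waste-heat stream, and the 50%-of-Carnot efficiency are illustrative assumptions.

```python
# Rough heat-pump uplift estimate: warm server return water boosted to a
# district-heating supply temperature. All setpoints here are assumptions.
T_SOURCE_C = 30          # warm water leaving the cold plates (per the text)
T_SINK_C = 70            # assumed district-heating supply temperature
CARNOT_FRACTION = 0.5    # assumed real-world fraction of the Carnot limit

t_source_k = T_SOURCE_C + 273.15
t_sink_k = T_SINK_C + 273.15

cop_carnot = t_sink_k / (t_sink_k - t_source_k)   # ideal heating COP
cop_real = CARNOT_FRACTION * cop_carnot

recovered_kw = 1000                                # assumed waste-heat stream
delivered_kw = recovered_kw / (1 - 1 / cop_real)   # heat out = source + work
print(f"COP ~{cop_real:.1f}: {recovered_kw} kW of waste heat becomes "
      f"~{delivered_kw:.0f} kW of {T_SINK_C} degC heat")
# COP ~4.3: 1000 kW of waste heat becomes ~1304 kW of 70 degC heat
```

Under these assumptions the heat pump's electrical input is itself delivered as useful heat, so the site exports more heat than it recovers.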

Pushing the Boundaries of Cooling Efficiency

As liquid cooling matures, innovation is shifting toward improving performance within the system itself. One key area is the reduction of approach temperature – the residual temperature difference between the facility water and the technology coolant across the heat exchanger. While some CDUs might have a 4°C approach temperature, Carrier's advanced dual heat exchanger design brings this down to just 2°C. Highly advanced integrated controls also enable a quick response to varying IT load while maintaining acceptable temperature variation.

Looking ahead, further reductions to 1°C are on the horizon, offering even greater efficiency and laying the groundwork for supporting multi-megawatt deployments with minimal thermal overhead. As the scale and density of data centres continue to grow, such refinements will be critical to meeting future demand without proportionally increasing environmental impact.
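
The practical value of a smaller approach temperature is that the facility water can run warmer while still hitting the same coolant supply temperature, which reduces chiller work. A minimal sketch, assuming a 32°C coolant supply target and a common ~3%-per-degree rule of thumb for chiller energy (neither figure is a Carrier specification):

```python
# How approach temperature affects the facility-water setpoint.
COOLANT_SUPPLY_C = 32          # assumed supply temperature to the cold plates
CHILLER_SAVING_PER_K = 0.03    # rule of thumb: ~3% less chiller energy per
                               # 1 degC warmer chilled-water setpoint

def facility_setpoint(approach_k: float) -> float:
    """Facility water must be this cold to hit the coolant supply target."""
    return COOLANT_SUPPLY_C - approach_k

for approach in (4, 2, 1):
    print(f"{approach} degC approach -> facility water at "
          f"{facility_setpoint(approach):.0f} degC")

# Moving from a 4 degC to a 2 degC approach lets the plant run 2 degC warmer:
print(f"estimated chiller-energy saving: ~{2 * CHILLER_SAVING_PER_K:.0%}")
```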

Future Trends in Data Centre Cooling

Direct-to-chip liquid cooling is expected to remain the dominant cooling architecture for high-performance environments over the next five to ten years, although immersion cooling – where entire servers are submerged in a thermally conductive, electrically non-conductive liquid – may eventually become viable at scale.

In the meantime, Carrier continues to lead in the development, testing, and real-world deployment of next-generation data centre cooling technologies, including its QuantumLeap™ suite of solutions and the certification of its products through Eurovent Certified Performance.

Please get in touch with one of Carrier’s experts to find out more about Carrier’s thermal lifecycle management solutions.