
Cloud computing adoption has grown at an unprecedented pace worldwide as customers look to cloud service providers for greater choice, flexibility, and connectivity to take advantage of advanced technologies.

To meet this demand, cloud service providers depend on data centers. Recent market intelligence predicts the global data center market will reach $174 billion by 2023. Put simply, data centers are rooms of computer servers that provide networking and internet-based services. They range in size from a single room serving one organization to vast facilities processing data for internet giants such as AWS, Google, and Facebook. New data centers open every year as the internet and remote services are used to store, access, and stream ever more data. With this growth, data centers must run as efficiently as possible. Because they operate 24/7, they consume vast amounts of electricity to power servers and process data, generating a great deal of heat. If that heat is not removed, the electrical components can overheat and fail.

The energy consumption of a typical data center breaks down roughly as follows: 50% for IT equipment, 35% for cooling and HVAC, 10% for electrical infrastructure and support, and 5% for lighting. Electrical demand ranges from a few kilowatts to many megawatts depending on a facility's size and location. According to an International Energy Agency (IEA) report, data centers account for about 1% of global electricity demand, which has put them in the spotlight of the global decarbonization push. The ideal power usage effectiveness (PUE), an indicator for measuring the energy efficiency of a data center, is 1; on average, however, data centers have a PUE of 2.5. Keeping server farms cool is therefore a key step toward energy efficiency.
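PUE is simply total facility energy divided by the energy delivered to IT equipment. A minimal sketch, using the illustrative 50% IT share quoted above (the numbers are for illustration only):

```python
def pue(total_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_kwh / it_kwh

# Using the illustrative split above (50% IT, 50% overhead):
total = 100.0        # arbitrary facility consumption, kWh
it = total * 0.50    # IT equipment share
print(round(pue(total, it), 2))  # 2.0: every watt of compute needs another watt of overhead
```

A facility where IT draws only 40% of total power would score 2.5, the average figure cited above; a PUE of 1 would mean all power goes to IT.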

What keeps data centers cool?

A common method used in data centers combines raised floors with computer room air conditioner (CRAC) or computer room air handler (CRAH) units. Server racks sit on a raised floor through which the CRAC/CRAH units distribute conditioned air into the racks. For extra efficiency, the units use energy-efficient filters, electronically controlled fans, raised-floor pressure sensors to regulate the air supply rate, and server-rack inlet temperature sensors to control the air supplied by the CRAC unit. The recirculated air collects heat from the racks and is reconditioned as it passes back through the units. Most data centers set the CRAC/CRAH unit's return temperature as the main control point for the entire data floor environment.
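The sensor-driven fan control described above can be caricatured as a proportional controller. This is a toy sketch, not tied to any real CRAC/CRAH product; the setpoint, gain, and speed limits are invented for illustration:

```python
def crah_fan_speed(inlet_temp_c: float,
                   setpoint_c: float = 24.0,   # assumed target rack-inlet temperature
                   gain: float = 10.0,         # assumed % fan speed per degree of error
                   min_pct: float = 20.0,
                   max_pct: float = 100.0) -> float:
    """Toy proportional controller: raise CRAH fan speed (% of max)
    as the server-rack inlet temperature rises above the setpoint."""
    error = inlet_temp_c - setpoint_c
    speed = min_pct + gain * max(error, 0.0)
    return min(speed, max_pct)

for t in (22.0, 25.0, 30.0):
    print(f"{t:.1f} C inlet -> {crah_fan_speed(t):.0f}% fan")
```

Real units layer more inputs (floor pressure, return temperature) and smoother control laws on top of this basic feedback idea.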

While this method suits smaller, low-density deployments with modest power requirements, larger and higher-density data floors call for more effective solutions. Modern data centers use a variety of innovative cooling technologies to maintain ideal, efficient operating conditions.

Some common and new cooling technologies

Since cooling is a major factor in running data centers, here are some of the common methods along with newer cooling technologies.

Cold aisle/hot aisle design

Data center server racks are placed in alternating rows of 'cold aisles' and 'hot aisles': cold aisles face the cold-air intakes on the front of the racks, while hot aisles face the hot-air exhausts on the back. This layout stops cold air from mixing with hot air. Hot aisles expel hot air into the air-conditioning intakes, where it is chilled and then vented into the cold aisles. Empty rack slots are filled with blanking panels to prevent overheating and wasted cold air.

Direct-to-chip cooling

Direct-to-chip cooling delivers liquid coolant through tubes directly to the chip (processor), where it absorbs heat and carries it out of the data hall. The extracted heat is fed into a chilled-water loop and transported to the facility's chiller plant. Because this system cools processors directly rather than the surrounding air, it is one of the most effective forms of server cooling.

Calibrated vectored cooling (CVC)

This data center cooling technology is used for high-density servers that produce large amounts of heat. CVC optimizes the airflow path through equipment so the cooling system can manage heat more effectively, making it possible to fit more circuit boards per server chassis while requiring fewer internal cooling fans.

Chilled water system

Commonly used in mid-to-large data centers, this system uses chilled water to cool air brought in by air handlers (CRAHs), with a chiller plant on the premises supplying the water. Chilled-water CRAH units are usually less expensive, contain fewer parts, and offer greater heat-removal capacity than CRAC units with the same footprint.

Evaporative cooling

Operating on the same effect we feel when water evaporates from the skin after leaving a swimming pool, this method exposes hot air to water; as the water evaporates, it draws heat out of the air. The water is sprayed into the airstream or introduced through a wet filter or mat. While this system is very energy efficient since it doesn't rely on CRAC or CRAH units, it does require a lot of water. Data center cooling towers are often used to facilitate evaporation and transfer excess heat to the outside atmosphere.

Move to intelligent cooling

Efficiency in power and cooling will remain a priority for data centers. New generations of processors for machine learning (ML), artificial intelligence (AI), and analytics workloads will impose massive energy demands and generate substantial amounts of heat.

To address the energy consumption issue, Huawei has leveraged ML to develop iCooling, its intelligent thermal management solution for data center infrastructure. The iCooling system uses deep learning to analyze historical data, identify the key factors that affect energy consumption, and build a PUE prediction model. An optimization algorithm then establishes the ideal parameters, which are transmitted to the various control systems.
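The pipeline (learn a PUE model from historical logs, then search for the settings with the lowest predicted PUE) can be caricatured in a few lines. This is emphatically NOT Huawei's algorithm: the chilled-water-setpoint feature and every number are invented, and a simple linear interpolation stands in for the deep-learning model:

```python
# Toy stand-in for the predict-then-optimize pipeline described above.
# Invented "historical" logs: chilled-water setpoint (C) -> measured PUE.
history = {7.0: 1.60, 10.0: 1.48, 13.0: 1.42, 16.0: 1.45, 19.0: 1.55}

def predict_pue(setpoint: float) -> float:
    """Linearly interpolate PUE between the two nearest logged setpoints."""
    pts = sorted(history)
    if setpoint <= pts[0]:
        return history[pts[0]]
    if setpoint >= pts[-1]:
        return history[pts[-1]]
    for lo, hi in zip(pts, pts[1:]):
        if lo <= setpoint <= hi:
            frac = (setpoint - lo) / (hi - lo)
            return history[lo] + frac * (history[hi] - history[lo])

# The "optimization" step: grid-search candidate setpoints.
candidates = [7.0 + 0.5 * i for i in range(25)]   # 7.0 .. 19.0 C
best = min(candidates, key=predict_pue)
print(best)  # 13.0, the setpoint with the lowest predicted PUE
```

The real system replaces the interpolation with a learned model over many coupled parameters, but the shape of the loop (predict PUE, then optimize the controls against the prediction) is the same.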

Deploying iCooling at Huawei's cloud data center in Langfang, China, reduced PUE by 8% and cut annual power costs substantially. According to Huawei, as data center loads increase and the AI's learning capability improves, the facility will save six million kWh of electricity every year, equivalent to a reduction of about three million kilograms of carbon dioxide emissions.
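As a quick check on these figures, the quoted savings imply a grid emission factor of roughly 0.5 kg of CO2 per kWh:

```python
kwh_saved = 6_000_000        # annual electricity saved (from the article)
kg_co2_avoided = 3_000_000   # annual CO2 reduction (from the article)
print(kg_co2_avoided / kwh_saved)  # 0.5 kg CO2 per kWh, the implied emission factor
```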

Given this environmental impact, the data center cooling ecosystem is seeing significant development from many companies. The global data center cooling market is expected to grow from US$10,271.0 million in 2021 to US$25,552.2 million by 2028, a CAGR of 14.0%.
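The quoted growth rate can be sanity-checked from the two endpoint figures, assuming the standard compound-annual-growth-rate formula over the 7-year span:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate: (end/start)^(1/years) - 1."""
    return (end / start) ** (1 / years) - 1

# Forecast figures above (US$ millions), 2021 to 2028 (7 years):
print(round(100 * cagr(10271.0, 25552.2, 7), 1))  # 13.9, matching the quoted 14.0% CAGR
```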

Many big cloud players, such as Microsoft and Google, have pledged to make their operations 100% carbon-free by 2030. Such pledges, however, should not be the prerogative of hyperscalers alone. Every organization in the cloud computing business must become increasingly conscious of its environmental impact and of meeting its sustainability goals.
