Data centres in the age of AI


Data centres are power-hungry, heat-generating beasts and AI is making them hungrier and hotter than ever. So how are engineers working to keep energy demands in check?

Data centres have an outstanding history of efficiency gains. Between 2010 and 2020, for example, internet traffic leapt 17-fold and data centre workloads soared 10-fold. And data centre energy use? It stayed flat.

But the industry’s ability to keep energy demands in check will be intensely challenged by the rapid growth in the complexity and use of the new generation of AI models. Already, the energy needed to train new large language models is estimated to be doubling every 10 months.
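
That doubling rate compounds quickly. As a rough illustration, using only the 10-month figure quoted above and nothing else from the article’s sources, the growth factor over a few years looks like this:

```python
# What a 10-month doubling time in training energy implies over a few years.
# The 10-month figure comes from the article; everything else is arithmetic.
DOUBLING_MONTHS = 10

for years in (1, 2, 3):
    growth_factor = 2 ** (years * 12 / DOUBLING_MONTHS)
    print(f"After {years} year(s): {growth_factor:.1f}x the training energy")
# Roughly 2.3x after one year, 5.3x after two years and 12.1x after three.
```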

Chips such as NVIDIA’s H100 GPUs are the workhorses of training current AI models, and they are incredibly power-hungry. Each has a peak power draw of 700 W, more than the average draw of many Australian households, and generates prodigious amounts of waste heat.

By the end of this year, more than 3.5 million of NVIDIA’s H100s are expected to be deployed, consuming more energy on computation alone than entire nations such as Georgia and Costa Rica, before cooling is even counted.
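
That fleet-level claim can be sense-checked with back-of-the-envelope arithmetic. The sketch below is illustrative only: the utilisation figure and the country comparisons are assumptions, not numbers from the article’s sources.

```python
# Back-of-the-envelope check of the H100 fleet energy claim.
# Assumptions (not from the article): 61% average utilisation, 24/7 operation,
# and no allowance for cooling or other facility overheads.

H100_PEAK_W = 700        # peak power per GPU, watts
FLEET_SIZE = 3.5e6       # H100s assumed deployed by the end of the year
UTILISATION = 0.61       # assumed average utilisation factor
HOURS_PER_YEAR = 8760

fleet_gwh_per_year = H100_PEAK_W * FLEET_SIZE * UTILISATION * HOURS_PER_YEAR / 1e9
print(f"Estimated fleet consumption: {fleet_gwh_per_year:,.0f} GWh per year")
# Around 13,000 GWh per year, in the same range as the annual electricity use
# of countries such as Georgia or Costa Rica (roughly 11-13 TWh each).
```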

On top of that, the integration of AI tools in ever more digital interactions is pushing data centre workloads significantly higher. For instance, an AI-assisted Google search is estimated to consume 10 times the energy of a standard search.

Clearly, meeting the energy demands of the AI future requires major engineering advances, and with cooling accounting for an average of 40 per cent of a data centre’s energy consumption, that is where much of the effort is focused.
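
To put that 40 per cent share in perspective, a simple ratio shows how much energy goes to cooling for every unit that reaches the IT equipment. The figures below are purely illustrative and assume the rest of the load is all IT, which flatters the real picture:

```python
# Illustrative only: what a 40 per cent cooling share means for a hypothetical
# 100 MW facility. Assumes the remaining 60 per cent all reaches the IT
# equipment, ignoring power distribution, UPS and lighting losses.

TOTAL_FACILITY_MW = 100
COOLING_SHARE = 0.40

cooling_mw = TOTAL_FACILITY_MW * COOLING_SHARE
it_mw = TOTAL_FACILITY_MW - cooling_mw
print(f"Cooling load: {cooling_mw:.0f} MW for {it_mw:.0f} MW of IT equipment")
print(f"Cooling energy per unit of IT load: {cooling_mw / it_mw:.2f}")
# About 0.67, i.e. roughly two watts of cooling for every three watts of compute.
```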

The power crunch

AI currently represents 4.5 GW of global power consumption and, according to Schneider Electric, that figure is set to grow by 25 to 33 per cent annually, reaching between 14 GW and 18.7 GW by 2028. That is roughly three times the growth rate of overall data centre power demand.
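
Those bounds are consistent with simple compound growth from the 4.5 GW base. The quick check below assumes the growth compounds over five years to 2028; the period is an assumption, not Schneider’s stated methodology:

```python
# Compound-growth sketch of Schneider Electric's AI power forecast.
# Assumes the 25-33 per cent annual growth compounds for five years from a
# 4.5 GW base; the article does not state the exact period.

BASE_GW = 4.5
YEARS = 5

for annual_growth in (0.25, 0.33):
    projected_gw = BASE_GW * (1 + annual_growth) ** YEARS
    print(f"{annual_growth:.0%} annual growth: {projected_gw:.1f} GW by 2028")
# 25% growth gives about 13.7 GW; 33% growth gives about 18.7 GW, matching the
# 14 GW to 18.7 GW range quoted above.
```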

“AI applications have a higher load per rack, so we’re trying to get the same amount of power into a smaller footprint,” said Stefan Sadokierski, Principal at Arup, which is constructing data centres across the world. “With cloud computing, we’re fairly close to the limit.

“To go much further, you need a different way of cooling the IT equipment within the space. And there’s really two ways: one is direct-to-chip and the other is immersion cooling, where the equipment is all immersed in a bath of dielectric, non-conductive fluid.”

Variations in load are another issue.

“Most current data centres have a fairly constant load but with AI, they have very big loads turning on and off very quickly,” Sadokierski said.

These step changes can affect the equipment and the grid.

Cooling solutions

Given this power consumption, the pressure is on data centres to boost their green credentials.

“There is a tendency to jump straight to offsetting carbon and importing green electricity and not looking for the opportunities on the ground,” Sadokierski said.

“There are foreseeable changes in the data centre industry, such as increasing load density and the transition to liquid cooling. We need to, at some point, get away from fossil fuels, stop using diesel for standby generators, and look for an alternative.”

While engineers work out ways to deal with the heat, this could also present an opportunity. John Vollugi CPEng, Director at ADP Consulting, is working on a large 300 MW data centre in NSW. At this size, the data centre generates a fair amount of waste heat.

“It’s a bit depressing to think of all that wasted energy, especially when it is not far from residential areas,” Vollugi said. “That prompted us to learn more about emerging technology and we assisted two pioneers, Submer and ResetData, in a pilot project in Sydney in the basement of the 151 Clarence Street commercial tower.

“We’ve got immersion pods down there that feed into the building’s condenser water loop. We look forward to reviewing this data as the year goes on, because the waste heat from the immersion racks pre-heats the hot water system in the building, which should improve its National Australian Built Environment Rating System (NABERS) energy rating. So there’s a reciprocal relationship here.”

Australian-owned ResetData is an Infrastructure-as-a-Service provider that has pioneered a cooling system that not only reduces carbon footprints by 45 per cent but also eliminates water waste entirely.

In Submer’s system, servers or other IT components are submerged in a thermally conductive dielectric liquid or coolant. The coolant always remains in a liquid form and gets pumped to a heat exchanger where heat is transferred to a cooler water circuit.
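
The heat balance behind that loop is simple: the heat the coolant carries away is the product of its flow rate, specific heat and temperature rise. The sketch below uses hypothetical numbers for pod load, fluid properties and temperature rise; none of them are Submer or ResetData specifications.

```python
# Single-phase immersion cooling heat balance: Q = m_dot * c_p * delta_T.
# All figures are illustrative assumptions, not vendor specifications.

POD_HEAT_KW = 100        # assumed IT load rejected by one immersion pod
CP_KJ_PER_KG_K = 2.1     # typical specific heat of a dielectric coolant
DENSITY_KG_PER_L = 0.85  # typical dielectric coolant density
DELTA_T_K = 10           # assumed coolant temperature rise across the pod

mass_flow_kg_s = POD_HEAT_KW / (CP_KJ_PER_KG_K * DELTA_T_K)
volume_flow_l_s = mass_flow_kg_s / DENSITY_KG_PER_L
print(f"Coolant flow to remove {POD_HEAT_KW} kW: "
      f"{mass_flow_kg_s:.1f} kg/s ({volume_flow_l_s:.1f} L/s)")
# The warmed coolant is pumped to a heat exchanger, where the same balance
# sets the flow required on the water side of the loop.
```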

Vollugi and his team have published a white paper entitled Think … Liquid Cities, in which they propose that data centres be connected by a network of underground pipes transporting heated water across the city.

“Imagine pools and surf parks, office end-of-trip facilities and manufacturing plants all sourcing low-cost, zero-emissions heating and water,” Vollugi said. He believes it’s possible to turn today’s high-energy consumer data centres into truly circular energy sources.

“With cloud computing, we’re fairly close to the limit. To go much further, you need a different way of cooling the IT equipment within the space.”
Stefan Sadokierski, Principal at Arup

Australia as a data centre hub

According to Brisbane-based Cloudscene, Australia had 306 data centres at the start of 2024, with Sydney home to the densest concentration of facilities. Numbers are growing rapidly too, with Microsoft alone investing $5 billion in hyperscale data centres in Australia over the next two years.

Environmental concerns are shaping the next generation of data centres. GreenSquareDC has positioned its WAi1, currently under construction in Perth, as the country’s greenest data centre. WAi1 will source energy from 100 per cent renewable sources and run its backup generators on hydrotreated vegetable oil. It will also be water-positive by using aquifer-free cooling.

Mark Lommers FIEAust CPEng, Managing Director at Nequinn Consulting, is something of a pioneer in this area, having been granted a patent in 2018 for his innovative liquid cooling system for computers.

“My claim to fame was an immersion cooling system that I designed and is now rolled out to data centres around the world,” Lommers said.

In 2019, he won DCD Magazine’s Enterprise Data Centre Design Award for the DUG 15 MW High Performance Cluster, which was designed in-house, is cooled entirely by liquid immersion and is used for seismic processing.

“WAi1 does not have any legacy constraints and has been built with this ‘green’ philosophy from the ground up,” Lommers said. “The lowest common denominator of an air-cooled system determines all the systems that go into a data centre.

“WAi1 is designed for subsets of equipment, so 20 to 30 per cent is designed for air cooling and 70 to 80 per cent is designed for direct cooling methods.”

Using your footprint effectively

As the national subject matter expert for three-phase power systems at Schneider Electric, Jason Deane has been involved in data centre design, particularly uninterruptible power supply (UPS) design.

In terms of heat management, he sees it as a race between direct-to-chip and immersion cooling, and is betting on the former winning in the short term.

“When I was a teenager, people were overclocking their PCs for gaming and developing bespoke heat exchangers for the CPUs in computers,” Deane said. “So, the concept is not new. But I think direct-to-chip will take off because it’s probably a lower-cost implementation. In a data centre every square metre might be worth about $90,000. If you’ve got to build immersion bathtubs you can’t make use of the traditional 48RU [rack unit] height. You can only have them half that high – like 24RU – and you’re already losing half your IT footprint.”
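
Deane’s footprint point can be made concrete with some simple arithmetic. The floor value and rack-unit figures below follow his quote; the footprint per rack position and the cost-per-rack-unit framing are illustrative assumptions, not data from Schneider Electric.

```python
# Illustrative footprint comparison based on Deane's figures: floor space at
# about $90,000 per square metre, traditional racks offering 48RU of IT per
# position versus immersion tubs at roughly 24RU in the same area.
# The footprint per rack position is an assumption.

FLOOR_VALUE_PER_M2 = 90_000
RACK_FOOTPRINT_M2 = 1.2   # assumed floor area per rack position, incl. access

def floor_cost_per_ru(rack_units: int) -> float:
    """Floor value carried by each rack unit of IT capacity."""
    return FLOOR_VALUE_PER_M2 * RACK_FOOTPRINT_M2 / rack_units

print(f"Air or direct-to-chip (48RU): ${floor_cost_per_ru(48):,.0f} per RU")
print(f"Immersion tub (24RU):         ${floor_cost_per_ru(24):,.0f} per RU")
# Halving the usable rack units in the same footprint doubles the floor cost
# carried by each unit of IT equipment.
```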

Deane also sees benefits in Chilldyne’s negative-pressure technology, and Schneider Electric has a new partnership with the company. The Chilldyne solution creates a vacuum to pull water through the cooling loop rather than pushing it.

“The system won’t leak even if the hose is cut since both sides will be negative with respect to normal atmospheric pressure and will suck up the liquid,” Deane said.

Vollugi noted that Meta is gradually shifting to a water-cooled AI infrastructure while Microsoft is investigating liquid immersion for high-performance computing applications such as AI, showing that liquid cooling is increasingly seen as a viable solution.

Bespoke solutions

Donna Bridgman, FEng, Chartered Engineer, Global Board member of iMWomen (Infrastructure Masons Women) and Trainer at DCD Academy in the EMEA and APAC regions, sees a growing focus on designing bespoke solutions to suit the technology available.

Modularised solutions that accelerate speed to market, better match end-customer and client equipment requirements, and optimise capex financing and deployment are becoming business as usual.

“Smart companies are adopting modular designs that are pre-manufactured to suit a variety of technology and engineering specifications,” Bridgman said. “These can be leveraged at speed and scale with precision quality control, and optimised for price at the onset, with supplier agreements and long-term manufacturing slots secured to also provide a competitive edge.”

Hardware shortage

Currently, though, energy and cooling demands might not be the main limiting factor on the growth of AI – it might be a shortage of GPU chips. There is a “huge sucking sound” coming from businesses that represents the unrivalled demand for AI, Raj Joshi told CNN recently.

Joshi is a Senior Vice President at Moody’s Investors Service and tracks the chips industry.

“Nobody could’ve modelled how fast or how much this demand is going to increase,” he said. “I don’t think the industry was ready for this kind of surge in demand.”
