HP Supercomputer at NREL Garners Top Honor

Oct. 13, 2014


This photo shows three men in suits standing in front of a data center.

GSA Administrator Dan Tangherlini (left), NREL Associate Lab Director Bryan Hannegan, and NREL Director Dan Arvizu discuss the high performance computer Peregrine during a tour of the ESIF. NREL collaborated with HP and Intel to develop the innovative warm-water, liquid-cooled supercomputer, which recently won an R&D 100 award. Peregrine is the first installation of the HP Apollo 8000 platform, which uses more than 31,000 Intel Xeon processor cores, providing a total compute capability of 1.19 petaflops.
Photo by Dennis Schroeder, NREL

A supercomputer created by Hewlett-Packard (HP) and the Energy Department's National Renewable Energy Laboratory (NREL) that uses warm water to cool its servers, and then re-uses that water to heat its building, has been honored as one of the top technological innovations of the year by R&D Magazine.

Supercomputers are hot—literally and figuratively.

The behemoths that can crunch a quadrillion calculations each second are needed to simulate and model everything from weather patterns to high finance to the movement of nanoparticles and celestial objects, and to analyze big data almost everywhere. All of those calculations heat things up. A typical supercomputing data center has rack after rack of servers, and each of those servers would run dangerously hot without a cooling mechanism, usually forced air driven by fans, which consumes significant electricity.

When NREL outgrew its old data center and was drawing up plans for its Energy Systems Integration Facility (ESIF), the design of the new data center became an opportunity to live up to NREL's mission of being a living laboratory for energy efficiency and sustainability.

"Computers generate significant quantities of waste heat that is typically just thrown away," said Steve Hammond, director of the Computational Science Center at NREL. "Our vision was to build a showcase facility, to integrate the computer and data center with the building and do it with a holistic view toward energy efficiency.

"We spent a lot of time talking with people in the computer industry, telling them where we were headed," Hammond added. "'If we want to do this, you might want to consider the following…,' that type of thing."

NREL's Desire to Go Green Fit with HP's Plans for Liquid-Cooled Supercomputers

As planners were drafting specifications for the ESIF building, "some people from HP came to us saying they had an idea about how to cool supercomputers efficiently with liquid cooling," Hammond said.

HP Distinguished Technologist Nic Dube picks up the timeline from there.

"At the same time that NREL was ramping up the effort to build a new facility that would be a world leader in energy efficiency, we at HP had been working on a project called Apollo—a liquid-cooled supercomputer platform," Dube said. "Availability was initially targeted for a year later than Steve's timeline, but we decided to accelerate the program to meet NREL's goals."

The NREL data center would be an ideal showcase for the technology HP was proposing. Key to NREL's mission is to be a model for energy efficiency, and HP wanted to demonstrate that there could be a broad market for liquid-cooled high performance computers. "We went very aggressively after the bid," Dube said.

The result is Peregrine, the high performance computer at the ESIF. Peregrine is the first installation of the HP Apollo 8000 platform, which uses more than 31,000 Intel Xeon processor cores, providing a total compute capability of 1.19 petaflops.

Peregrine provides enough heat to meet the heating needs of the 182,500-square-foot ESIF and, combined with an energy-efficient data center design, is saving NREL about $1 million a year in energy costs. In all, the ESIF consumes 74% less energy than the national average for office buildings. It has been designated a LEED Platinum building and was named 2014 Laboratory of the Year by R&D Magazine.

There were plenty of hurdles to clear in designing the first system in the HP Apollo 8000 series, but the thermodynamic fundamentals are quite straightforward and easily replicable, Dube said. "The big picture is simple. You take heat from something that generates heat and send it to something that requires heat."

The challenge was to build liquid cooling not as an exotic one-off, but in a way that was simple, reliable, and cost effective enough to work for a wide array of large computers: not just those in federal labs, but systems serving a broad range of customers and applications.

The Apollo system, which uses liquid cooling rather than forced air, packs amazing computational capacity into a small space. "For heat exchange [e.g., cooling], liquids are orders of magnitude more effective than air, and the pump energy needed to circulate the liquid cooling is much less than the fan energy to move the equivalent amount of air," Dube and Hammond noted. Using liquid cooling allowed HP to pack the servers more densely and still keep them cool, rather than having to spread the servers out in a data center measured in acres in order to cool them sufficiently with air. Within a standard rack footprint of 2 feet by 4 feet, the HP Apollo 8000 platform can pack as many as 288 processors. That's four times the density of typical racks for high performance computers—and it means a much smaller footprint and lower cost.
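A quick back-of-the-envelope check of that density claim can be written out in a few lines of Python. This is only an illustration: the Apollo figure and the footprint come from the numbers above, and the "typical" density is simply implied by the four-times claim rather than taken from any vendor specification.

    # Rack density implied by the figures above (illustrative only).
    apollo_per_rack = 288        # processors per Apollo 8000 rack, per the article
    footprint_sqft = 2 * 4       # standard 2 ft x 4 ft rack footprint
    density_ratio = 4            # "four times the density of typical racks"

    apollo_density = apollo_per_rack / footprint_sqft      # 36 processors per square foot
    typical_density = apollo_density / density_ratio       # implied ~9 per square foot
    print(f"Apollo 8000: {apollo_density:.0f}/sq ft; typical HPC rack (implied): {typical_density:.0f}/sq ft")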

Capturing Heat, Using It to Warm the Entire Building

This photo shows a woman standing by computer racks in a data center.

Peregrine, the state-of-the-art liquid-cooled supercomputer at NREL, provides enough heat to meet the heating needs of the 182,500-square-foot ESIF and, combined with an energy-efficient data center design, is saving NREL about $1 million a year in energy costs. In all, the ESIF consumes 74% less energy than the national average for office buildings.
Photo by Dennis Schroeder, NREL

Because the servers are cooled with warm water rather than cold, the HP Apollo system doesn't need to sit in a data center supported by compressor-based chillers, which are both energy-hungry and expensive. Pipes carry the water right to the critical components, exploiting the thermal advantage of water over traditional air-cooled systems that force chilled air through heat sinks. If a supercomputer drawing a megawatt of power needs chillers for cooling, an additional 500 kilowatts or so of power may be needed just to run the chillers. The evaporative cooling used at the ESIF requires about one-tenth of that cooling overhead, because the water supplied for cooling can be 75°F rather than 45°F or 50°F.
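To make that arithmetic concrete, here is a minimal sketch in Python using the article's round numbers; the one-tenth figure is applied directly, so this illustrates the claim rather than serving as a measured energy model.

    # Cooling-overhead comparison based on the round numbers quoted above.
    it_load_kw = 1000.0                              # supercomputer drawing one megawatt
    chiller_overhead_kw = 500.0                      # estimated chiller power for that load
    evap_overhead_kw = chiller_overhead_kw / 10.0    # "about one-tenth of that cooling overhead"

    print(f"Chillers:    {chiller_overhead_kw:.0f} kW ({chiller_overhead_kw / it_load_kw:.0%} of IT load)")
    print(f"Evaporative: {evap_overhead_kw:.0f} kW ({evap_overhead_kw / it_load_kw:.0%} of IT load)")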

Water flowing to the servers is about 75°F. As it cools the servers, the servers in turn heat the water, so that by the time the liquid finishes a pass through the data center, its temperature has risen to 95°F or warmer. That's a sufficient temperature to serve as the primary source of heat for the ESIF's office and lab spaces. After the water gives up its heat to the building, it circulates back to cool down the racks of servers, completing the loop. The HP Apollo system is designed so that maintenance on servers can be performed without opening any liquid connections. That's an important safety feature, as it keeps expensive electronics away from water.
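For a sense of how much heat that 75°F-to-95°F rise represents, the standard relation Q = ṁ·c·ΔT can be sketched in Python. The flow rate below is an assumed example value, not a published Peregrine figure, so the result is only indicative.

    # Recoverable heat in the warm-water loop: Q = m_dot * c_p * delta_T.
    GPM_TO_KG_PER_S = 0.0631          # 1 US gallon per minute of water is roughly 0.0631 kg/s
    C_P_WATER = 4186.0                # specific heat of water, J/(kg*K)

    flow_gpm = 400.0                  # assumed loop flow rate (example value)
    supply_f, return_f = 75.0, 95.0   # supply and return temperatures from the article
    delta_t_c = (return_f - supply_f) * 5.0 / 9.0   # a 20°F rise is about 11.1°C

    m_dot = flow_gpm * GPM_TO_KG_PER_S               # mass flow in kg/s
    heat_kw = m_dot * C_P_WATER * delta_t_c / 1000.0
    print(f"Heat carried by the loop: about {heat_kw:.0f} kW")   # roughly 1,200 kW at these assumptions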

But that's not all. The water heated by the data center is also routed under the front plaza and walkway outside the building to melt snow and ice. And the heat isn't simply wasted in the summer: it helps complete the loop for the cooling system that lowers the building's temperature during the hot days of June, July, and August.

Knowing Specs, Goals Helped Lower Cost

This is a photo of large-diameter green, blue, and white pipes arranged horizontally from just above the floor to just below the ceiling of what looks like a basement.

On the level below the Peregrine supercomputer in the ESIF is the liquid cooling system (pumps and pipes) that interfaces the data center with the supercomputer. It connects the cooling water supplied to the supercomputer, and the hot water returned from it, to the systems used for heating in the winter and for heat rejection in the summer (through evaporative cooling towers).
Photo by Dennis Schroeder, NREL

HP's goal was to demonstrate that liquid cooling can be simple; NREL's aim was to build an energy-efficient data center and to integrate the supercomputer, the data center, and the potential energy savings into the ESIF building as a whole. Before the pipes were routed, the team learned everything it needed to know about the building: the height of the ceilings, flow rates, supply and return temperatures, the locations of the freight elevators, the strength of the floor. Dube said the final product was enhanced because NREL knew exactly what it wanted, and that challenged HP to meet hard goals on a short timeline. "Because NREL was able to give us detailed specs like that, we were able to deliver a product far above our original target. Steve and NREL had really done some good analysis of where the industry needed to get."

One key time saver during installation was modular plumbing: 6-foot pipe lengths with flanges on either end. The pipes were pre-assembled and pre-tested in the factory, and they employ quick-disconnect stainless-steel connectors and flexible hoses. "That allowed us to put in 18 racks in four days, instead of four weeks," Hammond said.

The HP Apollo system has very sophisticated control systems as well; it's not actually as simple as treating the supercomputer as a furnace. The building requires the water coming out of the data center to be at or above a minimum temperature to serve as a heat source. The engineering allows a varying flow rate within the servers, which maintains a constant water output temperature whether the computer is running at full load or sitting idle, while also tolerating a range of temperatures at the system's water inlet.
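The basic idea behind that control scheme can be sketched in a simplified, steady-state form: vary the coolant flow so the outlet temperature holds at a setpoint even as the heat load and inlet temperature change. This is not HP's actual control logic; the setpoint, the pump limits, and the example operating points are assumptions chosen only to illustrate the principle.

    # Simplified steady-state flow control: hold the outlet water temperature
    # constant by varying flow with the heat load and inlet temperature.
    # Not HP's control algorithm; limits and example values are assumed.
    C_P_WATER = 4186.0  # specific heat of water, J/(kg*K)

    def required_flow(heat_load_w, inlet_c, outlet_setpoint_c, min_flow=0.5, max_flow=50.0):
        """Return the mass flow (kg/s) needed so water leaves at the setpoint temperature."""
        delta_t = outlet_setpoint_c - inlet_c
        if delta_t <= 0:
            return max_flow                           # inlet already at or above setpoint: run at max flow
        flow = heat_load_w / (C_P_WATER * delta_t)
        return max(min_flow, min(flow, max_flow))     # clamp to the pump's operating range

    setpoint_c = 35.0  # roughly the 95°F outlet target described above
    for load_kw, inlet_c in [(1000, 24), (250, 24), (1000, 29)]:   # full load, light load, warmer inlet
        flow = required_flow(load_kw * 1000.0, inlet_c, setpoint_c)
        print(f"load={load_kw:4d} kW  inlet={inlet_c}°C  ->  flow={flow:5.1f} kg/s")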

Collaboration Was Key, Say HP and NREL

This is an illustration with arrows pointing to heat pipes and locations where water flows in and out.

The HP Apollo platform brings liquid cooling without the risk. As shown in this top view, each server tray is equipped with dry-disconnect technology that provides the performance of liquid cooling without ever making or breaking a water connection, a service event that could introduce contaminants or cause a water leak.
Courtesy of Hewlett-Packard Development Company

Dube praised the collaboration. "You always encounter hurdles in a project like this, but we would sit down with the NREL team and work out the challenges—'This is the metric we need to meet; now how do we make that happen?'"

Hammond said NREL is very pleased with the system. "We took delivery of the first racks in August of 2013, had the ribbon-cutting with the Energy Secretary in late September, passed the acceptance test in November, and were in production in January. That's an impressive timeline considering this is a first-of-its-kind system.

"HP got to showcase its state-of-the-art platform, and NREL has an energy-efficient, showcase data center that cost less to build than if we had built something less energy efficient," Hammond said. "We didn't have to look at how many years it would take us to recoup our investment. It cost less to build and less to operate from day one."

Learn more about high performance computing at NREL.

— Bill Scanlon
