Swift Computing System
NREL's Swift high-performance computing (HPC) system supports projects funded by the U.S. Department of Energy's Vehicle Technologies Office (VTO) in the Office of Energy Efficiency and Renewable Energy (EERE).
Allocations
Any VTO-funded project is eligible for an allocation on Swift. The system is well suited for parallel jobs of up to 64 nodes and offers shorter queue wait times for eligible projects. EERE makes allocation decisions through the annual allocation cycle.
System Architecture
Swift is an AMD-based HPC cluster comprising 484 nodes, each with dual AMD EPYC 7532 (Rome) CPUs. Nodes are connected via a 200 Gb/s InfiniBand HDR network in a conventional spine-and-leaf topology. Swift runs Rocky Linux and has a peak performance of 8 petaflops.
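For job sizing, note that the EPYC 7532 is a 32-core part, so each dual-socket Swift node exposes 64 physical cores. The snippet below is a minimal sketch of how to confirm the hardware of a compute node with standard Linux and Slurm commands; the allocation name is a placeholder, not a Swift-specific value.

```bash
# Sketch: confirm per-node hardware from a compute node.
# <your_allocation> is a placeholder; use your project's allocation handle.
srun --account=<your_allocation> --nodes=1 --time=00:05:00 \
    lscpu | grep -E 'Model name|Socket\(s\)|Core\(s\) per socket'
```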
Swift has a Network File System (NFS) with a capacity of 3 PB. It is a performant system with multiple read and write cache layers and redundancy for data protection, but it is not a parallel filesystem like those available on Kestrel or Eagle, so I/O performance and the use of /scratch space behave differently on Swift.
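Because the storage is NFS rather than a parallel filesystem, a common pattern is to keep heavy I/O in a per-job scratch directory and copy results back when the job finishes. The sketch below assumes a /scratch/$USER layout and is meant to run inside a Slurm batch job; the directory structure and application name are placeholders, so check the filesystem documentation linked below for the actual conventions on Swift.

```bash
# Sketch: stage work into a per-job scratch directory (run inside a Slurm job).
# The /scratch/$USER layout and ./my_app are assumptions for illustration.
SCRATCH_DIR=/scratch/$USER/$SLURM_JOB_ID
mkdir -p "$SCRATCH_DIR"
cp input.dat "$SCRATCH_DIR"/
cd "$SCRATCH_DIR"
./my_app input.dat > results.out
cp results.out "$SLURM_SUBMIT_DIR"/    # copy results back to the submit directory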
Learn about the Swift cluster (GitHub).
System Configuration
To learn about Swift's hardware, storage, and networking, see NREL systems (GitHub).
Modules
Learn how to activate and use the modules available on Swift (GitHub).
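The commands below are a generic Lmod-style sketch; the module names are illustrative rather than a list of what is installed on Swift, and any site-specific activation step is covered on the linked GitHub page.

```bash
# Generic Lmod-style module usage (module names are illustrative):
module avail              # list modules visible in the current environment
module spider openmpi     # search all module trees for a package
module load gcc openmpi   # load a compiler and MPI stack
module list               # show what is currently loaded
```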
Running on the System
Learn about running on Swift (GitHub).
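As a concrete illustration of a parallel job, the sketch below is a minimal Slurm batch script assuming a standard MPI workflow. The account, module, and executable names are placeholders rather than Swift-specific values; consult the linked documentation for the partitions, modules, and limits that apply to your project.

```bash
#!/bin/bash
# Minimal multi-node MPI batch job sketch. <your_allocation>, the modules,
# and ./my_mpi_app are placeholders, not Swift-specific values.
#SBATCH --account=<your_allocation>
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=64     # dual 32-core EPYC 7532 CPUs = 64 cores per node
#SBATCH --time=01:00:00
#SBATCH --job-name=swift_example

module load gcc openmpi          # illustrative; use Swift's documented modules

srun ./my_mpi_app                # srun launches one MPI rank per task
```

Submit with `sbatch job.sh` and monitor with `squeue -u $USER`.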
Filesystems
See the Swift filesystem architecture overview (GitHub).
Applications
Learn how to utilize Swift applications (GitHub).