Peregrine System Configuration

Learn about the Peregrine system configuration.

The Peregrine system is a High Performance Computing (HPC) system with several types of servers (nodes) configured to run compute-intensive and parallel computing jobs. All of the nodes run the Linux operating system, either Red Hat Enterprise Linux or the derivative CentOS distribution. The nodes and storage are connected by a high-speed InfiniBand network. Compute nodes are diskless, with an NFS file system for /home and a high-speed parallel Lustre file system for /scratch. The home directories are mounted on all nodes, along with a file system dedicated to shared projects. A brief description of the configuration and features of the nodes, interconnect, and file systems is provided below.

Compute Nodes

Peregrine has 2592 compute nodes of several types:

  • 144 nodes have two 8-core Intel Sandy Bridge Xeon processors (16 cores per node); 88 of these have 32 GB of memory and 56 have 256 GB of memory.
  • 288 nodes have two 8-core Intel Sandy Bridge Xeon processors plus two Intel Xeon Phi coprocessors. These nodes have 32 GB of memory.
  • 1008 nodes have two 12-core Intel Ivy Bridge Xeon processors; 288 of these have 64 GB of memory and the remainder have 32 GB of memory.
  • 1152 nodes have two 12-core Intel Haswell Xeon processors with 64 GB of memory.
All nodes are connected to the high-speed InfiniBand network and a management Ethernet network. The /home, /scratch, /projects, and /nopt file systems are mounted on all compute nodes.

DAV Nodes

The data analysis and visualization (DAV) nodes are each equipped with two Intel Xeon E5-2670 processors running at 2.3 GHz (8 cores per processor), 384 GB of memory, and an NVIDIA Quadro 6000 GPU card. These nodes support the OpenCL and CUDA programming models, as well as hardware-accelerated remote visualization of data on the parallel file system using VirtualGL/TurboVNC.
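
A quick way to confirm the GPU and the VirtualGL stack are available on a DAV node is to run the standard NVIDIA and VirtualGL utilities. This is a minimal sketch and assumes nvidia-smi and vglrun are on the default path (glxgears is only a simple test application):

  $ nvidia-smi       # report the GPU model, driver version, and current utilization
  $ vglrun glxgears  # render a simple OpenGL test through VirtualGL inside a TurboVNC session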

Users may connect to dav.hpc.nrel.gov, which connects to one of the three DAV nodes. Users may also connect directly to an individual DAV node using one of the following names:

  • dav1.hpc.nrel.gov
  • dav2.hpc.nrel.gov
  • dav3.hpc.nrel.gov
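
For example, to open a session on a DAV node with ssh (a minimal sketch; replace username with your HPC account name):

  $ ssh username@dav.hpc.nrel.gov    # connects to one of the three DAV nodes
  $ ssh username@dav2.hpc.nrel.gov   # connects to a specific DAV node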

Login Nodes

There are four login nodes on the system: HP ProLiant DL380 G8 servers with Intel Xeon E5-2670 processors, 64 GB of memory, and local disk drives for the operating system. The /home, /scratch, /projects, /mss, and /nopt file systems are mounted on all login nodes.

Users may connect to peregrine.hpc.nrel.gov, which connects to one of the four login nodes. Users may also connect directly to an individual login node using one of the following names:

  • hpc-login1.hpc.nrel.gov
  • hpc-login2.hpc.nrel.gov
  • hpc-login3.hpc.nrel.gov
  • hpc-login4.hpc.nrel.gov
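
For example, the following connects through the shared name and then reports which login node was assigned (a minimal sketch; replace username with your HPC account name):

  $ ssh username@peregrine.hpc.nrel.gov
  $ hostname   # run on the login node to see which of the four you reached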

Service Nodes

There are seven service nodes that perform system administration functions, support file system access for the compute nodes, and manage the file system images deployed to the diskless compute nodes.

Interconnect

The system is organized as a collection of tightly connected groups of nodes called Scalable Units (SUs). The high-speed InfiniBand network operates at 4X FDR speeds. There are 16 Mellanox spine switches and 8 scalable unit leaf switches, fully connected so that traffic is non-blocking between servers within an SU. Bisection bandwidth is 1.9 TB/s, and latency is minimized to three or fewer switch hops within an SU and no more than seven hops across the entire system. The connection to the parallel file system is capable of 40 GB/s, and the interconnect is designed for future expansion.
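
The local InfiniBand port on a node can be inspected with the standard InfiniBand diagnostic tools. This is a minimal sketch and assumes the infiniband-diags utilities are installed:

  $ ibstat   # shows the port state and link rate (a 4X FDR link reports 56 Gb/s)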

Filesystems

/home

The /home file system on Peregrine is a robust NFS file system that is intended to hold small files. These include shell startup files, scripts, source code, executables and data files. The capacity of /home is 10 TB.

/scratch

/scratch is a parallel Lustre file system intended for high-performance I/O. Use /scratch for running jobs and any other intensive I/O activity.

The capacity of the /scratch file system is 1.5 PB. This capacity is provided by 108 Object Storage Targets (OSTs) attached to 24 Object Storage Servers (OSSs). The default stripe count is 1 and the default stripe size is 1 MB.
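
Striping can be checked or changed per directory with the standard Lustre lfs utility. The sketch below uses an example directory under /scratch; on older Lustre clients the stripe-size option is -s rather than -S:

  $ lfs getstripe -d /scratch/$USER                  # show the default stripe count and stripe size for a directory
  $ lfs setstripe -c 8 -S 4M /scratch/$USER/big_io   # new files here are striped across 8 OSTs with a 4 MB stripe size

Larger stripe counts help with large, sequential files; small files are best left at the default stripe count of 1.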

/projects

/projects is a parallel Lustre file system intended for high-performance I/O associated with files that are shared by members of a project.

The capacity of the /projects file system is 768 TB. This capacity is provided by 54 OSTs attached to the 24 Lustre OSSs. The default stripe count is 1 and the default stripe size is 1 MB.

/nopt

The /nopt file system is a robust NFS file system where NREL-specific software, module files, licenses, and licensed software are kept. The capacity of the /nopt file system is 2 TB.
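
Software installed under /nopt is made available to users through environment module files. A minimal sketch (the module name shown is hypothetical; use module avail to see what is actually installed):

  $ module avail              # list the module files available on the system
  $ module load example/1.0   # load a hypothetical module, setting paths and environment variables
  $ module list               # show the modules currently loaded in your environment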