
Eagle System Configuration

 Learn about the Eagle system configuration.

Architecture Description

The Eagle system is a high-performance computing (HPC) system with different types of servers (nodes) configured to run compute-intensive and parallel computing jobs. All nodes run the Linux operating system: Red Hat Linux or the derivative CentOS distribution. The nodes and storage are connected by a high-speed 100 Gb/s EDR InfiniBand network. A brief description of the configuration and features of the nodes, interconnect, and file systems is provided below.

Compute Node Hardware Details

Eagle has 2114 compute nodes. The following node types are available:

  • 1728 nodes: 96 GB of memory, dual Intel Xeon-Gold Skylake 6154 (3.0 GHz, 18 core) processors, no accelerators, 1 TB SATA local storage
  • 288 nodes: 192 GB of memory, dual Intel Xeon-Gold Skylake 6154 (3.0 GHz, 18 core) processors, no accelerators, 1 TB SATA local storage
  • 48 nodes: 768 GB of memory, dual Intel Xeon-Gold Skylake 6154 (3.0 GHz, 18 core) processors, no accelerators; 10 of these nodes have a 25.6 TB SSD and 38 have a 1.6 TB SSD for local storage
  • 50 nodes: 768 GB of memory, dual Intel Xeon-Gold Skylake 6154 (3.0 GHz, 18 core) processors, dual NVIDIA Tesla V100 PCIe 16 GB computational accelerators; 10 of these nodes have a 25.6 TB SSD and 40 have a 1.6 TB SSD for local storage

Login Nodes

There are three login nodes on the system.  The /home, /nopt, /scratch, /projects, /shared-projects, /datasets and /mss file systems are mounted on all login nodes. 

Users may connect to eagle.hpc.nrel.gov from the NREL network. This will connect to one of the three login nodes. Users also have the option of connecting directly to an individual login node using one of the following names: 

  • el1.hpc.nrel.gov
  • el2.hpc.nrel.gov
  • el3.hpc.nrel.gov

For external users wishing to access Eagle from a remote location, the direct login will be eagle.nrel.gov.
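
As a minimal illustration of connecting programmatically, the sketch below uses the Python paramiko library (an assumption; any SSH client works equally well) to reach the round-robin login address, assuming key- or agent-based authentication is already configured. The username is a placeholder.

```python
# Minimal sketch: open an SSH session to an Eagle login node with paramiko.
# Assumes paramiko is installed and your SSH keys/agent are configured;
# "your_hpc_username" is a placeholder for your actual account name.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("eagle.hpc.nrel.gov", username="your_hpc_username")

# Run a trivial command to see which of the three login nodes answered.
_, stdout, _ = client.exec_command("hostname")
print(stdout.read().decode().strip())  # e.g. el1, el2, or el3

client.close()
```

External users would substitute eagle.nrel.gov for the hostname above.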

Data Analysis and Visualization Nodes

The data analysis and visualization (DAV) nodes are each equipped with dual Intel Xeon-Gold Skylake 6154 (3.0 GHz, 18 core) processors and dual NVIDIA Tesla V100 PCIe 16 GB computational accelerators. These nodes support the OpenCL and CUDA programming models, as well as hardware-accelerated remote visualization of data using the FastX remote desktop and visualization software.

Users may connect to ed.hpc.nrel.gov. This will connect to one of the three DAV nodes. Users also have the option of connecting directly to an individual DAV node using one of the following:

  • ed1.hpc.nrel.gov
  • ed2.hpc.nrel.gov
  • ed3.hpc.nrel.gov

For external users wishing to access Eagle for DAV/FastX work, the login will be eagle-dav.nrel.gov.
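
To confirm that both GPUs are visible from a DAV node, the hedged sketch below simply calls the standard nvidia-smi utility from Python; the query flags are generic nvidia-smi options rather than anything Eagle-specific.

```python
# Minimal sketch: list the GPUs visible on a DAV node by shelling out to nvidia-smi.
# Assumes the NVIDIA driver (and therefore nvidia-smi) is present on the node.
import subprocess

result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)

# Expect one line per GPU, e.g. something like "Tesla V100-PCIE-16GB, 16160 MiB".
for line in result.stdout.strip().splitlines():
    print(line)
```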

Interconnect

All nodes and storage are connected in an enhanced 8-dimensional hypercube topology using InfiniBand Enhanced Data Rate (EDR, 100 Gb/s) links, which provides a bisection bandwidth of 26.4 terabytes/s.

Home File System

The Home File System (HFS) subsystem on Eagle is a robust NFS file system intended to provide highly reliable storage for user home directories and NREL-specific software. The capacity of HFS is 182 terabytes. Snapshots (backup copies) of files on HFS are available for up to 30 days after a file is changed or deleted.

/home

The /home directory on Eagle resides on HFS and is intended to hold small files. These include shell startup files, scripts, source code, executables, and data files.  Each user has a quota of 50 gigabytes.

/nopt

The /nopt directory on Eagle resides on HFS and is where NREL-specific software, module files, licenses, and licensed software are kept.

Parallel File System

The Parallel File System (PFS) on Eagle is a parallel Lustre file system intended for high-performance I/O. Use PFS storage for running jobs and any other I/O-intensive activity. Its capacity of 14 PB is provided by 28 Object Storage Servers (OSSs) and 56 Object Storage Targets (OSTs) with 3 Metadata Servers, all connected to Eagle's InfiniBand network at 100 Gb/s EDR. The default stripe count is 1 and the default stripe size is 1 MB.
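
For jobs that write very large or heavily shared files, raising the stripe count above the default of 1 can improve throughput. The sketch below is a rough illustration using the standard Lustre lfs utility from Python; the /scratch path is a placeholder, and whether a higher stripe count helps depends on your I/O pattern.

```python
# Minimal sketch: inspect and change Lustre striping with the standard `lfs` utility.
# The target path is a placeholder; substitute a directory you own on /scratch or /projects.
import os
import subprocess

target = "/scratch/your_username/large_output"
os.makedirs(target, exist_ok=True)

# Show the current layout (Eagle defaults: stripe count 1, stripe size 1 MB).
subprocess.run(["lfs", "getstripe", target], check=True)

# Stripe new files created in this directory across 4 OSTs (-c 4);
# files that already exist keep their original layout.
subprocess.run(["lfs", "setstripe", "-c", "4", target], check=True)
```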

The PFS hosts the /scratch, /projects, /shared-projects, and /datasets directories.

There are no backups of PFS data. Users are responsible for ensuring that critical data is copied to Mass Storage or another storage location.

/scratch

Each user has their own directory in /scratch. Data in /scratch is subject to purge: files may be deleted after 30 days of inactivity.

/projects

Each project/allocation has a directory in /projects intended to host data, configuration, and applications shared by the project.

/shared-projects

Projects may request a shared-project directory to host data, configuration, and applications shared by multiple projects/allocations.

/datasets

The /datasets directory on Eagle hosts widely used datasets. 

Node File System

Each Eagle compute node has a local solid-state drive (SSD) for use by compute jobs. The drives vary in size: 1 TB (standard), 1.6 TB (bigmem), or 25.6 TB (bigscratch), depending on the node feature requested. There are several scenarios in which a local disk may make your job run faster. For instance, your job may access or create many small (temporary) files, have many parallel tasks accessing the same file, or do many random reads/writes or memory mapping.

/tmp/scratch

The local disk is mounted at /tmp/scratch, and its path is available in the $LOCAL_SCRATCH environment variable during a job. A node has read and write access only to its own local scratch, not to any other node's. This directory is cleaned once the job ends, so transfer any files you need to keep to another file system before the job completes.
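
As a rough illustration of the intended workflow, the sketch below stages temporary work on the node-local SSD and copies the result to /scratch before the job finishes; the file names and destination path are placeholders.

```python
# Minimal sketch: work against the fast node-local SSD during a job, then copy
# anything worth keeping to a shared file system before the job ends
# (local scratch is cleaned automatically afterwards). Paths are placeholders.
import os
import shutil

local = os.environ.get("LOCAL_SCRATCH", "/tmp/scratch")
workdir = os.path.join(local, "my_job_tmp")
os.makedirs(workdir, exist_ok=True)

# ... perform I/O-intensive work here, e.g. many small temporary files ...
result_file = os.path.join(workdir, "result.dat")
with open(result_file, "w") as f:
    f.write("example output\n")

# Copy the result to a shared location such as /scratch or /projects.
dest_dir = os.path.join("/scratch", os.environ.get("USER", "your_username"))
os.makedirs(dest_dir, exist_ok=True)
shutil.copy2(result_file, os.path.join(dest_dir, "result.dat"))
```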

For more information about requesting this feature, please see Resource Request Descriptions on the Eagle Batch Jobs page.