Eagle Software Toolchains

Learn about the software toolchains available for the Eagle System. 

The NREL Computational Science Center (CSC) provides several compiler suites and message-passing interface (MPI) communication libraries. They can be mixed and matched; however, subtle (and not-so-subtle) dependencies often require pairing a compiler suite with an MPI library that was built with, or at least explicitly tested against, that same suite.

The CSC suggests two baseline toolchains. The Intel toolchain refers to the commercial Intel Parallel Studio XE Cluster Edition suite, and includes the intel-mpi, comp-intel, and mkl modules. The openmpi-gcc toolchain is an open-source alternative against which many technical applications are natively developed and tested.
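
For example, a job script or interactive session might load one of these baseline toolchains as sketched below. The module names follow those mentioned above, but exact names and versions vary with Eagle's software deployment, so confirm them with module avail.

    # Intel toolchain: Intel compilers, Intel MPI, and MKL
    module purge
    module load comp-intel intel-mpi mkl

    # Open-source alternative: Open MPI built with the GNU compilers
    # (the "gcc" and "openmpi" module names here are illustrative)
    module purge
    module load gcc openmpi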

Other combinations are available: modules named openmpi/.../intel* refer to OpenMPI libraries built with the Intel compilers. Similarly, loading the intel-mpi module on top of a gcc module should create a hybrid "Intel-gcc" toolchain. Nonetheless, such explorations are primarily in the user's hands.
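
As a sketch of one such hybrid, the commands below list the installed Open MPI builds and then load the GNU compilers with Intel MPI layered on top. Treat this as an experiment to validate yourself rather than a supported configuration.

    # See which Open MPI builds (including Intel-compiled variants) are installed
    module avail openmpi

    # Hybrid "Intel-gcc" toolchain: GNU compilers with Intel MPI on top
    module purge
    module load gcc
    module load intel-mpi

    # Check which underlying compiler the MPI wrapper will invoke
    mpicc -show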

We also support the Portland Group compilers via the pgi32 and pgi64 modules, enabling OpenACC coding for Eagle's GPGPU complement.

Toolchains are enabled by loading the corresponding modules. See the section on Environment Modules for details.

Compilers

Intel C/C++ and Fortran

The Intel compiler suite offers industry-leading C, C++, and Fortran compilers, which include optimization features and multithreading capabilities.
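
For illustration, with the comp-intel module loaded, a C or Fortran source file (the my_app file names below are placeholders) can be compiled with optimization and OpenMP multithreading:

    # Intel C compiler with optimization and OpenMP threading
    icc -O2 -qopenmp -o my_app my_app.c

    # Intel Fortran compiler, same options
    ifort -O2 -qopenmp -o my_app my_app.f90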

GNU C/C++ and Fortran

The GNU compiler collection includes front ends for C, C++, Objective-C, Fortran, Java, Ada and Go, as well as libraries for these languages (libstdc++, libgcj, etc.).
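
The equivalent GNU invocations, with the gcc module loaded (again, the file names are placeholders):

    # GNU C compiler with optimization and OpenMP threading
    gcc -O2 -fopenmp -o my_app my_app.c

    # GNU Fortran compiler, same options
    gfortran -O2 -fopenmp -o my_app my_app.f90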

Portland Group C/C++ and Fortran

The Portland Group (PGI) compiler suite includes NVIDIA GPU support via CUDA and the directive-based OpenACC programming model, as well as full support for NVIDIA CUDA C extensions.
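
As a sketch, with one of the pgi modules loaded, an OpenACC-annotated source file (acc_app here is a placeholder) can be compiled to target NVIDIA GPUs:

    # PGI C compiler: enable OpenACC, target NVIDIA GPUs, report accelerator decisions
    pgcc -acc -ta=tesla -Minfo=accel -o acc_app acc_app.c

    # PGI Fortran compiler, same options
    pgfortran -acc -ta=tesla -Minfo=accel -o acc_app acc_app.f90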

Message-Passing Interface Libraries

Intel MPI

Intel's MPI library enables tight interoperability with its processors and software development framework, and is a solid choice for most HPC applications.
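
A minimal sketch of building and launching a code with the Intel toolchain, assuming the comp-intel and intel-mpi modules are loaded and mpi_app.c is a placeholder source file:

    # Compile with the Intel MPI wrapper around the Intel C compiler
    mpiicc -O2 -o mpi_app mpi_app.c

    # Launch under Slurm from within a job allocation, e.g. 72 ranks across 2 nodes
    srun -N 2 -n 72 ./mpi_app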

Open MPI

The Open MPI framework is a free and open-source communications library that many developers build and test their applications against. As an open-source package with strong academic support, it often implements the latest ideas before they appear in commercial MPI libraries.

Note that the Slurm-integrated builds of Open MPI do not provide the mpirun or mpiexec wrapper scripts that you may be used to. Ideally you should use srun (to take advantage of Slurm integration), but you can also use Open MPI's native job launcher, orterun. Some users have also had success simply symlinking mpirun to orterun.
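
A short sketch of the launch options just described (./mpi_app is a placeholder executable):

    # Preferred: let Slurm launch the ranks directly
    srun ./mpi_app

    # Alternative: Open MPI's native launcher
    orterun -np 72 ./mpi_app

    # Optional convenience: link mpirun to orterun somewhere on your PATH
    ln -s "$(which orterun)" ~/bin/mpirun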

Open MPI implements two Byte Transfer Layers for data transport between ranks in the same physical memory space: sm and vader. Both use a memory-mapped file, which is placed in /tmp by default. On Eagle, the node-local /tmp filesystem is quite small, and it is easy to fill it and crash or hang your job. A non-default location for this file may be set through the OMPI_TMPDIR environment variable.

  • If you are running only a few ranks per node with modest buffer space requirements, consider setting OMPI_TMPDIR to /dev/shm in your job script.
  • If you are running many ranks per node, you should set OMPI_TMPDIR to /tmp/scratch, which holds at least 1 TB depending on Eagle node type (see the job-script sketch below).
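
A minimal job-script sketch showing both choices; the account name and executable are placeholders, and only one OMPI_TMPDIR setting should be active at a time.

    #!/bin/bash
    #SBATCH --nodes=2
    #SBATCH --time=01:00:00
    #SBATCH --account=<your_allocation>

    module load gcc openmpi   # illustrative toolchain modules

    # Few ranks per node with modest buffers: use the RAM-backed /dev/shm
    export OMPI_TMPDIR=/dev/shm

    # Many ranks per node: use the larger node-local /tmp/scratch instead
    # export OMPI_TMPDIR=/tmp/scratch

    srun ./mpi_app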

Hewlett-Packard Enterprise

Hewlett-Packard Enterprise (HPE), the company that built Eagle, also offers a highly performant MPI library. It is built on HPE's Message Passing Toolkit high-performance communications component and is colloquially known as "MPT."

GPUs

We are still working to get our MPI libraries to interoperate fully with the GPU nodes. At the moment, we have OpenMPI 3.1.3 working. To learn more, see running MPI jobs on Eagle GPUs.

