Mass Storage System - Gyrfalcon

The Mass Storage System, known as Gyrfalcon, is the primary system for long-term storage of user data in NREL's high-performance computing (HPC) data center.

Gyrfalcon is designed to keep the most frequently used data quickly accessible and to store less frequently accessed data economically. It does this by keeping the freshest working data on high-performance disk and automatically migrating older data to lower-cost tape storage, using the archiving software and system configuration described below.

This mass storage system is available to all Peregrine users and projects. Users without a Peregrine account may also request access.

Policies

  • Both the amount of data and the number of files are managed with quotas, which are enforced based on the group ownership of the files (see the example after this list).
  • Users with accounts on any NREL HPC system will be provided with accounts on this system for personal storage. Every Peregrine user automatically receives a default quota of 1 terabyte and 1 million files when their account is created. To request additional space, contact us.
  • Projects that receive an allocation on the Peregrine system will be provided with space on this system, with the project quota determined as part of the allocation process. Each allocation receives a default quota of 2 million files when it is created. To request additional space or file quota, contact us.
  • Gyrfalcon is not backed up. Two copies of all data are stored on different media; however, if a user deletes their files, there is no backup from which to restore them.
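
Because quotas are tracked by file group, it can be worth confirming which group your files belong to before copying them to /mss, so that the usage is charged to the intended quota. A minimal sketch, using the same <MSS directory> placeholder as the copy examples later on this page; the project group name and the mydata subdirectory are hypothetical:

$ ls -l /mss/users/$USER                                   # the fourth column shows the group each file is charged against
$ chgrp -R <project group> /mss/<MSS directory>/mydata     # re-assign files so they count against the project quota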

System Configuration

The Gyrfalcon system includes a 1 petabyte disk storage system as well as an Oracle StorageTek robotic tape library with seven high-performance T10000C tape drives and seven T10000D tape drives. It uses Oracle's QFS file system and SAM archiving software, which allow users to simply copy data to and from the file system without knowing or worrying about which tier the data is stored on. It has a capacity of over 3 petabytes of user data, with an architecture that allows the capacity to be expanded easily at relatively low cost.

Frequently Asked Questions

To request an allocation of space on the Mass Storage System:

  1. Navigate to Allocations.
  2. Select the appropriate allocation request template document for your request. When making an allocation request for Mass Storage only, leave the "Peregrine Node Hours" and "Projected Storage Space on Peregrine in TeraBytes" sections blank and fill out only the "Long-Term Data Storage on Mass Storage System in TeraBytes" section.
  3. Attach the completed document to an email message to hpc-proposals.

The disk system capacity is 756 TB and the tape capacity of this system is 5.2 PB. By policy, two copies of each file will be stored on different media, with copies being handled automatically by the archive software. This provides over 2.5 PB of long-term data storage.

The mass storage file system is mounted on the Peregrine login nodes at /mss. Each user has a personal directory at /mss/users/$USER.
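
For example, from any Peregrine login node you can change into your personal mass storage directory and list its contents directly:

$ cd /mss/users/$USER
$ ls -lh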

Because /mss is a regular file system, ordinary Linux commands such as cp, mv, and rsync may be used to transfer data from the /home or /scratch file systems to /mss.


At the command line of one of Peregrine's login nodes, enter one of the following commands to copy files to mass storage. Please note that these commands may take several minutes to several hours to complete.

Option 1: The first command will create a list of files in a directory. The second command (tar) will use that list to gather all the files in the directory into a single file which resides in the MSS directory indicated. The files in the original directory are left unchanged.

$ lfs find directory > directory.txt
$ tar -czf /mss/<MSS directory>/directory.tgz -T directory.txt
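
After the archive has been written, you may want to confirm its contents before removing anything from your working area. A small check, using the same archive name as above:

$ tar -tzf /mss/<MSS directory>/directory.tgz | head    # list the first few files recorded in the archive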

Option 2: The first command (tar) will gather all files found in a directory into a single file. The second command will copy the resulting tar file onto the mass storage system. Alternatively, replace "cp" with "mv" to move the resulting tar file from the original directory to the MSS directory. As with option 1, the files in the original directory are left unchanged.

$ tar -czf directory.tgz directory
$ cp directory.tgz /mss/<MSS directory>
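
As an optional sanity check (not required by the system), you can compare checksums of the local and MSS copies to confirm the transfer completed intact:

$ md5sum directory.tgz /mss/<MSS directory>/directory.tgz    # the two checksums should be identical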

Option 3: The rsync command compares the source directory to the destination and copies whatever is needed to make the destination match the source.

$ rsync -av directory /mss/<MSS directory>
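
Note that rsync treats a trailing slash on the source specially: "directory" copies the directory itself into the destination, while "directory/" copies only its contents. Adding -n performs a dry run, which is a convenient way to preview what would be transferred:

$ rsync -avn directory /mss/<MSS directory>    # dry run: list what would be copied without transferring anything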

Option 4: The simple Linux cp command can be used to copy a file from one directory to another directory. This command is best used for small numbers of files.

$ cp filename /mss/<MSS directory>
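
Retrieval works the same way in the other direction; the archive software stages a file back from tape automatically if it is no longer on disk, so an ordinary copy is all that is needed:

$ cp /mss/<MSS directory>/filename .    # copy a file from mass storage back to the current directory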

Users may also access their /mss directories by logging in to the server mss1.hpc.nrel.gov:

$ ssh mss1.hpc.nrel.gov
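
In principle, standard SSH-based tools can also target this server directly, which may be convenient for pushing data without an interactive login. A hypothetical example (whether such transfers are permitted, and which tools are available on the server, depends on site policy; check before relying on it):

$ scp directory.tgz $USER@mss1.hpc.nrel.gov:/mss/<MSS directory>/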

You may review your mass storage system quota on a per-user or per-project basis. On any of the Peregrine login nodes, use one of the following commands to view your quota.

Option 1: Per User

$ /usr/local/bin/usrquota.check

Option 2: Per Project

$ /usr/local/bin/grpquota.check

Example output:

username
Block size is 512 bytes
                                 Online Limits                          Total Limits
        Type     ID        In Use        Soft        Hard        In Use        Soft        Hard
Files  group 120004         75445      100000      100000         75445      100000      100000
Blocks group 120004        400560  4294967296  4294967296     556051791  4294967296  4294967296

The output shows the number of files (inodes) in use and the amount of storage in use (as a count of 512-byte blocks), along with your current soft and hard quota limits.
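
Since the block size is 512 bytes, multiplying the block counts by 512 converts the figures to bytes. Using the example values above, a quick arithmetic check from the shell:

$ echo $(( 400560 * 512 / 1024 / 1024 ))              # online usage in MiB (about 195 MiB)
$ echo $(( 556051791 * 512 / 1024 / 1024 / 1024 ))    # total usage in GiB (about 265 GiB)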