Storage

All compute nodes have a local SSD disk that can be used for local storage during computations. In addition, users have access to our network storage, a Lenovo GSS26 system running GPFS.

Except for the local node storage, all file systems mentioned below are accessible from all user-accessible nodes.

Local node storage – $LOCALSCRATCH

  • Can be used to store files and perform local I/O for the duration of a batch job.
  • 330 GB (150 GB on GPU nodes) is available on each compute node on a local disk.
  • It is sometimes more efficient to use and store files directly in $SCRATCH (this avoids having to move files from $LOCALSCRATCH at the end of a batch job).
  • Files stored in the $LOCALSCRATCH directory on each node are removed immediately after the job terminates, so copy any results you need back to $WORK or $SCRATCH before the job ends (see the sketch after this list).
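
As a minimal sketch (assuming a shell-based batch script; the paths, file names and program name are placeholders, not site-specific conventions), staging data through $LOCALSCRATCH typically looks like this:

    # Copy input data from the project storage to the node-local disk.
    cp "$WORK/my_project/my_input.dat" "$LOCALSCRATCH/"

    # Run with the working directory on the fast local disk.
    cd "$LOCALSCRATCH"
    "$WORK/my_project/my_program" my_input.dat > my_output.dat

    # Copy results back before the job ends; $LOCALSCRATCH is wiped
    # as soon as the job terminates.
    cp "$LOCALSCRATCH/my_output.dat" "$WORK/my_project/"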

Home directory – $HOME

  • This is your home directory. Store your own files, source code and build your executables here.
  • At most 10 GB (150 k files) per user, plus 10 GB for snapshots going up to ten days back.
  • Note: if you change and delete many files in $HOME during those 10 days, the snapshot space may not suffice, and snapshots will then only be available for fewer days back.
  • Snapshots are taken daily.

Work directory – $WORK

  • Store large files here. Change to this directory in your batch scripts and run jobs in this file system (see the sketch after this list).
  • Space and the number of files in this file system are set up for each project based on the project’s requirements. The quota is shared among all users within a project.
  • The file system is not backed up.
  • Purge policy: the $WORK file system is purged 3 months after the end of the project, i.e. all data stored on this file system for the specific project is automatically deleted with no possibility of recovery.
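
A corresponding sketch for $WORK (again with a placeholder project subdirectory and program name; adjust to your project's actual layout) simply runs the job with its working directory in the project's work space:

    # Run a batch job directly in the project's $WORK directory.
    cd "$WORK/my_project"
    ./my_program input.dat > output.dat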

Scratch directory – $SCRATCH

  • Temporary storage of larger files needed at intermediate steps in a longer run. Change to this directory in your batch scripts and run jobs in this file system (see the sketch after this list).
  • This file system is not backed up.
  • HPC staff may delete files from $SCRATCH if the file system becomes full, even if the files are less than 10 days old. A full file system inhibits its use for everyone. The use of programs or scripts to actively circumvent the file purge policy will not be tolerated.
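
The sketch below (placeholder directory and program names; the mktemp-based layout is only a suggested pattern, not a site requirement) keeps intermediate files in a private subdirectory of $SCRATCH and removes them once the final results have been copied to $WORK, which helps keep the file system from filling up:

    # Create a private working directory for this run's intermediate files.
    RUNDIR=$(mktemp -d "$SCRATCH/myrun.XXXXXX")
    cd "$RUNDIR"

    # Produce and post-process intermediate data (placeholder commands).
    "$WORK/my_project/my_program" > intermediate.dat
    "$WORK/my_project/post_process" intermediate.dat > final_results.dat

    # Keep only the final results and clean up after the run.
    cp final_results.dat "$WORK/my_project/"
    rm -rf "$RUNDIR"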