File systems

This document gives an overview of background storage systems available on the LRZ Linux Cluster. Usage, special tools and policies are discussed.

Available disk resources and file system layout

The following table gives an overview of the available file system resources on the Linux Clusters.

Recommendation: LRZ has defined an environment variable $SCRATCH which should be used as a base path for reading/writing large scratch files. Since the target of $SCRATCH may change over time, it is recommended to use this variable instead of hard-coded paths.
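
For example (the subdirectory and program names below are only illustrative), a batch job can keep its large temporary output under $SCRATCH instead of using a hard-coded path:

# create a job-specific working directory below $SCRATCH
mkdir -p $SCRATCH/mysimulation
# write large temporary output there (my_solver is a placeholder program)
./my_solver --output $SCRATCH/mysimulation/output.dat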

For each file system, the overview below gives the space available, the approximate aggregated bandwidth, whether LRZ performs a backup, and the lifetime and deletion strategy.

Globally accessible Home and Project Directories

User's Home Directories
    Space available:         100 GByte by default per project
    Aggregated bandwidth:    up to a few 100 MB/s
    Backup by LRZ:           yes, backup to tape and snapshots (see below)
    Lifetime and deletion:   until expiration of the LRZ project; NFS quotas apply

Project file system
    Space available:         up to 5 TByte per project, available on request
    Aggregated bandwidth:    up to 1 GB/s
    Backup by LRZ:           no
    Lifetime and deletion:   no guarantee for data integrity; disk quota applies

Temporary/scratch File Systems

Scratch file system (all cluster segments except CooLMUC2)
    Access:                  path contains <vol>, a one or two digit number uniquely determined from the user ID
    Space available:         several TByte
    Aggregated bandwidth:    up to 1 GB/s
    Backup by LRZ:           no
    Lifetime and deletion:   sliding window file deletion; no guarantee for data integrity

Scratch file system (CooLMUC2)
    Space available:         1,400 TByte
    Aggregated bandwidth:    up to 30 GB/s
    Backup by LRZ:           no
    Lifetime and deletion:   sliding window file deletion; no guarantee for data integrity

Note: $SCRATCH_LEGACY points to the SCRATCH file system of the older cluster segments and is available in read-only mode. This allows still-needed data to be copied over; the copying must be done on the CooLMUC2 login nodes lxlogin5 or lxlogin6.
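
For example, on lxlogin5 or lxlogin6 (the directory name is a placeholder), still-needed data can be copied from the legacy area to the current scratch area:

# copy a directory from the read-only legacy scratch to the current scratch;
# plain cp -r sets new modification times, so the copies do not immediately
# become candidates for sliding window deletion
cp -r $SCRATCH_LEGACY/old_results $SCRATCH/old_results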

Local File Systems, not recommended to be used

Node-local temporary user data
    Access:                  local disks (if available), /tmp and /scratch
    Space available:         8-200 GByte
    Aggregated bandwidth:    30 MB/s
    Lifetime and deletion:   compute nodes: job duration only; files should be deleted by the user's job script at the end of the job.
                             Login nodes: files are removed if older than 4 weeks
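
As noted above, files on the node-local disks of compute nodes should be removed by the job script itself. A minimal sketch, assuming a SLURM batch job (the directory name is an example only):

# use a job-specific directory under /tmp and clean it up at the end of the job
LOCALTMP=/tmp/$USER.$SLURM_JOB_ID
mkdir -p $LOCALTMP
# ... run the application, using $LOCALTMP for node-local temporary files ...
rm -rf $LOCALTMP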


Backup and Archiving

User's responsibility for saving important data

With (parallel) file systems of several hundred Terabytes ($WORK, $SCRATCH), it is technically impossible (or too expensive) to back up these data automatically. Although the disks are protected by RAID mechanisms, other severe incidents might still destroy the data. In most cases, however, it is the user who accidentally deletes or overwrites files. It is therefore the user's responsibility to transfer data to safer locations (e.g. $HOME) and to archive them to tape. Due to the long off-line times for dumping and restoring data, LRZ may not be able to recover data after an outage or inconsistency of the scratch or work file systems. The alias name $WORK and the intended storage period until the end of your project must not be misconstrued as an indication of data safety!
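
For example (the directory names are placeholders), important results can be copied from the scratch area into the backed-up $HOME before they are lost:

# copy a results directory from scratch to the home directory, which is backed up
rsync -av $SCRATCH/myproject/results/ $HOME/myproject/results/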

Please consult the HPC Backup and Archiving document for how to handle backups or archives and how to use the TSM tape system.


For all files in $HOME, backup copies are kept and made available in the special (read-only) subdirectory $HOME/.snapshot, or in any subdirectory as .snapshot. Please note that the .snapshot directories are not shown by a plain ls command. A file can be restored by simply copying it from the appropriate snapshot directory back to its original (or any other) location. Example:

cd .snapshot
ls -l
cp daily.YYYY-MM-DD_hhmm/missingfile ..



Details on the usage and on the configuration of the storage areas

Project directories

If your project requires processing large data sets (50+ GB) over a timeframe of several months, the LRZ file deletion strategy in the pseudo-temporary file systems may become a problem. In this case you might be interested in using the file system pointed to by the $WORK environment variable. Please note that we cannot guarantee data integrity over the full lifetime of your project, so you need to take the precaution of archiving all important data to tape after placing them in the project directory. Finally, a group quota is imposed on this area. If you need resources in this area, please contact LRZ HPC support. Note that $WORK is not available by default.

Metadata on scratch and project directories

While the metadata performance of both the scratch and project directories (i.e., the performance for creating, accessing and deleting directories and files) is improved compared to previously used technologies, the capacity for metadata (e.g., the number of file entries in a directory) is limited. Therefore, please do not generate extremely large numbers of very small files in these areas; instead, try to aggregate them into larger files and write data into these, e.g. via direct access (see the example below). Violation of this rule will lead to LRZ blocking your access to the $SCRATCH or $WORK area, since otherwise user operation on the cluster may be obstructed. Please also note that there is a per-directory limit of 10 MBytes available for storing i-node metadata (directory entries and file names); this limits the number of files that can be placed in a single directory.
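
For example (the paths are placeholders), a large number of small result files can be aggregated into a single archive before being stored in $WORK or $SCRATCH:

# pack many small files into one compressed archive instead of storing them individually
tar -czf $WORK/results_run42.tar.gz -C /path/to/many/small/files .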

File deletion strategies and data integrity issues

To prevent overflow of the large scale storage areas, LRZ has implemented various deletion strategies. Please note that

  • for a given file or directory, the exact time of deletion is unpredictable!
  • the normal tar -x command preserves the modification time of the original file instead of setting it to the time at which the archive is unpacked. Unpacked files may therefore be among the first candidates for deletion. Use tar -mx if required, or run touch on a file, or
    find mydir -exec touch {} \;
    on a directory tree mydir.

Due to the deletion strategies described below, but also because LRZ cannot guarantee the same level of data integrity for the high performance file systems as for e.g. $HOME, LRZ urges you to copy, transfer or archive your files from the temporary disks as well as from the $PROJECT areas to safe storage/tape areas!

  • High Watermark Deletion: When the filling of the file system exceeds some limit (typically between 80% and 90%), files will be deleted starting with the oldest and largest files until a filling of between 60% and 75% is reached. The precise values may vary.
  • Sliding window file deletion: Any files and directories older than typically 30 days (the interval may be shortened if the fill-up rate becomes very high) are removed from the disk area. This deletion mechanism is invoked once a day.

NAS-based file systems ($HOME, $SCRATCH, $WORK): quotas

The file systems reside on dedicated Network Attached Storage systems ("filers") and are accessed via NFS. Filers offer high I/O performance, also with smaller files, and excellent reliability. These file systems can be accessed uniformly from any node in the cluster.

You can check the quota which applies to the sum of all HOME directories of your UNIX group by using the df command:

df $HOME/..

which will give you an output like

Filesystem                                  Size   Avail (MiB)
nashpc-lx.nas:/home/cluster/<gid>/<uid>/..  25800  11926

The first number in each line is the total quota (in MiB, i.e. 2^20 bytes), and the second number is the space still available.


  • Quotas are assigned to projects and not to individual accounts. In case of a quota overflow, please first check your own usage with du (see the example below) and contact your colleagues if your own usage is not responsible for filling up the quota.
  • Some applications or installation programs try to query the free disk space simply with the "df" or "quota" command. This will not work with the NAS-based file systems; such applications need to be modified.
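
For example, your own usage below $HOME can be checked with:

# summarize your own usage; the second command gives a per-subdirectory breakdown
du -sh $HOME
du -sh $HOME/*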


Large scale transfer of data to the outside world / to SuperMUC

The preferred method of transferring data to other compute systems outside LRZ is grid-ftp. Please consult the LRZ-specific document on using the grid facilities. For data transfers between the cluster and SuperMUC there is a separate setup.
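
A minimal sketch of such a transfer with globus-url-copy (the host name and paths are placeholders, not actual LRZ endpoints):

# transfer a local file to a remote GridFTP server using 4 parallel streams (-p 4)
# and verbose performance output (-vb)
globus-url-copy -vb -p 4 file:///absolute/path/to/localfile gsiftp://gridftp.example.org/path/to/destination/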
