How to access the LRZ Linux Clusters
Table of contents
- Validation for Access
- Support via Service Desk
- Login and Security
- LRZ-specific configuration and policies on the clusters
Validation for Access
In principle, scientists and students from Munich Universities as well as Bavarian Universities can obtain access to the LRZ Cluster Systems. The following steps need to be performed before attempting the first login to the systems:
- A valid LRZ account is required; it can be obtained, for example, by contacting the responsible master user at your institution. Further details on LRZ accounts are described on the LRZ web server.
- Your master user needs to fill in a project proposal, providing details about your intentions and the resources you need. They may ask you for text describing these details.
If your project is approved, you will receive access information after at most a few working days.
Support via Service Desk
Questions concerning the usage of the Linux Cluster should always be directed to the LRZ Service Desk. A member of the LRZ HPC support team will then attend to your needs.
Login and Security
Only the login nodes can be accessed interactively from the outside world. Two mechanisms are provided for logging in to the system; both incorporate security features to prevent appropriation of sensitive information by a third party.
Access via Secure Shell
Details on how to configure ssh for usage with the LRZ clusters are available in a separate document.
From the UNIX command line on your own workstation, the login to an LRZ account xyyyyzz is performed via one of the commands given in the following table.
|ssh -Y <login-node>.lrz.de||Opteron (MPP cluster) login nodes|
|ssh -Y <login-node>.lrz.de||Haswell (CooLMUC2) login nodes|
|gsissh -Y lxgt2.lrz.de||login node for GSI-SSH|
The login nodes are meant for preparing your jobs, developing your programs, and as a gateway for copying data from your own computer to the cluster and back again. Since this resource is shared among many users, LRZ requires that you do not start any long-running or memory-hogging programs on these nodes; production runs should use batch jobs that are submitted to the SLURM scheduler. Our SLURM configuration also supports semi-interactive testing. Violation of the usage restrictions on the login nodes may lead to your account being blocked from further access to the cluster, apart from your processes being forcibly removed by LRZ administrative staff!
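For production runs, a minimal SLURM job script might look as follows; the job name, output file, and resource values are illustrative assumptions, not LRZ defaults:

```shell
#!/bin/bash
#SBATCH -J myjob              # job name (assumption)
#SBATCH -o myjob.%j.out       # stdout file; %j expands to the job ID
#SBATCH --nodes=1             # resources: adjust to your needs
#SBATCH --time=00:30:00       # wall-clock limit

# Launch the program under SLURM's process manager:
srun ./my_program
```

Such a script would be submitted with `sbatch myjob.sh`; semi-interactive testing is typically done via `salloc` or a direct `srun` invocation.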
The -Y option of ssh enables tunneling of the X11 protocol; it may be omitted if no X11 clients are required, or if you have already configured X11 tunnelling in your ssh client by other means.
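In the simplest case, a login could therefore look like this; `<login-node>` stands for one of the login node hostnames:

```shell
# Log in with X11 tunneling enabled (-Y); xyyyyzz is your LRZ account:
ssh -Y xyyyyzz@<login-node>.lrz.de

# If no X11 clients are needed, the option can simply be dropped:
ssh xyyyyzz@<login-node>.lrz.de
```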
The HOME directory on the Linux Cluster is an NFS mounted volume, which is uniformly mounted on all cluster nodes.
The login nodes have various architectures. In particular, a program built on an Itanium system will not run on an Opteron/EM64T system and vice versa.
Secure Shell Public Keys
The Secure Shell RSA public keys for the interactive nodes are provided at the following link (please add them to ~/.ssh/known_hosts on your own workstation before logging in for the first time):
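Assuming you have saved the published keys from that page to a local file, they can be added and checked like this (the file name and hostname are placeholders):

```shell
# Append the host keys saved from the LRZ page to your known_hosts file:
cat lrz_hostkeys.pub >> ~/.ssh/known_hosts

# Check which key is now stored for a given login node:
ssh-keygen -F <login-node>.lrz.de
```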
Access via GSI-SSH
An alternative way of accessing the cluster is to use GSI-SSH, which is a component of the Globus toolkit and provides
- terminal access to your account
- a single sign-on environment (no password required to access other machines)
- easy access to a number of additional functionalities, including secure and parallel file transfer
The prerequisites for using it are
- a Grid certificate installed on your machine and acknowledged by LRZ, as described on the LRZ Grid Portal. Please note that TUM, LMU, and LRZ members can alternatively use the short-lived credential service (SLCS) of the DFN, which allows you to obtain a certificate for Grid usage immediately
- an installation of a GSI-SSH client on your own workstation, either the command-line tool gsissh or the multi-platform Java tool Gsissh-Term, as described on the LRZ Grid Portal.
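With these prerequisites in place, a typical GSI-SSH session starts by creating a proxy credential from your Grid certificate; both commands below are standard Globus toolkit tools:

```shell
# Create a short-lived proxy credential from your Grid certificate:
grid-proxy-init

# Log in without a password; -Y enables X11 tunneling as with ssh:
gsissh -Y lxgt2.lrz.de
```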
Environment settings are controlled via the LRZ module system. Such settings are needed to access specific application program packages, or to properly establish a development environment.
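The most important module commands follow the standard Environment Modules interface; the package name below is a placeholder:

```shell
module avail             # list all software environments available on the cluster
module load <package>    # add a package's settings to your environment
module list              # show the currently loaded modules
module unload <package>  # remove the package's settings again
```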
LRZ-specific configuration and policies on the clusters
Changing of Password and Shell
Please always use the web interface on the LRZ server to change your login password or your login shell for the cluster systems. Cluster-local commands cannot be used for this purpose.
Passwords must be changed at least once every 6 months. We are aware that this measure imposes some overhead on users, but believe it is necessary for security reasons; it was implemented based on guidelines of the BSI (the German federal agency for information security) and the IT security standard ISO/IEC 27001. You can determine the actual expiry date of your password by logging in to the above-linked web interface and selecting the menu item "Person -> view" or "Account -> view authorizations". To prevent being surprised by a password becoming invalid, you will be notified of the need to change your password via e-mail. Even if you miss the deadline for the password update, this only implies a temporary suspension of your account - you will still be able to log in to the ID portal and change the password.
- Complete German text of the authentication regulations (PDF)
- Complete English text of the authentication regulations (PDF)
Changing the password is also necessary after it has been newly issued, or reset to a starting value by a master user or LRZ staff. This assures that actual authentication is done with a password known only to the account owner.
Using the cron or at commands
This is not allowed on the LRZ cluster. Please submit SLURM batch jobs for performing computations.
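If a job really needs to recur periodically, one common SLURM pattern is a batch script that re-queues itself; this is only a sketch, and `./my_task` is a placeholder for your actual workload:

```shell
#!/bin/bash
#SBATCH -J periodic
#SBATCH --time=00:10:00

./my_task    # placeholder: the periodic work

# Re-queue this script to run again in 24 hours; --begin is a
# standard sbatch option and emulates a cron-like schedule:
sbatch --begin=now+24hours "$0"
```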
Moving data from/to the cluster
FTP access to the cluster from outside (and also within the clusters) is disabled for security reasons. Please use
- scp (Secure Copy) or
- grid-ftp
to move data between platforms.
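Typical transfer commands could look like this; the hostnames are placeholders for the actual login and GridFTP nodes:

```shell
# Copy a file from your workstation to your cluster HOME directory:
scp results.dat xyyyyzz@<login-node>.lrz.de:~/

# Copy a whole directory back from the cluster:
scp -r xyyyyzz@<login-node>.lrz.de:~/rundir .

# GridFTP transfer via the Globus command-line client:
globus-url-copy file:///tmp/results.dat gsiftp://<gridftp-node>.lrz.de/~/results.dat
```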
User accounts are personalized
User accounts are always assigned to a particular person. For a number of reasons, sharing of user accounts between different persons is not permitted; if noticed, it will lead to the account being deactivated by LRZ. All involved parties (including the Master User of the account's project) will be notified with information on the measures needed to rectify the situation.
The cluster is protected from certain types of external attacks by a firewall, the configuration of which may impact the functionality of certain applications as described in the following.
Direct X11 connections (via xauth) are prohibited; only ssh tunneling is supported.
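Instead of passing -Y on every login, X11 tunneling can be enabled permanently in the ssh client configuration; the host pattern below is an assumption and should match the actual login nodes:

```
# ~/.ssh/config
Host *.lrz.de
    ForwardX11 yes
    ForwardX11Trusted yes
```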
None of the batch nodes in the cluster are routed to the outside world by default. Please contact LRZ HPC support if you require routing between the batch nodes and a particular external system.
We recommend against using the Linux Cluster for mail purposes (apart from occasionally having the batch scheduler send mails to you). Please consult the LRZ documentation on eMail for how to properly use this facility.
Protected Documentation for Compilers, Libraries and Tools
Part of the software documentation (especially commercial development software) is password protected. Please log in to the Linux Cluster and type
to obtain the user name and password required for validation.
Documentation for Application Software and Packages
Please start from the Application Software entries on the LRZ web server.
General Linux System Documentation
As is typical for Linux systems, the system documentation comes in (at least) two formats: man pages and info pages.
Access to both is seamlessly integrated into the KDE help system. Calling the browser via khelpcenter opens a window in which all required entries can be found under "KDE Help Contents".
Advantage of khelpcenter: Hyperlinks to cited man and info pages are available. Searching only works within the opened page, though.
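Both formats can of course also be read directly from the shell; for example:

```shell
man ssh          # the classic manual page for the ssh command
man -k transfer  # keyword search across all installed man pages
info coreutils   # browse the hyperlinked info documentation
```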