Access to SuperMUC

Getting Access

This page describes access to SuperMUC for current projects on SuperMUC Phase 2.

The following documentation applies to SuperMUC only!
For SuperMUC-NG see:

Login to SuperMUC

Before you can log in to SuperMUC you have to set your own password. Log in to the ID-Portal ( using your account and the start password that we have delivered to your project manager.

SuperMUC/SuperMIC uses front-end (login) nodes for interactive access and the submission of batch jobs. All front-end nodes provide an identical environment, but multiple sessions of one user may reside on different nodes, which must be taken into account, e.g., when killing processes.

Two mechanisms are provided for logging in to the system; both incorporate security features to prevent appropriation of sensitive information by a third party.

Login with Secure Shell

Access via SSH (Secure Shell) is described in detail in the LRZ Document about SSH. In particular, the setup required to use private/public keys for access is described there. From the UNIX command line on one's own workstation the login to an LRZ account xxyyyzz is performed via:

| System part | Login | Architecture | Number of login nodes |
|---|---|---|---|
| SuperMUC phase 2, Haswell nodes | ssh -Y -l xxyyyzz | Intel Haswell EP | 3 |
| Nodes on SuperMUC phase 2 with connection to the archive system | ssh -Y -l xxyyyzz | Intel Haswell EP | 2 |
| Nodes with connection to the dedicated PRACE network; access is restricted to a limited number of machines, such as the login nodes of JUQUEEN (see: list) | ssh -Y -l xxyyyzz (one command per node) | Intel Sandy Bridge EP / Intel Haswell EP | |

  • The IP address of your front-end machine must be associated with a valid DNS entry and must be known to us; otherwise your SSH request will not be routed. Additional entries or changes can be submitted via a modification request using the Online Proposal Form.
  • Submission of jobs to the phase 1 system requires logging in to a phase 1 login node; submission of jobs to the phase 2 system requires logging in to a phase 2 login node.
  • LRZ Security Policies demand that the private SSH keys used to access the system from the outside world are locked and guarded with a non-empty passphrase; it is therefore not allowed to use an empty passphrase during private key generation. We consider an empty passphrase a violation of our security policies. Users disregarding this policy will be barred from further usage of LRZ systems.
  • The SuperMUC firewall permits only incoming SSH connections, i.e., ssh from SuperMUC to the outside world is disabled.
  • The legacy names defined for the phase 1 system (,, will continue to work.
  • The PRACE network uses the domain; otherwise the DNS names are the same.
  • The LRZ domain name is mandatory if you want to access the system from outside the Munich Scientific Network.
  • The -Y option is required for tunneling of the X11 (windowing) protocol; it may be omitted if no X11 clients are required.
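The non-empty-passphrase policy above can be satisfied at key-generation time. A minimal sketch; the key file name and passphrase below are placeholders for illustration, not LRZ-mandated values:

```shell
# Sketch: generate an RSA key pair protected by a non-empty passphrase,
# as required by the LRZ security policy. The file name and passphrase
# are placeholders; normally the key would live in ~/.ssh/.
ssh-keygen -t rsa -b 4096 -N 'choose-a-strong-passphrase' -f ./supermuc_key -q

# The public half (supermuc_key.pub) is what gets registered;
# the private half stays on your workstation.
ls -l supermuc_key supermuc_key.pub
```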

The Secure Shell RSA public keys are given at the following link. Please add them to ~/.ssh/known_hosts on your own local machine before logging in for the first time.
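Adding a published host key amounts to appending the key line, copied verbatim from the linked page, to your known_hosts file. A sketch; the hostname and key string below are placeholders:

```shell
# Sketch: append the published RSA host key to ~/.ssh/known_hosts.
# The hostname and key material below are placeholders; copy the real
# line from the LRZ page linked above.
mkdir -p ~/.ssh
chmod 700 ~/.ssh
echo "supermuc.example.lrz.de ssh-rsa AAAAB3...placeholder..." >> ~/.ssh/known_hosts
```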

Login via Grid Services using GSI-SSH

An alternative way of accessing the SuperMUC is to use GSI-SSH, which is a component of the Globus toolkit and provides

  • terminal access to your account
  • a single sign-on environment (no repeated password entry required to access SuperMUC and other machines)

The Globus toolkit provides easy access to additional, handy functionality, including fast, secure file transfer with GridFTP. For more information, please read the LRZ Globus documentation, where you will also find details on how to use gsissh via GridMUC to connect to other parts of SuperMUC.

The prerequisites for using it are

  • an installation of a GSI-SSH client on your own workstation, either the command-line tool gsissh or the multi-platform Java tool Gsissh-Term, as described on the LRZ Grid Portal.
  • a Grid certificate installed on your machine. There are three easy ways to obtain such a certificate:
    1. A long-lived certificate from DFN, as described on the LRZ Grid Portal.
    2. A short-lived credential (SLCS) from DFN: you can immediately obtain such a certificate for Grid usage via a web form.
    3. Anyone with an LRZ account (in LRZ SIM) can use the new and easy myproxy-CA certificate as an alternative: it allows you to immediately use LRZ machines via Grid tools. All you have to do is use your LRZ username/password with to download a ready-to-use proxy certificate. However, these certificates are only accepted on LRZ's machines.
  • Your SuperMUC account must be connected to your Grid certificate. You can do this by asking your project manager to
    • go to
    • scroll to the bottom
    • click on
      "log in as Projectmanager/MasterUser (for adding/modifying information or prolongation of an existing proposal)"
    • click submit
    • on the next page log in
    • then add the DN in Section "1.3 Researchers" to your account by overwriting the line
      -- If you want to use certificates for login, enter the distinguished name of the researcher here --
    • mention what you did (added a DN for account XYZ) also in the comment box in Section "4.3 Other References and Comments"
    • click "Submit the data to LRZ" 
    • Your DN will be associated to your account within 1 or 2 business days.

Gsissh is offered on port 2222 on the SuperMUC login machines listed in the SSH section above. In addition, gsissh is also offered via a gateway machine,, on port 2222 (for Phase 1) and port 22222 (for Phase 2), which is reachable world-wide without prior registration of your IP address. Detailed documentation on how to use gsissh can be found on the LRZ Grid Portal. Please note that gsiscp, gsisftp and plain sftp are not supported via GridMUC; please use GridFTP for data transfer instead, or gsiscp directly to the login nodes (

In order to be allowed to access LRZ's computers you have to accept the LRZ AUP. As a Grid user you simply do this by surfing to

with your certificate loaded in your browser, then reading and accepting the AUP (export control regulations). You have to accept the AUP before you will be given access!

Reporting problems with Login

To report a problem, use the ssh or gsissh command with the "-vvv" option and include the verbose output when submitting an incident ticket.


Programming Environment

The LRZ module system is used for controlling the environment settings. You are strongly urged to read that document, because some additional configuration on your side may be needed to reliably perform environment setup within batch jobs.


Password Policies

Passwords must be changed at least once every 12 months. We are aware that this measure imposes some overhead on users, but believe that it is necessary for security reasons; it was implemented based on guidelines of the BSI (the German federal agency for information security) and the IT security standard ISO/IEC 27001. You can determine the actual invalidation date for your password by logging into the ID portal ( and selecting the menu item "Person -> view" or "Account -> view authorizations". To prevent being surprised by a password becoming invalid, you will be notified of the need to change your password via e-mail. Even if you miss the deadline for the password update, this only implies a temporary suspension of your account - you will still be able to log in to the ID portal and make the password change.

Changing the password is also necessary after it has been newly issued or reset to a starting value by a project manager or LRZ staff. This ensures that actual authentication is done with a password known only to the account owner.

Changing password or login shell, viewing user account data

The direct use of the passwd and chsh commands to change passwords and login shells, respectively, has been disabled.

Please use the ID-Portal instead:

  • Log in to the web interface using your account and password.
  • To toggle between English and German, use the little flags.
  • For changing your password, select  "Self Services/modify password". In the main window you are then prompted for your old password once and for the new password (needs to have between 6 and 20 characters) twice.
  • For changing your login shell select "Self Services/change login shell". For the platform "SuperMUC" select the new login shell from the drop-down menu. Please only use one of the following shells: bash, ksh, sh, csh, tcsh. Other shells will run into problems with the scheduler.
  • The ID portal also offers functionality to view your user account data.


Budget and Quotas

CPU time budget and file system quotas are displayed at login or at the start of a batch job.

You can also query them at any time with the following commands:

module load lrztools


Moving data from/to SuperMUC

FTP access to the high performance systems from outside is disabled for security reasons; therefore, you have to use scp, sftp, or GridFTP.

Important Notes:

  1. For good bandwidth we recommend using the Haswell login nodes (, for large-scale data transfers.
  2. Transfers of files with scp or sftp between SuperMUC and the outside world can only be initiated from the outside, i.e., you cannot push files from SuperMUC to the outside, but you can fetch files from SuperMUC from the outside.



scp  localfile
scp  localfile


SSH File Transfer Protocol (also Secure File Transfer Protocol, Secure FTP, or SFTP) is a network protocol that provides file access, file transfer, and file management functionality over any reliable data stream.


put localfile remotefile
get remotefile localfile
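The put/get commands above can also be collected in a batch file and run non-interactively via sftp's -b option. A sketch; the file names and the login-node hostname are placeholders:

```shell
# Sketch: a batch file for a non-interactive sftp session.
# "localfile" and "remotefile" are placeholder names.
cat > transfer.batch <<'EOF'
put localfile remotefile
get remotefile localfile
EOF

# Run from your LOCAL machine (the connection must be initiated from
# outside SuperMUC); <login-node> is a placeholder hostname:
#   sftp -b transfer.batch xxyyyzz@<login-node>
cat transfer.batch
```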


GridFTP is the fastest and most reliable way to transfer data in and out of SuperMUC. It is especially well suited to transferring large amounts of data in the Gigabyte to Terabyte range. We recommend using GridFTP together with Globus Online and a 1-click Globus Connect client on your side. You can find a worked-out example here.

Note that for data transfers between the Linux cluster and SuperMUC there exists a separate setup.

As with all Grid tools, you need a Grid certificate, which can easily be obtained in one of three ways:

  1. A long-lived certificate from DFN, as described on the LRZ Grid Portal.
  2. A short-lived credential (SLCS) from DFN: you can immediately obtain such a certificate for Grid usage via a web form.
  3. Anyone with an LRZ account (in LRZ SIM) can use the new and easy myproxy-CA certificate as an alternative: it allows you to immediately use LRZ machines via Grid tools. All you have to do is use your LRZ username/password with to download a ready-to-use proxy certificate. However, these certificates are only accepted on LRZ's machines.

For details see the GridFTP documentation.


  • Go with your web browser to
  • Log in (create a new, free Globus Online account, or use your EGI SSO account)
  • Install Globus Connect on your end (on your laptop or your local server); there is no longer any need to install the whole Globus Toolkit!
  • Select the pre-defined lrz#SuperMUC endpoint on the one side and your local Globus Connect endpoint on the other side
  • Activate the lrz#SuperMUC endpoint through the LRZ myproxy-CA giving your LRZ credentials (see how to get a private certificate)
  • Select the files you want to transfer
  • You can now log off. Globus Online does the transfer for you in the background.

Under best-case conditions you can expect up to 1,000 MB/s transfer speed with 10 parallel connections; a typical rate is around 600 MB/s. If the physical line to your destination experiences a high packet-drop rate (even 1 dropped packet every 3 minutes is a high rate!), your transfer speed will be severely degraded (down to below 1 MB/s!). You can improve the situation by enabling the SACK option on your network interface (SACK is enabled by default at LRZ). Ask your local network expert for help if you don't know how to set this.


Access to your subversion (SVN) server

The SuperMUC firewall permits only incoming SSH connections. You can use port forwarding to establish a connection between the subversion server and SuperMUC, i.e., you may use one of the following procedures.

You will be prompted for your SuperMUC password (or your ssh passphrase). If the port you selected (e.g. 10022) is already in use by someone else, you will see an error message printed before the motd; in this case choose a different port. You might need to delete the localhost entry from ~/.ssh/known_hosts if ssh complains about the host key. If you need to change ssh ports (see 1. above), you will probably also need to invoke "svn switch --relocate ..." on your SVN sandboxes, because the port number is encoded in the stored location.
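Whether the port you intend to forward is already taken on the login node can be checked before reconnecting. A sketch using the example port 10022; the use of ss here is an assumption about available tools, not an LRZ-documented procedure:

```shell
# Sketch: check whether example port 10022 is already bound on this host.
PORT=10022
if ss -ltn 2>/dev/null | grep -q ":${PORT} "; then
  echo "port ${PORT} in use - pick another" > port_check.txt
else
  echo "port ${PORT} appears free" > port_check.txt
fi
cat port_check.txt
```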

Using SVN with a https svn server

  1. To establish the port forwarding for the ssl/tls port, issue the following command on the workstation from which you normally SSH to SuperMUC:
    ssh -l <LoginName> -R <arbitraryPortNumber>:<svnServer>:443
    ssh -l hk00xyz -R
    (hint: another user may already be using the same port; in that case try another one)
  2. After successful login to SuperMUC you may then access your repository via:
    svn <svnCommand> https://<remoteLoginName>@localhost:<ForwardedPortNumber>/<svnDirectoryPath>
    svn list https://mySVNUser@localhost:10443/svnroot/pmviewer
    svn co   https://mySVNUser@localhost:10443/svnroot/pmviewer pmviewer

Using SVN+SSH repository access

  1. To establish the port forwarding for the ssh port, issue the following command on the workstation from which you normally SSH to SuperMUC (or SuperMIG accordingly):
    ssh -l <LoginName> -R <arbitraryPortNumber>:<machine-withSVNrepo.>:22
    ssh -l hk00xyz -R
  2. After successful login to SuperMUC you have to set up a new protocol in your ~/.subversion/config file. To do so, add the following last line to the tunnel section of the config file:
    ### Configure svn protocol tunnel schemes here.  By default, only
    ### the 'ssh' scheme is defined.  You can define other schemes to
    ### be used with 'svn+scheme://hostname/path' URLs.  A scheme
    ### ...
    myssh = ssh -p 10022
    Now you may use the svn+ssh command as usual, with the exception that the newly defined myssh protocol is used instead of the standard ssh protocol:
    svn <svnCommand> svn+myssh://<remoteLoginName>@localhost/<svnDirectoryPath>
    svn list svn+myssh://mySVNUser@localhost/my/svn/repo
    svn co   svn+myssh://mySVNUser@localhost/my/svn/repo
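Step 2 above amounts to a one-line addition to the svn client configuration on SuperMUC. A sketch, assuming the [tunnels] section is not yet present in the file (if it is, append only the myssh line under it):

```shell
# Sketch: define the custom "myssh" tunnel scheme for subversion.
# Port 10022 is the example forwarded port from this section.
mkdir -p ~/.subversion
printf '\n[tunnels]\nmyssh = ssh -p 10022\n' >> ~/.subversion/config
grep 'myssh' ~/.subversion/config
```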


See also
