--------------------------------------------------------------------------------
            Intel(R) MPI Library for Linux* v3.0 Release Notes
--------------------------------------------------------------------------------

CONTENTS
--------
0. WHAT'S NEW
1. OVERVIEW
2. PRODUCT CONTENTS
3. KEY FEATURES
4. SYSTEM REQUIREMENTS
   4.1 SUPPORTED HARDWARE
   4.2 SUPPORTED SOFTWARE
   4.3 SUPPORTED LANGUAGES
5. INSTALLATION NOTES
6. KNOWN LIMITATIONS AND TROUBLESHOOTING
7. DOCUMENTATION
8. TECHNICAL SUPPORT
9. COPYRIGHT AND LEGAL INFORMATION

--------------------------------------------------------------------------------
0. WHAT'S NEW
--------------------------------------------------------------------------------
Intel(R) MPI Library for Linux* v3.0 is a new major release of the Intel MPI
Library for Linux. This release includes the following new features compared to
the Intel MPI Library v2.0 (see the product documentation for more details):

- New universal multi-fabric device
  o Smart fabric selection
  o Enhanced dynamic connection establishment
  o Two-phase buffer enlargement
- Increased application performance
  o DAPL* intra-node communication mode
  o Further optimized collective operations
  o Enhanced process pinning for multicore processors
  o Scalable job startup protocol
  o Static version of the libraries built without the -fpic flag
- New installer capabilities
  o Distributed software installation
- Increased interoperability
  o Additional thread-safe libraries at level MPI_THREAD_MULTIPLE
  o Backward binary compatibility with Intel MPI Library v2.0
  o Enhanced handling of multihomed environments
- Enhanced Intel(R) Cluster Tools support
  o Intel(R) Trace Analyzer and Collector version 7.0
  o Intel(R) Math Kernel Library Cluster Edition version 9.0
  o Intel(R) MPI Benchmarks version 3.0
- Extended compiler support
  o Intel(R) C++ Compiler for Linux*, version 9.1
  o Intel(R) Fortran Compiler for Linux*, version 9.1
  o GNU* Fortran 95 compiler, version 4.0
- Enhanced debugger support
  o Experimental support for Intel(R) Debugger version 8.1-23, 9.1-23
  o Etnus* TotalView* 6.8 and higher
  o Allinea* DDT* 1.9.2 and higher
- Integration with job schedulers
  o Parallelnavi* NQS* for Linux V2.0L10 and higher
  o Parallelnavi for Linux Advanced Edition V1.0L10A and higher
  o NetBatch*, version 6.x and higher
- Extended operating system support
  o Experimental support for SLES* 10
- Deprecated environment variables
  o I_MPI_DAPL_PROVIDER
- Unsupported environment variables
  o I_MPI_LIBRARY
  o I_MPI_VERSION

--------------------------------------------------------------------------------
1. OVERVIEW
--------------------------------------------------------------------------------
Intel(R) MPI Library for Linux* v3.0 is a multi-fabric message passing library
based on ANL* MPICH2* and OSU* MVAPICH2*. Intel MPI Library for Linux* v3.0
implements the Message Passing Interface, v2 (MPI-2) specification.

To receive technical support and updates, you need to register your Intel
Software Product. See section 8, TECHNICAL SUPPORT.

--------------------------------------------------------------------------------
2. PRODUCT CONTENTS
--------------------------------------------------------------------------------
The Intel(R) MPI Library Runtime Environment contains the tools you need to run
programs, including MPD daemons and supporting utilities, shared (.so)
libraries, the Release Notes, the Getting Started guide, and the Reference
Manual.
The Intel(R) MPI Library Development Kit includes all of the Runtime
Environment components plus compilation tools, including compiler commands
(mpicc, mpiicc, etc.), include files and modules, static (.a) libraries, debug
libraries, trace libraries, and test codes.

--------------------------------------------------------------------------------
3. KEY FEATURES
--------------------------------------------------------------------------------
This release of Intel MPI Library supports the following major features:

- MPI-1 and MPI-2 specification conformance, with some limitations. See
  section 6, KNOWN LIMITATIONS AND TROUBLESHOOTING
- Support for any combination of the following interconnection fabrics:
  o Shared memory
  o RDMA-capable network fabrics via DAPL*, such as InfiniBand* and Myrinet*
  o Sockets, for example, TCP/IP over Ethernet*, Gigabit Ethernet, and other
    interconnects
- Support for IA-32 and Itanium(R) architecture clusters using:
  o Intel C++ Compiler for Linux* version 7.1 through 9.1
  o Intel Fortran Compiler for Linux* version 7.1 through 9.1
  o GNU* C, C++ and Fortran 95 compilers
- Support for Intel(R) Extended Memory 64 Technology (Intel(R) EM64T) using:
  o Intel C++ Compiler for Linux* version 8.1 through 9.1
  o Intel Fortran Compiler for Linux* version 8.1 through 9.1
  o GNU* C, C++ and Fortran 95 compilers
- C, C++, Fortran 77 and Fortran 90 language bindings
- Dynamic or static linking
- Clusters with homogeneous processor architectures only

--------------------------------------------------------------------------------
4. SYSTEM REQUIREMENTS
--------------------------------------------------------------------------------
The following sections describe supported hardware and software.

----------------------
4.1 SUPPORTED HARDWARE
----------------------
IA-32-based systems:
    A system based on a Pentium(R) 4 processor
    Dual-Core Intel(R) Xeon(R) processor recommended
    1 GB of RAM recommended
    100 MB of free hard disk space

Itanium(R)-based systems:
    Itanium(R) processor
    Itanium(R) 2 processor recommended
    1 GB of RAM recommended
    100 MB of free hard disk space

Systems using Intel Extended Memory 64 Technology (Intel EM64T):
    Intel(R) Xeon(R) processor
    Dual-Core Intel(R) Xeon(R) processor recommended
    1 GB of RAM recommended
    100 MB of free hard disk space

----------------------
4.2 SUPPORTED SOFTWARE
----------------------
Operating Systems:

IA-32-based systems:
    Red Hat* Linux* version 8.0, or
    Red Hat Enterprise* Linux 3.0, or
    Red Hat Enterprise Linux 4.0, or
    Red Hat Fedora Core* 4, or
    SuSE* Linux* version 9.0 through 10.0, or
    SuSE Linux Enterprise Server 9, or
    SuSE Linux Enterprise Server 10, or
    HaanSoft* Linux 2006 Server, or
    Mandriva*/Mandrake* v10.1, or
    Miracle* Linux v4.0, or
    Red Flag DC* Server 5.0, or
    Turbo Linux* 10

Itanium(R)-based systems:
    Red Hat Enterprise Linux 3.0, or
    Red Hat Enterprise Linux 4.0, or
    Red Hat Fedora Core 4, or
    SuSE Linux version 9.0 through 10.0, or
    SuSE Linux Enterprise Server 9, or
    SuSE Linux Enterprise Server 10, or
    SGI* Propack* v3.0, or
    SGI Propack v4.0, or
    HaanSoft Linux 2006 Server, or
    Miracle Linux v4.0, or
    Red Flag DC Server 5.0

Systems using Intel EM64T:
    Red Hat Enterprise Linux 3.0 Update 3, or
    Red Hat Enterprise Linux 4.0, or
    Red Hat Fedora Core 4, or
    SuSE Linux version 9.1 through 10.0, or
    SuSE Linux Enterprise Server 9, or
    SuSE Linux Enterprise Server 10, or
    HaanSoft Linux 2006 Server, or
    Miracle Linux v4.0, or
    Red Flag DC Server 5.0, or
    Turbo Linux 10

Compilers:
    GNU*: C, C++, Fortran 77 version 2.96 or higher, Fortran 95 version 4.0

IA-32-based systems:
    Intel C++ Compiler for Linux* version 7.1, 8.0, 8.1, 9.0, 9.1.
    Intel Fortran Compiler for Linux* version 7.1, 8.0, 8.1, 9.0, 9.1.

Itanium(R)-based systems:
    Intel C++ Compiler for Linux* version 7.1, 8.0, 8.1, 9.0, 9.1.
    Intel Fortran Compiler for Linux* version 7.1, 8.0, 8.1, 9.0, 9.1.

Systems using Intel EM64T:
    Intel C++ Compiler for Linux* version 8.1, 9.0, 9.1.
    Intel Fortran Compiler for Linux* version 8.1, 9.0, 9.1.

Additional Software:
- Python* version 2.2 or higher, including the python-xml module. Python*
  distributions are available for download from your OS vendor or at
  http://www.python.org (for Python* version 2.2.3 etc. source distributions).
- An XML parser such as expat or pyxml.
- If using InfiniBand*, Myrinet*, or other RDMA-capable network fabrics, a
  DAPL* v1.1 or DAPL* v1.2 standard-compliant provider library/driver is
  required. DAPL* providers are typically provided with your network fabric
  hardware and software.

-----------------------
4.3 SUPPORTED LANGUAGES
-----------------------
For GNU* compilers: C, C++, Fortran 77, Fortran 95
For Intel compilers: C, C++, Fortran 77, Fortran 90, Fortran 95

--------------------------------------------------------------------------------
5. INSTALLATION NOTES
--------------------------------------------------------------------------------
This section is intended for system administrators and describes the steps
required to install Intel MPI Library for Linux*.

To install Intel MPI Library, do the following:

1. Obtain a license key. See Section 8, TECHNICAL SUPPORT, if you do not have
   a license key.

2. The installer provides the ability to install Intel MPI Library on every
   node of your cluster. To utilize this feature, create a machines.LINUX file
   that lists the nodes in the cluster using one host name per line (see the
   example in the NOTES below) before starting the installation process. Make
   sure that the cluster has ssh connectivity.

3. Install the l_mpi_p_3.0.<version>.tar.gz package by using the following
   commands:

   # cp l_mpi_p_3.0.<version>.tar.gz /tmp
   # cd /tmp
   # tar -xzf l_mpi_p_3.0.<version>.tar.gz
   # cd l_mpi_p_3.0.<version>
   # cp <license file>.lic .
     (alternatively, you can copy the license to /opt/intel/licenses)
   # ./install.sh

   NOTES:
   - This installation does not overwrite any pre-existing Intel MPI Library
     you may have installed. After installing the Intel MPI Library v3.0
     Development Kit, you can continue to use previous versions of the Intel
     MPI Library Development Kit by referring to the original installation
     directory.
   - The ./install.sh script allows you to avoid modification of the
     /etc/ld.so.conf file during installation in root mode. Use the
     --update-ldsoconf option, for example:
     # ./install.sh --update-ldsoconf no
   - You can install Intel MPI Library v3.0 as an ordinary user. Run the
     ./install.sh script and follow the instructions, or add the
     --nonroot --nonrpm options to the ./install.sh invocation string above to
     select the non-root, non-RPM installation mode.
   - You can install Intel MPI Library v3.0 on a system that does not use the
     RPM* package manager. Add the --nonrpm option to the ./install.sh
     invocation string above to select the non-RPM installation mode.
     Alternatively, you can extract the Intel MPI Library v3.0 RPM* package and
     the mpiEULA.txt file. Use the --extract option and indicate the desired
     extraction directory, for example:
     # ./install.sh --extract /tmp
   - The ./install.sh script enables you to select the install location on most
     operating systems. Select the exact same install location when installing
     Intel MPI Library on each node of your cluster.
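   - For reference, the machines.LINUX file mentioned in step 2 above is a
     plain text file that lists one host name per line. A minimal sketch (the
     host names below are illustrative only):

     # cat machines.LINUX
     node01
     node02
     node03
     node04

     Before installing, you can confirm ssh connectivity to each listed node,
     for example:

     # ssh node02 hostname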
   - The Intel MPI Library v3.0 Development Kit is installed by default into
     the directory /opt/intel/mpi/3.0, as compared to the directory
     /opt/intel_mpi_10 for the Intel MPI Library v1.0 Development Kit and the
     directory /opt/intel/mpi/2.0 for the Intel MPI Library v2.0 Development
     Kit.
   - The ./install.sh script has a silent installation mode. In order to use
     this mode, edit the file SilentInstallConfigFile.ini contained in the
     unpacked package directory. In particular, edit the line
         EULA=reject
     to read
         EULA=accept
     Other lines in the file allow you to define the INSTALLDIR, LICENSEPATH,
     INSTALLMODE, PROCEED_WITHOUT_PYTHON, UPDATE_LD_SO_CONF, INSTALLUSER, ARCH,
     AUTOMOUNTED_CLUSTER, MACHINES_CONFIG and SKIP_MOUNTED parameters. After
     editing the file, use the following command instead of the usual
     # ./install.sh command:
     # ./install.sh --silent SilentInstallConfigFile.ini

4. Add the following PATH and/or LD_LIBRARY_PATH settings to your .cshrc or
   .bashrc files so that the settings are visible on all nodes in your cluster:
   - Ensure that Python* is in your PATH
   - Source the appropriate mpivars.[c]sh script from the Intel MPI Library bin
     (or, if applicable, bin64) directory
   - If using Intel compilers, source any required *vars.[c]sh scripts
   - Set up additional environment variables as needed. See the Intel MPI
     Library Getting Started guide for more information.

--------------------------------------------------------------------------------
6. KNOWN LIMITATIONS AND TROUBLESHOOTING
--------------------------------------------------------------------------------
The following are known issues and possible workarounds for this release:

- IMPORTANT! Dynamic connection establishment is turned on by default.
  Connections are established upon the first communication between each pair
  of processes. Use the I_MPI_USE_DYNAMIC_CONNECTIONS environment variable to
  turn off this feature. See the Intel MPI Library Reference Manual for more
  details.
- IMPORTANT! Compiler drivers of Intel MPI Library v3.0 embed the actual
  Development Kit library path (default /opt/intel/mpi/3.0) and the default
  Runtime Environment library path (/opt/intel/mpi-rt/3.0) into the
  executables using the -rpath linker option.
- IMPORTANT! Intel MPI Library v3.0 enhances message-passing performance on
  DAPL*-based interconnects by maintaining a cache of virtual-to-physical
  address translations in the MPI DAPL* data transfer path. Set the environment
  variable LD_DYNAMIC_WEAK to "1" if your program dynamically loads the
  standard C library before dynamically loading Intel(R) MPI Library v3.0.
  Alternatively, use the environment variable LD_PRELOAD to load Intel MPI
  Library v3.0 first. To disable the translation cache completely, set the
  environment variable I_MPI_RDMA_TRANSLATION_CACHE to "disable". Note that
  you do not need to set the aforementioned environment variables
  LD_DYNAMIC_WEAK or LD_PRELOAD when you disable the translation cache.
- Intel MPI Library 3.0 provides thread-safe libraries at level
  MPI_THREAD_MULTIPLE. Follow these rules to use these libraries:
  o Link the thread-safe libraries before libc
  o Do not load the Intel MPI thread-safe library through dlopen()
- Intel MPI Library 3.0 provides the ability to pin processes to CPUs to
  prevent undesired process migration. Use the I_MPI_PIN_MODE and
  I_MPI_PIN_PROCS environment variables to control process pinning. See the
  Intel MPI Library Reference Manual for more details.
- Use the -perhost option to place the indicated number of consecutive MPI
  processes on every host. This may increase application performance,
  particularly for clusters with SMP nodes. See the example below.
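  For example, on a cluster of SMP nodes you could start 8 processes, two per
  node, with debug output enabled (a minimal sketch; the process counts and
  executable name are illustrative):

  $ mpiexec -perhost 2 -n 8 -env I_MPI_DEBUG 2 ./a.out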
- Intel MPI Library does not support heterogeneous clusters of mixed
  architectures and/or operating environments.
- Intel MPI Library v3.0 requires Python* version 2.2 or higher for process
  management.
- Intel MPI Library v3.0 requires the python-xml* package or its equivalent on
  each node in the cluster for process management. For example, the following
  operating system does not have this package installed by default:
  o SuSE Linux Enterprise Server 9
- Intel MPI Library v3.0 requires the expat* or pyxml* package, or an
  equivalent XML parser, on each node in the cluster for process management.
- The following MPI-2 features are not supported by Intel MPI Library v3.0:
  o Process spawning and attachment
  o Passive target one-sided communication when the target process does not
    call any MPI functions
  o User-defined data representations and the external32 data representation
- If installation of the Intel MPI Library package fails and shows the error
  message "Intel MPI Library already installed" when a package is not actually
  installed, try the following:
  1. Determine the package name that the system believes is installed by
     typing:
     # rpm -qa | grep intel-mpi
     This command returns the installed Intel(R) MPI Library <package name>.
  2. Remove the package from the system by typing:
     # rpm -e <package name>
  3. Re-run the Intel MPI Library installer to install the package.
  TIP: To avoid installation errors, always remove the Intel MPI Library
  packages using the uninstall script provided with the package before trying
  to install a new package or reinstall an older one.
- Due to an installer limitation, avoid installing earlier releases of the
  Intel MPI Library packages after having already installed the current
  release. Doing so may corrupt the installation of the current release and
  require that you uninstall/reinstall it.
- Certain operating system versions have a bug in the rpm command that
  prevents installations other than in the default install location. In this
  case, the installer does not offer the option to install in an alternate
  location.
- If the mpdboot command fails to start up the MPD, verify that the Intel MPI
  Library package is installed in the same path/location on all the nodes in
  the cluster. To solve this problem, uninstall and re-install the Intel MPI
  Library package while using the same path on all nodes in the cluster.
- If the mpdboot command fails to start up the MPD, verify that all cluster
  nodes have the same Python* version installed. To avoid this issue, always
  install the same Python* version on all cluster nodes.
- The presence of environment variables with non-printable characters in user
  environment settings may cause the process startup to fail. To work around
  this issue, Intel MPI Library does not propagate environment variables with
  non-printable characters across the MPD ring.
- A program cannot be executed when it resides in the current directory but
  "." is not in the PATH. To avoid this error, either add "." to the PATH on
  ALL nodes in the cluster, or use the explicit path to the executable or ./
  in the mpiexec command line.
- Intel MPI Library v3.0 supports PMI wire protocol version 1.1. Note that
  this information is specified as
      pmi_version = 1
      pmi_subversion = 1
  instead of
      pmi_version = 1.1
  as done by Intel(R) MPI Library v1.0.
- Intel MPI Library requires the presence of the /dev/shm device in the
  system. To avoid failures related to the inability to create a shared memory
  segment, make sure the /dev/shm device is set up correctly (see the example
  below).
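  For example, you can verify that /dev/shm is mounted using standard Linux*
  commands (a minimal sketch; exact output and sizes vary by distribution):

  $ mount | grep /dev/shm
  tmpfs on /dev/shm type tmpfs (rw)
  $ df -h /dev/shm

  If /dev/shm is not mounted, it can typically be enabled by adding a tmpfs
  entry such as the following to /etc/fstab and remounting:

  tmpfs   /dev/shm   tmpfs   defaults   0 0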
- The Fortran 77 and Fortran 90 tests in the /test directory may produce
  warnings when compiled with the mpif77, etc. compiler commands. You can
  safely ignore these warnings, or add the -w option to the compiler command
  line to suppress them.
- In order to use the GNU Fortran compiler version 4.0, use the mpif90
  compiler driver.
- There is a known binary incompatibility between GNU C++ compilers version
  3.0 and earlier, and version 3.4 and higher. Compiler drivers in Intel MPI
  Library v3.0 automatically detect the version of the installed GNU* C++
  compiler and use the appropriate libraries to provide support for exception
  handling.
- Use the special option -gcc-version for the compiler drivers mpicxx and
  mpiicpc to link an application for running in a particular GNU* C++
  environment.
  o Use -gcc-version=3 to build an application compatible with GNU* C++
    versions up to 3.3.
  o Use -gcc-version=4 to build an application compatible with GNU* C++
    version 3.4 or higher.
  A library compatible with the detected version of the GNU* C++ compiler is
  used by default.
- There is a known binary incompatibility between GNU Fortran 95 compiler
  version 4.0 and version 4.1. Intel MPI Library v3.0 supports GNU Fortran 95
  compilers version 4.0.
- Intel(R) MPI Library v3.0 does not provide the MPI module for the GNU*
  Fortran 95 compiler, version 4.1. Certain operating systems have this
  compiler installed by default, for example:
  o SuSE Linux Enterprise Server 10
- Certain DAPL* providers may not work with Intel MPI Library for Linux*, for
  example:
  o Mellanox* IB Gold version 1.6.1 or later. Contact Mellanox*, or download
    the alternative OpenIB* DAPL* provider at
    http://sourceforge.net/projects/openib, or use DAPL* providers from
    Cisco*, SilverStorm*, or Voltaire*.
  o Myricom* GM DAPL* provider. Contact Myricom*, or download the alternative
    GM DAPL* provider at http://sourceforge.net/projects/dapl-gm.
- The GM DAPL* provider may not work with Intel MPI Library for Linux* using
  some versions of the GM* drivers. Set I_MPI_USE_RENDEZVOUS_RDMA_WRITE=1 to
  avoid this issue.
- Certain DAPL* providers may not function properly if your application uses
  the fork(2), vfork(2), or clone(2) system calls. Do not use these system
  calls, or functions based upon them, for example, system(3), with:
  o the OpenIB* DAPL* provider with Linux* kernel versions earlier than 2.6.15
- Intel MPI Library 3.0 introduces a new debug output format.

  Intel MPI 2.0.1 output at level I_MPI_DEBUG 2:

  $ mpiexec -env I_MPI_DEVICE rdma -env I_MPI_DEBUG 2 ./a.out
  I_MPI: [0] set_up_devices(): will use device: libmpi.rdma.so
  I_MPI: [0] set_up_devices(): will use DAPL provider: ib0
  Hello world: rank 0 of 1 running on svsmpi005

  Intel MPI 3.0 output at level I_MPI_DEBUG 2:

  $ mpiexec -env I_MPI_DEVICE rdma -env I_MPI_DEBUG 2 ./a.out
  I_MPI: [0] MPIDI_CH3I_RDMA_init(): will use DAPL provider from \
         registry: ib0
  I_MPI: [0] MPIDI_CH3_Init(): will use rdma configuration
  Hello world: rank 0 of 1 running on svsmpi005
- In order to use the Intel(R) Debugger, set the IDB_HOME environment
  variable. It should point to the location of the Intel Debugger.

--------------------------------------------------------------------------------
7. DOCUMENTATION
--------------------------------------------------------------------------------
"Getting Started with Intel MPI Library", found in Getting_Started.pdf,
contains information on the following subjects:

- Using Intel MPI Library.
  Describes a basic usage model and walks you through the basic steps of
  setting up MPD daemons, compiling and linking, selecting a network fabric or
  device, and running an MPI program.
- Troubleshooting.
  Includes post-install testing steps and describes compiling and running a
  test program.

"Intel MPI Library Reference Manual", found in Reference_Manual.pdf, contains
information on the following subjects:

- Command Reference.
  Describes compiler commands, options, and environment variables. Includes
  job startup commands, process placement, and MPD daemon commands.
- Tuning Reference.
  Describes environment variables that influence library behavior and
  performance. Covers process pinning, device control, collective operations,
  and other topics.

Notation Conventions
--------------------
Release Notes and other user documentation use the following notation
conventions:

<item>            Indicates that the items enclosed in angle brackets are
                  placeholders for elements such as installation paths.
[items]           Indicates that the items enclosed in brackets are optional
{ item | item }   Indicates a set of choices from which you must select one
... (ellipses)    Indicates that an argument can be repeated several times

--------------------------------------------------------------------------------
8. TECHNICAL SUPPORT
--------------------------------------------------------------------------------
This package is supported via Intel(R) Premier Support. Intel(R) Premier
Support issues may be submitted at:

    https://premier.intel.com

General information on Intel(R) product-support offerings may be obtained at:

    http://www.intel.com/software/products/support

Intel(R) MPI Library self-help pages can be found at:

    http://support.intel.com/support/performancetools/cluster/mpi

Requests for licenses can be directed to the Registration Center at:

    http://www.intel.com/software/products/registrationcenter

Before submitting a support issue, see "Getting Started with Intel MPI
Library" for details on post-install testing to ensure that basic facilities
are working.

When submitting a support issue to Intel(R) Premier Support, please provide
specific details of your problem, including:

- The Intel MPI Library package name and version information
- Host architecture (for example, IA-32 or Itanium(R) architecture)
- Compiler(s) and versions
- Operating system(s) and versions
- Specifics on how to reproduce problems. Include makefiles, command lines,
  small test cases, and build instructions. Use the /test sources as test
  cases, when possible.

You can obtain version information for the Intel MPI Library package in the
file mpisupport.txt.

--------------------------------------------------------------------------------
9. COPYRIGHT AND LEGAL INFORMATION
--------------------------------------------------------------------------------
Intel(R) MPI Library is based on MPICH2* from Argonne National Laboratory*
(ANL) and MVAPICH2* from Ohio State University* (OSU).

--------------------------------------------------------------------------------
Copyright (C) Intel Corporation 2003-2006. All Rights Reserved.

This Intel(R) MPI Library software ("Software") is furnished under license and
may only be used or copied in accordance with the terms of that license. No
license, express or implied, by estoppel or otherwise, to any intellectual
property rights is granted by this document. The Software is subject to change
without notice, and should not be construed as a commitment by Intel
Corporation to market, license, sell or support any product or technology.
Unless otherwise provided for in the license under which this Software is provided, the Software is provided AS IS, with no warranties of any kind, express or implied. Except as expressly permitted by the Software license, neither Intel Corporation nor its suppliers assumes any responsibility or liability for any errors or inaccuracies that may appear herein. Except as expressly permitted by the Software license, no part of the Software may be reproduced, stored in a retrieval system, transmitted in any form, or distributed by any means without the express written consent of Intel Corporation.