NetCDF - Interface for array-oriented data access
NetCDF (Network Common Data Form) is an interface for array-oriented data access and a freely distributed collection of software libraries for C, Fortran, C++, Perl and Java that provide implementations of the interface. The netCDF software was developed by Glenn Davis, Russ Rew, Steve Emmerson and Harvey Davies at the Unidata Program Center in Boulder, Colorado, and augmented by contributions from other netCDF users. The netCDF libraries define a machine-independent format for representing scientific data. Together, the interface, libraries, and format support the creation, access, and sharing of scientific data. The following packages are discussed in this document:
- NetCDF libraries and utilities
- parallel invocation of NetCDF: integrating the NetCDF API with HDF5 as well as pNetCDF provides a simple interface for parallel I/O.
- NCO: the NetCDF Operators comprise a dozen standalone, command-line programs that take netCDF files as input, then operate (e.g., derive new data, average, print, hyperslab, manipulate metadata) and output the results to screen or files in text, binary, or netCDF formats.
Installation and use of NetCDF on LRZ platforms
Serial NetCDF Library
NetCDF is available on all HPC systems at LRZ; it has been built for use with the Intel C, C++ and Fortran compilers. To make use of NetCDF, please load the appropriate environment module:
module load netcdf
compile your code with
[icc|icpc|ifort] -c $NETCDF_INC foo.[c|cc|f90]
and then link it with
icc -o myprog.exe main.o foo.o $NETCDF_LIB
icpc -o myprog.exe mainC.o foo.o $NETCDF_CXX_LIB
ifort -o myprog.exe mainF.o foo.o $NETCDF_F90_LIB
using the library environment variable that matches the base language. Note that in Fortran 90 codes you will need to pull the required functionality into your source code via a use statement:
use netcdf
while Fortran 77 codes can use the INCLUDE extension to pull in the interface definitions:
include 'netcdf.inc'
NetCDF in parallel mode
To use NetCDF in parallel mode, please load the environment module
module load netcdf/mpi
The actual version of the loaded library depends on the MPI module loaded prior to the above command. To compile and link your code, please use the MPI compiler wrappers mpicc, mpif90 etc., and take care to keep the compile and link steps separate, as illustrated above for the serial case. With this module loaded, the parallel extensions (e.g. the additional optional arguments for nf90_open) work properly instead of returning an error message; and of course MPI_Init() and MPI_Finalize() must be invoked at the beginning and end of your program, respectively.
Additional usage hints
On Linux systems (especially with parallel file systems like CXFS or GPFS), the default buffering settings of NetCDF may cause problems (performance degradation and possibly even system instability). It is therefore recommended to change the default settings as described in the following.
- C interface users should replace nc_create() and nc_open() by nc__create() and nc__open() (with two underscores), respectively. The latter calls allow for performance tuning. A buffer size bufrsizehintp=1048576 is recommended for IA64-based systems; for x86_64-based systems, smaller values (e.g. 32768) may be sufficient.
- C++ interface users should call the NcFile() constructor with a non-null value for the chunksizeptr argument. The recommended buffering values are the same as for the C interface above.
- Fortran 77 interface users should replace nf_create() and nf_open() by nf__create() and nf__open() (with two underscores), respectively. The recommended buffering values are the same as for the C interface above.
- Fortran 90 interface users should specify the bufrsize optional argument for calls to nf90_open() or nf90_create(). The value for bufrsize should be chosen as recommended above for the C interface.
Note that these measures may only be effective for the 3.x releases; more recent versions may exhibit problems even when they are applied.
Run time settings for parallel execution
For running pnetcdf or netcdf/mpi programs, the following system-dependent settings apply:
- For IBM MPI (module mpi.ibm), please keep the setting "MP_SINGLE_THREAD=no", otherwise your program will probably crash with an error in the MPI_IO subsystem.
- For SGI MPT (module mpi.mpt), please set "export MPI_TYPE_DEPTH=100" (or some other suitably large value), since the default is typically insufficient; if it is, your program will abort with a reasonably informative error message.
NetCDF Operators (NCO)
To make use of the NetCDF operators (NCO), please load the appropriate environment module:
module load nco