Name | Last modified | Size
---|---|---
basic/ | 2024-05-30 10:06 | -
examples/ | 2013-11-24 09:28 | -
tutorials/ | 2024-06-10 09:57 | -
scc1% module avail openmpi
openmpi/3.1.4
The openmpi/3.1.4 module is intended for use with the system default gcc, g++, and gfortran compilers, which are version 4.8.5. To use other available compilers, load the OpenMPI module whose name corresponds to the compiler name and version. For the OpenMPI modules, the "gnu" compiler refers to the "gcc" compiler modules. For example, to use the 9.3.0 versions of the GNU compilers:
scc1% module load gcc/9.3.0
scc1% module load openmpi/3.1.4_gnu-9.3.0
Similarly, to use OpenMPI built for the Intel 2021.1 compilers:
scc1% module load intel/2021.1
scc1% module load openmpi/3.1.4_intel-2021
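Once a compiler/OpenMPI pair is loaded, the module puts OpenMPI's wrapper compilers (mpicc, mpicxx, mpif90) on the PATH. As a quick sketch of a typical compile step (the source and program names are illustrative), the wrappers' standard -showme option prints the underlying compile command, which is an easy way to confirm the intended compiler is being used:

```bash
# Confirm which back-end compiler the wrapper invokes (here it should be gcc 9.3.0).
mpicc -showme        # prints the full underlying compile command line
mpicc --version      # the wrapper passes --version through to the back-end compiler

# Compile an MPI program (file and program names are illustrative).
mpicc -O2 -o mpi_program mpi_program.c
```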
When running your compiled code in a batch job, you must load the compiler module and the matching OpenMPI module in the batch script before starting the MPI program.
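A minimal batch script sketch follows; the parallel environment name, slot count, wall-time request, and program name are illustrative placeholders, so consult the RCS MPI batch job documentation page referenced below for the correct resource requests:

```bash
#!/bin/bash -l
#$ -N mpi_job                      # job name
#$ -pe mpi_28_tasks_per_node 112   # illustrative request: 4 x 28-core nodes
#$ -l h_rt=12:00:00                # wall-clock time limit

# Load the same compiler and OpenMPI modules used to build the program.
module load gcc/9.3.0
module load openmpi/3.1.4_gnu-9.3.0

# $NSLOTS is set by SGE to the number of slots granted to the job.
mpirun -np $NSLOTS ./mpi_program
```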
The OpenMPI modules provide the mpirun command to launch MPI jobs. To allocate MPI resources for your job, please see the RCS MPI batch job documentation page. Further details on compiling MPI programs are also available.
If an MPI job is launched across multiple nodes but there are fewer MPI processes than cores, the recommended mpirun arguments are as follows. In this example the job has 4 28-core nodes but only 4 processes per node are launched (for a total of 16 processes). The --map-by argument distributes the MPI processes across the nodes, placing 2 processes per CPU socket to make the best use of memory bandwidth on each node. The --rank-by argument numbers the MPI ranks so that each node holds consecutive ranks, i.e. node 0 will have ranks 0-3, node 1 ranks 4-7, etc.:
scc1% mpirun -np 16 --map-by numa:span --rank-by core mpi_program ...args...
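OpenMPI exports each launched process's rank in the OMPI_COMM_WORLD_RANK environment variable, so rank placement can be checked with a throwaway shell command before running a real application (the process count and mapping options mirror the example above):

```bash
# Print each rank and the node it runs on; with the layout described above,
# ranks 0-3 should report node 0, ranks 4-7 node 1, and so on.
mpirun -np 16 --map-by numa:span --rank-by core \
    bash -c 'echo "rank $OMPI_COMM_WORLD_RANK on $(hostname)"' | sort -n -k 2
```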
The OpenMPI modules are configured to recognize the SGE batch job environment, so the mpirun launcher automatically uses the set of nodes assigned to the job. In addition, mpirun uses the high-speed InfiniBand networks available on the MPI nodes for inter-node communication. A utility, installed as the xth1/1.0 module, can be used to experiment with different mpirun options.
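Because of this SGE integration, mpirun inside a batch job needs no host file or machine list. A quick sanity check (using only the standard hostname command and the SGE-provided $NSLOTS variable) shows how many processes land on each assigned node:

```bash
# Launch one process per granted slot and count processes per node.
mpirun -np $NSLOTS hostname | sort | uniq -c
```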