Introduction to MPI Tutorial Programs
This tutorial comes with seven examples, all based on the numerical
integration of a cosine function over the range [a, b]. A midpoint rule
is used to perform the integration.
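For reference, here is a minimal serial sketch of the midpoint rule in C,
essentially what Example 1 does; the integration limits and panel count below
are illustrative, not taken from the tutorial sources:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double a = 0.0;                   /* lower limit (illustrative)  */
        double b = acos(-1.0) / 2.0;      /* upper limit: pi/2           */
        int    n = 500;                   /* number of panels            */
        double h = (b - a) / n;           /* panel width                 */
        double integral = 0.0;
        int    i;

        for (i = 0; i < n; i++) {
            double x = a + (i + 0.5) * h; /* midpoint of panel i         */
            integral += cos(x) * h;
        }

        printf("integral = %f, exact = %f\n", integral, sin(b) - sin(a));
        return 0;
    }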
- Example 1 performs the integration serially. MPI is not used.
- Example 1_1 performs the same integral in parallel, with the help of the
six most fundamental MPI utility functions/subroutines:
MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Send, MPI_Recv, and MPI_Finalize.
This code has a few drawbacks:
- First and foremost, it is deemed unsafe because it can potentially
deadlock: process 0 both sends to and receives from itself.
- Secondly, the local partial integrals are received in a fixed, sequential
order imposed by the receive loop, even though the order of summation is
immaterial (this is addressed in Example 1_3 below).
- Example 1_2. A more careful review of the previous algorithm reveals that it is
redundant for process 0 to send its share of the local integral, my_int, to itself.
By skipping this self-send, the deadlocking potential is eliminated (see the first
sketch after this list).
- Example 1_3. As is well known, summation is commutative: the sum of the local
integrals from the processes is independent of the order in which they are added.
In the preceding examples, while MPI_Send is performed in parallel, MPI_Recv is
performed sequentially by virtue of a loop, and the order of receiving is
unnecessarily constrained to the loop order. Since the order of operations (i.e.,
summation) is immaterial, this example introduces two wildcards: MPI_ANY_SOURCE
replaces the loop index as the receive source, and MPI_ANY_TAG replaces a specific
tag. The use of MPI_ANY_SOURCE lets the receive loop adapt to the order of arrival
of the sending processes rather than insisting on a specific sending order (see the
second sketch after this list).
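Below is a hedged sketch, in the spirit of Examples 1_1 and 1_2, of the parallel
midpoint integration using the six fundamental calls; process 0 adds its own share
directly rather than sending to itself. The range, panel count, and tag below are
illustrative, not taken from the tutorial sources:

    #include <math.h>
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int    rank, size, i, p;
        int    n = 500, tag = 123;     /* panels per process, tag: illustrative */
        double a = 0.0, b;             /* [a, b] = [0, pi/2], illustrative      */
        double h, x, my_int = 0.0, integral, part;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        b = acos(-1.0) / 2.0;
        h = (b - a) / (n * size);      /* panel width; n panels per process     */

        /* each process integrates its own contiguous subrange (midpoint rule) */
        for (i = 0; i < n; i++) {
            x = a + ((rank * n + i) + 0.5) * h;
            my_int += cos(x) * h;
        }

        if (rank != 0) {               /* nonzero ranks send partial sums to 0  */
            MPI_Send(&my_int, 1, MPI_DOUBLE, 0, tag, MPI_COMM_WORLD);
        } else {                       /* process 0: no self-send needed        */
            integral = my_int;
            for (p = 1; p < size; p++) {
                MPI_Recv(&part, 1, MPI_DOUBLE, p, tag, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                integral += part;
            }
            printf("integral = %f\n", integral);
        }

        MPI_Finalize();
        return 0;
    }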
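The Example 1_3 variation changes only the receive loop on process 0 in the sketch
above; with the wildcards, the partial sums are added in whatever order they arrive:

    if (rank == 0) {
        integral = my_int;
        for (p = 1; p < size; p++) {
            /* accept whichever partial sum arrives next */
            MPI_Recv(&part, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            integral += part;
        }
        printf("integral = %f\n", integral);
    }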
Notes
The examples were developed on a Linux system. However, they all follow the
ANSI standard and hence should work on other operating systems.
The MPI wrappers, such as mpicc, depend on the system defaults. On the SCC,
the environment variable MPI_COMPILER defaults to the GNU compiler; as a result,
the wrapper uses the GNU C compiler.
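For instance, an individual example could be compiled directly through the wrapper
(linking the math library for cos):

    scc% mpicc -o example1 example1.c -lm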
Program Compilation
You can compile the set of examples on the Shared Computing Cluster (SCC) or on the IBM Blue Gene.
- scc% make -f make.scc
- lee% make -f make.bgl
Program Execution
- scc% mpirun -np 4 example1
- scc% mpirun -np 4 example2
Contact Info: Kadin Tseng, kadin@bu.edu
Dates
- Created: November 24, 2013
- Modified: