
and MPI. Latency and bandwidth are not as directly visible to the HPF
user, but in general HPF programs are slower than SHMEM and MPI
applications.
The total bandwidth of the machine is very large, because each PE has six
bi-directional communication links. It therefore does not matter much where
the computational nodes of your application are physically situated. The
physical start-up time of message passing is about 100 clock periods, which
is increased by about 2 clock periods for each additional link between
processors. However, the system allocates “neighboring” processors to
your application to minimize the total communication overhead in the
computer.
7.2 Message Passing Interface (MPI)
MPI (Message Passing Interface) is a standardized message-passing
library defined by a wide community of scientific and industrial experts.
Portability is the main advantage of establishing a message-passing
standard. One of the goals of MPI is to provide a clearly defined set of
routines that can be implemented efficiently on many types of platforms.
MPI is also easier and “cleaner” to use than the somewhat older PVM
library. In addition, the MPI library on the T3E is usually about 30%
faster than the PVM library.
Note that you do not need to use any special linker options to use MPI,
because the MPI libraries are linked automatically on the T3E. MPI
routines may be called from FORTRAN 77, Fortran 90, C or C++ programs.
The version of the MPI standard available on the T3E is MPI-1, not MPI-2.
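For example, a Fortran MPI program can typically be compiled and linked
simply with a command such as f90 -o prog prog.f (the exact compiler
invocation may differ depending on your programming environment); no
explicit -l option for the MPI library is needed.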
7.2.1 Format of the MPI calls
The format of the MPI calls for Fortran programs (with few exceptions)
is as follows:
SUBROUTINE sub(...)
IMPLICIT NONE
INCLUDE 'mpif.h'
INTEGER :: return_code
...
CALL MPI_ROUTINE(parameter_list, return_code)
...
END SUBROUTINE sub
In Fortran 90 programs it is often convenient to place these definitions in
a MODULE mpi, which is then made available in other program units with the
statement USE mpi.
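As an illustration of this calling convention, the following is a minimal
sketch of a complete Fortran MPI program; the program and variable names
(hello, my_id, ntasks) are chosen here only for illustration:

PROGRAM hello
  IMPLICIT NONE
  INCLUDE 'mpif.h'
  INTEGER :: my_id, ntasks, return_code

  ! Start up the MPI environment
  CALL MPI_INIT(return_code)

  ! Find out the rank of this task and the total number of tasks
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, my_id, return_code)
  CALL MPI_COMM_SIZE(MPI_COMM_WORLD, ntasks, return_code)

  WRITE (*,*) 'Hello from task', my_id, 'of', ntasks

  ! Shut down the MPI environment
  CALL MPI_FINALIZE(return_code)
END PROGRAM hello

Every task executes the same program; the calls MPI_COMM_RANK and
MPI_COMM_SIZE are used to determine which part of the work each task
should perform.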