################################################################################
# Copyright (c) 2006-2015 Krell Institute. All Rights Reserved.
#
# This program is free software; you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation; either version 2 of the License, or (at your option) any later
# version.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
# details.
#
# You should have received a copy of the GNU General Public License along with
# this program; if not, write to the Free Software Foundation, Inc., 59 Temple
# Place, Suite 330, Boston, MA 02111-1307 USA
################################################################################

This README-mpi was created to help users decide what they need to do to
support multiple versions of MPI. It attempts to answer the question of how
to configure for the various MPI implementations and/or different versions
of the same MPI implementation.

The -D<mpi type> cmake define options control the build process: the MPI-related
plugins' runtime libraries are built using -I and -L options pointing to the
paths that are given for the various implementations.

Run time is a separate issue. If an MPI job is created outside of
openss, then openss has no influence over which implementation it uses; if the
job is created from openss, then it will use whatever is specified by the given
mpirun command and openss's environment. The openss environment also determines
the LD_LIBRARY_PATH used when loading the MPI collector runtime libraries into
the target process, because the entire environment from the shell where
Open|SpeedShop was run is passed through to created processes.
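
For example, to make sure the collector runtimes resolve against the intended
MPI installation, the library path can be set in the shell before launching
openss. This is only an illustration; the directory below is a placeholder for
the lib directory of your MPI installation, and it assumes LD_LIBRARY_PATH is
already set:

    setenv LD_LIBRARY_PATH /opt/openmpi/lib:${LD_LIBRARY_PATH}
    openss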

The question of multiple versions of mpich, openmpi, etc., then becomes a
question of the backward/forward compatibility of those versions. If you can
compile with -I and -L against one version and then run with another, things
will be fine. If people want to use different versions of the same
implementation that are not compatible, then Open|SpeedShop will have a
problem, unless they can use multiple openss builds.

There are three key reasons why we can't just use a single MPI
collector runtime for all MPI implementations:

1) Different MPI implementations use different definitions of certain key data
   types. E.g. MPI_Comm is an integer in MPT but, I believe, a pointer in
   OpenMPI.

2) Even if the data types are the same between two implementations, they may
   have different values for some constants. E.g. MPI_Comm is an integer in
   both MPT and MPICH, but the value of MPI_COMM_WORLD is different.

3) Not all the libraries are named "libmpi.so". E.g. MPICH uses "libmpich.so".

If, say, MyVersMPI 1.0 uses the same data types, constants, and library name
as MyVersMPI 1.1, then they should be able to use the same collector runtime
module ("mpi-myversmpi-rt.so").

For now we are assuming that each individual implementation has different data
types, constants, etc. But we are also assuming that these are consistent for
multiple different versions of the same implementation.

But if, say, MyVersMPI 2.0 causes MPI_Comm to change from an integer to a
pointer, then we'd have to update our build structure to build two different
collector runtime modules ("mpi-myversmpi1-rt.so" & "mpi-myversmpi2-rt.so").

To configure for individual versions of MPI, for use with corresponding
applications built using that MPI version, use these configuration options
(taken from the output of "configure --help"):

  "configure option"    "MPI type"              "default directory for search"
  -DMPICH_DIR=DIR       MPICH installation      [/opt/mpich]
  -DMPICH2_DIR=DIR      MPICH2 installation     [/opt/mpich2]
  -DMVAPICH_DIR=DIR     MVAPICH installation    [/opt/mvapich]
  -DMVAPICH2_DIR=DIR    MVAPICH2 installation   [/opt/mvapich2]
  -DMPT_DIR=DIR         MPT installation        [/usr]
  -DOPENMPI_DIR=DIR     OpenMPI installation    [/usr/local]
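
As an illustration, a build that supports both OpenMPI and MPICH2 might pass
the corresponding directory options on the cmake command line. The paths below
are placeholders for the installations on your system, and a real build will
normally include other Open|SpeedShop cmake options as well:

    cmake .. -DOPENMPI_DIR=/usr/local -DMPICH2_DIR=/opt/mpich2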

Also, running an MPI or MPIT experiment on an application built
with a particular version of MPI used to require setting an environment
variable to specify which MPI implementation the monitored application
was built with.
Here are the details about how to use the environment variable:
OPENSS_MPI_IMPLEMENTATION
This variable specifies the MPI implementation being used by the MPI
application whose performance is being analyzed. It can be set to one
of the currently supported MPI implementations:
mpich, mpich2, mpt, mvapich, mvapich2, or openmpi

Open|SpeedShop is able to auto-detect the MPI implementation of
the application, and this variable only needs to be used to override the
auto-detection code.
A typical setting of this environment variable would look like this:
setenv OPENSS_MPI_IMPLEMENTATION openmpi
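In a bash-family shell the equivalent setting would be:
export OPENSS_MPI_IMPLEMENTATION=openmpi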

This variable does not really do anything if you configured Open|SpeedShop for
only one MPI implementation, or if the implementation your application is built
with is the default MPI implementation in Open|SpeedShop (see config.log for
DEFAULT_MPI_IMPL_NAME).
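One way to check the configured default, assuming the configure log is still
present in the build directory, is simply:
grep DEFAULT_MPI_IMPL_NAME config.log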

OFFLINE VERSION:

The Open|SpeedShop offline version, in general, does need the user to tell it
which implementation the application is built with, because the offline version
can't always detect this automatically. Therefore, it is advised to set
OPENSS_MPI_IMPLEMENTATION to the MPI implementation that matches the
application being analyzed. This can be done in a module file, and then the
appropriate module file can be loaded for the MPI implementation of interest.
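
As an illustration, a site-provided modulefile for an OpenMPI toolchain could
set the variable automatically; the modulefile name and layout here are only an
example and are not shipped with Open|SpeedShop:

    # excerpt from a hypothetical openmpi modulefile (Tcl modulefile syntax)
    setenv OPENSS_MPI_IMPLEMENTATION openmpi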