Introduction
Message Passing Interface (MPI) is an API for parallelizing programs across multiple nodes and has been around since 1994 [1]. MPI can also be used for parallelization on SMP machines and is considered very efficient at it (close to 100% scaling on parallelizable code, compared to the ~80% commonly obtained with threads due to suboptimal memory allocation on NUMA machines). Before MPI, nearly every supercomputer manufacturer had its own programming language for writing programs; MPI made porting software easy.
There are many MPI implementations available, such as Open MPI (the default MPI compiler in Fedora and the MPI compiler used in RHEL), MPICH (in both Fedora and RHEL), and MVAPICH1 and MVAPICH2 (in RHEL but not yet in Fedora).
As some MPI libraries work better on some hardware than others, and some software works best with a particular MPI library, the selection of the library to use must be made at the user level, on a session-specific basis. Also, people doing high performance computing may want to use more efficient compilers than the default one in Fedora (gcc), so it must be possible to have several versions of an MPI compiler, each built with a different compiler, installed at the same time. This must be taken into account when writing spec files.
Packaging of MPI compilers
The files of MPI compilers MUST be installed in the following directories:
| File type | Placement |
|---|---|
| Binaries | %{_libdir}/%{name}/bin |
| Libraries | %{_libdir}/%{name}/lib |
| Fortran modules | %{_fmoddir}/%{name} |
| Architecture specific Python modules | %{python2_sitearch}/%{name}, %{python3_sitearch}/%{name} |
| Config files | %{_sysconfdir}/%{name}-%{_arch} |
As include files and manual pages are bound to overlap between different MPI implementations, they MUST also be placed outside the normal directories. It is possible that some man pages or include files (either those of the MPI compiler itself or of some MPI software installed in the compiler's directory) are architecture specific (e.g. a definition on a 32-bit arch may differ from that on a 64-bit arch), so the directories that MUST be used are as follows:
| File type | Placement |
|---|---|
| Man pages | %{_mandir}/%{name}-%{_arch} |
| Include files | %{_includedir}/%{name}-%{_arch} |
Architecture independent parts (except headers, which go into -devel) MUST be placed in a -common subpackage that is BuildArch: noarch.
The runtime of MPI compilers (mpirun, the libraries, the manuals, etc.) MUST be packaged into %{name}, and the development headers and libraries into %{name}-devel.
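A minimal sketch of the corresponding subpackage declarations in the compiler's spec file might look as follows (the summaries and the %{?_isa} dependency are illustrative, not requirements of this guideline):

```
# Sketch only: subpackage split for an MPI compiler package
%package devel
Summary:   Development files for %{name}
Requires:  %{name}%{?_isa} = %{version}-%{release}

%package common
Summary:   Architecture-independent files for %{name}
BuildArch: noarch
```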
As the compiler is installed outside PATH, one needs to load the relevant variables before being able to use the compiler or run MPI programs. This is done using environment modules.

The module file MUST be installed under %{_sysconfdir}/modulefiles/mpi. This allows a user with only one MPI implementation installed to load the module with:
```
module load mpi
```
The module file MUST have the line:
```
conflict mpi
```

to prevent concurrent loading of multiple MPI modules.
The module file MUST prepend $MPI_BIN to the user's PATH and prepend $MPI_LIB to LD_LIBRARY_PATH.
The module file MUST also set some helper variables (primarily for use in spec files):
| Variable | Value | Explanation |
|---|---|---|
| MPI_BIN | %{_libdir}/%{name}/bin | Binaries compiled against the MPI stack |
| MPI_SYSCONFIG | %{_sysconfdir}/%{name}-%{_arch} | MPI stack specific configuration files |
| MPI_FORTRAN_MOD_DIR | %{_fmoddir}/%{name} | MPI stack specific Fortran module directory |
| MPI_INCLUDE | %{_includedir}/%{name}-%{_arch} | MPI stack specific headers |
| MPI_LIB | %{_libdir}/%{name}/lib | Libraries compiled against the MPI stack |
| MPI_MAN | %{_mandir}/%{name}-%{_arch} | MPI stack specific man pages |
| MPI_PYTHON2_SITEARCH | %{python2_sitearch}/%{name} | MPI stack specific Python 2 modules |
| MPI_PYTHON3_SITEARCH | %{python3_sitearch}/%{name} | MPI stack specific Python 3 modules |
| MPI_COMPILER | %{name}-%{_arch} | Name of the compiler package, for use in e.g. spec files |
| MPI_SUFFIX | _%{name} | The suffix used for programs compiled against the MPI stack |
As these directories may be used by software using the MPI stack, the MPI runtime package MUST own all of them.
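For orientation, a minimal sketch of what such a module file could look like for a hypothetical openmpi package on x86_64, installed as %{_sysconfdir}/modulefiles/mpi/openmpi-x86_64 (the concrete paths below are illustrative expansions of the macros in the tables above, not normative values):

```
#%Module 1.0
#
# Illustrative module file for a hypothetical openmpi package on x86_64
conflict        mpi
prepend-path    PATH                 /usr/lib64/openmpi/bin
prepend-path    LD_LIBRARY_PATH      /usr/lib64/openmpi/lib
setenv          MPI_BIN              /usr/lib64/openmpi/bin
setenv          MPI_SYSCONFIG        /etc/openmpi-x86_64
setenv          MPI_FORTRAN_MOD_DIR  /usr/lib64/gfortran/modules/openmpi
setenv          MPI_INCLUDE          /usr/include/openmpi-x86_64
setenv          MPI_LIB              /usr/lib64/openmpi/lib
setenv          MPI_MAN              /usr/share/man/openmpi-x86_64
setenv          MPI_PYTHON2_SITEARCH /usr/lib64/python2.7/site-packages/openmpi
setenv          MPI_PYTHON3_SITEARCH /usr/lib64/python3.4/site-packages/openmpi
setenv          MPI_COMPILER         openmpi-x86_64
setenv          MPI_SUFFIX           _openmpi
```

With such a file in place, module load mpi/openmpi-x86_64 (or simply module load mpi when it is the only implementation installed) sets up the whole environment described above.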
MUST: By default, NO files are placed in /etc/ld.so.conf.d. If the packager wishes to provide alternatives support, it MUST be placed in a subpackage together with the ld.so.conf.d file, so that alternatives support does not need to be installed if it is not wanted.
MUST: If the maintainer wishes for the environment module to load automatically by use of a scriptlet in /etc/profile.d or by some other mechanism, this MUST be done in a subpackage.
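For example, such an autoload subpackage might ship a profile.d scriptlet along these lines (the file name, subpackage name and hard-coded architecture are purely illustrative; a real package would substitute %{_arch} at build time):

```
# /etc/profile.d/openmpi-autoload.sh -- hypothetical content of an openmpi autoload subpackage
. /etc/profile.d/modules.sh
module load mpi/openmpi-x86_64
```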
MUST: The MPI compiler package MUST provide an RPM macro that makes it easy to load and unload the MPI environment in spec files, e.g. by placing the following in /etc/rpm/macros.openmpi:
```
%_openmpi_load \
 . /etc/profile.d/modules.sh; \
 module load mpi/openmpi-%{_arch}; \
 export CFLAGS="$CFLAGS %{optflags}";
%_openmpi_unload \
 . /etc/profile.d/modules.sh; \
 module unload mpi/openmpi-%{_arch};
```
Loading and unloading the compiler in spec files is then as easy as %{_openmpi_load} and %{_openmpi_unload}.
Automatic setting of the module loading path in Python interpreters is done using a .pth file placed in one of the directories normally searched for modules (%{python2_sitearch}, %{python3_sitearch}). Those .pth files should append the directory specified by the $MPI_PYTHON2_SITEARCH or $MPI_PYTHON3_SITEARCH environment variable, depending on the interpreter version, to sys.path, and do nothing if that variable is unset. Module files MUST NOT set PYTHONPATH directly, since it cannot be set for both Python versions at the same time.
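A minimal sketch of such a .pth file for Python 2 (the Python 3 counterpart would read MPI_PYTHON3_SITEARCH instead; the file name is up to the packager):

```
# foo-mpi.pth (name illustrative): extend sys.path only when an MPI module is loaded
import os, sys; p = os.environ.get('MPI_PYTHON2_SITEARCH'); p and (p not in sys.path) and sys.path.append(p)
```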
If the environment module sets compiler flags such as CFLAGS (thus overriding the ones exported by %configure), the RPM macro MUST make them use the Fedora optimization flags %{optflags} once again (as in the example above, in which the openmpi-%{_arch} module sets CFLAGS).
Packaging of MPI software
Software that supports MPI MUST also be packaged in serial mode [i.e. without MPI], if that is supported by upstream (for instance: foo).
If possible, the packager MUST package versions for each MPI compiler in Fedora (e.g. if something can only be built with mpich and mvapich2, then mvapich1 and openmpi packages do not need to be made).
MPI implementation specific files MUST be installed in the directories used by the respective MPI compiler ($MPI_BIN, $MPI_LIB and so on).
The binaries MUST be suffixed with $MPI_SUFFIX (e.g. _openmpi for Open MPI, _mpich for MPICH and _mvapich2 for MVAPICH2). This is for two reasons: the serial version of the program can still be run when an MPI module is loaded, and the user is always aware of which version is being run. This does not need to hurt the use of shell scripts:
```
# Which MPI implementation do we use?
#module load mpi/mvapich2-i386
#module load mpi/openmpi-i386
module load mpi/mpich-i386

# Run preprocessor
foo -preprocess < foo.in

# Run calculation
mpirun -np 4 foo${MPI_SUFFIX}

# Run some processing
mpirun -np 4 bar${MPI_SUFFIX} -process

# Collect results
bar -collect
```
The MPI enabled bits MUST be placed in a subpackage with the suffix denoting the MPI compiler used (for instance: foo-openmpi for Open MPI [the traditional MPI compiler in Fedora] or foo-mpich for MPICH). For directory ownership and to guarantee the pickup of the correct MPI runtime, the MPI subpackages MUST require the correct MPI compiler's runtime package.
Each MPI build of shared libraries SHOULD have a separate -libs subpackage for the libraries (e.g. foo-mpich-libs). As in the case of MPI compilers, library configuration files (in /etc/ld.so.conf.d) MUST NOT be installed.
In case the headers are the same regardless of the compilation method and architecture (e.g. 32-bit serial, 64-bit Open MPI, MPICH), they MUST be split into a separate -headers subpackage (e.g. 'foo-headers'). Fortran modules are architecture specific and as such are placed in the (MPI implementation specific) -devel package (foo-devel for the serial version and foo-openmpi-devel for the Open MPI version).
Each MPI build MUST have a separate -devel subpackage (e.g. foo-mpich-devel) that includes the development libraries and Requires: %{name}-headers if such a package exists. The goal is to be able to install and develop using e.g. 'foo-mpich-devel' without needing to install e.g. openmpi or the serial version of the package.
Files must be shared between packages as much as possible. Compiler independent parts, such as data files in %{_datadir}/%{name} and man pages, MUST be put into a -common subpackage that is required by all of the binary packages (the serial package and all of the MPI packages).
A sample spec file
```
# Define a macro for calling ../configure instead of ./configure
%global dconfigure %(printf %%s '%configure' | sed 's!\./configure!../configure!g')

Name: foo
Requires: %{name}-common = %{version}-%{release}

%package common

%package openmpi
BuildRequires: openmpi-devel
# Require explicitly for dir ownership and to guarantee the pickup of the right runtime
Requires: openmpi
Requires: %{name}-common = %{version}-%{release}

%package mpich
BuildRequires: mpich-devel
# Require explicitly for dir ownership and to guarantee the pickup of the right runtime
Requires: mpich
Requires: %{name}-common = %{version}-%{release}

%build
# Have to do off-root builds to be able to build many versions at once
# To avoid replicated code define a build macro
%define dobuild() \
mkdir $MPI_COMPILER; \
cd $MPI_COMPILER; \
%dconfigure --program-suffix=$MPI_SUFFIX ;\
make %{?_smp_mflags} ; \
cd ..

# Build serial version, dummy arguments
MPI_COMPILER=serial MPI_SUFFIX= %dobuild

# Build parallel versions: set compiler variables to MPI wrappers
export CC=mpicc
export CXX=mpicxx
export FC=mpif90
export F77=mpif77

# Build OpenMPI version
%{_openmpi_load}
%dobuild
%{_openmpi_unload}

# Build mpich version
%{_mpich_load}
%dobuild
%{_mpich_unload}

%install
# Install serial version
make -C serial install DESTDIR=%{buildroot} INSTALL="install -p" CPPROG="cp -p"

# Install OpenMPI version
%{_openmpi_load}
make -C $MPI_COMPILER install DESTDIR=%{buildroot} INSTALL="install -p" CPPROG="cp -p"
%{_openmpi_unload}

# Install MPICH version
%{_mpich_load}
make -C $MPI_COMPILER install DESTDIR=%{buildroot} INSTALL="install -p" CPPROG="cp -p"
%{_mpich_unload}

%files           # All the serial (normal) binaries
%files common    # All files shared between the serial and different MPI versions
%files openmpi   # All openmpi linked files
%files mpich     # All mpich linked files
```