
MPI on ECCO


MPI programs

The cluster supports MPI. By default, OpenMPI is available (module load openmpi). To use Intel MPI instead, load the appropriate module

module load intel/intel-mpi-5.0.3

and compile your code.
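To see what is installed before loading anything, the standard module commands can be used:

# list available MPI-related modules (module avail prints to stderr)
module avail 2>&1 | grep -i mpi
# show what is currently loaded
module list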

Compiling with Intel MPI

A simple example (drawn from the Intel website) might proceed as follows:

# create directory to work in
mkdir intel-mpi
cd intel-mpi/
# copy test code
cp /opt/intel/impi/5.0.3.048/test/test.c .
# compile the code using the Intel compiler (this may require an Intel compiler license)
module load intel/intel-compilers-15.0.3
module load intel/intel-mpi-5.0.3
mpiicc -o myprog test.c
# inspect the file
file myprog 

myprog: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, not stripped
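For reference, the bundled test.c is essentially an MPI hello world. A minimal equivalent (a sketch, not the verbatim Intel file) that produces output in the format shown further below:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, namelen;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
    MPI_Get_processor_name(name, &namelen); /* e.g. compute-0-1.ecco */
    printf("Hello world: rank %d of %d running on %s\n", rank, size, name);
    MPI_Finalize();
    return 0;
}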

# create a hostfile listing the nodes to run the MPI program on (hostnames are listed on the ECCO page)
cat > hostfile <<EOF
compute-0-1
compute-0-2
EOF

Note: At the time of writing, compute-0-1 and compute-0-2 each have 8 cores.
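With OpenMPI's mpirun, the hostfile may also carry an explicit slot count per host, which caps how many ranks each node takes:

compute-0-1 slots=8
compute-0-2 slots=8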

# request an interactive session on a compute node
iqsub
# check which mpirun is on the PATH
which mpirun
/opt/openmpi/bin/mpirun
# run the program, passing the hostfile created above
mpirun -n 16 -machinefile hostfile ./myprog

Hello world: rank 0 of 16 running on compute-0-1.ecco
Hello world: rank 1 of 16 running on compute-0-1.ecco
Hello world: rank 2 of 16 running on compute-0-1.ecco
Hello world: rank 3 of 16 running on compute-0-1.ecco
Hello world: rank 4 of 16 running on compute-0-1.ecco
Hello world: rank 5 of 16 running on compute-0-1.ecco
Hello world: rank 6 of 16 running on compute-0-1.ecco
Hello world: rank 7 of 16 running on compute-0-1.ecco
Hello world: rank 8 of 16 running on compute-0-2.ecco
Hello world: rank 9 of 16 running on compute-0-2.ecco
Hello world: rank 10 of 16 running on compute-0-2.ecco
Hello world: rank 11 of 16 running on compute-0-2.ecco
Hello world: rank 12 of 16 running on compute-0-2.ecco
Hello world: rank 13 of 16 running on compute-0-2.ecco
Hello world: rank 14 of 16 running on compute-0-2.ecco
Hello world: rank 15 of 16 running on compute-0-2.ecco

So, pulling it all together, the following minimal qsub script should work for large-scale MPI runs.
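This is a sketch, assuming ECCO's scheduler is Torque/PBS (as qsub and iqsub suggest); the job name and resource request below are illustrative and may need adjusting:

#!/bin/bash
#PBS -N mpi-test
#PBS -l nodes=2:ppn=8
# start in the directory the job was submitted from
cd $PBS_O_WORKDIR
# load the same MPI stack used to compile
module load intel/intel-mpi-5.0.3
# $PBS_NODEFILE lists the hosts the scheduler assigned to this job
mpirun -n 16 -machinefile $PBS_NODEFILE ./myprog

Saved as, say, job.sh (an example name), this would be submitted with qsub job.sh.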

Compiling with OpenMPI

Redoing the simple example, but with OpenMPI:

# create directory to work in
mkdir open-mpi
cd open-mpi/
# copy the test code - the same file as before
cp /opt/intel/impi/5.0.3.048/test/test.c .
# load the default OpenMPI module (see above)
module load openmpi
# compile the code using the GNU compiler and OpenMPI
# note the difference here: mpicc vs. mpiicc
mpicc -o myprog test.c
# inspect the file
file myprog 

myprog: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, not stripped
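If it is unclear which underlying compiler a wrapper picked up (both MPI stacks install their own wrappers), OpenMPI's mpicc accepts --showme to print the compile command it would run without executing it:

mpicc --showme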

# test run the program locally
mpirun -n 2 ./myprog

Hello world: rank 0 of 2 running on compute-0-1.ecco
Hello world: rank 1 of 2 running on compute-0-1.ecco
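To run across both nodes, pass a hostfile (like the one created in the Intel MPI section) to OpenMPI's mpirun explicitly:

mpirun -n 16 --hostfile hostfile ./myprog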

So, pulling it all together, the following minimal qsub script should work for large-scale MPI runs.
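This is the same Torque/PBS-style sketch as in the Intel MPI section, with OpenMPI loaded instead; adjust names and resources as needed:

#!/bin/bash
#PBS -N mpi-test
#PBS -l nodes=2:ppn=8
# start in the directory the job was submitted from
cd $PBS_O_WORKDIR
# load the default OpenMPI module
module load openmpi
# OpenMPI's mpirun accepts the scheduler's node list as a hostfile
mpirun -n 16 --hostfile $PBS_NODEFILE ./myprog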