How to submit an MPI parallel job

This article explains the submission of an MPI parallel job to LOTUS. It covers:

  • What is an MPI parallel job?
  • MPI implementation and LSF
  • Parallel MPI job submission     

What is an MPI parallel job?

An MPI parallel job runs on more than one core, potentially spread across more than one host, using the Message Passing Interface (MPI) library for communication between all cores. A simple job script, "my_script_name.bsub", might look like this:

#!/bin/bash
# MPI job script for mpi_myname.exe
#BSUB -cwd /home/user/work
#BSUB -q par-multi
#BSUB -n 36
#BSUB -W 00:30
#BSUB -o %J.log
#BSUB -e %J.err

# Load any environment modules (needed for mpi_myname.exe)
module load libfftw/intel/3.2.2_mpi

# Run the MPI executable via the LOTUS mpirun wrapper
mpirun.lotus ./mpi_myname.exe

The -n option specifies the number of processors or cores you wish to run on. The rest of the #BSUB options, and many more besides, are described in the bsub manual page and in the related articles.

mpirun.lotus is a wrapper around the native Platform MPI mpirun command that ensures the use of the special LSF launch mechanism (blaunch) and forces the MPI communications to run over the private MPI network.

To submit the job, do not run the script, but rather use it as the standard input to bsub, like so:

$ bsub -x < my_script_name.bsub

The -x flag is used to group the parallel job onto the smallest number of hosts.
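The submission steps above can be sketched end to end. The directive check below runs anywhere, since #BSUB lines are ordinary shell comments; the final bsub call (commented out) requires a LOTUS login node. The awk check is purely illustrative, not part of the documented workflow.

```shell
# Write a minimal job script using the same directives as the example above,
# then sanity-check the resource request before submitting.
cat > my_script_name.bsub <<'EOF'
#!/bin/bash
#BSUB -q par-multi
#BSUB -n 36
#BSUB -W 00:30
mpirun.lotus ./mpi_myname.exe
EOF

# #BSUB directives are plain comments, so they are easy to inspect locally:
cores=$(awk '$1 == "#BSUB" && $2 == "-n" {print $3}' my_script_name.bsub)
echo "Job requests $cores cores"

# On a LOTUS login node you would then submit with:
# bsub -x < my_script_name.bsub
```

This keeps the resource request in one place (the script itself), so the same file documents and reproduces the submission.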

MPI implementation and LSF

The Platform MPI library is the only supported MPI library on the cluster. It provides at least a 10% speedup over mpich-gm for cluster applications, scales to higher node counts than other MPI libraries and, most importantly, supports the full range of interconnects from a single library. The latter feature allows one compiled binary to run on LOTUS over TCP, over InfiniBand, or (if, for example, an InfiniBand card is down in a node) a mix of both. Platform MPI v8 is fully MPI-2 compliant, and MPI I/O features are fully supported on the LOTUS home file systems and /work/scratch directories, as they use the Panasas parallel file system.

The Platform MPI libraries are installed at /opt/platform_mpi/lib/linux_amd64/. Compile and link against them using the mpif90, mpif77, mpicc and mpiCC wrapper scripts, which are on the default Unix path. The scripts detect which compiler you are using (GNU, PGI or Intel) from the loaded compiler environment and add the relevant compiler library switches. For example,

module load pgi
mpif90

will use the PGI pgf90 compiler, whereas

module load intel-compiler
mpicc

will call Intel's icc compiler.
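To test the toolchain, a minimal MPI program is enough. The source below uses only standard MPI calls; the file and executable names (mpi_myname.c, mpi_myname.exe) are illustrative, chosen to match the job script above. The compile step is shown commented out, as it needs a compiler module loaded on LOTUS.

```shell
# Create a minimal MPI "hello world" source file.
cat > mpi_myname.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);                  /* start the MPI runtime       */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank         */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes   */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF

# On LOTUS, with a compiler module loaded (e.g. module load intel-compiler):
# mpicc -o mpi_myname.exe mpi_myname.c
```

The resulting mpi_myname.exe is then what the job script launches through mpirun.lotus.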

The Platform MPI User Guide is available in PDF format at /opt/platform_mpi/doc/pcmpi.08.01.00.ug.pdf.

Still need help? Contact Us