PDC Summer School: General Instructions for the MPI Labs

Where to run

The exercises will be run on PDC's cluster Tegner:

tegner.pdc.kth.se

How to login

To access PDC's systems you need an account at PDC. Check the instructions for obtaining an account.

Once you have an account, you can follow the instructions on how to connect from various operating systems.

For the Kerberos-based authentication environment, see the Kerberos commands documentation.
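As a sketch, logging in typically involves obtaining a forwardable Kerberos ticket and then connecting with a Kerberos-aware SSH client (the username is a placeholder, and the realm NADA.KTH.SE is assumed from PDC's documentation; check the pages above for the exact commands):

# obtain a forwardable Kerberos ticket
kinit -f <username>@NADA.KTH.SE
# connect to the cluster
ssh <username>@tegner.pdc.kth.se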

More about the environment on Tegner

Software that is not available by default needs to be loaded as a module after login. Use module avail to get a list of available modules. The following modules are of interest for these lab exercises (a short usage example follows the list):

  • Different versions of the GNU compiler suite (gcc/*)
  • Different versions of the Intel compiler suite (i-compilers/*)
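For example, to load a specific GNU compiler version (the version number below is illustrative; check the output of module avail for what is actually installed):

# list all available modules
module avail
# load an illustrative gcc version
module load gcc/7.2.0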

For more information see the software development documentation page.

Home directories are provided through an OpenAFS service. See the AFS data management page for more information.

To use the Tegner compute nodes you have to submit SLURM batch jobs or run SLURM interactive jobs.
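As a minimal sketch, a batch script could look like the following (the allocation name, time limit, and executable name are placeholders):

#!/bin/bash -l
# illustrative SLURM batch script
#SBATCH -A <allocation-name>   # compute time allocation
#SBATCH -N 1                   # number of nodes
#SBATCH -t 0:10:00             # wall-clock time limit

srun -n 24 ./example.x         # launch 24 MPI tasks

Submit the script with sbatch <script-name>.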

Compiling MPI programs on Tegner

By default the Cray compiler is loaded into your environment. To use another compiler you have to swap compiler modules:

module swap PrgEnv-cray PrgEnv-gnu

or

module swap PrgEnv-cray PrgEnv-intel

One should always use the compiler wrappers cc, CC or ftn (for C, C++ and Fortran code, respectively), which automatically link to the MPI libraries and to linear algebra libraries such as BLAS and LAPACK.

Examples:

# Fortran
ftn [flags] source.f90
# C
cc [flags] source.c
# C++
CC [flags] source.cpp
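To check that the wrappers and the MPI runtime work, you can compile a minimal MPI program. The following sketch in C (the file name hello.c is illustrative) prints one line per MPI task:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* initialize the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this task */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of tasks */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut down the MPI runtime */
    return 0;
}

Compile it with cc hello.c -o hello.x and launch it with srun as described below.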

Note: if you are using the Intel programming environment and compiling C code, you might see error messages containing:

error: identifier "_Float128" is undefined

A workaround is to add a compiler flag:

cc -D_Float128=__float128 source.c

Running MPI programs

First it is necessary to book a node for interactive use:

salloc -A <allocation-name> -N 1 -t 1:0:0

You might also need to specify a reservation by adding the flag --reservation=<name-of-reservation>.
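For example (both names are placeholders):

salloc -A <allocation-name> --reservation=<name-of-reservation> -N 1 -t 1:0:0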

Then the srun command is used to launch an MPI application:

srun -n 24 ./example.x

This example starts 24 MPI tasks (there are 24 cores per node on the Tegner thin nodes).

If you do not use srun but instead try to start your program directly on the login node, you will get an error similar to:

Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(408): Initialization failed
MPID_Init(123).......: channel initialization failed
MPID_Init(461).......:  PMI2 init failed: 1

MPI Exercises