diff --git a/README.md b/README.md
index f2c24beb8677c315fa844d6a95c9092499bca815..60fb1367a53aba4857ead3d69db96770546033e7 100644
--- a/README.md
+++ b/README.md
@@ -32,15 +32,15 @@ First it is necessary to book a node for interactive use:
 
 ```
 salloc -A <allocation-name> -N 1 -t 1:0:0
 ```
 
-Then the aprun command is used to launch an MPI application:
+Then the srun command is used to launch an MPI application:
 ```
-aprun -n 32 ./example.x
+srun -n 32 ./example.x
 ```
 
 In this example we will start 32 MPI tasks (there are 32 cores per node on the Beskow nodes).
 
-If you do not use aprun and try to start your program on the login node then you will get an error similar to
+If you do not use srun and try to start your program on the login node then you will get an error similar to
 ```
 Fatal error in MPI_Init: Other MPI error, error stack:
diff --git a/lab3/README.md b/lab3/README.md
index 648e03f71f13cb046152efc77345b996e4a01cc8..682d1060e69d0de95417408c0cfc30f3b7f517ad 100644
--- a/lab3/README.md
+++ b/lab3/README.md
@@ -56,13 +56,15 @@ Use `mpi_wtime` to compute latency and bandwidth with the bandwidth and latency
 For this exercise, it is nice to compare running on the same node e.g.
 
 ```
-aprun -n 2 ./mpi_latency.x
+salloc -N 1 --ntasks-per-node=2 -A <project> -t 00:05:00
+srun -n 2 ./mpi_latency.x
 ```
 
 with running on separate nodes
 
 ```
-aprun -N 1 -n 2 ./mpi_latency.x
+salloc -N 2 --ntasks-per-node=1 -A <project> -t 00:05:00
+srun -n 2 ./mpi_latency.x
 ```
 
 Similarly for the bandwidth.
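The lab3 hunk above directs students to `mpi_wtime` for the latency exercise. As a companion to the patch (not part of it), here is a minimal ping-pong sketch of that measurement; the repeat count, message size, and output format are assumptions and not taken from the repository's actual `mpi_latency.x`:

```c
/* Hypothetical ping-pong latency sketch using MPI_Wtime.
 * Requires exactly two active ranks; extra ranks stay idle. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    char byte = 0;
    const int reps = 1000;  /* assumed repeat count, not from the lab code */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    /* One round trip covers two one-way hops, hence the factor of 2. */
    if (rank == 0)
        printf("one-way latency: %g us\n",
               (t1 - t0) / (2.0 * reps) * 1e6);

    MPI_Finalize();
    return 0;
}
```

Built with `mpicc` and launched with the `srun` lines the patch introduces (e.g. `srun -n 2 ./mpi_latency.x`), this prints a single one-way latency estimate; comparing the same-node and separate-node allocations shown above is exactly the point of the exercise.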