diff --git a/README.md b/README.md
index e793d6c39308bde00a39ffb09c600912c4642104..01ac2454eba9da23963447503333d88a284dcff7 100644
--- a/README.md
+++ b/README.md
@@ -42,7 +42,7 @@ ftn my_prog.f90
 First it is necessary to book a node for interactive use:
 
 ```
-salloc -A <allocation-name> -p main -N 1 -t 1:0:0
+salloc -p shared --nodes=1 --cpus-per-task=32 -t 0:30:00 -A edu22.summer --reservation=<name-of-reservation>
 ```
 
-You might also need to specify a **reservation** by adding the flag `--reservation=<name-of-reservation>`.
+Replace `<name-of-reservation>` with the name of the reservation set up for the course.
@@ -59,4 +59,4 @@ In this example we will start 128 MPI tasks (there are 128 cores per node on all
 ## MPI Exercises
 
 - MPI Lab 1: [Program Structure and Point-to-Point Communication in MPI](lab1/README.md)
-- MPI Lab 2: [MPI I/O and MPI performance](lab2/README.md)
\ No newline at end of file
+- MPI Lab 2: [MPI I/O and MPI performance](lab2/README.md)
diff --git a/lab2/README.md b/lab2/README.md
index 38fe9094125bc40867b9e24fc73d908607c0f65d..81784840ba263d04a46411a66d6ca3c14cf50e82 100644
--- a/lab2/README.md
+++ b/lab2/README.md
@@ -49,14 +49,14 @@ Use ``mpi_wtime`` to compute latency and bandwidth with the bandwidth and latenc
 For this exercise you should compare different setups where (a) both MPI ranks are on the same node, e.g.
 
 ```
-salloc -N 1 --ntasks-per-node=2 -A <project> -t 00:05:00
+salloc -p shared --nodes=1 --cpus-per-task=2 -t 0:30:00 -A edu22.summer --reservation=<name-of-reservation>
 mpirun -n 2 ./mpi_latency.x
 ```
 
-or on separate nodes, e.g.
+or (b) on separate nodes, e.g.
 
 ```
-salloc -N 2 --ntasks-per-node=1 -A <project> -t 00:05:00
+salloc -p main --nodes=2 --ntasks-per-node=1 --cpus-per-task=2 -t 0:30:00 -A edu22.summer
 mpirun -n 2 ./mpi_latency.x
 ```