From 324097136dadb0b98afcbd4458674f5ff0cfd757 Mon Sep 17 00:00:00 2001
From: Dirk Pleiter <pleiter@kth.se>
Date: Thu, 4 Aug 2022 14:53:45 +0200
Subject: [PATCH] Minor updates

---
 README.md      | 4 ++--
 lab2/README.md | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index e793d6c..01ac245 100644
--- a/README.md
+++ b/README.md
@@ -42,7 +42,7 @@ ftn my_prog.f90
 First it is necessary to book a node for interactive use:
 
 ```
-salloc -A <allocation-name> -p main -N 1 -t 1:0:0
+salloc -p shared --nodes=1 --cpus-per-task=32 -t 0:30:00 -A edu22.summer --reservation=<name-of-reservation>
 ```
 
 You might also need to specify a **reservation** by adding the flag `--reservation=<name-of-reservation>`.
@@ -59,4 +59,4 @@ In this example we will start 128 MPI tasks (there are 128 cores per node on all
 ## MPI Exercises
 
 - MPI Lab 1: [Program Structure and Point-to-Point Communication in MPI](lab1/README.md)
-- MPI Lab 2: [MPI I/O and MPI performance](lab2/README.md)
\ No newline at end of file
+- MPI Lab 2: [MPI I/O and MPI performance](lab2/README.md)
diff --git a/lab2/README.md b/lab2/README.md
index 38fe909..8178484 100644
--- a/lab2/README.md
+++ b/lab2/README.md
@@ -49,14 +49,14 @@ Use ``mpi_wtime`` to compute latency and bandwidth with the bandwidth and latenc
 For this exercise you should compare different setups where (a) both MPI ranks are on the same node, e.g.
 
 ```
-salloc -N 1 --ntasks-per-node=2 -A <project> -t 00:05:00
+salloc -p shared --nodes=1 --cpus-per-task=2 -t 0:30:00 -A edu22.summer --reservation=<name-of-reservation>
 mpirun -n 2 ./mpi_latency.x
 ```
 
 or on separate nodes, e.g.
 
 ```
-salloc -N 2 --ntasks-per-node=1 -A <project> -t 00:05:00
+salloc -p main --nodes=2 --ntasks-per-node=1 -t 0:30:00 -A edu22.summer
 mpirun -n 2 ./mpi_latency.x
 ```
 
-- 
GitLab