From 87cc7a723f027f14a0ef21dc1245dc306ad14e87 Mon Sep 17 00:00:00 2001
From: Ilya Zhukov <i.zhukov@fz-juelich.de>
Date: Wed, 22 May 2024 14:59:07 +0200
Subject: [PATCH] Add some notes.

---
 docs/running-jobs.md | 7 +++++++
 docs/using-gpus.md   | 6 ++++++
 2 files changed, 13 insertions(+)

diff --git a/docs/running-jobs.md b/docs/running-jobs.md
index c8e1bf24..40914594 100644
--- a/docs/running-jobs.md
+++ b/docs/running-jobs.md
@@ -294,6 +294,13 @@ module load GCC ParaStationMPI
 srun ./hellompi
 ```
 
+:::warning
+
+Always load the same software stack (e.g. compiler, MPI) that was used to build your application, to ensure compatibility and optimal performance. Mixing versions can cause build or runtime errors, or degrade performance.
+
+:::
+
+
 Remember to specify `gpu:4` `gres` for JUWELS Booster.
 
 Then save the script and submit it for execution with:
diff --git a/docs/using-gpus.md b/docs/using-gpus.md
index efda45db..3c634ca8 100644
--- a/docs/using-gpus.md
+++ b/docs/using-gpus.md
@@ -126,6 +126,12 @@ srun: Job step aborted: Waiting up to 6 seconds for job step to finish.
 `sgoto` takes the job id as the first argument and the node number within the job as the second argument where the counting starts with 0.  
 `nvidia-smi` prints some useful information about available GPUs on a node, like temperature, memory usage, currently running processes and power consumption.
 
+:::note
+
+In the example above, `nvidia-smi` shows 0% utilisation because no processes are currently using the GPUs. Under load, the utilisation metrics reported by `nvidia-smi` would be non-zero.
+
+:::
+
 ## GPU Affinity
 
 On systems with more than one GPU per node, a choice presents itself - which GPU should be visible to which application task(s)?  
-- 
GitLab