```bash
make -j 8
```
would use 8 threads to run the compilation. Give this a try. If everything is set up correctly, you should see a message indicating a successful build.

4- check that the `serghei` binary exists at `serghei/bin`.
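
For instance, one quick way to check is to list the binary, using the `$SERGHEIPATH` variable that the later commands in this tutorial also use:

```bash
# If the build succeeded, this should list the serghei executable
ls $SERGHEIPATH/bin/serghei
```
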
# 5. Running a test case
Let's run a simple test to check if everything is working fine. We will use an analytical dam break case, included with SERGHEI as a unit test.

1. Navigate to where the test case is:

```bash
cd $SERGHEIPATH/unitTests/ShallowWater/Analytical/dambreakX_4_1
```

2. You will see that this directory contains an `input` directory, which holds a number of input files defining the simulation. At this stage, we will not be concerned with how these work; we will focus on that later.

3. To run this case with SERGHEI, run the `serghei` executable directly, as sketched below.

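A minimal sketch of this invocation, assuming the same arguments as the `srun` line shown later in this tutorial (input directory, output directory, number of threads); the `output/` name and the single thread are placeholder choices:

```bash
# serghei <input directory> <output directory> <number of threads>; argument order
# taken from the srun line later in this tutorial, "output/" is a placeholder name
$SERGHEIPATH/bin/serghei input/ output/ 1
```
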
The command above should run a dam break simulation for 10 seconds. If all works, the simulation should run to completion without errors.

If something goes wrong, check the paths, make sure you are in the right directory, and make sure that `serghei` was properly built (from the previous step).

**IMPORTANT**: Bear in mind that this is **not the correct** way to use the HPC system. The command above was executed on the ***login node*** of the HPC system, which is not meant to run simulations. We have only done this to quickly test whether our build was OK, with an extremely small simulation, on a single CPU. To properly run a simulation we will use `slurm` and `sbatch` scripts in the next part of this tutorial.

# 6. Run a case using sbatch

```bash
# ... (#SBATCH directives and the definitions of casePath and OUTDIR come earlier in the script; see below) ...
# Remove output left over from a previous run, then launch SERGHEI through srun
rm -rf $casePath/$OUTDIR
srun $SERGHEIPATH/bin/serghei $casePath/input/ $casePath/$OUTDIR/ $OMP_NUM_THREADS
```

The first block (with the `SBATCH` keyword) informs the system of the HPC resources we want to use. It requires an account name to which the compute time will be billed, a maximum job time, how many tasks we want per node, how many nodes we want, and how many CPUs we wish to use per task. Finally, since the HPC system is divided into **partitions** of nodes, we must specify which one we will use.

For the problem we will run here, we will use (a sketch of the corresponding `#SBATCH` lines is given after this list):

- the `dc-cpu-devel` partition
- all of the CPUs in a single node (e.g., 64 in JURECA-DC)
- only one task, as all of our computational domain will be on the same node.
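
A sketch of what that first block could look like, using standard Slurm options; the account name and the 30-minute time limit are placeholders to adapt, while the partition and CPU count follow the list above:

```bash
#!/bin/bash
# Resource request: adapt the account (and, if needed, the time limit) to your project
#SBATCH --account=<your-compute-project>
#SBATCH --time=00:30:00
#SBATCH --partition=dc-cpu-devel
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=64
```

The single task then owns the whole node, and SERGHEI uses its CPUs through the OpenMP thread count (`$OMP_NUM_THREADS`) passed to it on the `srun` line above.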

Configure the sbatch script with this information and save it in some reasonable place where you can find it (you can call it something like `my_sbatch_script.job`). Remember to also update the `casePath` in the script to where the `input` folder lies. Hint: the `casePath` should not include `input` itself (i.e., it is the parent directory of `input`).

Finally, we can run this script and launch the job:

```bash
sbatch my_sbatch_script.job
```

You can check if your job is in the queue, or already running, with:
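
Slurm's `squeue` command is the usual tool for this; the `-u $USER` filter (optional) limits the listing to your own jobs:

```bash
squeue -u $USER
```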