# Setting up the Monai tsunami experiment

Let's first set up `parameters.input` and `sw.input`, which are mandatory.
### Simulation control `parameters.input`.
The simulation duration should be 20s, with a CFL value of 0.5, a spatial output frequency of 1s, and an observation output frequency of 0.25s.
We will run this test on a single compute node, so ***no domain decomposition*** is necessary in either the x or the y direction. Consequently, `parNx` and `parNy` should be 1.
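To make this concrete, the relevant entries in `parameters.input` might look like the sketch below. This is only an illustration: apart from `parNx` and `parNy`, which are named above, the keyword spellings here are assumptions, so cross-check them against the SERGHEI documentation or one of the sample cases.
```
## sketch of the relevant parameters.input entries -- keyword names other
## than parNx/parNy are illustrative assumptions, not the definitive spelling
simTime     20.0     ## total simulation duration [s]
CFL         0.5      ## CFL number for the adaptive time step
outFreq     1.0      ## spatial output frequency [s]
obsFreq     0.25     ## observation output frequency [s]
parNx       1        ## no domain decomposition in x
parNy       1        ## no domain decomposition in y
```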
Let's try to launch the job. Recall that the command looks like this:
```
$SERGHEIPATH/bin/serghei ./input/ ./output/ NUMBER_OF_THREADS
```
However, we now want to launch this job on the compute nodes. Although this can be done from the command line, it is better to use an `sbatch` script to configure the job on an HPC system.
A minimal `sbatch` script for SERGHEI looks like the one below.
```
#!/bin/bash -x
#SBATCH --job-name="SERGHEI"
#SBATCH --account=##account_name##
#SBATCH --time=00:05:00
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=64
#SBATCH --nodes=1
#SBATCH --partition=dc-cpu-devel
##### until here, we have configured the HPC resources #####
##### now we configure some additional goodies #####
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
## load the modules
source $SERGHEIPATH/machineSetup
## define the path to the simulation
casePath="/the/path/to/the/input/files"
## define some output folder name
OUTDIR=output
## remove any previous output folder (otherwise SERGHEI will report an error if it already exists)
rm -rf $casePath/$OUTDIR
## launch the job in the HPC system
srun $SERGHEIPATH/bin/serghei $casePath/input/ $casePath/$OUTDIR/ $OMP_NUM_THREADS
```
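If the script above is saved as, for example, `run_serghei.sbatch` (the file name is arbitrary), it can be submitted and monitored with the standard Slurm commands:
```
## submit the script to the Slurm scheduler
sbatch run_serghei.sbatch
## check the state of your jobs in the queue
squeue -u $USER
## once the job runs, Slurm writes stdout/stderr to slurm-<jobid>.out
## in the submission directory (the default, since the script sets no --output)
```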