The initial conditions for the experiment are those of a quiescent (i.e., zero velocity) state.
This benchmark typically uses Manning's friction law, with a roughness value of 0.01. Set this up in the input file. Finally, because of the small scale of the experiment, and because in a tsunami we are particularly interested in the advancement of the wave over dry land, we will use a dry depth tolerance of 0.0001 m.
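For reference, in 2D shallow-water models Manning's law usually enters the momentum equations through the friction slope, so the roughness coefficient $n = 0.01$ controls how strongly the bed decelerates the flow (this is the standard textbook form; SERGHEI's exact implementation may differ in detail):

$$
S_f = \frac{n^2\, u \sqrt{u^2 + v^2}}{h^{4/3}},
$$

where $u$ and $v$ are the depth-averaged velocity components and $h$ is the water depth.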
### Setting up an sbatch file
We could try to run the job directly from the command line, with a call like the one sketched below.
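The exact executable name and arguments depend on the SERGHEI build from the earlier tutorials, so treat the following only as a sketch: it assumes the executable lives in `$SERGHEIPATH/bin` and takes the input folder, an output folder and a number of threads as arguments.

```bash
## Sketch only: the executable path and argument order are assumptions,
## not necessarily SERGHEI's actual command line. Run from the Monai case folder.
$SERGHEIPATH/bin/serghei ./input ./output 4
```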
However, this would not send the job to the compute nodes: it would launch the job locally on the login node where we are working. We want to launch the job on the compute nodes instead. Although this can be done through the command line, it is better to use an `sbatch` script to configure the job on an HPC system. If you created an example sbatch script in a previous tutorial, you can copy it and modify it to include the path to this tsunami case; otherwise, build one as follows.
A minimal `sbatch` script for SERGHEI looks like the one below.
```bash
#!/bin/bash -x
#SBATCH --job-name="SERGHEI"
#SBATCH --account=##account_name##
#SBATCH --time=00:05:00
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=##how_many?##
#SBATCH --nodes=1
#SBATCH --partition=##partition_to_use##
##### until here, we have configured the HPC resources #####
##### now we configure some additional goodies #####
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
## load the modules
source $SERGHEIPATH/machineSetup
## define the path to the simulation
casePath="/the/path/to/the/input/files"
## define some output folder name
OUTDIR=output
## remove the output folder (otherwise SERGHEI will prompt an error if it exists)
rm -rf $casePath/$OUTDIR
## launch SERGHEI (the executable path and argument order below are assumptions;
## replace them with the launch line from the previous tutorial if it differs)
srun $SERGHEIPATH/bin/serghei $casePath/input/ $casePath/$OUTDIR/ $SLURM_CPUS_PER_TASK
```
The first block (the lines with the `SBATCH` keyword) informs the system of the HPC resources we want to use. It requires an account name against which the compute time will be billed, a maximum job time, how many tasks we want per node, how many nodes we want, and how many CPUs we wish to use per task. Finally, since the HPC system is divided into **partitions** of nodes, we must specify which one we will use.
For the problem we will run here, we will use the `dc-cpu-devel` partition, use all of the CPUs of a single node, and run only one task, since the whole computational domain will live on a single node.
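As an illustration, the resource block could then look like the sketch below. The account name remains a placeholder for your own project, and 128 CPUs per node is an assumption about the `dc-cpu-devel` hardware, so check your system's documentation for the actual core count.

```bash
#SBATCH --job-name="SERGHEI"
#SBATCH --account=##account_name##   # your compute project
#SBATCH --time=00:05:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=128          # assumption: all CPUs of one dc-cpu-devel node
#SBATCH --partition=dc-cpu-devel
```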
Configure the sbatch script with this information and save it in some reasonable place (e.g., where you have the Monai files). Remember to also update the `casePath` in the script to where the `input` folder lies.
### Running the simulation
Finally, we can run this script and launch the job:
```
sbatch my_sbatch_script
```
If things go well, you should get an output folder and inside it a `log.out` file.
You will also get a Slurm job report (by default a `slurm-<jobid>.out` file) showing everything that happened behind the scenes on the compute nodes and was not printed to your terminal. Inspect the contents of this file: it will also show whether the run succeeded or ended with an error. If you had errors, try troubleshooting them.
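If you want to keep an eye on the job, a few standard Slurm commands are enough; the report file name below assumes Slurm's default naming, and `<jobid>` is the number printed by `sbatch` when you submit.

```bash
squeue -u $USER            # is the job pending, running or finished?
cat slurm-<jobid>.out      # the Slurm job report (default file name)
tail -f output/log.out     # follow SERGHEI's log (adjust the path to your output folder)
```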
### Visualising results
Let's check out the results. Although the output of this exercise is rather small, we will nonetheless visualise it in a lightweight way on the remote system.