The first block (the lines starting with the `#SBATCH` keyword) informs the system of the HPC resources we want to use. It requires an account name against which the compute time will be billed, a maximum job time, how many tasks we want per node, how many nodes we want, and how many CPUs we wish to use per task. Finally, since the HPC system is divided into **partitions** of nodes, we must specify which one we will use.
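As a reference, here is a minimal sketch of such a resource-request block. The `#SBATCH` option names are standard SLURM directives; the values in angle brackets are placeholders, and the numbers are only illustrative. We will fill in the actual values below.

```bash
#!/bin/bash
#SBATCH --account=<account>       # project the compute time is billed to
#SBATCH --time=00:30:00           # maximum (wall-clock) job time
#SBATCH --nodes=1                 # how many nodes we want
#SBATCH --ntasks-per-node=1       # how many tasks per node
#SBATCH --cpus-per-task=64        # how many CPUs per task
#SBATCH --partition=<partition>   # which partition of the system to use
```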
For the problem we will run here, we will use a special configuration set up for our training.
It is important that the sbatch script uses the right [accounts and reservations](https://gitlab.jsc.fz-juelich.de/serghei_tutorials/serghei_tutorial/-/wikis/Accounts-and-reservations) for this training.
Then, we also set up the actual resources we request (see the sketch after this list):
- the account `training2226`
- the `dc-cpu` partition
- use all of the CPUs in a single node (e.g., 64 in JURECA-DC)
- only use one task, as our entire computational domain will be on the same node
- for our training project we have a **reservation**, i.e. nodes that are set aside for us to use during the training and not available to other users; we will use `training2226-cpu`
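Putting these settings together, the header of the sbatch script could look like the following sketch. The wall-clock limit shown here is an assumption; set it to whatever suits your run.

```bash
#!/bin/bash
#SBATCH --account=training2226          # training account
#SBATCH --partition=dc-cpu              # JURECA-DC CPU partition
#SBATCH --reservation=training2226-cpu  # nodes reserved for this training
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1             # a single task on the node
#SBATCH --cpus-per-task=64              # all CPUs of the node for that task
#SBATCH --time=00:30:00                 # assumed limit; adjust to your run
```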
Configure the sbatch script with this information and save it in some reasonable place where you can find it (you can call it something like `my_sbatch_script.job`). Remember to also update the `casePath` in the script to point to where the `input` folder lies. Hint: the `casePath` should not include `input` itself (i.e., it is the parent directory of `input`).
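For example, with a hypothetical case directory (adjust the path to your own setup), the `casePath` line in the script and the submission from the shell would look roughly like this:

```bash
# In my_sbatch_script.job, with a hypothetical layout
#   /p/project/training2226/$USER/my_case/input/
# casePath must be the parent of "input", not "input" itself:
#   casePath=/p/project/training2226/$USER/my_case

# Then, from the shell, submit the job and check the queue:
sbatch my_sbatch_script.job
squeue -u $USER
```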