For the problem we will run here, we will use the `dc-cpu-devel` partition.
Configure the sbatch script with this information and save it somewhere you can find it again. Remember to also update the `casePath` in the script so that it points to the directory containing the `input` folder.
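To make this concrete, here is a minimal sketch of what such an sbatch script could look like. The resource values, the `casePath` location, and the `my_solver` invocation are illustrative assumptions; adapt them to your own case and system.

```bash
#!/bin/bash
# Minimal sbatch script sketch -- job name, resources, casePath and the
# solver command are placeholders; adjust them for your project.
#SBATCH --job-name=my_case
#SBATCH --partition=dc-cpu-devel
#SBATCH --nodes=1
#SBATCH --time=00:30:00
#SBATCH --output=slurm-%j.out

# Directory that contains the `input` folder (assumed location).
casePath=$HOME/tutorials/case1

cd "$casePath"
srun ./my_solver input   # hypothetical solver invocation
```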
Finally, we can run this script and launch the job.
```bash
sbatch my_sbatch_script
```
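When submitted for real, `sbatch` replies with a line like `Submitted batch job <id>`. If you want that job id inside a script (for example, to name follow-up files), you can extract it; the snippet below simulates the reply since no scheduler is assumed available here, and with a real scheduler `jobid=$(sbatch --parsable my_sbatch_script)` does the same in one step.

```bash
# Simulated sbatch reply; a real submission prints this line itself.
submit_msg="Submitted batch job 123456"

# Strip everything up to the last space, leaving just the job id.
jobid="${submit_msg##* }"
echo "job id: ${jobid}"
```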
You can check whether your job is still waiting in the queue or already running with
```bash
squeue -u $USER
```
which uses the `USER` environment variable (effectively your username) to query the scheduler for your active jobs.
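As a sketch of how you might post-process that listing, the snippet below filters a simulated `squeue` table by its state column (`ST`), where `R` means running and `PD` pending; the job line shown is made up for illustration.

```bash
# Simulated output of `squeue -u $USER`; a real run queries the scheduler.
squeue_out='JOBID   PARTITION      NAME    USER  ST  TIME  NODES
123456  dc-cpu-devel   mycase  me    R   0:42  1'

# Count rows whose state column (5th field) is R (running), skipping the header.
running=$(printf '%s\n' "$squeue_out" | awk 'NR>1 && $5=="R" {n++} END {print n+0}')
echo "running jobs: ${running}"
```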
If things go well, you should get an output folder containing a `log.out` file.
You will also get a Slurm job report (by default a `slurm-<jobid>.out` file) that records everything that happened behind the scenes on the compute nodes and was not shown in your terminal. Inspect its contents to see whether the run succeeded or hit an error. If you had errors, try troubleshooting them.
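As a sketch of that inspection step, the snippet below fabricates a small report file and scans it for common failure markers; the file name and contents are made up, and in a real run you would grep the report the scheduler wrote for you.

```bash
# Write a small made-up Slurm report for illustration; a real job produces
# its report automatically in the submission directory.
cat > slurm-example.out <<'EOF'
Loading modules...
Starting solver on 1 node
Run finished successfully
EOF

# Scan the report for common failure markers and summarize.
if grep -iqE 'error|failed|cancelled' slurm-example.out; then
    echo "possible problems found:"
    grep -inE 'error|failed|cancelled' slurm-example.out
else
    echo "no obvious errors in slurm-example.out"
fi
```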
We will use this sbatch script as a base for further tutorials, so remember where you keep it. You can later create copies of it.