Commit 83bd4253 authored by Andreas Herten's avatar Andreas Herten

Add containers.

parent b5120f77
The most relevant modules are
* Compiler: `GCC` (with additional `CUDA`), `NVHPC`
* MPI: `ParaStationMPI`, `OpenMPI` (make sure to have loaded `MPI-settings/CUDA` as well)
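As a sketch, loading a matching CUDA-aware toolchain from the modules above might look like this (exact module versions and availability depend on the current software stage, so treat the sequence as an assumption to be checked with `module avail`):

```shell
# Load a GCC-based toolchain with CUDA support
module load GCC CUDA
# Load a CUDA-aware MPI; MPI-settings/CUDA enables CUDA awareness
module load ParaStationMPI
module load MPI-settings/CUDA
```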
## Containers
JSC supports containers through Apptainer (previously: Singularity) on the HPC systems. The details are covered in a [dedicated article in the systems documentation](https://apps.fz-juelich.de/jsc/hps/jureca/container-runtime.html). Access is subject to accepting a dedicated license agreement on JuDoor (because of special treatment regarding support).
Once access is granted (check your `groups`), Docker containers can be imported and executed similarly to the following example:
```shell
$ apptainer pull tf.sif docker://nvcr.io/nvidia/tensorflow:20.12-tf1-py3
$ srun -n 1 --pty apptainer exec --nv tf.sif python3 myscript.py
```
## Batch System
The JSC systems use a special flavor of Slurm as the workload manager (PSSlurm). Most of the vanilla Slurm commands are available, with some Jülich-specific additions. An overview of Slurm, including example job scripts and interactive commands, is available in the corresponding documentation: [https://apps.fz-juelich.de/jsc/hps/jureca/batchsystem.html](https://apps.fz-juelich.de/jsc/hps/jureca/batchsystem.html)
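A minimal batch script might look like the following sketch; the account, partition, and resource values are placeholders (not taken from the source), and the exact options are system-specific, so consult the linked documentation before use:

```shell
#!/bin/bash
#SBATCH --account=myproject        # placeholder: your compute project
#SBATCH --partition=dc-gpu         # placeholder: a system-specific GPU partition
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4        # one task per GPU, as an example
#SBATCH --gres=gpu:4               # request the node's GPUs
#SBATCH --time=00:30:00

# srun launches the tasks under PSSlurm
srun ./my_gpu_app                  # placeholder executable
```

Submission then follows the usual Slurm workflow, e.g. `sbatch jobscript.sh`, with `squeue` to inspect the queue.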