diff --git a/README.md b/README.md
index 200e95fb91d76ccb2cbbb1a2cf326f0256061fa2..0ddc356708c1a622c5b8ac4c6d9b0967aab148b4 100644
--- a/README.md
+++ b/README.md
@@ -49,6 +49,17 @@ The most relevant modules are
  * Compiler: `GCC` (with additional `CUDA`), `NVHPC`
  * MPI: `ParaStationMPI`, `OpenMPI` (make sure to have loaded `MPI-settings/CUDA` as well)
 
+## Containers
+
+JSC supports containers through Apptainer (previously: Singularity) on the HPC systems. The details are covered in a [dedicated article in the systems documentation](https://apps.fz-juelich.de/jsc/hps/jureca/container-runtime.html). Access requires accepting a dedicated license agreement on JuDoor (because of the special support arrangement).
+
+Once access is granted (check your `groups`), Docker containers can be imported and executed as in the following example:
+
+```
+$ apptainer pull tf.sif docker://nvcr.io/nvidia/tensorflow:20.12-tf1-py3
+$ srun -n 1 --pty apptainer exec --nv tf.sif python3 myscript.py
+```
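+
+For non-interactive runs, the same container can be used from a batch script. The following is a minimal sketch, reusing `tf.sif` and `myscript.py` from the example above; the requested resources are placeholders and should be adjusted to your project and partition:
+
+```
+#!/bin/bash
+#SBATCH --nodes=1
+#SBATCH --ntasks-per-node=1
+#SBATCH --gres=gpu:1
+#SBATCH --time=00:30:00
+
+# Run the Python script inside the pulled container; --nv makes the host GPUs visible.
+srun apptainer exec --nv tf.sif python3 myscript.py
+```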
+
 ## Batch System
 
 The JSC systems use a special flavor of Slurm as the workload manager (PSSlurm). Most of the vanilla Slurm commands are available with some Jülich-specific additions. An overview of Slurm is available in the according documentation which also gives example job scripts and interactive commands: [https://apps.fz-juelich.de/jsc/hps/jureca/batchsystem.html](https://apps.fz-juelich.de/jsc/hps/jureca/batchsystem.html)