diff --git a/Batch-Systems.md b/Batch-Systems.md
index d46059448149d88a21c9218e146ee8874eeeb2ac..d3fb1f32b385136089508f901b76ea74d3a6b8f1 100644
--- a/Batch-Systems.md
+++ b/Batch-Systems.md
@@ -59,13 +59,13 @@ mpirun hostname
 * `bjobs -u all`: Show all currently unfinished jobs
 * `bkill ID`: Kill job with ID
 
-## JURECA
+## JUWELS
 
-Documentation for JURECA's batch system can be found [online](http://www.fz-juelich.de/ias/jsc/EN/Expertise/Supercomputers/JURECA/UserInfo/Batch.html?nn=1803700). There's also [a PDF](http://www.fz-juelich.de/SharedDocs/Downloads/IAS/JSC/EN/JURECA/jureca_batch_system_manual.html?nn=1803776) with more detailed information on JURECA's Slurm installation.
+Documentation for JUWELS' batch system can be found [online](https://apps.fz-juelich.de/jsc/hps/juwels/batchsystem.html).
 
-For the Hackathon, a reservation named **eurohack** exists.
+For the Hackathon, a reservation will be created.
 
-The MPI launcher on JURECA is called `srun`.
+The MPI launcher on JUWELS is called `srun`.
 
 ### Interactive Jobs
 
@@ -74,12 +74,10 @@ The MPI launcher on JURECA is called `srun`.
 When running interactively, resources need to be allocated first. `salloc` is the program handling this.
 
 ```
-salloc --partition=gpus --reservation=eurohack --gres=mem128,gpu:4 --time=0:40:00
+salloc --partition=gpus --gres=gpu:4 --time=0:40:00
 ```
 
-Here, a node with 128 GB RAM and 4 GPUs (2 full K80 devices) is allocated on the `gpus` partition using our reservation *eurohack* for 40 minutes. All options are mandatory, expect for `--time` which defaults to 60 minutes.
-
-For visualization purposes the `vis` partition can also be used. Here more memory is available (`mem512` or `mem1024`), but only at maximum 2 GPUs (K40). Note that our reservation is not valid for this partition.
+Here, a node with 4 GPUs (4 V100 devices) is allocated on the `gpus` partition for 40 minutes. All options are mandatory, except for `--time`, which defaults to 60 minutes.
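+
+Once the allocation has been granted, programs can be launched onto the allocated node with `srun` from the resulting shell. A minimal sketch (`./gpu-prog` is a placeholder for your own binary; four tasks are only an example):
+
+```
+srun --ntasks=4 ./gpu-prog   # run 4 MPI ranks on the allocated GPU node
+```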
 
 Further useful options:
 
@@ -126,21 +124,3 @@ srun ./gpu-prog                 # Singe program uses MPI, launch with srun
 * `squeue`: List all unfinished jobs
 * `squeue -u ME`: List unfinished jobs of user ME
 * `scancel ID`: Cancel a job with ID
-
-## Piz Daint
-
-### Batch Jobs
-
-```
-#!/bin/bash -l
-#SBATCH --nodes=1
-#SBATCH --ntasks-per-node=1
-#SBATCH --ntasks-per-core=1
-#SBATCH --cpus-per-task=1
-#SBATCH --constraint=gpu
-#SBATCH --time=00:30:00
-#export CRAY_CUDA_MPS=1
-export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
-module load daint-gpu
-srun -n $SLURM_NTASKS --ntasks-per-node=$SLURM_NTASKS_PER_NODE -c $SLURM_CPUS_PER_TASK ./cube_cray -p parameter.inp
-```
diff --git a/JURECA.md b/JURECA.md
deleted file mode 100644
index f20b39e24e7b58e9c8363c79d1028f77c821f2f2..0000000000000000000000000000000000000000
--- a/JURECA.md
+++ /dev/null
@@ -1,64 +0,0 @@
-# JURECA
-
-JURECA is one of Jülich's Top500 supercomputers. The system comprises 1872 compute nodes of which 75 are equipped with GPUs. Per node, two Intel Xeon Haswell CPUs are available, each with 12 cores and two threads per core (48 threads in total); each GPU-node has 2 Tesla K80 GPUs which appear as 4 devices on the node.
-
-JURECA has a rich documentation available [online on JSC's webpages](http://www.fz-juelich.de/ias/jsc/EN/Expertise/Supercomputers/JURECA/JURECA_node.html).
-
-Please note the warning in the login message concerning multi-GPU jobs!
-
-## Module System
-
-JURECA offers software through a module system. It is organized hierarchically, with the outermost level determined by the chosen compiler. Some software might only be available by loading a certain compiler first. A typical next hierarchical level is the MPI implementation.
-
-`module avail` will show the available compiler entry points, of which `PGI/16.9-GCC-5.4.0`, `GCC/5.4.0` are of special interest for the Hackathon. CUDA can be loaded by `module load CUDA/8.0.44`, `module unload CUDA/8.0.44` will unload it. `module list` lists and `module purge` removes all loaded modules; `module --force purge` will remove also sticky modules. Most of the times, the version numbers can be omitted.
-
-To search through all available modules for `NAME`, use `module spider NAME`. If `NAME` matches an exact module, like `module spider CUDA/8.0.44`, detailed information about the module and how to load it is displayed. `module key NAME` searches for `NAME` in all module titles or descriptions.
-
-Additional compiler versions, in part specially installed for the Hackathon, are available through the *development stage*. The following commands need to be called to enter the stage and make the modules available:
-
-```
-module use /usr/local/software/jureca/OtherStages
-module load Stages/Devel
-```
-
-For the Hackathon of special interest are:
-
-* CUDA module: `module load CUDA/8.0.44`
-    - *Note:* `nvcc_pgc++` is available which calls `nvcc` with the PGI C++ compiler (by `-ccbin=pgc++`)
-* GCC modules:
-    - `module load GCC/5.4.0`
-    - Other versions through `Stages/Devel`, but might not be working through the tool chains (`GCC/4.9.2`, `GCC/4.9.3`, `GCC/5.1.0`, `GCC/5.2.0`, `GCC/5.3.0`, `GCC/5.4.0`)
-* PGI modules:
-    - `module load PGI/16.9-GCC-5.4.0`
-    - `module load PGI/16.10-GCC-5.4.0` (via `Stages/Devel`)
-    - `module load PGI/17.1-GCC-5.4.0` (via `Stages/Devel`)
-* MPI modules:
-    - `module load MVAPICH2`
-        + *Note:* This should load the correct version for a given compiler automatically (`MVAPICH2/2.2-GDR`)
-* Score-P modules:
-    - `module load Score-P` for `GCC/5.4.0` (via `Stages/Devel`)
-        + `module use /usr/local/software/jureca/OtherStages; module load Stages/Devel `
-        + `module load GCC/5.4.0 MVAPICH2/2.2-GDR Score-P/3.0-p1`
-    - `module load Score-P` for `PGI/16.10-GCC-5.4.0` (via `Stages/Devel`)
-        + `module use /usr/local/software/jureca/OtherStages; module load Stages/Devel `
-        + `module load PGI/16.10-GCC-5.4.0 MVAPICH2/2.2-GDR Score-P/3.0-p1`
-        + *Note: With full CUDA support!*
-    - `module load Score-P` for `PGI/17.1-GCC-5.4.0` (via `Stages/Devel`)
-        + `module use /usr/local/software/jureca/OtherStages; module load Stages/Devel `
-        + `module load PGI/17.1-GCC-5.4.0 MVAPICH2/2.2-GDR Score-P/3.0-p1`
-        + *Note: With CUDA support, although `nvcc` does not yet support `pgc++` from PGI 17.1 as a host compiler.*
-* Scalasca module:
-    - `module load Scalasca/2.3.1` for `PGI/16.10-GCC-5.4.0` (via `Stages/Devel`)
-    - `module load Scalasca/2.3.1` for `PGI/17.1-GCC-5.4.0` (via `Stages/Devel`)
-* Vampir module:
-    - `module load Vampir`
-
-## Batch System
-
-JURECA makes the GPU-equipped compute nodes available through the Slurm batch system. See the `Batch-Systems.md` file for a description.
-
-## File System
-
-JURECA and JURON both share a file system (called *GPFS*). If you intend to work on both machines simultaneous, you might want to create different build directories as the x86 and POWER architectures are not binary-compatible.
-
-Both machines offer `$HOME` as the main place to store any kind of data. An additional scratch file system is available under the environment variable `$WORK`. Albeit being connected by a faster link, data there will be cleaned after 90 days.
diff --git a/JUWELS.md b/JUWELS.md
new file mode 100644
index 0000000000000000000000000000000000000000..88d2e1777a74a9037b5a60d0b1d83db767d6a4bd
--- /dev/null
+++ b/JUWELS.md
@@ -0,0 +1,58 @@
+# JUWELS
+
+JUWELS is one of Jülich's [Top500 supercomputers](https://www.top500.org/system/179424). The system comprises about 2500 compute nodes, of which 48 are equipped with GPUs. Each node has two Intel Skylake CPUs; each GPU node additionally has 4 NVIDIA Tesla V100 GPUs (16 GB RAM each).
+
+The documentation of JUWELS is [available online](http://www.fz-juelich.de/ias/jsc/EN/Expertise/Supercomputers/JUWELS/JUWELS_node.html); there is also a [Quick Start guide](https://apps.fz-juelich.de/jsc/hps/juwels/quickintro.html).
+
+## Module System
+
+JUWELS offers software through a module system. It is organized hierarchically, with the outermost level determined by the chosen compiler. Some software might only be available by loading a certain compiler first. A typical next hierarchical level is the MPI implementation.
+
+`module avail` will show the available compiler entry points, of which `PGI/18.7-GCC-7.3.0` is of special interest for the Hackathon. CUDA can be loaded by `module load CUDA/9.2.88`; `module unload CUDA/9.2.88` will unload it. `module list` lists and `module purge` removes all loaded modules. Most of the time, the version numbers can be omitted.
+
+To search through all available modules for `NAME`, use `module spider NAME`. If `NAME` matches an exact module, like `module spider CUDA/9.2.88`, detailed information about the module and how to load it is displayed. `module key NAME` searches for `NAME` in all module titles or descriptions.
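+
+For illustration, a few typical lookups (the module names are examples taken from this page; the output on the system may differ):
+
+```
+module spider CUDA            # list all available CUDA versions
+module spider CUDA/9.2.88     # show details and how to load this exact version
+module key compiler           # search module titles and descriptions for "compiler"
+```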
+
+Of special interest for the Hackathon are the following modules (see also the example session after the list). Older versions are available in other stages, which can be enabled by calling:
+
+```
+module use /usr/local/software/jureca/OtherStages
+module load Stages/Devel    # optional; or another stage, e.g. Stages/2018b
+```
+
+* CUDA module: `module load CUDA/9.2.88`
+    - *Note:* `nvcc_pgc++` is available, which calls `nvcc` with the PGI C++ compiler as host compiler (via `-ccbin=pgc++`)
+* GCC module:
+    - `module load GCC/7.3.0`
+* PGI modules:
+    - `module load PGI/18.7-GCC-7.3.0`
+    - Others via `Stages/Devel`
+* MPI modules:
+    - `module load MVAPICH2`
+        + *Note:* This should load the correct version for a given compiler automatically (GCC/CUDA: `MVAPICH2/2.3-GDR`, PGI: `MVAPICH2/2.3rc1-GDR`)
+* Score-P modules:
+    - `module load Score-P`, currently only for `GCC/8.2.0`, which does not work with CUDA (TBD)
+* Scalasca module:
+    - `module load Scalasca`, currently only for `GCC/8.2.0`, which does not work with CUDA (TBD)
+* Vampir module:
+    - `module load Vampir`, only in `Stages/2018b` (TBD)
+
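+A sketch of a typical module session for the Hackathon (the module names are taken from the list above; the versions actually loaded may differ on the system):
+
+```
+module load PGI/18.7-GCC-7.3.0   # PGI compiler with GCC 7.3.0 as base
+module load MVAPICH2             # should pick the matching CUDA-aware MPI automatically
+module load CUDA/9.2.88          # CUDA toolkit
+module list                      # verify which modules are loaded
+```
+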
+## Batch System
+
+JUWELS makes the GPU-equipped compute nodes available through the Slurm batch system. See the `Batch-Systems.md` file for a description.
+
+## File System
+
+All Jülich systems share a file system (called *GPFS*), but you have a different `$HOME` directory on each machine. In addition, two more storage spaces are available:
+
+* `$HOME`: Only 5 GB available; intended for your most important files
+* `$PROJECT`: Plenty of space for all project members to share
+* `$SCRATCH`: Plenty of temporary space!
+
+For these environment variables to point to the correct locations, the project environment needs to be activated with:
+
+```bash
+jutil env activate -p training1908 -A training1908
+```
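+
+Afterwards the variables should point to your project's directories. A quick check (just an illustration; the actual paths will differ):
+
+```bash
+echo $PROJECT $SCRATCH   # print the project and scratch directories
+cd $SCRATCH              # a good place for large, temporary data
+```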
+
+See also [the online description](http://www.fz-juelich.de/ias/jsc/EN/Expertise/Supercomputers/NewUsageModel/UserAndProjectSpace.html?nn=2363700).
diff --git a/Login.md b/Login.md
index 4237418fab73574668919aedfe015ff38384e746..84b62079de26e20f25586debebdd1620ed2db806 100644
--- a/Login.md
+++ b/Login.md
@@ -1,56 +1,43 @@
-# Logging In
+# Accounts
 
-Access to the supercomputers is granted via SSH.
+## Account Creation
 
-## Accounts
+User management for the supercomputers in Jülich is done centrally via the JuDOOR portal. Hackathon attendees need to sign up for a JuDOOR account and then apply to be added to the Hackathon project `training1908`. This link will let you join the project:
 
-For JURON and JURECA, please use either the temporary `train0XX` accounts given to you or your permanent accounts.
+[https://dspserv.zam.kfa-juelich.de/judoor/projects/join/TRAINING1908](https://dspserv.zam.kfa-juelich.de/judoor/projects/join/TRAINING1908)
 
-### Procedure for `train0XX` Accounts
+Once you are in the project, you need to agree to the usage agreements of `JUWELS` and `JUWELS GPU`.
 
-In order to receive login details associated to a temporary `train0XX` account, the following procedure needs to be performed:
+After that, you can upload your SSH public key via the »Manage SSH-keys« link. *(New to SSH? See for example [this help at GitHub](https://help.github.com/en/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent).)*
 
-1. Fill out usage agreements for JURECA and/or JURON
-2. Swap signed agreements against passwords for user ID and password for SSH key
-3. Download private part of SSH key from given URL
-4. *Running Windows and PuTTy? Convert your SSH key with PuTTYgen! [Small external tutorial](https://devops.profitbricks.com/tutorials/use-ssh-keys-with-putty-on-windows/#use-existing-public-and-private-keys).*
+## Login
 
-You might be prompted to change permission of the just-downloaded SSH key. Call `chmod 600 id_train0XX` then.
+Please log in to JUWELS via SSH:
 
-### Already Existing Accounts
-
-You are perfectly fine to use your already existing account on JURON and/or JURECA. Please make sure the JURECA account is known to the organizers, since the reservation on the batch system is associated to it.
-
-## Using SSH
-
-Please connect to the two systems with SSH. Using a `train0XX` account, it looks like this:
-
-* `ssh -i id_train0XX train0XX@juron.fz-juelich.de`
-* `ssh -i id_train0XX train0XX@jureca.fz-juelich.de`
-
-To forward the X server, use `-X` or `-Y` as an additional flag. In case of launching a GUI application on the compute backends it might be necessary to forward your SSH authentication agent (`-o ForwardAgent=yes`).
+```bash
+ssh name1@juwels.fz-juelich.de
+```
 
-If you are new to SSH, you might want to have a look at some tutorials ([example](https://www.digitalocean.com/community/tutorials/ssh-essentials-working-with-ssh-servers-clients-and-keys)).
+In case you are using PuTTY on Windows, see for example [this external tutorial](https://devops.profitbricks.com/tutorials/use-ssh-keys-with-putty-on-windows/#use-existing-public-and-private-keys).
 
-Use `ssh-add id_train0XX` to stop entering the passphrase of the given key.
+## Environment
 
-### Creating Alias
+One of the first steps after login should be to activate the environment for the GPU Hackathon using `jutil`:
 
-It's handy to create an alias for your connection to JURECA or JURON, especially if you are using the custom private SSH key of the `train0XX` accounts.
+```bash
+jutil env activate -p training1908 -A training1908
+```
 
-Add the following to your `~/.ssh/config` (create the file if it does not exist):
+### Tips & Trouble Shooting
 
+* [SSH tutorial](https://www.digitalocean.com/community/tutorials/ssh-essentials-working-with-ssh-servers-clients-and-keys)
+* Use `ssh -Y […]` or `ssh -X […]` to forward X windows to your machine
+* It's a good idea to add your SSH key to the SSH agent: call `ssh-add` and enter the key's passphrase; you won't be asked for it again during subsequent logins
+* Even easier: define an SSH alias in your SSH config (`~/.ssh/config`). Add this, for example:
 ```
-Host jureca
-    HostName jureca.fz-juelich.de
-    User train055
-    IdentityFile ~/Downloads/id_train055
+Host juwels
+    HostName juwels.fz-juelich.de
+    User name1
     ForwardAgent Yes
 ```
-
-(Adapt lines 3 and 4 to match your configuration.)
-
-You can then use the alias `jureca` now in any statement with `ssh`, `scp`, or `rsync`, like
-    * `ssh jureca ls .`
-    * `scp jureca:~/results.csv ~/Downloads/`
-    * `rsync --archive --verbose jureca:~/hackathon/ ~/hackathon`
+Now the alias `juwels` even works with `scp` and `rsync`.
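+
+For example (the file and directory names are only placeholders):
+
+```
+ssh juwels ls .
+scp juwels:~/results.csv ~/Downloads/
+rsync --archive --verbose juwels:~/hackathon/ ~/hackathon
+```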
diff --git a/PizDaint.md b/PizDaint.md
deleted file mode 100644
index de3e7500a1d755247428e434359cea1d4552248e..0000000000000000000000000000000000000000
--- a/PizDaint.md
+++ /dev/null
@@ -1,17 +0,0 @@
-# Piz Daint (CSCS)
-
-Piz Daint, located at the Swiss National Supercomputing Centre (CSCS), is currently Europe's fastest supercomputer. It has recently been upgraded to
-Cray XC50 with nodes comprising an Intel Broadwell processors and an PCI-Express-connected NVIDIA P100 GPU.
-
-## Accounts
-
-Since Piz Daint is not operated by Jülich Supercomputing Centre, only a limited number of accounts is available. Please talk to Dirk Pleiter if you want to have access to the machine.
-
-## Connecting to Piz Daint
-
-* Use SSH with the provided username and password to login to the gateway computer at `ela.cscs.ch`
-* From there, connect to `daint.cscs.ch`
-
-## Further information
-
-* [Getting started](http://user.cscs.ch/getting_started/running_jobs/piz_daint)
diff --git a/README.md b/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..9875aa1dff479402cc92f3afb0dfdcd1b108e2a2
--- /dev/null
+++ b/README.md
@@ -0,0 +1,13 @@
+# Helmholtz GPU Hackathon 2019
+
+This repository holds the documentation for the GPU Hackathon 2019 at Jülich Supercomputing Centre (Forschungszentrum Jülich).
+
+Currently, the documentation is still being compiled. If you find errors or room for improvement, please file an issue!
+
+Available documents:
+
+* [Account Creation and Login](Accounts.md)
+* [JUWELS Introduction](JUWELS.md)
+* [JURON Introduction](JURON.md)
+* [Overview of the Batch Systems](Batch-Systems.md)
+* [More Information and Useful Links](More.md)