Create an environmental variable for the path to the local working copy of the repository:

`export SERGHEIPATH=$(pwd)`

This command exports the current path (which is returned by the `pwd` command) into the `SERGHEIPATH` variable. Bear in mind that this setting does not persist once you log out or close the terminal, so in that case you will need to set it again.

You can check if this variable is set by running
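the `echo` command, which simply prints the variable's value (an empty output means it is not set):

```bash
echo $SERGHEIPATH
```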
**Optional but convenient**

A way of setting this variable persistently is to define it in your local `.bashrc` file. Edit the `.bashrc` file in your home folder

`vim ~/.bashrc`

and add a new line which defines the path to `serghei` (which you can copy from the `pwd` output).
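As a sketch, the added line would look something like this (the path below is only a placeholder; use the actual output of `pwd` in your `serghei` folder):

```bash
# in ~/.bashrc -- replace the placeholder path with your own
export SERGHEIPATH=/path/to/your/serghei
```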
SERGHEI has a few dependencies (software on which it relies):

1. Software to compile the code. We rely on the [GCC](https://gcc.gnu.org/) compiler for this.
2. Software to manage communications and parallelism. For this we require some flavour of the Message Passing Interface [MPI](https://en.wikipedia.org/wiki/Message_Passing_Interface). We can use [OpenMPI](https://www.open-mpi.org/), but on our JSC systems we prefer [Parastation MPI](https://github.com/ParaStation/psmpi/branches).
3. Software to manage parallel output to disk. We rely on [Parallel NetCDF](https://parallel-netcdf.github.io/).
4. In order to program, compile and run SERGHEI on different hardware architectures (namely CPUs and GPUs), we rely on the performance-portability library [Kokkos](https://github.com/kokkos/kokkos.git).

Typically, on your own local computer you would have to download and install all of this software yourself. On HPC systems, (1) you don't have administrator rights to do so, (2) it can be complicated to build complex layers of software dependencies, and (3) commonly used software is already installed and ready to use on the system.

The software is usually not visible (in fact, it is not loaded into the system) by default. This allows you to select only the minimal set of software that you actually require, which minimises conflicts and maximises efficiency. To make the software available to you, HPC systems provide **software modules**. You can read more on JSC's module system in [JSC's user documentation](https://apps.fz-juelich.de/jsc/hps/jureca/software-modules.html).

To load the GCC module try
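something along the lines of the following (the exact module name and version depend on the software stage installed on the system, so treat this as a sketch; `module spider gcc` lists what is actually available):

```bash
module load GCC
```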
Try to do the same for the Parastation MPI and the Parallel NetCDF modules. Use `module spider` to find the modules, and `module load` to load them.

Like the environmental variable before, the **environment** you have prepared by loading these modules will not survive when you log out or close your terminal, in which case you will have to repeat this procedure.

It is therefore practical to write a script that will make this easier. This of course already exists for SERGHEI, and you can find it in the `serghei/machines` folder. Here you will find a number of scripts which load the modules required by SERGHEI for several machines.

Since we are using JURECA-DC, we should use the script for that machine. To load the environment defined in this script run
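it, typically by sourcing it so that the modules it loads remain active in your current shell. A sketch of what this looks like (the filename below is hypothetical; list the `machines` folder to find the actual JURECA-DC script):

```bash
ls $SERGHEIPATH/machines/                  # see which machine scripts are available
source $SERGHEIPATH/machines/jureca-dc.sh  # hypothetical filename: use the JURECA-DC script listed above
```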
If all goes well, we now should have the environment ready. You can check which modules are currently loaded
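with the standard `module list` command:

```bash
module list   # shows all currently loaded modules
```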
### Getting Kokkos
Kokkos is available on GitHub at [https://github.com/kokkos/kokkos.git](https://github.com/kokkos/kokkos.git). To clone the Kokkos repository, follow an analogous procedure to what was done to clone SERGHEI into your home directory.
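A minimal sketch of that procedure (cloning into your home directory, which is where the build step below expects to find Kokkos):

```bash
cd ~    # go to your home directory
git clone https://github.com/kokkos/kokkos.git
```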
Usually dependencies need to be built on the system (so that they are available locally as a binary, or through a module). For SERGHEI, Kokkos does not need to be built beforehand: it will be built when we build SERGHEI, provided the path to Kokkos is properly set in the next step.

# 4. Compiling and building

1. Navigate to `serghei/src/`. An easy way to do this, if your environment is correctly set, is

```bash
cd $SERGHEIPATH/src
```
2. Open the `Makefile`. This is a script which controls how SERGHEI is compiled. Note that there are some definitions early on; in particular, `KOKKOS_PATH` and `KOKKOS_SRC_PATH` are defined as a function of `HOME`, i.e. the `Makefile` assumes that `KOKKOS_PATH` lives inside your `HOME` path. `HOME` is an environmental variable which contains the path to your home directory. Exit the file and check what your `HOME` environmental variable contains.
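One quick way to inspect it (using the standard `echo` command):

```bash
echo $HOME
```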
3. Now we will use `make` to compile and build. Running `make` attempts to read the `Makefile`, which you can see exists in this directory. If you try to run `make` in a different directory which does not contain a `Makefile`, you will get an error. `make` supports parallel threads, so that the compilation is faster. For example

```bash
make -j 8
```

would use 8 threads to run the compilation. Give this a try. If everything is correctly set, you should see a message indicating a successful build.

The first part of the command above (`$SERGHEIPATH/bin/serghei`) invokes the `serghei` executable.

Finally, the `1` at the end is the number of threads we are using: in this case 1 thread, meaning we only use 1 of the CPU cores for the computation.

The command above should run a dam break simulation for 10 seconds. If all works, then we know we have built SERGHEI properly and are ready to work with it on fancier things.

If something goes wrong, check the paths, make sure you are in the right directory, and make sure that `serghei` was properly built (from the previous step).
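A couple of quick sanity checks, using only paths already defined in this tutorial:

```bash
echo $SERGHEIPATH            # should print the path to your serghei working copy
ls $SERGHEIPATH/bin/serghei  # the executable should exist after a successful build
```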
**IMPORTANT DISCLAIMER**

Bear in mind that this is **not the correct** way to use the HPC system. By running the command above we ran a command on the ***login node*** of the HPC system, which is not meant to run simulations. We have only done this to quickly test whether our build was OK, with an extremely small simulation, on a single CPU. To properly run a simulation we will use `slurm` and `sbatch` scripts in the next tutorial.