    ---
    sidebar_position: 4
    ---

    # Software Modules

    HPC centres usually make an effort to provide software that is commonly used for scientific purposes: compilers, parallel programming libraries like MPI, numerical libraries, and even complete simulation programs. These packages form a hierarchy of dependencies: simulation programs use numerical and parallel programming libraries, and everything must be compiled with a specific compiler. Towards the bottom of this hierarchy, packages tend to be interchangeable (there are several C and Fortran compilers, and several libraries implement the MPI standard), and some of the higher-level packages perform better when compiled with a particular compiler. It therefore makes sense to offer a range of packages that implement the low-level functions and to build a software landscape on top of each combination of those low-level packages. The two lowest levels in this hierarchy, the compiler and the MPI library, together form a "toolchain".

    To keep the complexity of accessing these different collections of software in check, JSC uses a combination of EasyBuild and Lmod to build software and make it available as software modules. During a login session, modules can be loaded and unloaded with the module command to use the software they provide. When you log in, a set of default modules is loaded for you, e.g. on JUWELS:

    $ module list
    
    Currently Loaded Modules:
      1) Stages/2025     (S)   3) zlib/.1.3.1    (H)   5) make/4.4.1
      2) GCCcore/.13.3.0 (H)   4) binutils/.2.42 (H)   6) StdEnv/2025
    
      Where:
       S:  Module is Sticky, requires --force to unload or purge
       H:             Hidden Module
    

    To see what other modules can currently be loaded, type:

    $ module avail
    
    ----------------------------------- System side compilers -----------------------------------
       AOCC/4.2.0    Clang/18.1.8
    
    --------------------------------------- Core packages ---------------------------------------
       ADIOS2/2.10.2                              VTKm/20250106-nompi               (g)
       ANARI-SDK/0.12.1-nompi                     VTune/2024.3.0
       ASE/3.23.0                                 Vampir/10.5.0
    
       [...]
    
       UnZip/6.0                                  xpra/6.2.3                        (g,D)
       VTK/9.4.1-nompi                            xxHash/0.8.3
       VTK/9.4.1                          (D)     zsh/5.9
    
    ----------------------------------------- Compilers -----------------------------------------
       GCC/13.3.0                    Intel/2024.2.0     (D)      NVHPC/25.1-CUDA-12 (g)
       Intel/2024.2.0-CUDA-12 (g)    NVHPC/24.9-CUDA-12 (g,D)    NVHPC/25.3-CUDA-12 (g)
    
    ------------------------------------- Production Stages -------------------------------------
       Stages/2023 (S)    Stages/2024 (S)    Stages/2025 (S,L,D)
    
    ----------------------------- User-based install configuration ------------------------------
       UserInstallations/easybuild
    
    ------------------------------- System install configuration --------------------------------
       Developers/InstallSoftware
    
      Where:
       S:        Module is Sticky, requires --force to unload or purge
       g:        Built with GPU support
       L:        Module is loaded
       Aliases:  Aliases exist: foo/1.2.3 (1.2) means that "module load foo/1.2" will load foo/1.2.3
       D:        Default Module
    
    If the avail list is too long consider trying:
    
    "module --default avail" or "ml -d av" to just list the default modules.
    "module overview" or "ml ov" to display the number of modules for each name.
    
    Use "module spider" to find all possible modules and extensions.
    Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".
    

    The available modules are grouped into categories:

    • Core packages, which are independent of the choice of toolchain
    • Compilers, which are the first ingredient of a toolchain
    • Architectures, which can be used to load software for different processor architectures. This category does not exist on all systems.

    Go ahead and load a compiler:

    $ module load GCC

    If you now run module avail again, you will notice two additional software categories:

    $ module avail
    
    ---------------------- System MPI runtimes available for GNU compilers ----------------------
    [...]
    
    ------------------------ System packages compiled with GNU compilers ------------------------
    [...]

    These contain modules that depend on (or were built with) the GCC module that you just loaded. Loading one of the available MPI modules will complete your choice of a toolchain and make more software available:

    $ module load OpenMPI
    $ module avail
    
    ------------------------------------- OpenMPI settings --------------------------------------
       MPI-settings/CUDA-UCC    MPI-settings/CUDA (D)    MPI-settings/UCX-UCC    MPI-settings/UCX (L)
    
    ------------------ System packages compiled with OpenMPI and GCC compilers ------------------
    [...]

    If you are looking for a particular piece of software that you know the name of, rather than rummaging through all the toolchains, you can use the module spider subcommand, as the output of module avail suggests:

    $ module spider GROMACS
    
    ------------------------------------------------------------------------------------------------------
      GROMACS:
    ------------------------------------------------------------------------------------------------------
        Description:
          GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian
          equations of motion for systems with hundreds to millions of particles. It is primarily
          designed for biochemical molecules like proteins and lipids that have a lot of complicated
          bonded interactions, but since GROMACS is extremely fast at calculating the non-bonded
          interactions (that usually dominate simulations) many groups are also using it for research on
          non-biological systems, e.g. polymers.
    
         Versions:
            GROMACS/2022.4-plumed
            GROMACS/2022.4
            GROMACS/2023.2
            GROMACS/2024.3-PLUMED-2.9.3
            GROMACS/2024.3
    
    ------------------------------------------------------------------------------------------------------
      For detailed information about a specific "GROMACS" package (including how to load the modules) use the module's full name.
      Note that names that have a trailing (E) are extensions provided by other modules.
      For example:
    
         $ module spider GROMACS/2024.3
    ------------------------------------------------------------------------------------------------------
    

    Loading the GROMACS module while OpenMPI is loaded fails:

    $ module load GROMACS
    Lmod has detected the following error:  These module(s) or extension(s) exist but cannot be
    loaded as requested: "GROMACS"
       Try: "module spider GROMACS" to see how to load the module(s).

    module spider with a specific module version provides details on how the module can be loaded:

    $ module spider GROMACS/2024.3
    
    ------------------------------------------------------------------------------------------------------
      GROMACS: GROMACS/2024.3
    ------------------------------------------------------------------------------------------------------
        Description:
          GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian
          equations of motion for systems with hundreds to millions of particles. It is primarily
          designed for biochemical molecules like proteins and lipids that have a lot of complicated
          bonded interactions, but since GROMACS is extremely fast at calculating the non-bonded
          interactions (that usually dominate simulations) many groups are also using it for research on
          non-biological systems, e.g. polymers.
    
        Properties:
          Built with GPU support
    
        You will need to load all module(s) on any one of the lines below before the "GROMACS/2024.3" module is available to load.
    
          Stages/2025  GCC/13.3.0  ParaStationMPI/5.10.0-1
          Stages/2025  GCC/13.3.0  ParaStationMPI/5.10.0-1-mt
          Stages/2025  GCC/13.3.0  ParaStationMPI/5.11.0-1
          Stages/2025  GCC/13.3.0  ParaStationMPI/5.11.0-1-mt
    
        Help:
          Description
          ===========
          GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations
          of motion for systems with hundreds to millions of particles. It is primarily designed for
          biochemical molecules like proteins and lipids that have a lot of complicated bonded interactions,
          but since GROMACS is extremely fast at calculating the non-bonded interactions (that usually
          dominate simulations) many groups are also using it for research on non-biological systems, e.g.
          polymers.
    
    
          Usage
          =====
          Use `gmx` to execute GROMACS commands on a single node, for example, to prepare your run. Use
          `gmx_mpi mdrun` in your job scripts with `srun`.
    
    
          More information
          ================
           - Homepage: http://www.gromacs.org
           - Documentation:
              - http://manual.gromacs.org/documentation/current/user-guide/index.html
           - Site contact: Support <sc@fz-juelich.de>, software responsible Jan Meinke <j.meinke@fz-juelich.de>
    

    The problem is that GROMACS is only available in toolchains that include ParaStationMPI. We could simply swap the MPI module rather than reloading the entire toolchain, but this can have unintended consequences: what was loaded alongside a module by a module load command is not necessarily unloaded when that module is swapped.
    For the same reason, we also do not recommend using the module unload command, although it is available. Instead, we recommend unloading (almost) all modules and starting from a fresh environment with module purge:

    $ module purge
    $ module load GCC
    $ module load ParaStationMPI
    $ module load GROMACS
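
    The same sequence is typically placed in a batch script, so that the job runs in the same environment you set up interactively. Below is a minimal sketch of a Slurm job script for the GROMACS example; the account, partition, task counts, and the input file `topol.tpr` are placeholders you must adapt to your project and system:

    ```shell
    #!/bin/bash
    #SBATCH --account=<your-project>    # placeholder: your compute project
    #SBATCH --partition=<partition>     # placeholder: a batch partition on your system
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=48        # placeholder: adjust to the node's core count
    #SBATCH --time=01:00:00

    # Start from a clean environment, then load the toolchain and the application
    module purge
    module load GCC
    module load ParaStationMPI
    module load GROMACS

    # gmx_mpi is the MPI-enabled GROMACS binary; launch it with srun (see the module's Help text)
    srun gmx_mpi mdrun -s topol.tpr
    ```

    Loading the modules inside the script, rather than relying on whatever happened to be loaded at submission time, keeps the job reproducible.
    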

    :::info

    The module spider command is case-insensitive, meaning you can search for modules using any combination of uppercase or lowercase letters (e.g., module spider PyTorch and module spider pytorch both work).

    In contrast, the module load command is case-sensitive. You must use the exact capitalization of the module name and version as listed (e.g., module load GROMACS/2024.3 is correct, while module load gromacs/2024.3 may fail if the case does not match).

    :::

    The module command is part of the Lmod software package. It comes with its own help text, which you can access by running module help, and a user guide is available online.

    The JUWELS system is special in that it consists of multiple system modules (as opposed to software modules) based on different compute technologies. The software we provide on JUWELS is likewise split into separate hierarchies, one per system module. Since JUWELS uses different login nodes for the different system modules (Cluster and Booster), the correct software collection is loaded automatically based on which login node you use, so we strongly recommend using the login nodes of the JUWELS module you intend to compute on.

    ## Further reading

    Our online documentation has more information on software modules. It lists the basic toolchains (compiler + communication library + math library) available on our systems and discusses using older software stages. If you want more details, you can find the documentation for our various systems here: