@@ -104,6 +104,22 @@ Except for the MPI and the node-internal CUDA transport layer, all layers utiliz
With any transport layer other than MPI or intra-node CUDA it is important to make sure that the PMI (not MPI) environment is set up correctly. The easiest way to achieve this with Slurm is `srun --mpi=pmi2` or `srun --mpi=pmix`. If this option is not available or not supported by Slurm, please consult the relevant PMI documentation for your system.
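For instance, a launch on a Slurm system could look like the sketch below; the node and task counts, as well as the `linktest` executable name, are placeholders for your setup.

```sh
# Launch under Slurm with the PMIx plugin so the non-MPI transport layers
# can bootstrap via PMI; node and task counts are placeholders.
srun --mpi=pmix --nodes=2 --ntasks-per-node=1 linktest
```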
## WIP: Supported Combinations of Communication APIs & Various Options
!!! WORK IN PROGRESS !!!
Not all combinations of options are currently supported. The following table shows the supported combinations.
Linktest can be configured to test MPI or TCP without the miniPMI library. In the case of MPI no additional work is necessary, aside from launching with `mpiexec` or a similar launcher, and Linktest can be used as above. When testing TCP communication without the miniPMI library, the cluster configuration needs to be specified explicitly via the following four environment variables: `LINKTEST_TCP_SIZE`, `LINKTEST_TCP_RANK`, `LINKTEST_TCP_IPADDR_<<<RANK>>>` and `LINKTEST_TCP_PORT_<<<RANK>>>`.
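As an illustration, the sketch below sets these variables by hand for a two-process TCP run without miniPMI; the IP addresses, port numbers, and the exact `linktest` invocation are assumptions for your environment.

```sh
# Sketch: two-process TCP test without miniPMI, shown for rank 0.
# Addresses, ports and the executable invocation are placeholders.
export LINKTEST_TCP_SIZE=2                  # total number of processes
export LINKTEST_TCP_RANK=0                  # rank of this process (set to 1 on the other host)
export LINKTEST_TCP_IPADDR_0=192.168.0.10   # IP address of rank 0
export LINKTEST_TCP_PORT_0=8000             # listening port of rank 0
export LINKTEST_TCP_IPADDR_1=192.168.0.11   # IP address of rank 1
export LINKTEST_TCP_PORT_1=8001             # listening port of rank 1
linktest                                    # start rank 0; repeat on the other host with LINKTEST_TCP_RANK=1
```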