Usage
Linktest has to be started in parallel with an even number of processes, for example using srun --ntasks 2 linktest.
You can control its execution via the following command-line arguments:
-h
or --help
: Prints a help message similar to the following. The usage can be checked via linktest -h even without srun:
Version : <<<VERSION>>>
Usage : linktest [options]
Possible options (default values in parentheses):
-h/--help print help message and exit
-v/--version print version and exit
-w/--num-warmup-messages VAL number of warmup pingpong messages [REQUIRED]
-n/--num-messages VAL number of pingpong messages [REQUIRED]
-s/--size-messages VAL message size in bytes [REQUIRED]
-m/--mode VAL transport Layer to be used [REQUIRED]*
--all-to-all additionally perform MPI all-to-all tests (0)
--bidirectional perform bidirectional tests (0)
--use-gpu-memory use GPU memory to store messages (0)
--bisection perform a bandwidth tests between bisecting halves (0)
--randomize randomize testing order (0)
--serial-tests serialize tests (0)
--no-sion-file do not write data to sion file (0)
--parallel-sion-file write data in parallel (sion) (0)
--num-slowest VAL number of slowest pairs to be retested (10)
--min-iterations VAL linktest repeats for at least <min_iterations> (1)
--min-runtime VAL linktest runs for at least <min_runtime> seconds communication time (0.0)
-o/--output VAL output file name (pingpong_results_bin.sion)
* This build supports [<<<SUPPORTED COMMUNICATION APIs>>>].
Alternatively to --mode, the transport layer can be defined by using linktest.LAYER
or setting environment variable LINKTEST_VCLUSTER_IMPL
where <<<VERSION>>> is the three-part version of the Linktest executable and <<<SUPPORTED COMMUNICATION APIs>>> is a list of supported communication APIs/layers. This option supersedes all others. When executed with this command-line option, Linktest does not need to be run in parallel.
-v
or --version
: Prints the following version information:
FZJ Linktest (<<<VERSION>>>)
where <<<VERSION>>> is the three-part version of the Linktest executable. As with -h or --help, Linktest does not need to be run in parallel when given this option. This option supersedes all others aside from -h or --help.
-w/--num-warmup-messages
: Specifies that the following integer indicates the number of warm-up messages to use to warm up a connection before testing it. When not printing help or version information this command-line argument is required.
-n/--num-messages
: Specifies that the following integer indicates the number of messages measurements should be averaged over during testing. When not printing help or version information this command-line argument is required.
-s/--size-messages
: Specifies that the following integer indicates the message size in bytes for testing. When not printing help or version information this command-line argument is required.
-m
or --mode
: Specifies that the following ASCII string indicates the communication API to use for testing. Alternatively, the communication API can be extracted from the extension of the Linktest executable name or from the LINKTEST_VCLUSTER_IMPL environment variable. When multiple ways of specifying the communication API are used, -m or --mode supersedes the Linktest executable extension, which in turn supersedes the LINKTEST_VCLUSTER_IMPL environment variable.
--all-to-all
: Specifies that the following integer, if non-zero, indicates that all-to-all testing should be done before and after the main Linktest test if the communication API in use is MPI.
--bidirectional
: Specifies that the following integer, if non-zero, indicates that testing should occur
bidirectionally instead of semi-directionally, which is the default.
--bisection
: Specifies that the following integer, if non-zero, indicates that the tasks for testing should be split into two halves and that testing should occur only between these halves. This is useful for determining bisection bandwidths.
--randomize
: Specifies that the following integer, if non-zero, indicates that the order in which tests are performed is to be randomized.
--serial-tests
: Specifies that the following integer, if non-zero, indicates that connections should be tested in serial. By default testing occurs in parallel.
--no-sion-file
: Specifies that the following integer, if non-zero, indicates that the collected results should not be written out into a SION file.
--parallel-sion-file
: Specifies that the following integer, if non-zero, indicates that the collected results should be written out into a SION file in parallel if writing is enabled.
--num-slowest
: Specifies that the following integer indicates the number of slowest connections to serially retest after the end of the main test.
--min-iterations
: Specifies that the following integer indicates the number of times the Linktest benchmark should be repeated. If not one, the writing of SION files is disabled. This command-line argument is useful for applying a communication load to the system.
--min-runtime
: Specifies that the following floating-point number indicates the number of seconds for which Linktest should repeat itself. If non-zero, the writing of SION files is disabled. This command-line argument is useful for applying a communication load to the system.
-o
or --output
: Specifies that the following string indicates the filename of the output SION file.
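As a sketch, the load-generation options described above can be combined as follows. The task count, message size, and runtime here are illustrative, not recommendations:

```shell
# Illustrative only: use Linktest as a sustained communication load generator.
# --min-runtime keeps the benchmark repeating for at least 600 s of
# communication time; note that this disables the writing of SION files.
srun --ntasks 64 linktest \
    --mode mpi \
    --num-warmup-messages 10 \
    --num-messages 100 \
    --size-messages $((1*1024*1024)) \
    --min-runtime 600
```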
The arguments num-warmup-messages, num-messages & size-messages are required. The transport layer is usually given via the --mode option. In rare cases where this does not work, you can fall back to the linktest.LAYER executables and/or set the environment variable LINKTEST_VCLUSTER_IMPL.
# Option 1: Using mode to specify the virtual-cluster implementation
srun --ntasks 4 \
linktest \
--mode mpi \
--num-warmup-messages 10 \
--num-messages 100 \
--size-messages $((16*1024*1024));
# Option 2: Using a linktest executable with a suffix
srun --ntasks 4 \
linktest.mpi \
--num-warmup-messages 10 \
--num-messages 100 \
--size-messages $((16*1024*1024));
# Option 3: Using the LINKTEST_VCLUSTER_IMPL environment variable
export LINKTEST_VCLUSTER_IMPL=mpi;
srun --ntasks 4 \
linktest \
--num-warmup-messages 10 \
--num-messages 100 \
--size-messages $((16*1024*1024));
Except for the MPI and the node-internal CUDA transport layers, all layers use the TCP-sockets implementation underneath for setup and for exchanging data in non-benchmark code segments. The TCP-layer implementation uses a lookup of the hostname of the node to determine the IPs for the initial connection setup. There are currently only limited methods to customize this behavior. The code supports setting LINKTEST_SYSTEM_NODENAME_SUFFIX as a suffix to be appended to the short hostname. For example, on JSC systems, LINKTEST_SYSTEM_NODENAME_SUFFIX=i may need to be exported to make sure the out-of-band connection setup is done via the IPoIB network.
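For illustration (the hostname below is an example), the suffix is simply appended to the short hostname before the lookup:

```shell
# Hypothetical illustration of how the suffix affects the hostname lookup:
# with LINKTEST_SYSTEM_NODENAME_SUFFIX=i, a node with short hostname
# "jrc0734" is looked up as "jrc0734i" (the IPoIB-reachable name on JSC systems).
export LINKTEST_SYSTEM_NODENAME_SUFFIX=i
short_host=jrc0734                                     # example short hostname
lookup_name="${short_host}${LINKTEST_SYSTEM_NODENAME_SUFFIX}"
echo "$lookup_name"                                    # -> jrc0734i
```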
With any transport layer other than MPI or intra-node CUDA it is important to make sure that the PMI (not MPI) environment is correctly set up. The easiest way to achieve this with Slurm is srun --mpi=pmi2 or srun --mpi=pmix. If this option is not available or not supported by your Slurm installation, please consult the relevant PMI documentation for your system.
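A minimal sketch of such a run, assuming a Slurm installation with the PMI-2 plugin available. The mode name tcp is an assumption here; use one of the layers listed by linktest --help on your build:

```shell
# Assumed example: a TCP-layer run with the PMI-2 environment set up by Slurm
srun --mpi=pmi2 --ntasks 2 \
    linktest \
    --mode tcp \
    --num-warmup-messages 10 \
    --num-messages 100 \
    --size-messages 1024
```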
JSC Run Examples
Linktest on 2048 nodes, 1 task per node, message size 16 MiB, 2 warmup messages and 4 messages for measurement:
xenv -L GCC -L CUDA -L ParaStationMPI -L SIONlib salloc -N 2048 srun -n 2048 ./linktest --mode mpi --num-warmup-messages 2 --num-messages 4 --size-messages $((16*1024*1024))
Linktest on 936 nodes, 4 tasks per node (one per GPU) using device memory:
xenv -L GCC -L CUDA -L ParaStationMPI -L SIONlib salloc -N 936 srun -n 3744 ./linktest --mode mpi --num-warmup-messages 2 --num-messages 4 --size-messages $((16*1024*1024)) --use-gpu-memory
Bidirectional bandwidth test:
xenv -L GCC -L CUDA -L ParaStationMPI -L SIONlib salloc -N 936 srun -n 3744 ./linktest --mode mpi --num-warmup-messages 2 --num-messages 4 --size-messages $((16*1024*1024)) --use-gpu-memory --bidirectional
Perform exchange only between bisecting halves:
xenv -L GCC -L CUDA -L ParaStationMPI -L SIONlib salloc -N 936 srun -n 3744 ./linktest --mode mpi --num-warmup-messages 2 --num-messages 4 --size-messages $((16*1024*1024)) --use-gpu-memory --bisection
Linktest on JUSUF (MPI through UCP)
$ xenv -L GCC -L CUDA -L ParaStationMPI \
env UCX_TLS=rc_x,self,sm UCX_NET_DEVICES=mlx5_1:1 \
/usr/bin/salloc -A root -N 168 \
srun -n 168 ./linktest --mode mpi \
--num-warmup-messages 4 \
--num-messages 10 \
--size-messages 16777216
Output
Linktest writes measurement results to stdout and monitoring information to stderr. Additionally, by default a binary file in SION format is produced containing detailed measurement data. These files are often quite sparse and can therefore be compressed very efficiently if needed.
stdout
The stdout output starts with the settings that were given for this run
------------------- Linktest Args ------------------------
Virtual-Cluster Implementation: mpi
Message length: 1024 B
Number of Messages: 1000
Number of Messages. (Warmup): 10
Communication Pattern: Semidirectional End to End
use gpus: No
mixing pe order: No
serial test only: No
max serial retest: 2
write protocol (SION): Yes, funneled
output file: "linktest_mpi_2nx4c.sion"
----------------------------------------------------------
followed by the main benchmark cycle
Starting Test of all connections:
---------------------------------
Parallel PingPong for step 1: avg: 3.41977 us ( 285.5639 MiB/s) min: 3.24080 us ( 301.3333 MiB/s) max: 4.20862 us ( 232.0387 MiB/s)
Analyse Summary: min. 3.2408 us ( 301.333 MiB/s) max. 4.2086 us ( 232.039 MiB/s) avg. 3.4198 us ( 285.564 MiB/s)
Timing Summary: 1 step(s) required 33.05570 ms ( 33.05570 ms/step). 6 step(s) remain. Estimated time remaining: 198.33422 ms
Parallel PingPong for step 2: avg: 2.07276 us ( 471.1417 MiB/s) min: 438.45200 ns ( 2.1751 GiB/s) max: 3.87595 us ( 251.9545 MiB/s)
Analyse Summary: min. 438.4520 ns ( 2.175 GiB/s) max. 4.2086 us ( 232.039 MiB/s) avg. 2.7463 us ( 355.597 MiB/s)
Timing Summary: 2 step(s) required 65.68457 ms ( 32.84228 ms/step). 5 step(s) remain. Estimated time remaining: 164.21142 ms
...
In each step warmup and measurement messages are sent to the communication partner. The communication partner changes from step to step. Each step prints the following:
Parallel PingPong for step: The aggregated measurement results of the current step
Analyse Summary: The aggregated results for all steps up to this point
Timing Summary: A summary of how long the steps have taken so far and how much longer the benchmark is estimated to run
After the benchmark is finished, the aggregated results for all steps are printed
Linktest Timing Results - Iteration 1:
RESULT: Min Time: 433.63310397 ns ( 2.199 GiB/s)
RESULT: Max Time: 4.62629204 us ( 211.090 MiB/s)
RESULT: Avg Time: 2.25120053 us ( 433.796 MiB/s)
At the end the slowest connections are retested in serial, which ensures that Linktest places no additional stress on the system aside from the stress required to measure the connection. This is useful to see whether the poor performance of a given connection is due to the load Linktest places on the system, for example on the interconnects, or whether the connection is just bad, for example due to a badly seated connector.
0: PINGPONG 3 <-> 6: 1st: 4.62629 us ( 211.0897 MiB/s) 2nd: 3.89782 us ( 250.5408 MiB/s)
1: PINGPONG 2 <-> 5: 1st: 4.20862 us ( 232.0387 MiB/s) 2nd: 3.17407 us ( 307.6689 MiB/s)
Linktest Slow-Pairs Results - Iteration 1:
RESULT: Min Time: 3.17407004 us ( 307.669 MiB/s)
RESULT: Max Time: 3.89781850 us ( 250.541 MiB/s)
RESULT: Avg Time: 3.53594427 us ( 276.182 MiB/s)
stderr
The stderr output shows information intended for debugging/monitoring purposes. The following example shows two info messages, the memory usage on each node and the runtime of the non-measurement steps in Linktest.
[linktest.cc in main:92] info: System string = "generic"
[benchmark.cc in benchmark:902] info: Using PinnedMmapAllocator
timings[000] [first sync] t= 30.69149 ms
task[000000] on jrc0734.jureca ( 0) mem= 145.5898 kiB
task[000001] on jrc0734.jureca ( 1) mem= 145.3633 kiB
task[000002] on jrc0734.jureca ( 2) mem= 145.3398 kiB
task[000003] on jrc0734.jureca ( 3) mem= 145.3477 kiB
task[000004] on jrc0735.jureca ( 4) mem= 145.4297 kiB
task[000005] on jrc0735.jureca ( 5) mem= 145.3516 kiB
task[000006] on jrc0735.jureca ( 6) mem= 147.4375 kiB
task[000007] on jrc0735.jureca ( 7) mem= 145.4062 kiB
timings[000] [mapping] t= 643.33295 us
timings[000] [randvec] t= 339.93274 ns
PE00000: psum=37 pasum=37 do_mix=0
timings[000] [getpart] t= 14.67997 us
timings[000] [search slow] t= 82.80016 us
timings[000] [test slow] t= 14.33950 ms
linktest_output_sion_collect_local_data[0] alloc+init local buffer of size 831 bytes for 8 tasks
timings[000] [sioncollect] t= 82.95011 us
timings[000] [sioncollwr] t= 101.74610 ms
timings[000] [sionclose] t= 403.51134 us
[sionwrite] 3904 B
timings[000] [all] t= 312.74890 ms
SION Files
Unless turned off, Linktest will by default also generate a binary SION file, whose default name is pingpong_results_bin.sion. This file contains the Linktest measurements, a list of the involved hosts, as well as the options passed to Linktest when it was executed.
If --no-sion-file
is specified as a command-line option when executing Linktest then no SION file is generated. If --parallel-sion-file
is specified as a command-line option when executing Linktest then the output SION file, if enabled, will be written out in parallel. This speeds up the output to file systems that support parallel access. The name of the output SION file can be changed via the command-line argument -o
or --output
followed by a space and the name of the file.
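For example (the task count and output filename below are illustrative):

```shell
# Write results to a custom SION file name instead of pingpong_results_bin.sion
srun --ntasks 2 linktest \
    --mode mpi \
    --num-warmup-messages 10 \
    --num-messages 100 \
    --size-messages 1024 \
    --output my_linktest_results.sion
```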
SION File Defragmentation
The format of these SION files is optimized for parallel access which causes them to be very sparse. You can compress the SION files as follows:
siondefrag -q 1 input.sion output.sion
where input.sion
is the name of the input SION file and output.sion
is the name of the output SION file. Note that in-place compression is possible; as such, the names of the input and output SION files may be identical.