# PDC Summer School: General Instructions for the OpenMP Labs

## Where to run

The exercises will be run on PDC's cluster [Tegner](https://www.pdc.kth.se/hpc-services/computing-systems/tegner-1.737437):

```
tegner.pdc.kth.se
```
## How to login

To access PDC's systems you need an account at PDC. Check the [instructions for obtaining an account](https://www.pdc.kth.se/support/documents/getting_access/get_access.html#apply-via-pdc-webpage).

Once you have an account, you can follow the [instructions on how to connect from various operating systems](https://www.pdc.kth.se/support/documents/login/login.html).

Related to the Kerberos-based authentication environment, please check the [Kerberos commands documentation](https://www.pdc.kth.se/support/documents/login/login.html#general-information-about-kerberos).
## More about the environment on Tegner

Software that is not available by default needs to be loaded as a [module](https://www.pdc.kth.se/support/documents/run_jobs/job_scheduling.html#accessing-software) at login. Use ``module avail`` to get a list of available modules. The following modules are of interest for these lab exercises:
- Different versions of the Intel compiler suite (``i-compilers/*``)
- SLURM - [batch jobs](https://www.pdc.kth.se/support/documents/run_jobs/queueing_jobs.html) and [interactive jobs](https://www.pdc.kth.se/support/documents/run_jobs/run_interactively.html)
- Programming environment - [Compilers for software development](https://www.pdc.kth.se/support/documents/software_development/development.html)
For more information see the [software development documentation page](https://www.pdc.kth.se/support/documents/software_development/development.html).

Home directories are provided through an OpenAFS service. See the [AFS data management page](https://www.pdc.kth.se/support/documents/data_management/afs.html) for more information.

To use the Tegner compute nodes you have to submit [SLURM batch jobs](https://www.pdc.kth.se/support/documents/run_jobs/queueing_jobs.html) or run [SLURM interactive jobs](https://www.pdc.kth.se/support/documents/run_jobs/run_interactively.html).

## Compiling programs
By default you are provided with the compilers that come with the OS, which are not the most recent versions. To use a recent version of the GNU compiler suite or the Intel compilers use
```
module load gcc
```
or
```
module load i-compilers
```
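For reference, compiling a source file with OpenMP enabled might then look as follows (a hedged sketch: the exact compiler drivers provided by these modules may differ, but ``-fopenmp`` is the OpenMP flag of the GNU compilers and ``-qopenmp`` that of the classic Intel compilers):

```
# GNU compilers (after module load gcc)
gcc -O2 -fopenmp source.c -o example.x
g++ -O2 -fopenmp source.cpp -o example.x
gfortran -O2 -fopenmp source.f90 -o example.x

# Intel compilers (after module load i-compilers)
icc -O2 -qopenmp source.c -o example.x
icpc -O2 -qopenmp source.cpp -o example.x
ifort -O2 -qopenmp source.f90 -o example.x
```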
## Running OpenMP programs
After having compiled your code with the [correct compiler flags for OpenMP](https://www.pdc.kth.se/support/documents/software_development/development.html), it is necessary to book a node for interactive use:

```
salloc -A <allocation-name> -N 1 -t 1:0:0
```
You might also need to specify a **reservation** by adding the flag ``--reservation=<name-of-reservation>``.

An environment variable specifying the number of threads should also be set:

```
export OMP_NUM_THREADS=24
```
Then the srun command is used to launch an OpenMP application:

```
srun -n 1 ./example.x
```
In this example we will start one task with 24 threads.

It is important to use the `srun` command since otherwise the job will run on the login node.

## OpenMP Exercises

The aim of these exercises is to give an introduction to OpenMP programming. All examples are available in both C and Fortran90.

- OpenMP Intro lab:
  - [Instructions](intro_lab/README.md)
  - Simple hello world program [in C](intro_lab/hello.c) and [in Fortran](intro_lab/hello.f90)
  - Calculate π [in C](intro_lab/pi.c) and [in Fortran](intro_lab/pi.f90)
  - Solutions will be made available later during the lab
- OpenMP Advanced Lab:
  - [Instructions](advanced_lab/README.md)
  - In C: [shwater2d.c](advanced_lab/c/shwater2d.c), [vtk_export.c](advanced_lab/c/vtk_export.c) and [Makefile](advanced_lab/c/Makefile)
  - In Fortran: [shwater2d.f90](advanced_lab/f90/shwater2d.f90), [vtk_export.f90](advanced_lab/f90/vtk_export.f90) and [Makefile](advanced_lab/f90/Makefile)
  - Solutions will be made available later during the lab

The aim of this exercise is to give hands-on experience in parallelizing a larger program, measure parallel performance and gain experience in what to expect from modern multi-core architectures.

Your task is to parallelize a finite-volume solver for the two dimensional shallow water equations. Measure the speed-up and, if you have time, tune the code. You do not need to understand the numerics in order to solve this exercise (a short description is given in Appendix A). However, it assumes some prior experience with OpenMP; please refer to the lecture on shared memory programming if necessary.
## Algorithm

For this exercise we solve the shallow water equations on a square domain using a simple dimensional splitting approach: the volumes *Q* are updated with numerical fluxes *F* and *G*, first in the x and then in the y direction, as expressed by the following pseudo-code
```
for each time step do
    ...
end
```
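The key property hidden in the pseudo-code is that, within each sweep, the outer loop iterations are independent of each other. The following self-contained C toy (not the shwater2d code; the sizes, names and update rule are made up purely for illustration) shows this structure and where an OpenMP work-sharing directive could go:

```c
/* Toy 2D "sweep" example, NOT the shallow water solver itself.
   Compile e.g. with: gcc -fopenmp -O2 toy_sweep.c */
#include <stdio.h>

#define M 512
#define N 512

static double q[M][N], tmp[M][N];

int main(void)
{
    for (int i = 0; i < M; i++)                 /* some initial data */
        for (int j = 0; j < N; j++)
            q[i][j] = (double)(i + j);

    for (int step = 0; step < 10; step++) {
        /* x sweep: every row can be processed independently */
        #pragma omp parallel for
        for (int i = 0; i < M; i++)
            for (int j = 1; j < N - 1; j++)
                tmp[i][j] = 0.5 * (q[i][j - 1] + q[i][j + 1]);

        /* y sweep: every column can be processed independently */
        #pragma omp parallel for
        for (int j = 0; j < N; j++)
            for (int i = 1; i < M - 1; i++)
                q[i][j] = 0.5 * (tmp[i - 1][j] + tmp[i + 1][j]);
    }

    printf("q[M/2][N/2] = %f\n", q[M / 2][N / 2]);
    return 0;
}
```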
In order to obtain good parallel speed-up with OpenMP, each sub-task assigned to a thread needs to be rather large. Since the nested loops contain a lot of numerical calculations, the solver is a perfect candidate for OpenMP parallelization. But as you will see in this exercise, it is fairly difficult to obtain optimal speed-up on today's multi-core computers. However, it should be fairly easy to obtain some speed-up without too much effort. The difficult task is to make good use of all the available cores.

Choose to work with either the given serial C/Fortran 90 code or, if you think you have time, write your own implementation (but do not waste time and energy). Compile the code by typing ``make`` and execute the program ``shwater2d`` with ``srun`` as described in the general documentation.
## 1. Parallelize the code

A serial version of the code is provided here: [shwater2d.c](c/shwater2d.c) or [shwater2d.f](f90/shwater2d.f90). Add OpenMP statements to make it run in parallel and make sure the computed solution is correct. Remember not to try to parallelize everything; some advice is provided below.

### Tasks and questions to be addressed
1) How should the work be distributed among threads?
2) Add OpenMP statements to make the code run in parallel without affecting its correctness.
3) What is the difference between
```
!$omp parallel do
...
!$omp end parallel do
```
and
```
!$omp parallel
!$omp do
...
!$omp end do
!$omp end parallel
```
_Hint: How are threads created/destroyed by OpenMP? How can it impact performance?_
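The same contrast, written in C for reference (a minimal, self-contained sketch; the loop bodies are just placeholders):

```c
/* Combined construct vs. one enclosing parallel region.
   Compile e.g. with: gcc -fopenmp omp_forms.c */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N];

    /* Form 1: a combined construct -- a parallel region is opened and
       closed around every single loop. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = 2.0 * i;

    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        b[i] = a[i] + 1.0;

    /* Form 2: one parallel region enclosing several work-shared loops --
       the same thread team is reused for both loops (the implicit barrier
       at the end of the first "for" keeps the result correct). */
    #pragma omp parallel
    {
        #pragma omp for
        for (int i = 0; i < N; i++)
            a[i] = 2.0 * i;

        #pragma omp for
        for (int i = 0; i < N; i++)
            b[i] = a[i] + 1.0;
    }

    printf("b[N-1] = %f\n", b[N - 1]);
    return 0;
}
```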
## 2. Measure parallel performance
In this exercise, parallel performance refers to the computational speed-up *S*<sub>n</sub> = $\Delta$*T*<sub>1</sub>/$\Delta$*T*<sub>n</sub>, using *n* threads.

### Tasks and questions to be addressed

1) Measure the run time $\Delta$T for 1, 2, ..., 24 threads and calculate the speed-up.
2) Is it linear? If not, why?
3) Finally, is the obtained speed-up acceptable?
4) Try to increase the space discretization (M,N) and see if it affects the speed-up.

Recall from the OpenMP exercises that the number of threads is determined by the environment variable ``OMP_NUM_THREADS``. One could change the variable by hand or use the shell script provided in Appendix B.
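A simple way to script such measurements is a loop over thread counts (a sketch only, not necessarily the Appendix B script; adjust the executable name and the list of thread counts to your needs):

```
for t in 1 2 4 8 12 24; do
    export OMP_NUM_THREADS=$t
    srun -n 1 ./shwater2d
done
```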
## 3. Optimize the code

The given serial code is not optimal. Why? If you have time, go ahead and try to make it faster. Try to decrease the serial run time. Once the serial performance is optimal, redo the speed-up measurements and comment on the result.
For debugging purposes you might want to visualize the computed solution. Uncomment the call to ``save_vtk``. The result will be stored in ``result.vtk``, which can be opened in ParaView, available on Tegner after ``module add paraview``. Beware that the resulting file could be rather large, unless the space discretization (M,N) is decreased.
## A. About the Finite-Volume solver

In this exercise we solve the shallow water equations in two dimensions given by

<img src="image/eq_1.png" alt="Eq_1" width="800px"/>
where _h_ is the depth and (_u_,_v_) are the velocity vectors. To solve the equations we use a dimensional splitting approach, i.e. reducing the two dimensional problem to a sequence of one-dimensional problems

<img src="image/eq_2.png" alt="Eq_2" width="800px"/>
For this exercise we use the Lax-Friedrichs scheme, with numerical fluxes *F*, *G* defined as

<img src="image/eq_3.png" alt="Eq_3" width="800px"/>
where *f* and *g* are the flux functions, derived from (1). For simplicity we use reflective boundary conditions, thus at the boundary
The goal of these exercises is to familiarize you with the OpenMP environment and to write your first parallel codes with OpenMP. We will also record code performance and look at race conditions and false sharing. This laboratory contains five exercises, each with step-by-step instructions below.
To run your code, you first need to generate your executable. It is very important that you include a compiler flag telling the compiler that you are going to use OpenMP. If you forget the flag, the compiler will happily ignore all the OpenMP directives and create an executable that runs in serial. Different compilers use different flags, but many follow the convention of the GNU compilers and accept the OpenMP flag ``-fopenmp``.
To run your code, you will need to have an (e.g., interactive) allocation:

```
salloc -N 1 -t 4:00:00 -A <name-of-allocation> --reservation=<name-of-reservation>
```

To set the number of threads, you need to set the OpenMP environment variable:

```
export OMP_NUM_THREADS=<number-of-threads>
```
To run an OpenMP code on a computing node:

```
srun -n 1 ./name_exec
```

## Exercise 1 - OpenMP Hello World
_Concepts: Parallel regions, parallel, thread ID_

Here we are going to implement the first OpenMP program. Expected knowledge includes a basic understanding of the OpenMP environment, how to compile an OpenMP program, how to set the number of OpenMP threads and how to retrieve the thread ID number at runtime.
Your code using 4 threads should behave similarly to:

```
Hello World from Thread 0
Hello World from Thread 3
Hello World from Thread 2
Hello World from Thread 1
```
### Tasks and questions to be addressed

1) Write a C/Fortran code to make each OpenMP thread print "``Hello World from Thread X!``" with ``X`` = thread ID.
2) How do you change the number of threads?
3) How many different ways are there to change the number of threads? Which ones are those?
4) How can you make the output ordered from thread 0 to thread 3?
Hints:
- Remember to include the OpenMP header/library (``omp.h`` in C, ``omp_lib`` in Fortran).
- Retrieve the ID of the thread with ``omp_get_thread_num()`` in C or ``OMP_GET_THREAD_NUM()`` in Fortran.
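A minimal C sketch of the kind of program expected here (one possible solution shape; the file name and compile command are just examples):

```c
/* hello.c -- each OpenMP thread prints its ID.
   Compile e.g. with: gcc -fopenmp hello.c -o hello.x */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel
    {
        /* omp_get_thread_num() returns this thread's ID within the team */
        printf("Hello World from Thread %d!\n", omp_get_thread_num());
    }
    return 0;
}
```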
## Exercise 2 - Parallelize the STREAM triad using ``#pragma omp parallel for``

_Concepts: Parallel, default data environment, runtime library calls_

Here we consider the parallelization of a widely used computational pattern: adding an array to a scaled array (the STREAM triad). Serial versions of this task are provided: [stream-triad.c](stream-triad.c) / [stream-triad.f90](stream-triad.f90)

This implementation executes the benchmarked kernel repeatedly in order to improve the time measurements.
### Tasks and questions to be addressed
1) Create a parallel version of the programs using a parallel construct: ``#pragma omp parallel for`` (a minimal sketch is given after this list). In addition to a parallel construct, you might need some runtime library routines:
   - ``int omp_get_num_threads()`` to get the number of threads in a team
   - ``int omp_get_thread_num()`` to get the thread ID
   - ``double omp_get_wtime()`` to get the time in seconds since a fixed point in the past
   - ``omp_set_num_threads()`` to request a number of threads in a team
2) Run the parallel code and take the execution time with 1, 2, 4, 12, 24 threads for different array lengths ``N``. Record the timings.
3) Produce a plot showing execution time as a function of array length for different numbers of threads.
4) How large does ``N`` have to be before using 2 threads becomes more beneficial than using a single thread?
5) How large does ``N`` need to be so that the arrays no longer fit into the L3 cache?
6) Compare results for large ``N`` and 8 threads using different settings of ``OMP_PROC_BIND`` and reason about the observed performance differences.
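A self-contained C sketch of the shape such a solution could take (illustrative only; the default array length, repeat count and variable names are assumptions, not taken from stream-triad.c):

```c
/* triad_sketch.c -- times a[i] = b[i] + scalar*c[i], repeating the kernel
   for more stable timings.  Compile e.g. with: gcc -fopenmp -O2 triad_sketch.c */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(int argc, char **argv)
{
    long n = (argc > 1) ? atol(argv[1]) : 10000000L;   /* array length N */
    int repeat = 100;
    double scalar = 3.0;

    double *a = malloc(n * sizeof(double));
    double *b = malloc(n * sizeof(double));
    double *c = malloc(n * sizeof(double));

    for (long i = 0; i < n; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    double t0 = omp_get_wtime();
    for (int r = 0; r < repeat; r++) {
        #pragma omp parallel for
        for (long i = 0; i < n; i++)
            a[i] = b[i] + scalar * c[i];
    }
    double t1 = omp_get_wtime();

    printf("threads=%d  N=%ld  time per triad=%g s\n",
           omp_get_max_threads(), n, (t1 - t0) / repeat);

    free(a); free(b); free(c);
    return 0;
}
```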
## Exercise 3 - Parallel calculation of $\pi$ using ``#pragma omp parallel``
_Concepts: Parallel, default data environment, runtime library calls_

Here we are going to implement a first parallel version of the [pi.c](pi.c) / [pi.f90](pi.f90) code to calculate the value of π using the parallel construct. The figure below shows the numerical technique we are going to use to calculate π, where each rectangle has width $\Delta$x and height F(x<sub>i</sub>) at the middle of interval i.
A simple serial C code to calculate $\pi$ is the following:

```
unsigned long nsteps = 1<<27; /* around 10^8 steps */
double dx = 1.0 / nsteps;
double pi = 0.0;

for (unsigned long i = 0; i < nsteps; i++) {
    double x = (i + 0.5) * dx;      /* midpoint of interval i */
    pi += 1.0 / (1.0 + x * x);
}
pi *= 4.0 * dx;
```
### Tasks and questions to be addressed

1) Create a parallel version of the [pi.c](pi.c) / [pi.f90](pi.f90) program using a parallel construct: ``#pragma omp parallel``. Pay close attention to shared versus private variables. In addition to a parallel construct, you might need some runtime library routines:
   - ``int omp_get_num_threads()`` to get the number of threads in a team
   - ``int omp_get_thread_num()`` to get the thread ID
   - ``double omp_get_wtime()`` to get the time in seconds since a fixed point in the past
   - ``omp_set_num_threads()`` to request a number of threads in a team
2) Run the parallel code and take the execution time with 1, 2, 4, 8, 12, 24 threads. Record the timing.
3) How does the execution time change varying the number of threads? Is it what you expected? If not, why do you think it is so?
4) Is there any technique you heard of in class to improve the scalability of the technique? How would you implement it?
Hints:
- Use a parallel construct: ``#pragma omp parallel``.
- Divide loop iterations between threads (use the thread ID and the number of threads).
- Create an accumulator for each thread to hold partial sums that you can later combine to generate the global sum (a sketch of this pattern follows below).
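A hedged C sketch of this pattern (one possible structure, not the provided solution; it assumes at most ``MAX_THREADS`` threads):

```c
/* pi_spmd.c -- each thread accumulates a private partial sum over a strided
   subset of the intervals.  Compile e.g. with: gcc -fopenmp pi_spmd.c */
#include <stdio.h>
#include <omp.h>

#define MAX_THREADS 64

int main(void)
{
    unsigned long nsteps = 1 << 27;
    double dx = 1.0 / nsteps;
    double partial[MAX_THREADS] = {0.0};
    int nthreads = 0;

    #pragma omp parallel
    {
        int id = omp_get_thread_num();
        int nt = omp_get_num_threads();
        if (id == 0) nthreads = nt;          /* remember the actual team size */

        double sum = 0.0;                    /* private accumulator */
        for (unsigned long i = id; i < nsteps; i += nt) {
            double x = (i + 0.5) * dx;
            sum += 1.0 / (1.0 + x * x);
        }
        partial[id] = sum;                   /* one slot per thread; note that
                                                neighbouring slots share a cache
                                                line (false sharing) */
    }

    double pi = 0.0;
    for (int t = 0; t < nthreads; t++)       /* combine the partial sums */
        pi += partial[t];
    pi *= 4.0 * dx;

    printf("pi = %.15f\n", pi);
    return 0;
}
```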
## Exercise 4 - Calculate $\pi$ using critical and atomic directives

Here we are going to implement a second and a third parallel version of the [pi.c](pi.c) / [pi.f90](pi.f90) code to calculate the value of $\pi$ using the critical and atomic directives.
### Tasks and questions to be addressed
1) Create two new parallel versions of the [pi.c](pi.c) / [pi.f90](pi.f90) program using the parallel construct ``#pragma omp parallel`` and a) ``#pragma omp critical`` b) ``#pragma omp atomic``.
2) Run the two new parallel codes and take the execution time with 1, 2, 4, 8, 12, 24 threads. Record the timing in a table.
3) What would happen if you had not used critical or atomic to protect the update of the shared variable?
4) How does the execution time change varying the number of threads? Is it what you expected?
5) Do the two versions of the code differ in performance? If so, what do you think is the reason?
Hints:
- We can use a shared variable for $\pi$ that is updated concurrently by different threads. However, this variable needs to be protected with a critical section or an atomic access (see the sketch after these hints).
- Use critical and atomic before the update ``pi += step``.
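A minimal C sketch of the two protected variants (hedged: variable names are illustrative and need not match pi.c; only the marked update differs between the critical and the atomic version):

```c
/* pi_protected.c -- a shared accumulator protected by critical (or atomic).
   Compile e.g. with: gcc -fopenmp pi_protected.c */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    unsigned long nsteps = 1 << 27;
    double dx = 1.0 / nsteps;
    double pi = 0.0;                          /* shared accumulator */

    #pragma omp parallel
    {
        int id = omp_get_thread_num();
        int nt = omp_get_num_threads();

        for (unsigned long i = id; i < nsteps; i += nt) {
            double x = (i + 0.5) * dx;
            double term = 1.0 / (1.0 + x * x);

            /* version a): the critical section serializes every update */
            #pragma omp critical
            pi += term;

            /* version b) would use an atomic update instead:
               #pragma omp atomic
               pi += term;
            */
        }
    }

    pi *= 4.0 * dx;
    printf("pi = %.15f\n", pi);
    return 0;
}
```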
## Exercise 5 - Calculate $\pi$ with a loop and a reduction

Here we are going to implement a fourth parallel version of the [pi.c](pi.c) / [pi.f90](pi.f90) code to calculate the value of $\pi$ using ``omp for`` and ``reduction`` operations.
### Tasks and questions to be addressed
1) Create a new parallel version of the [pi.c](pi.c) / [pi.f90](pi.f90) program using the ``#pragma omp for`` construct and a ``reduction`` operation.
2) Run the new parallel code and take the execution time for 1, 2, 4, 8, 12, 24 threads. Record the timing in a table. Change the schedule to dynamic and guided and measure the execution time for 1, 2, 4, 8, 12, 24 threads.
3) What is the scheduling that provides the best performance? What is the reason for that?
4) What is the fastest parallel implementation of the pi.c / pi.f90 program? What is the reason for it being the fastest? What would be an even faster implementation of the pi.c / pi.f90 program?
Hints:
- To change the schedule, you can either set the environment variable with ``export OMP_SCHEDULE=type``, where ``type`` can be any of static, dynamic or guided (note that this takes effect when the loop uses ``schedule(runtime)``), or set the schedule in the source code as ``omp parallel for schedule(type)``.
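A hedged C sketch of the reduction version (illustrative only; ``schedule(runtime)`` is used so the schedule can be switched via ``OMP_SCHEDULE`` without recompiling):

```c
/* pi_reduction.c -- work-shared loop with a reduction on pi.
   Compile e.g. with:  gcc -fopenmp pi_reduction.c
   Run e.g. with:      OMP_NUM_THREADS=12 OMP_SCHEDULE=dynamic ./a.out */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    unsigned long nsteps = 1 << 27;
    double dx = 1.0 / nsteps;
    double pi = 0.0;

    double t0 = omp_get_wtime();

    /* each thread gets a private copy of pi; the copies are summed at the end */
    #pragma omp parallel for reduction(+:pi) schedule(runtime)
    for (unsigned long i = 0; i < nsteps; i++) {
        double x = (i + 0.5) * dx;
        pi += 1.0 / (1.0 + x * x);
    }

    pi *= 4.0 * dx;
    printf("pi = %.15f  time = %g s\n", pi, omp_get_wtime() - t0);
    return 0;
}
```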