From 7829993a7c9dc7e0404e2cbdae0d0e81197ed675 Mon Sep 17 00:00:00 2001
From: Dirk Pleiter <pleiter@kth.se>
Date: Tue, 31 Aug 2021 21:47:35 +0200
Subject: [PATCH] Clarified formulation of exercise 3

---
 lab1/README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
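
For reference, the six calls named by the clarified exercise text can be put together roughly as below. This is a hypothetical sketch, not the lab's solution: the `DARTS`/`ROUNDS` values, the `throw_darts()` helper, and the use of `rand()` are all placeholders standing in for whatever the serial program actually does.

```c
/* Sketch: parallel darts with only the six basic MPI calls.
   DARTS, ROUNDS and throw_darts() are assumed placeholders. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define DARTS  10000   /* darts per round (assumed value) */
#define ROUNDS 100     /* number of rounds (assumed value) */

/* Count how many of n random throws land inside the unit circle. */
static long throw_darts(long n)
{
    long hits = 0;
    for (long i = 0; i < n; i++) {
        double x = (double)rand() / RAND_MAX;
        double y = (double)rand() / RAND_MAX;
        if (x * x + y * y <= 1.0)
            hits++;
    }
    return hits;
}

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    srand(rank + 1);                  /* a different stream per worker */
    long my_darts = DARTS / size;     /* this worker's share of the darts */
    long my_hits  = 0;
    for (int r = 0; r < ROUNDS; r++)
        my_hits += throw_darts(my_darts);

    if (rank != 0) {
        /* Each worker sends its partial sum back to the master ... */
        MPI_Send(&my_hits, 1, MPI_LONG, 0, 0, MPI_COMM_WORLD);
    } else {
        /* ... which receives them and calculates the final sum. */
        long total = my_hits;
        for (int src = 1; src < size; src++) {
            long partial;
            MPI_Recv(&partial, 1, MPI_LONG, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            total += partial;
        }
        double pi = 4.0 * (double)total
                        / ((double)my_darts * size * ROUNDS);
        printf("pi ~= %f\n", pi);
    }

    MPI_Finalize();
    return 0;
}
```

Build and launch would be the usual `mpicc pi.c -o pi && mpirun -n 4 ./pi`; note that `DARTS / size` silently drops the remainder when `size` does not divide `DARTS`.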

diff --git a/lab1/README.md b/lab1/README.md
index 2460afd..ed14cd2 100644
--- a/lab1/README.md
+++ b/lab1/README.md
@@ -50,7 +50,7 @@ Hint: look at the program comments. How does the precision of the calculation de
 
 Hint: edit DARTS to have various input values from 10 to 10000. What do you think will happen to the precision with which we calculate $\pi$ when we split up the work among the nodes?
 
-Now parallelize the serial program. Use only the six basic MPI calls.
+Now parallelize the serial program. Use only the following six basic MPI calls: MPI_Init, MPI_Finalize, MPI_Comm_rank, MPI_Comm_size, MPI_Send and MPI_Recv.
 
 Hint: As the number of darts and rounds is hard coded then all workers already know it, but each worker should calculate how many are in its share of the DARTS so it does its share of the work. When done, each worker sends its partial sum back to the master, which receives them and calculates the final sum.
 
@@ -106,4 +106,4 @@ Implement the domain decomposition described above, and add message passing to t
 
 ## Acknowledgment
 
-The examples in this lab are provided for educational purposes by [National Center for Supercomputing Applications](http://www.ncsa.illinois.edu/), (in particular their [Cyberinfrastructure Tutor](http://www.citutor.org/)), [Lawrence Livermore National Laboratory](https://computing.llnl.gov/) and [Argonne National Laboratory](http://www.mcs.anl.gov/). Much of the LLNL MPI materials comes from the [Cornell Theory Center](http://www.cac.cornell.edu/).  We would like to thank them for allowing us to develop the material for machines at PDC.  You might find other useful educational materials at these sites.
\ No newline at end of file
+The examples in this lab are provided for educational purposes by the [National Center for Supercomputing Applications](http://www.ncsa.illinois.edu/) (in particular their [Cyberinfrastructure Tutor](http://www.citutor.org/)), [Lawrence Livermore National Laboratory](https://computing.llnl.gov/) and [Argonne National Laboratory](http://www.mcs.anl.gov/). Much of the LLNL MPI material comes from the [Cornell Theory Center](http://www.cac.cornell.edu/). We would like to thank them for allowing us to develop the material for machines at PDC. You might find other useful educational materials at these sites.
-- 
GitLab