diff --git a/lab1/README.md b/lab1/README.md
index 2460afd61fe50e03d00399e931f2a8f3930b92a4..ed14cd220413d9399846c0c55bcee46aa405891e 100644
--- a/lab1/README.md
+++ b/lab1/README.md
@@ -50,7 +50,7 @@ Hint: look at the program comments. How does the precision of the calculation de
 
 Hint: edit DARTS to have various input values from 10 to 10000. What do you think will happen to the precision with which we calculate $\pi$ when we split up the work among the nodes?
 
-Now parallelize the serial program. Use only the six basic MPI calls.
+Now parallelize the serial program. Use only the following six basic MPI calls: `MPI_Init`, `MPI_Finalize`, `MPI_Comm_rank`, `MPI_Comm_size`, `MPI_Send`, and `MPI_Recv`.
 
 Hint: Since the number of darts and rounds is hard-coded, all workers already know them, but each worker should calculate its share of the DARTS so it does its share of the work. When done, each worker sends its partial sum back to the master, which receives them and calculates the final sum.
 
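The master/worker scheme in the hint above might be sketched as follows. This is a hypothetical sketch, not the lab's reference solution: the names `DARTS`, `ROUNDS`, and the dart-throwing routine `dboard()` are assumed to come from the provided serial program, and only the six basic MPI calls are used.

```c
/* Sketch: parallel pi estimation with the six basic MPI calls.
   DARTS, ROUNDS, and dboard() are assumed from the serial program. */
#include <mpi.h>
#include <stdio.h>

#define DARTS 10000   /* darts thrown per round (assumed value) */
#define ROUNDS 100    /* number of rounds (assumed value) */

double dboard(int darts);  /* serial routine: returns a pi estimate */

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double pisum = 0.0;
    for (int r = 0; r < ROUNDS; r++) {
        /* each process throws its share of the darts */
        double mypi = dboard(DARTS / size);

        if (rank != 0) {
            /* workers send their partial estimate to the master */
            MPI_Send(&mypi, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        } else {
            /* master receives the partial estimates and averages them */
            double sum = mypi;
            for (int src = 1; src < size; src++) {
                double tmp;
                MPI_Recv(&tmp, 1, MPI_DOUBLE, src, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                sum += tmp;
            }
            pisum += sum / size;  /* average of this round's estimates */
        }
    }

    if (rank == 0)
        printf("pi is approximately %.8f\n", pisum / ROUNDS);

    MPI_Finalize();
    return 0;
}
```

Note that the master also does its own share of the dart throwing here; only the partial results travel over the network, so the communication cost per round is one message per worker.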
@@ -106,4 +106,4 @@ Implement the domain decomposition described above, and add message passing to t
 
 ## Acknowledgment
 
-The examples in this lab are provided for educational purposes by [National Center for Supercomputing Applications](http://www.ncsa.illinois.edu/), (in particular their [Cyberinfrastructure Tutor](http://www.citutor.org/)), [Lawrence Livermore National Laboratory](https://computing.llnl.gov/) and [Argonne National Laboratory](http://www.mcs.anl.gov/). Much of the LLNL MPI materials comes from the [Cornell Theory Center](http://www.cac.cornell.edu/).  We would like to thank them for allowing us to develop the material for machines at PDC.  You might find other useful educational materials at these sites.
\ No newline at end of file
+The examples in this lab are provided for educational purposes by the [National Center for Supercomputing Applications](http://www.ncsa.illinois.edu/) (in particular their [Cyberinfrastructure Tutor](http://www.citutor.org/)), [Lawrence Livermore National Laboratory](https://computing.llnl.gov/), and [Argonne National Laboratory](http://www.mcs.anl.gov/). Much of the LLNL MPI material comes from the [Cornell Theory Center](http://www.cac.cornell.edu/). We would like to thank them for allowing us to develop the material for machines at PDC. You might find other useful educational materials at these sites.