{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "# Interactive Parallel Computing with IPython Parallel\n",
    "\n",
    "<div class=\"dateauthor\">\n",
    "21 June 2022 | Jan H. Meinke\n",
    "</div>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "source": [
    "*Computers have more than one core.* Wouldn't it be nice if we could use all the cores of our local machine or a compute node of a cluster from our [Jupyter][IP] notebook?\n",
    "\n",
    "Click on the ``+`` sign at the top of the Files tab on the left to start a new launcher. In the launcher click on Terminal. A terminal will open as a new tab. Grab the tab and pull it to the right to have the terminal next to your notebook.\n",
    "\n",
    "**Note**: The terminal does not have the same modules loaded as the notebook. To fix that, type `source $PROJECT_training2219/hpcpy22`.\n",
    "\n",
    "In the terminal type ``ipcluster``. You'll see a help message telling you that you need to give it a subcommand. Take a look at the message and then enter\n",
    "\n",
    "``` bash\n",
    "export OMP_NUM_THREADS=32\n",
    "ipcluster start --n=4\n",
    "```\n",
    "\n",
    "This will start a cluster with four engines and limit the number of threads to 32 per engine to avoid oversubscription.\n",
    "\n",
    "> If you use the classic [Jupyter][IP] Notebook, this is even easier if you have the cluster extension installed. (We don't have that on our JupyterHub yet.) One of the tabs of your browser has the title \"Home\". If you switch to that tab, there are several tabs within the web page. One of them is called \"IPython Clusters\". Click on \"IPython Clusters\", increase the number of engines in the \"default\" profile to 4, and click on Start. The status changes from stopped to running. After you have done that, come back to this tab.\n",
    "\n",
    "> If the \"Clusters\" tab shows the message:\n",
    "\n",
    ">>    Clusters tab is now provided by IPython parallel. See IPython parallel for installation details.\n",
    "    \n",
    "> you need to quit your notebook server (make sure all your notebooks are saved) and run the command\n",
    "\n",
    ">>    ipcluster nbextension enable\n",
    "    \n",
    "> Now, when you start `jupyter notebook`, you should see a field that lets you set the number of engines in the \"IPython Clusters\" tab.\n",
    "\n",
    "[IP]: http://www.jupyter.org"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "## Overview"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "source": [
    "IPyParallel has three parts: controller, engines, and client. The controller is the central hub. The client communicates only with the controller. The controller keeps track of the available engines and forwards requests from the client to the engines. It schedules the work and monitors its status. The results are communicated through the controller back to the client.\n",
    "\n",
    "All three components can run on different computers. A Jupyter notebook might run on your laptop and connect to an ipcontroller on a JUWELS login node, which in turn talks to engines running on a compute node."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "source": [
    "![IPython Parallel Architecture](images/ipyparallel.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "## The Client"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "source": [
    "Now let's see how we access the \"cluster\". The [ipyparallel][IPp] package for [IPython][IP] is used to access the engines we just started. We first need to import ``Client``.\n",
    "\n",
    "[IPp]: https://ipyparallel.readthedocs.io/en/latest/\n",
    "[IP]: http://www.ipython.org"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "-"
    }
   },
   "outputs": [],
   "source": [
    "from ipyparallel import Client"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "outputs": [],
   "source": [
    "rc = Client(profile=\"default\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "source": [
    "We can list the IDs of the attached engines:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "outputs": [],
   "source": [
    "rc.ids"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "## Views\n",
    "\n",
    "A *view* gives us access to a set of engines using a given scheduler. There are two types of views: a *direct view* and a *load-balanced* view."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "source": [
    "As the name implies, a *direct view* gives us direct control of the engines. We can push and pull data and apply functions using a couple of different methods. We are in control of what runs where.\n",
    "\n",
    "A *load-balanced view* balances the work between all the engines. We can submit tasks to it in the same way as before, but with a *load-balanced view* the scheduler decides where a function is executed. It's also possible to define dependencies between tasks to build a dependency graph or even to build this graph by hand. You'll learn a little more about dependencies in [Parallel, Task-Based Computing with Load Balancing on your Local Machine][LocalTaskParallel].\n",
    "\n",
    "Let's start with a *direct view* and learn about the methods used to execute code on the engines and move data around. \n",
    "\n",
    "We create a *direct view* of the engines by slicing the Client object:\n",
    "\n",
    "[LocalTaskParallel]: LocalTaskParallel.ipynb"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "outputs": [],
   "source": [
    "v01 = rc[0:2] # First two engines (0 and 1)\n",
    "v23 = rc[2:4] # Engines 2 and 3\n",
    "dview = rc[:] # All available engines"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "## Parallel Magic"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "source": [
    "Before we go into the details of the interface of a `DirectView` (that's the name of the class), let's look at IPython magic.\n",
    "\n",
    "IPython makes it very easy to use IPyParallel. It provides the magic commands ``%px`` and ``%%px`` to execute code in parallel. The target attribute is used to pick the engines you want. By default, all the engines of the last Client object created are used. You can also specify whether a command should be executed blocking (the default) or non-blocking.\n",
    "\n",
    "Note that commands prefixed with ``%px`` are *not* executed locally."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "outputs": [],
   "source": [
    "%px import numpy as np # import numpy on all engines as np\n",
    "import numpy as np # do it locally, too."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "source": [
    "Since it's fairly common that you want to execute a cell remotely and locally, there's an option for that. Just add ``--local``.\n",
    "\n",
    "**Note**: This works only for ``%%px``, not ``%px``."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "outputs": [],
   "source": [
    "%%px --local \n",
    "np.__version__ # print the numpy version of the engines. Note how the output is prefixed. It can be accessed that way, too. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "The engines run IPython, so magic commands work, too."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "%%px --local\n",
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "%%px --local \n",
    "import matplotlib.pyplot as plt\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "The cell magic command ``%%px`` lets us execute more than one statement. The option ``--target`` lets us choose which engines we want to use. Here we are using engines 0 to 3."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "%%px --target 0:4\n",
    "a = np.random.random([10,10])\n",
    "plt.imshow(a, interpolation=\"nearest\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "Yes, the output can be graphical.\n",
    "\n",
    "Remember that the imports we performed with ``%px`` are not available in our notebook. We can fix that by using"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "with rc[:].sync_imports():\n",
    "    import matplotlib.pyplot"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "source": [
    "Unfortunately, mapping of namespaces (e.g., ``import matplotlib.pyplot as plt``) does not work that way."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "## Using the Direct View"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "source": [
    "As mentioned above, a *direct view* lets you control each engine directly. You can also decide whether a command should be blocking or not.\n",
    "\n",
    "We can, for example, create two random 100×100 matrices on each engine, multiply them, and then display the result. On each engine the code would look like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "outputs": [],
   "source": [
    "a = np.random.random([100, 100])\n",
    "b = np.random.random([100, 100])\n",
    "c = a.dot(b)\n",
    "plt.imshow(c, interpolation=\"nearest\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "source": [
    "As we learned before, we can use the ``%%px`` cell magic to execute this on all engines. Here we use the ``--target`` option to specify every second engine starting at 0. ``%px`` and ``%%px`` use the currently active view. By default that's the first view created. You can make a view active by calling ``view.activate(suffix)``. Use ``view.activate?`` to learn more about suffix."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "%%px --target 0::2\n",
    "a = np.random.random([100, 100])\n",
    "b = np.random.random([100, 100])\n",
    "c = a.dot(b)\n",
    "plt.imshow(c, interpolation=\"nearest\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "source": [
    "Magic commands are blocking by default, i.e., the next cell can only be executed after all the engines have finished their work. We can pass the option ``--noblock`` to change that behavior."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "%%px\n",
    "import threadpoolctl\n",
    "threadpoolctl.threadpool_limits(limits=32, user_api='blas')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "%%px --noblock\n",
    "a = np.random.random([2000, 2000])\n",
    "b = np.random.random([2000, 2000])\n",
    "c = a.dot(b)\n",
    "c.sum() / 4.0e9"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "source": [
    "We get an ``AsyncResult`` back. We can continue working in our notebook and pick up the result when we are ready to do so."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "%pxresult"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "## Execute and Apply"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "source": [
    "The foundation of executing code with a ``DirectView`` is ``apply``. It calls a function (the first argument) with args and kwargs. The values of the arguments are taken from the notebook and pushed to the engines."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "c = dview.apply(lambda a,b: np.dot(a,b), a, b) # This uses a and b from the notebook"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "c.done()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "c = c.result()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "c[0].shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "source": [
    "The function can be a lambda function:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "c = dview.apply(lambda a,b : a + b, a, b)\n",
    "c.done()\n",
    "c = c.result()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "outputs": [],
   "source": [
    "c[0].shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "source": [
    "It can also be ``exec``:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "dview.apply(exec, 'c = a + b') # Note, this uses the variables defined on the engines."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "dview['c'][0].shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "dview.execute('c=a+b')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "dview['c']"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "## Remote functions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "source": [
    "There are two decorators, ``@parallel`` and ``@remote``, that create functions that are executed on the engines.\n",
    "\n",
    "A function decorated with ``@parallel`` takes a sequence or an array as an argument and distributes the work over the engines. Each engine still gets a sequence or array and should return one, too."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "from ipyparallel import parallel, remote"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "@dview.parallel(block=True)\n",
    "def even(x):\n",
    "    \"\"\"Return only even elements of x\n",
    "    \n",
    "    Parameters\n",
    "    ----------\n",
    "    x : sequence or array\n",
    "        A list of values\n",
    "    \n",
    "    Returns\n",
    "    -------\n",
    "    res : like x\n",
    "        even elements of x\n",
    "    \"\"\"\n",
    "    return [e for e in x if not e % 2]\n",
    "#    return None if x % 2 else x\n",
    "    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "even(list(range(0,16)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "A `remote` function, on the other hand, just runs on each engine with the full set of data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "@dview.remote(block=True)\n",
    "def scale(a):\n",
    "    for i in range(len(a)):\n",
    "        a[i] *= 2\n",
    "        \n",
    "    return a"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "scale(list(range(0, 16)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "## Moving data around"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "So far the runtime has taken care of moving data to and from the engines, but we can also do this explicitly. There are four methods for that:\n",
    "\n",
    "* push\n",
    "* pull\n",
    "* scatter\n",
    "* gather"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "Push takes a dictionary with the remote variable name as key:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "outputs": [],
   "source": [
    "dview.block=True"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "localA = list(range(10))\n",
    "dview.push(dict(remoteA=localA))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "We can get a variable back with pull. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "dview.pull('remoteA')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "There's also a shorthand notation, where we treat the view as a dictionary."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "dview['remoteA']"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "The methods ``push`` and ``pull`` push/pull the same data to/from all engines. They don't take a list and distribute it. That's what ``scatter`` and ``gather`` do. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "dview.scatter('a',list(range(24)))\n",
    "dview['a']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "dview.gather('a')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "## List comprehension"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "With those methods at hand, we can build a parallel list comprehension."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "dview.scatter('x',list(range(64)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "%px y = [i**10 for i in x]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "y = dview.gather('y')\n",
    "y"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "## Exploring Latency and Bandwidth"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "Latency (the time until something happens) and bandwidth (the amount of data we get through the network per unit of time) are two important properties of your parallel system that define what is practical and what is not. We will use the ``%timeit`` magic to measure these properties. ``%timeit`` and its sibling ``%%timeit`` measure the run time of a statement (or of a whole cell, in the case of ``%%timeit``) by executing it multiple times. For short-running statements, each run executes many loops, and the best time measured is displayed. The number of runs and the number of loops per run can be adjusted. Take a look at the documentation. Give it a try."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": []
  },
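  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "Outside of IPython you can get a similar measurement with the standard library's ``timeit`` module. A minimal sketch (the statement being timed is just an arbitrary example):\n",
    "\n",
    "``` python\n",
    "import timeit\n",
    "\n",
    "# Time a statement the way %timeit does: several repeats of many loops\n",
    "# each, then report the best time per single execution.\n",
    "timer = timeit.Timer(stmt=lambda: sum(range(1000)))\n",
    "loops, _ = timer.autorange()  # choose a loop count automatically\n",
    "best = min(timer.repeat(repeat=5, number=loops)) / loops\n",
    "print('%.2f us per loop (best of 5, %d loops each)' % (best * 1e6, loops))\n",
    "```"
   ]
  },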
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "Let's first see how long it takes to send off a new task using ``execute`` and ``apply``."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "dview.block = False"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "%%px --noblock --local\n",
    "a = np.random.random([2000, 2000])\n",
    "b = np.random.random([2000, 2000])\n",
    "c = a.dot(b)\n",
    "c.sum() / 4.0e9"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "Let's first execute nothing."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "%timeit dview.execute('')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "Next we'll use a very minimal function that just returns its argument. In this case the argument is an empty string."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "%timeit dview.apply(lambda x : x, '')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "Here, we'll tell every engine to perform a matrix-matrix multiplication (see [Matrix-Matrix Multiplication Using a DirectView](#Matrix-Matrix-Multiplication-Using-a-DirectView) below for more about matrix multiplications)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "%timeit -n 1 -r 4 dview.execute('c = a.dot(b)')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "Now, we'll make the execution blocking. This means, we are measuring the time the function needs to return a result instead of just the time needed to launch the task."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "dview.block=True"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "%timeit dview.execute('')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "%timeit dview.apply(lambda x : x, '')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "%timeit -n 1 -r 4 rc[0].execute('c = a.dot(b)')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "%timeit a.dot(b)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "dview.block=False"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "We can start about 500 parallel tasks per second and finish about a quarter as many. This gives an estimate of the granularity we need to use this model for efficient parallelization. Any task that takes less time than this will be dominated by the overhead."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "To get an idea of the available bandwidth, let's push some arrays to the engines. We make this blocking."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "dview.block=True"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "a = np.random.random(256*1024)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "%timeit dview.push(dict(a=a))\n",
    "%timeit dview.push(dict(a=a[:128*1024]))\n",
    "%timeit dview.push(dict(a=a[:64*1024]))\n",
    "%timeit dview.push(dict(a=a[:32*1024]))\n",
    "%timeit dview.push(dict(a=a[:16*1024]))\n",
    "%timeit dview.push(dict(a=a[:8*1024]))\n",
    "%timeit dview.push(dict(a=a[:4*1024]))\n",
    "%timeit dview.push(dict(a=a[:2*1024]))\n",
    "%timeit dview.push(dict(a=a[:1024]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "Calculate the bandwidth for the largest array and the smallest array."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "# Largest push: 256 Ki doubles = 2048 kB per engine; smallest: 1 Ki doubles = 8 kB.\n",
    "# Replace the times below with the times you measured above.\n",
    "bwmax = len(rc) * 256 * 8 / 9.8e-3\n",
    "bwmin = len(rc) * 8 / 6.1e-3\n",
    "print(\"The bandwidth is between %.2f kB/s and %.2f kB/s.\" % (bwmin, bwmax))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "## Matrix-Matrix Multiplication Using a DirectView"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "Matrix multiplication is one of the favorites in High Performance Computing (HPC). It's computationally intensive (if done right), easily parallelized with little communication, and the basis of many real-world applications.\n",
    "\n",
    "Let's say, we have two matrices A and B, where\n",
    "\n",
    "$$ A = \\left ( \\begin{array}{cccc}\n",
    "                4 & 3 & 1 & 6 \\\\\n",
    "                1 & 2 & 0 & 3 \\\\\n",
    "                7 & 9 & 2 & 0 \\\\\n",
    "                2 & 2 & -1 & 4 \\\\\n",
    "               \\end{array}\n",
    "       \\right ) $$\n",
    "\n",
    "and \n",
    "\n",
    "$$ B = \\left ( \\begin{array}{cc}\n",
    "                \\frac{1}{12} & \\frac{1}{2} \\\\\n",
    "                \\frac{1}{9}  & \\frac{1}{4} \\\\\n",
    "                \\frac{1}{3}  &      1      \\\\\n",
    "                \\frac{1}{7}  & -\\frac{1}{3}\n",
    "                \\end{array}\n",
    "       \\right ). $$\n",
    "\n",
    "To calculate the element of $C = A B$ at row $i$ and column $j$, we perform a dot (scalar) product of the $i$-th row of $A$ and the $j$-th column of $B$:\n",
    "\n",
    "$$ C_{ij} = \\sum_k A_{ik} B_{kj}. $$\n",
    "\n",
    "For this to work, the number of columns in $A$ has to be equal to the number of rows in $B$."
   ]
  },
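  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "As a sanity check, the formula above can be written out as three nested loops (a sketch for illustration only; ``np.dot`` below does the same thing much faster):\n",
    "\n",
    "``` python\n",
    "import numpy as np\n",
    "\n",
    "def matmul(A, B):\n",
    "    # C[i, j] = sum over k of A[i, k] * B[k, j]\n",
    "    n, m = A.shape\n",
    "    m2, p = B.shape\n",
    "    assert m == m2  # columns of A must match rows of B\n",
    "    C = np.zeros((n, p))\n",
    "    for i in range(n):\n",
    "        for j in range(p):\n",
    "            for k in range(m):\n",
    "                C[i, j] += A[i, k] * B[k, j]\n",
    "    return C\n",
    "\n",
    "A = np.random.random((4, 4))\n",
    "B = np.random.random((4, 2))\n",
    "assert np.allclose(matmul(A, B), A.dot(B))\n",
    "```"
   ]
  },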
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "We can generate two matrices of size n by n filled with random numbers using ``np.random.random``."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "n = 16\n",
    "A = np.random.random([n, n])\n",
    "B = np.random.random([n, n])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "NumPy includes the dot product. For 2-dimensional arrays ``np.dot`` performs a matrix-matrix multiplication."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "C = np.dot(A, B)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "%timeit np.dot(A, B)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "There are different ways to parallelize a matrix-matrix multiplication. For example, each element of the result matrix can be calculated independently."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "%%timeit  \n",
    "p = len(rc)\n",
    "# Distribute the elements of the result matrix round-robin across the engines.\n",
    "C1h = [[rc[(i * n + j) % p].apply(np.dot, A[i,:], B[:,j]) for j in range(n)] for i in range(n)]\n",
    "# Wait until the calculation is done\n",
    "dview.wait()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "This, however, produces $n^2$ short tasks and the overhead (latency) is just overwhelming.\n",
    "\n",
    "We want to calculate\n",
    "\n",
    "$$ C = \\left ( \\begin{array}{cccc}\n",
    "                4 & 3 & 1 & 6 \\\\\n",
    "                1 & 2 & 0 & 3 \\\\\n",
    "                7 & 9 & 2 & 0 \\\\\n",
    "                2 & 2 & -1 & 4 \\\\\n",
    "               \\end{array}\n",
    "       \\right ) \n",
    "              \\left ( \\begin{array}{cc}\n",
    "                \\frac{1}{12} & \\frac{1}{2} \\\\\n",
    "                \\frac{1}{9}  & \\frac{1}{4} \\\\\n",
    "                \\frac{1}{3}  &      1      \\\\\n",
    "                \\frac{1}{7}  & -\\frac{1}{3}\n",
    "                \\end{array}\n",
    "       \\right ). \n",
    "$$\n",
    "\n",
    "We can split the matrices into tiles. In the above example, we might use 2 by 2 tiles.\n",
    "\n",
    "$$ C = \\left ( \\begin{array} {cc}\n",
    "               a_{00} & a_{01} \\\\\n",
    "               a_{10} & a_{11}\n",
    "               \\end{array} \\right )\n",
    "       \\left ( \\begin{array} {c}\n",
    "               b_{00} \\\\\n",
    "               b_{10}\n",
    "               \\end{array} \\right )\n",
    "     = \\left ( \\begin{array} {c}\n",
    "               a_{00} b_{00} + a_{01} b_{10} \\\\\n",
    "               a_{10} b_{00} + a_{11} b_{10}\n",
    "               \\end{array} \\right )\n",
    "               ,\n",
    "$$\n",
    "\n",
    "where, for example, $a_{00}= \\left ( \\begin{array}{cc} 4 & 3 \\\\ 1 & 2 \\end{array} \\right )$. $a_{00}b_{00}$ is a matrix-matrix product and the addition of two matrices of the same shape is defined element by element."
   ]
  },
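  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "The block identity can be checked numerically with small random matrices (a standalone sketch that does not use the engines; the ``t``-suffixed names are local to this check):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Standalone check: the blocked product equals the full product\n",
    "At = np.random.random((4, 4))\n",
    "Bt = np.random.random((4, 2))\n",
    "a00t, a01t = At[:2, :2], At[:2, 2:]\n",
    "a10t, a11t = At[2:, :2], At[2:, 2:]\n",
    "b00t, b10t = Bt[:2], Bt[2:]\n",
    "Ct = np.vstack([a00t @ b00t + a01t @ b10t,\n",
    "                a10t @ b00t + a11t @ b10t])\n",
    "np.allclose(Ct, At @ Bt)"
   ]
  },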
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "In our example, we have two $n$ by $n$ matrices, and we are going to split them into quadrants."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "n = 4096\n",
    "A = np.random.random([n, n])\n",
    "B = np.random.random([n, n])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "tdot = %timeit -o np.dot(A,B)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "type(n // 2)  # floor division returns an int, which we can use as a slice index"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "nhalf = n // 2\n",
    "a00 = A[:nhalf, :nhalf]\n",
    "a01 = A[:nhalf, nhalf:]\n",
    "a10 = A[nhalf:, :nhalf]\n",
    "a11 = A[nhalf:, nhalf:]\n",
    "b00 = B[:nhalf, :nhalf]\n",
    "b01 = B[:nhalf, nhalf:]\n",
    "b10 = B[nhalf:, :nhalf]\n",
    "b11 = B[nhalf:, nhalf:]"
   ]
  },
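  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "Note that NumPy's basic slicing returns views, so these quadrants share memory with ``A`` and ``B`` instead of copying the data:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "# The quadrants are views, not copies: no extra memory on the client\n",
    "np.shares_memory(A, a00), a00.base is A"
   ]
  },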
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "The calculation of the partial results in Python looks very similar to the mathematical description above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "c00 = np.dot(a00, b00) + np.dot(a01, b10)\n",
    "c01 = np.dot(a00, b01) + np.dot(a01, b11)\n",
    "c10 = np.dot(a10, b00) + np.dot(a11, b10)\n",
    "c11 = np.dot(a10, b01) + np.dot(a11, b11)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "%%timeit -o\n",
    "c00 = np.dot(a00, b00) + np.dot(a01, b10)\n",
    "c01 = np.dot(a00, b01) + np.dot(a01, b11)\n",
    "c10 = np.dot(a10, b00) + np.dot(a11, b10)\n",
    "c11 = np.dot(a10, b01) + np.dot(a11, b11)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "_.best / tdot.best  # ratio of the blocked version's best time to plain np.dot"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "Hm, this is slower than doing it directly...\n",
    "\n",
    "Next we create one view per engine."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "d0 = rc[0]\n",
    "d1 = rc[1]\n",
    "d2 = rc[2]\n",
    "d3 = rc[3]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "%timeit d0.apply(lambda A, B : np.dot(A, B), A, B).wait()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "c00h = d0.apply(lambda a, b, c, d : np.dot(a, b) + np.dot(c, d), a00, b00, a01, b10)\n",
    "c01h = d1.apply(lambda a, b, c, d : np.dot(a, b) + np.dot(c, d), a00, b01, a01, b11)\n",
    "c10h = d2.apply(lambda a, b, c, d : np.dot(a, b) + np.dot(c, d), a10, b00, a11, b10)\n",
    "c11h = d3.apply(lambda a, b, c, d : np.dot(a, b) + np.dot(c, d), a10, b01, a11, b11)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "c00h.wait()\n",
    "c01h.wait()\n",
    "c10h.wait()\n",
    "c11h.wait()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "c00 = c00h.get()\n",
    "c01 = c01h.get()\n",
    "c10 = c10h.get()\n",
    "c11 = c11h.get()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "%%timeit\n",
    "c00h = d0.apply(lambda a, b, c, d : np.dot(a, b) + np.dot(c, d), a00, b00, a01, b10)\n",
    "c01h = d1.apply(lambda a, b, c, d : np.dot(a, b) + np.dot(c, d), a00, b01, a01, b11)\n",
    "c10h = d2.apply(lambda a, b, c, d : np.dot(a, b) + np.dot(c, d), a10, b00, a11, b10)\n",
    "c11h = d3.apply(lambda a, b, c, d : np.dot(a, b) + np.dot(c, d), a10, b01, a11, b11)\n",
    "c00h.wait()\n",
    "c01h.wait()\n",
    "c10h.wait()\n",
    "c11h.wait()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "Nothing says we have to stop at four tiles, nor do we have to use square tiles. We could also subdivide the tiles recursively.\n",
    "\n",
    "The code is not any faster because our NumPy installation already blocks the matrices internally and uses all available cores, but it shows the principle."
   ]
  },
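  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "source": [
    "To illustrate the recursive idea, here is a minimal serial sketch in plain NumPy. The name ``tiled_dot`` is made up for this example, and it assumes square matrices with a power-of-two size; a parallel version would hand the leaf products to the engines instead of calling ``np.dot`` directly:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "skip"
    }
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def tiled_dot(A, B, leaf=64):\n",
    "    # Recursively split square matrices (power-of-two size, for simplicity)\n",
    "    # into quadrants; multiply the leaves with np.dot.\n",
    "    n = A.shape[0]\n",
    "    if n <= leaf:\n",
    "        return np.dot(A, B)\n",
    "    h = n // 2\n",
    "    a00, a01, a10, a11 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]\n",
    "    b00, b01, b10, b11 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]\n",
    "    return np.block([\n",
    "        [tiled_dot(a00, b00, leaf) + tiled_dot(a01, b10, leaf),\n",
    "         tiled_dot(a00, b01, leaf) + tiled_dot(a01, b11, leaf)],\n",
    "        [tiled_dot(a10, b00, leaf) + tiled_dot(a11, b10, leaf),\n",
    "         tiled_dot(a10, b01, leaf) + tiled_dot(a11, b11, leaf)]])\n",
    "\n",
    "X = np.random.random((256, 256))\n",
    "Y = np.random.random((256, 256))\n",
    "np.allclose(tiled_dot(X, Y), np.dot(X, Y))"
   ]
  },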
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "celltoolbar": "Slideshow",
  "kernelspec": {
   "display_name": "HPC Python 2022",
   "language": "python",
   "name": "hpcpy22"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}