diff --git a/LICENSE b/LICENSE
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..a79ea789a5b55f7328d1fd987293376838112048 100644
--- a/LICENSE
+++ b/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2020 Lukas Leufen
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
\ No newline at end of file
diff --git a/README.md b/README.md
index 7696415b9d9ad2168ad54b2d45b2b1606d39d89f..5c55b4094232908a56cdcf61ba437976f8714e8b 100644
--- a/README.md
+++ b/README.md
@@ -5,45 +5,213 @@ learning (ML) models for the analysis and forecasting of meteorological and air
 
 # Installation
 
-* Install __proj__ on your machine using the console. E.g. for opensuse / leap `zypper install proj`
-* A c++ compiler is required for the installation of the program __cartopy__
-* Install all requirements from `requirements.txt` preferably in a virtual environment
-* Installation of MLAir:
-    * Either clone MLAir from its repository in gitlab (link??) and use it without installation 
+MLAir is based on several python frameworks. To work properly, you have to install all packages from the
+`requirements.txt` file. Additionally, to support the geographical plotting part, it is required to install geo
+packages built for your operating system. The names of these packages may differ between systems; we refer
+here to the opensuse / leap OS. The geo plot can be removed from the `plot_list`; in this case there is no need to
+install the geo packages.
+
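+If you want to skip the geo packages, a minimal sketch could look like this (assuming that an empty `plot_list`
+disables all plotting; alternatively, list only the plots you need):
+```python
+import mlair
+
+# run without any plots, so no geo packages are required
+# (assumption: an empty plot_list disables plotting)
+mlair.run(plot_list=[])
+```
+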
+* (geo) Install **proj** on your machine using the console, e.g. for opensuse / leap: `zypper install proj`
+* (geo) A C++ compiler is required for the installation of the package **cartopy**
+* Install all requirements from [`requirements.txt`](https://gitlab.version.fz-juelich.de/toar/machinelearningtools/-/blob/master/requirements.txt)
+  preferably in a virtual environment
+* (tf) Currently, TensorFlow-1.13 is mentioned in the requirements. We have already tested TensorFlow-1.15 and couldn't
+  find any compatibility errors. Please note that tf-1.13 and tf-1.15 each come in two distinct flavours: the default 
+  package for CPU support, and the "-gpu" package for GPU support (see the sketch below this list). If the GPU version
+  is installed, MLAir will make use of the GPU device.
+* Installation of **MLAir**:
+    * Either clone MLAir from the [gitlab repository](https://gitlab.version.fz-juelich.de/toar/machinelearningtools.git) 
+      and use it without installation (besides the requirements) 
     * or download the distribution file (?? .whl) and install it via `pip install <??>`. In this case, you can simply
-    import MLAir in any python script inside your virtual environment using `import mlair`.
-    
-## Special instructions for installation on Jülich HPC systems
+      import MLAir in any python script inside your virtual environment using `import mlair`.
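+
+For example, a sketch of an installation with GPU support for TensorFlow (the version number follows the
+requirements mentioned above):
+```bash
+python3 -m venv venv && source venv/bin/activate
+pip install -r requirements.txt
+# swap the CPU package for the GPU flavour if a GPU device is available
+pip install tensorflow-gpu==1.13.1
+```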
 
-_Please note, that the HPC setup is customised for JUWELS and HDFML. When using another HPC system, you can use the HPC 
-setup files as a skeleton and customise it to your needs._
+# How to start with MLAir
 
-The following instruction guide you through the installation on JUWELS and HDFML. 
-* Clone the repo to HPC system (we recommend to place it in `/p/projects/<project name>`).
-* Setup venv by executing `source setupHPC.sh`. This script loads all pre-installed modules and creates a venv for 
-all other packages. Furthermore, it creates slurm/batch scripts to execute code on compute nodes. <br> 
-You have to enter the HPC project's budget name (--account flag).
-* The default external data path on JUWELS and HDFML is set to `/p/project/deepacf/intelliaq/<user>/DATA/toar_<sampling>`. 
-<br>To choose a different location open `run.py` and add the following keyword argument to `ExperimentSetup`: 
-`data_path=<your>/<custom>/<path>`. 
-* Execute `python run.py` on a login node to download example data. The program will throw an OSerror after downloading.
-* Execute either `sbatch run_juwels_develgpus.bash` or `sbatch run_hdfml_batch.bash` to verify that the setup went well.
-* Currently cartopy is not working on our HPC system, therefore PlotStations does not create any output.
+In this section, we show three examples of how to work with MLAir.
 
-Note: The method `PartitionCheck` currently only checks if the hostname starts with `ju` or `hdfmll`. 
-Therefore, it might be necessary to adopt the `if` statement in `PartitionCheck._run`.
+## Example 1
 
+We start MLAir in a dry run without any modification. Just import mlair and run it.
+```python
+import mlair
 
-# Security
+# just give it a dry run without any modification 
+mlair.run()
+```
+The logging output will show you a lot of information. Additional information (including debug messages) is collected
+inside the experiment path in the logging folder.
+```log
+INFO: mlair started
+INFO: ExperimentSetup started
+INFO: Experiment path is: /home/<usr>/mlair/testrun_network 
+...
+INFO: load data for DEBW001 from JOIN 
+...
+INFO: Training started
+...
+INFO: mlair finished after 00:00:12 (hh:mm:ss)
+```
 
-* To use hourly data from ToarDB via JOIN interface, a private token is required. Request your personal access token and
-add it to `src/join_settings.py` in the hourly data section. Replace the `TOAR_SERVICE_URL` and the `Authorization` 
-value. To make sure, that this **sensitive** data is not uploaded to the remote server, use the following command to
-prevent git from tracking this file: `git update-index --assume-unchanged src/join_settings.py`
+## Example 2
+
+Now we update the stations and customise the window history size parameter.
+
+```python
+import mlair
+
+# our new stations to use
+stations = ['DEBW030', 'DEBW037', 'DEBW031', 'DEBW015', 'DEBW107']
+
+# expanded temporal context to 14 (days, because of default sampling="daily")
+window_history_size = 14
+
+# restart the experiment with little customisation
+mlair.run(stations=stations, 
+          window_history_size=window_history_size)
+```
+The output looks similar, but we can see that the new stations are loaded.
+```log
+INFO: mlair started
+INFO: ExperimentSetup started
+...
+INFO: load data for DEBW030 from JOIN 
+INFO: load data for DEBW037 from JOIN 
+...
+INFO: Training started
+...
+INFO: mlair finished after 00:00:24 (hh:mm:ss)
+```
+
+## Example 3
+
+Let's just apply our trained model to new data. Therefore, we keep the window history size parameter but change the
+stations. In the run method, we need to disable the `trainable` and `create_new_model` parameters. MLAir will use the
+model we have trained before. Note that this only works if the experiment path has not changed or a suitable trained
+model is placed inside the experiment path.
+```python
+import mlair
+
+# our new stations to use
+stations = ['DEBY002', 'DEBY079']
+
+# same setting for window_history_size
+window_history_size = 14
+
+# run experiment without training
+mlair.run(stations=stations, 
+          window_history_size=window_history_size, 
+          create_new_model=False, 
+          trainable=False)
+```
+We can see from the terminal that no training was performed. The analysis is now carried out on the new stations.
+```log
+INFO: mlair started
+...
+INFO: No training has started, because trainable parameter was false. 
+...
+INFO: mlair finished after 00:00:06 (hh:mm:ss)
+```
+
+# Customised workflows and models
+
+## Custom Workflow
+
+MLAir provides a default workflow. If additional steps are to be performed, you have to append custom run modules to 
+the workflow.
+
+```python
+import mlair
+import logging
 
-# Customise your experiment
+class CustomStage(mlair.RunEnvironment):
+    """A custom MLAir stage for demonstration."""
+
+    def __init__(self, test_string):
+        super().__init__()  # always call super init method
+        self._run(test_string)  # call a class method
+        
+    def _run(self, test_string):
+        logging.info("Just running a custom stage.")
+        logging.info("test_string = " + test_string)
+        epochs = self.data_store.get("epochs")
+        logging.info("epochs = " + str(epochs))
+
+        
+# create your custom MLAir workflow
+CustomWorkflow = mlair.Workflow()
+# provide stages without initialisation
+CustomWorkflow.add(mlair.ExperimentSetup, epochs=128)
+# add also keyword arguments for a specific stage
+CustomWorkflow.add(CustomStage, test_string="Hello World")
+# finally execute custom workflow in order of adding
+CustomWorkflow.run()
+```
+```log
+INFO: mlair started
+...
+INFO: ExperimentSetup finished after 00:00:12 (hh:mm:ss)
+INFO: CustomStage started
+INFO: Just running a custom stage.
+INFO: test_string = Hello World
+INFO: epochs = 128
+INFO: CustomStage finished after 00:00:01 (hh:mm:ss)
+INFO: mlair finished after 00:00:13 (hh:mm:ss)
+```
+
+## Custom Model
+
+Each model has to inherit from the abstract model class to ensure smooth training and evaluation behaviour. It is 
+required to implement the `set_model` and `set_compile_options` methods. The latter has to set at least the loss.
+
+```python
+import keras
+from keras.losses import mean_squared_error as mse
+from keras.optimizers import SGD
+
+from mlair.model_modules import AbstractModelClass
+
+class MyLittleModel(AbstractModelClass):
+    """
+    A customised model with a 1x1 Conv layer and 3 Dense layers (with 32, 16,
+    and window_lead_time neurons). Dropout is used after the Conv layer.
+    """
+    def __init__(self, window_history_size, window_lead_time, channels):
+        super().__init__()
+        # settings
+        self.window_history_size = window_history_size
+        self.window_lead_time = window_lead_time
+        self.channels = channels
+        self.dropout_rate = 0.1
+        self.activation = keras.layers.PReLU
+        self.lr = 1e-2
+        # apply to model
+        self.set_model()
+        self.set_compile_options()
+        self.set_custom_objects(loss=self.compile_options['loss'])
+
+    def set_model(self):
+        # add 1 to window_size to include current time step t0
+        shape = (self.window_history_size + 1, 1, self.channels)
+        x_input = keras.layers.Input(shape=shape)
+        x_in = keras.layers.Conv2D(32, (1, 1), padding='same')(x_input)
+        x_in = self.activation()(x_in)
+        x_in = keras.layers.Flatten()(x_in)
+        x_in = keras.layers.Dropout(self.dropout_rate)(x_in)
+        x_in = keras.layers.Dense(32)(x_in)
+        x_in = self.activation()(x_in)
+        x_in = keras.layers.Dense(16)(x_in)
+        x_in = self.activation()(x_in)
+        x_in = keras.layers.Dense(self.window_lead_time)(x_in)
+        out = self.activation()(x_in)
+        self.model = keras.Model(inputs=x_input, outputs=[out])
+
+    def set_compile_options(self):
+        self.compile_options = {"optimizer": SGD(lr=self.lr),
+                                "loss": mse, 
+                                "metrics": ["mse"]}
+```
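+
+To use the customised model in an experiment, it can be handed over to the workflow. The following call is a sketch
+and assumes that the `model` keyword of `mlair.run` accepts a model class:
+```python
+import mlair
+
+# hand over the class itself, not an instance; MLAir sets the model up during model setup (assumption)
+mlair.run(model=MyLittleModel)
+```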
 
-This section summarises which parameters can be customised for a training.
 
 ## Transformation
 
@@ -98,8 +266,34 @@ scaling values instead of the calculation method. For method *centre*, std can s
 class: `xr.DataArray` with `dims=["variables"]` and one value for each variable.
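+
+As a minimal sketch (assuming the two variables `o3` and `temp`), such a scaling value container could be built
+like this:
+```python
+import xarray as xr
+
+# one value per variable, indexed by the "variables" dimension
+mean = xr.DataArray([50, 10], coords={"variables": ["o3", "temp"]}, dims=["variables"])
+```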
 
 
-## Inception Model
 
-See a description [here](https://towardsdatascience.com/a-simple-guide-to-the-versions-of-the-inception-network-7fc52b863202)
-or take a look on the papers [Going Deeper with Convolutions (Szegedy et al., 2014)](https://arxiv.org/abs/1409.4842)
-and [Network In Network (Lin et al., 2014)](https://arxiv.org/abs/1312.4400).
+
+
+# Special Remarks
+
+## Special instructions for installation on Jülich HPC systems
+
+_Please note that the HPC setup is customised for JUWELS and HDFML. When using another HPC system, you can use the HPC
+setup files as a skeleton and customise them to your needs._
+
+The following instructions guide you through the installation on JUWELS and HDFML.
+* Clone the repo to the HPC system (we recommend placing it in `/p/projects/<project name>`).
+* Set up the venv by executing `source setupHPC.sh`. This script loads all pre-installed modules and creates a venv for
+all other packages. Furthermore, it creates slurm/batch scripts to execute code on compute nodes. <br>
+You have to enter the HPC project's budget name (`--account` flag).
+* The default external data path on JUWELS and HDFML is set to `/p/project/deepacf/intelliaq/<user>/DATA/toar_<sampling>`.
+<br>To choose a different location, open `run.py` and add the keyword argument `data_path=<your>/<custom>/<path>` to
+`ExperimentSetup` (see the sketch below this list).
+* Execute `python run.py` on a login node to download example data. The program will throw an OSError after downloading.
+* Execute either `sbatch run_juwels_develgpus.bash` or `sbatch run_hdfml_batch.bash` to verify that the setup went well.
+* Currently, cartopy is not working on our HPC system; therefore, PlotStations does not create any output.
+
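+A minimal sketch of the `ExperimentSetup` call in `run.py` after adding the keyword argument (any other arguments
+already present in `run.py` stay as they are; the path is a placeholder):
+```python
+import mlair
+
+# point MLAir to a custom external data location
+mlair.ExperimentSetup(data_path="<your>/<custom>/<path>")
+```
+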
+Note: The method `PartitionCheck` currently only checks whether the hostname starts with `ju` or `hdfmll`.
+Therefore, it might be necessary to adapt the `if` statement in `PartitionCheck._run`.
+
+## Security using JOIN
+
+* To use hourly data from ToarDB via the JOIN interface, a private token is required. Request your personal access token
+and add it to `src/join_settings.py` in the hourly data section. Replace the `TOAR_SERVICE_URL` and the `Authorization`
+value. To make sure that this **sensitive** data is not uploaded to the remote server, use the following command to
+prevent git from tracking this file: `git update-index --assume-unchanged src/join_settings.py`
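+
+A purely hypothetical sketch of what the hourly section in `src/join_settings.py` could look like (the names are taken
+from the instruction above, the layout is an assumption, and the values are placeholders):
+```python
+TOAR_SERVICE_URL = "https://<toar-db-host>/<hourly-service>/"
+Authorization = "Token <your-private-token>"
+```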
diff --git a/docs/_source/conf.py b/docs/_source/conf.py
index ac1131a008f5c95a62718def6046085294f6efba..573918ee35e9757b8c0b32b2697fc0cc2bc0b38f 100644
--- a/docs/_source/conf.py
+++ b/docs/_source/conf.py
@@ -17,7 +17,7 @@ sys.path.insert(0, os.path.abspath('../..'))
 
 # -- Project information -----------------------------------------------------
 
-project = 'machinelearningtools'
+project = 'MLAir'
 copyright = '2020, Lukas H Leufen, Felix Kleinert'
 author = 'Lukas H Leufen, Felix Kleinert'
 
@@ -118,7 +118,7 @@ latex_elements = {
 # (source start file, target name, title,
 #  author, documentclass [howto, manual, or own class]).
 latex_documents = [
-    (master_doc, 'machinelearningtools.tex', 'MachineLearningTools Documentation',
+    (master_doc, 'mlair.tex', 'MLAir Documentation',
      author, 'manual'),
 ]
 
diff --git a/docs/_source/get-started.rst b/docs/_source/get-started.rst
index e5a82fdcf1d16ca2188a04e3dce76dc7ba9d477a..98a96d43675a0263be5bfc2d452b8af1c2626b60 100644
--- a/docs/_source/get-started.rst
+++ b/docs/_source/get-started.rst
@@ -1,16 +1,232 @@
-Get started with MachineLearningTools
-=====================================
+Get started with MLAir
+======================
 
-<what is machinelearningtools?>
+Install MLAir
+-------------
 
-MLT Module and Funtion Documentation
-------------------------------------
+MLAir is based on several python frameworks. To work properly, you have to install all packages from the
+``requirements.txt`` file. Additionally, to support the geographical plotting part, it is required to install geo
+packages built for your operating system. The names of these packages may differ between systems; we refer
+here to the opensuse / leap OS. The geo plot can be removed from the ``plot_list``; in this case there is no need to
+install the geo packages.
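+
+If you want to skip the geo packages, a minimal sketch could look like this (assuming that an empty ``plot_list``
+disables all plotting; alternatively, list only the plots you need):
+
+.. code-block:: python
+
+    import mlair
+
+    # run without any plots, so no geo packages are required
+    # (assumption: an empty plot_list disables plotting)
+    mlair.run(plot_list=[])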
 
-Install MachineLearningTools
-----------------------------
+* (geo) Install **proj** on your machine using the console, e.g. for opensuse / leap: ``zypper install proj``
+* (geo) A C++ compiler is required for the installation of the package **cartopy**
+* Install all requirements from `requirements.txt <https://gitlab.version.fz-juelich.de/toar/machinelearningtools/-/blob/master/requirements.txt>`_
+  preferably in a virtual environment
+* (tf) Currently, TensorFlow-1.13 is mentioned in the requirements. We have already tested TensorFlow-1.15 and couldn't
+  find any compatibility errors. Please note that tf-1.13 and tf-1.15 each come in two distinct flavours: the default
+  package for CPU support, and the "-gpu" package for GPU support (see the sketch below this list). If the GPU version
+  is installed, MLAir will make use of the GPU device.
+* Installation of **MLAir**:
+
+    * Either clone MLAir from the `gitlab repository <https://gitlab.version.fz-juelich.de/toar/machinelearningtools.git>`_
+      and use it without installation (besides the requirements)
+    * or download the distribution file (?? .whl) and install it via ``pip install <??>``. In this case, you can simply
+      import MLAir in any python script inside your virtual environment using ``import mlair``.
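+
+For example, a sketch of an installation with GPU support for TensorFlow (the version number follows the
+requirements mentioned above):
+
+.. code-block:: bash
+
+    python3 -m venv venv && source venv/bin/activate
+    pip install -r requirements.txt
+    # swap the CPU package for the GPU flavour if a GPU device is available
+    pip install tensorflow-gpu==1.13.1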
 
-Dependencies
+
+How to start with MLAir
+-----------------------
+
+In this section, we show three examples of how to work with MLAir.
+
+Example 1
+~~~~~~~~~
+
+We start MLAir in a dry run without any modification. Just import mlair and run it.
+
+.. code-block:: python
+
+    import mlair
+
+    # just give it a dry run without any modification
+    mlair.run()
+
+
+The logging output will show you a lot of information. Additional information (including debug messages) is collected
+inside the experiment path in the logging folder.
+
+.. code-block::
+
+    INFO: mlair started
+    INFO: ExperimentSetup started
+    INFO: Experiment path is: /home/<usr>/mlair/testrun_network
+    ...
+    INFO: load data for DEBW001 from JOIN
+    ...
+    INFO: Training started
+    ...
+    INFO: mlair finished after 00:00:12 (hh:mm:ss)
+
+
+Example 2
+~~~~~~~~~
+
+Now we update the stations and customise the window history size parameter.
+
+.. code-block:: python
+
+    import mlair
+
+    # our new stations to use
+    stations = ['DEBW030', 'DEBW037', 'DEBW031', 'DEBW015', 'DEBW107']
+
+    # expanded temporal context to 14 (days, because of default sampling="daily")
+    window_history_size = 14
+
+    # restart the experiment with little customisation
+    mlair.run(stations=stations,
+              window_history_size=window_history_size)
+
+The output looks similar, but we can see that the new stations are loaded.
+
+.. code-block::
+
+    INFO: mlair started
+    INFO: ExperimentSetup started
+    ...
+    INFO: load data for DEBW030 from JOIN
+    INFO: load data for DEBW037 from JOIN
+    ...
+    INFO: Training started
+    ...
+    INFO: mlair finished after 00:00:24 (hh:mm:ss)
+
+Example 3
+~~~~~~~~~
+
+Let's just apply our trained model to new data. Therefore, we keep the window history size parameter but change the
+stations. In the run method, we need to disable the ``trainable`` and ``create_new_model`` parameters. MLAir will use
+the model we have trained before. Note that this only works if the experiment path has not changed or a suitable
+trained model is placed inside the experiment path.
+
+.. code-block:: python
+
+    import mlair
+
+    # our new stations to use
+    stations = ['DEBY002', 'DEBY079']
+
+    # same setting for window_history_size
+    window_history_size = 14
+
+    # run experiment without training
+    mlair.run(stations=stations,
+              window_history_size=window_history_size,
+              create_new_model=False,
+              trainable=False)
+
+We can see from the terminal that no training was performed. The analysis is now carried out on the new stations.
+
+.. code-block::
+
+    INFO: mlair started
+    ...
+    INFO: No training has started, because trainable parameter was false.
+    ...
+    INFO: mlair finished after 00:00:06 (hh:mm:ss)
+
+
+
+Customised workflows and models
+-------------------------------
+
+Custom Workflow
+~~~~~~~~~~~~~~~
+
+MLAir provides a default workflow. If additional steps are to be performed, you have to append custom run modules to
+the workflow.
+
+.. code-block:: python
+
+    import mlair
+    import logging
+
+    class CustomStage(mlair.RunEnvironment):
+        """A custom MLAir stage for demonstration."""
+
+        def __init__(self, test_string):
+            super().__init__()  # always call super init method
+            self._run(test_string)  # call a class method
+
+        def _run(self, test_string):
+            logging.info("Just running a custom stage.")
+            logging.info("test_string = " + test_string)
+            epochs = self.data_store.get("epochs")
+            logging.info("epochs = " + str(epochs))
+
+
+    # create your custom MLAir workflow
+    CustomWorkflow = mlair.Workflow()
+    # provide stages without initialisation
+    CustomWorkflow.add(mlair.ExperimentSetup, epochs=128)
+    # add also keyword arguments for a specific stage
+    CustomWorkflow.add(CustomStage, test_string="Hello World")
+    # finally execute custom workflow in order of adding
+    CustomWorkflow.run()
+
+.. code-block::
+
+    INFO: mlair started
+    ...
+    INFO: ExperimentSetup finished after 00:00:12 (hh:mm:ss)
+    INFO: CustomStage started
+    INFO: Just running a custom stage.
+    INFO: test_string = Hello World
+    INFO: epochs = 128
+    INFO: CustomStage finished after 00:00:01 (hh:mm:ss)
+    INFO: mlair finished after 00:00:13 (hh:mm:ss)
+
+Custom Model
 ~~~~~~~~~~~~
 
-Data
-~~~~
+Each model has to inherit from the abstract model class to ensure smooth training and evaluation behaviour. It is
+required to implement the ``set_model`` and ``set_compile_options`` methods. The latter has to set at least the loss.
+
+.. code-block:: python
+
+    import keras
+    from keras.losses import mean_squared_error as mse
+    from keras.optimizers import SGD
+
+    from mlair.model_modules import AbstractModelClass
+
+    class MyLittleModel(AbstractModelClass):
+        """
+        A customised model with a 1x1 Conv layer and 3 Dense layers (with 32, 16,
+        and window_lead_time neurons). Dropout is used after the Conv layer.
+        """
+        def __init__(self, window_history_size, window_lead_time, channels):
+            super().__init__()
+            # settings
+            self.window_history_size = window_history_size
+            self.window_lead_time = window_lead_time
+            self.channels = channels
+            self.dropout_rate = 0.1
+            self.activation = keras.layers.PReLU
+            self.lr = 1e-2
+            # apply to model
+            self.set_model()
+            self.set_compile_options()
+            self.set_custom_objects(loss=self.compile_options['loss'])
+
+        def set_model(self):
+            # add 1 to window_size to include current time step t0
+            shape = (self.window_history_size + 1, 1, self.channels)
+            x_input = keras.layers.Input(shape=shape)
+            x_in = keras.layers.Conv2D(32, (1, 1), padding='same')(x_input)
+            x_in = self.activation()(x_in)
+            x_in = keras.layers.Flatten()(x_in)
+            x_in = keras.layers.Dropout(self.dropout_rate)(x_in)
+            x_in = keras.layers.Dense(32)(x_in)
+            x_in = self.activation()(x_in)
+            x_in = keras.layers.Dense(16)(x_in)
+            x_in = self.activation()(x_in)
+            x_in = keras.layers.Dense(self.window_lead_time)(x_in)
+            out = self.activation()(x_in)
+            self.model = keras.Model(inputs=x_input, outputs=[out])
+
+        def set_compile_options(self):
+            self.compile_options = {"optimizer": SGD(lr=self.lr),
+                                    "loss": mse,
+                                    "metrics": ["mse"]}
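+
+To use the customised model in an experiment, it can be handed over to the workflow. The following call is a sketch
+and assumes that the ``model`` keyword of ``mlair.run`` accepts a model class:
+
+.. code-block:: python
+
+    import mlair
+
+    # hand over the class itself, not an instance; MLAir sets the model up during model setup (assumption)
+    mlair.run(model=MyLittleModel)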