diff --git a/docs/_source/_plots/padding_example1.png b/docs/_source/_plots/padding_example1.png
new file mode 100755
index 0000000000000000000000000000000000000000..e609cbb9fe22f406c97ceb8637751e484d139409
Binary files /dev/null and b/docs/_source/_plots/padding_example1.png differ
diff --git a/docs/_source/_plots/padding_example2.png b/docs/_source/_plots/padding_example2.png
new file mode 100755
index 0000000000000000000000000000000000000000..cfc84c6961eb6d24aef135d9e8fc5bae74a78f8a
Binary files /dev/null and b/docs/_source/_plots/padding_example2.png differ
diff --git a/mlair/model_modules/GUIDE.md b/mlair/model_modules/GUIDE.md
new file mode 100644
index 0000000000000000000000000000000000000000..3cda63538b06a83afe9c0c20d9c6ef46d00633fe
--- /dev/null
+++ b/mlair/model_modules/GUIDE.md
@@ -0,0 +1,49 @@
+
+## Model Extensions
+
+### Inception Blocks
+
+MLAir provides an easy interface to add model extensions. Specifically, the code comes with an extension for inception 
+blocks as proposed by Szegedy et al. (2014). These inception blocks are a collection of multiple network towers. A 
+tower is a sequence of (standard) layers and generally contains at least a padding layer and either a convolutional or 
+a pooling layer. Additionally, such towers can contain a convolutional layer with kernel size 1x1 for information 
+compression (reduction of the number of filters), as well as batch normalisation layers. 
+
+After initialising the inception blocks via *InceptionModelBase*, one can add an arbitrary number of individual 
+inception blocks. The initialisation sets all counters used for the internal naming conventions.
+
+The inception model requires two dictionaries as inputs, specifying the convolutional and the pooling towers, 
+respectively. The convolutional dictionary contains one sub-dictionary per tower, which allows each tower to use a 
+different reduction filter, a different kernel size and number of filters for the main convolution, and a different 
+activation function. 
+
+See a description [here](https://towardsdatascience.com/a-simple-guide-to-the-versions-of-the-inception-network-7fc52b863202)
+or take a look at the papers [Going Deeper with Convolutions (Szegedy et al., 2014)](https://arxiv.org/abs/1409.4842)
+and [Network In Network (Lin et al., 2014)](https://arxiv.org/abs/1312.4400).
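+
+To illustrate the two-dictionary interface, the following sketch shows what such tower specifications could look like. 
+The keyword names (`reduction_filter`, `tower_kernel`, `tower_filter`, `pool_kernel`, `activation`) are assumptions 
+for illustration and may differ from the exact MLAir API:
+
+```python
+# Hypothetical tower configuration (keyword names are assumptions, not MLAir's exact API).
+# One sub-dictionary per convolutional tower:
+conv_settings = {
+    'tower_1': {'reduction_filter': 64,   # filters of the 1x1 compression convolution
+                'tower_kernel': (3, 3),   # kernel size of the main convolution
+                'tower_filter': 64,       # number of filters of the main convolution
+                'activation': 'relu'},
+    'tower_2': {'reduction_filter': 64,
+                'tower_kernel': (5, 5),
+                'tower_filter': 64,
+                'activation': 'relu'},
+}
+# A single dictionary describes the pooling tower:
+pool_settings = {'pool_kernel': (3, 3), 'tower_filter': 64, 'activation': 'relu'}
+```
+
+With dictionaries of this shape, a block could then be added to a model along the lines of 
+`InceptionModelBase().inception_block(x, conv_settings, pool_settings)` (call signature assumed for illustration).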
+
+
+### Paddings
+
+For some network layers, such as convolutions, it is common to pad the input data to prevent the shrinking of 
+dimensions. In classical image recognition tasks, zero padding is used most often. In a meteorological context, 
+however, zero padding might create artificial effects at the boundaries. We therefore adopted the symmetric and 
+reflection padding layers from *TensorFlow* so that they can be used as *Keras* layers. The layers are named 
+*SymmetricPadding2D* and *ReflectionPadding2D*. Both layers require the *padding* size as input. We provide a helper 
+function that calculates the padding size for a given convolutional kernel size. 
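+
+The computation behind such a helper can be sketched as follows. This is an illustrative stand-in only; MLAir's actual 
+helper may differ in name and signature:
+
+```python
+# Illustrative sketch, not MLAir's actual helper.
+def get_padding_for_same(kernel_size):
+    """Return the (height, width) padding that keeps the spatial dimensions
+    unchanged for a stride-1 convolution with an odd kernel size."""
+    return tuple((k - 1) // 2 for k in kernel_size)
+
+# A 5x5 kernel removes 4 rows and 4 columns in total,
+# so 2 rows/columns are padded on each side:
+# get_padding_for_same((5, 5)) -> (2, 2)
+```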
+
+![pad1](./../../docs/_source/_plots/padding_example1.png)
+
+Additionally, we provide the wrapper class *Padding2D*, which combines symmetric, reflection and zero padding. This 
+class allows switching between the different padding types while keeping the overall model structure untouched. 
+
+![pad2](./../../docs/_source/_plots/padding_example2.png)
+
+This figure shows an example of how to apply the *Padding2D* wrapper and specify the *padding_type* (e.g. 
+"SymmetricPadding2D" or "ReflectionPadding2D"). The following table lists all padding types that are currently 
+supported. The padding wrapper can also handle other user-specific padding types.
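+
+The design idea behind the wrapper, selecting the concrete layer class from the *padding_type* string (accepting both 
+long and short names), can be sketched as follows. The placeholder classes and the function name `resolve_padding` are 
+assumptions for illustration, not MLAir's actual code:
+
+```python
+# Minimal sketch of name-based dispatch (placeholder classes, not MLAir's code).
+class SymmetricPadding2D: ...
+class ReflectionPadding2D: ...
+class ZeroPadding2D: ...
+
+# Both the long and the short name map to the same layer class:
+PADDING_TYPES = {
+    'SymmetricPadding2D': SymmetricPadding2D, 'SymPad2D': SymmetricPadding2D,
+    'ReflectionPadding2D': ReflectionPadding2D, 'RefPad2D': ReflectionPadding2D,
+    'ZeroPadding2D': ZeroPadding2D, 'ZeroPad2D': ZeroPadding2D,
+}
+
+def resolve_padding(padding_type):
+    """Map a long or short padding name to its layer class."""
+    try:
+        return PADDING_TYPES[padding_type]
+    except KeyError:
+        raise NotImplementedError(f"Unknown padding type: {padding_type}")
+```
+
+Accepting both spellings keeps user configurations short while staying unambiguous; unknown names fail loudly instead 
+of silently falling back to zero padding.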
+
+| padding layer (long name) | short name |
+|---------------------------|------------|
+| ReflectionPadding2D*      | RefPad2D   |
+| SymmetricPadding2D*       | SymPad2D   |
+| ZeroPadding2D**           | ZeroPad2D  |
+\* implemented in MLAir &nbsp;&nbsp; \*\* implemented in Keras