## Model Extensions
### Inception Blocks
MLAir provides an easy interface to add extensions. Specifically, the code comes with an extension for inception blocks
as proposed by Szegedy et al. (2014). An inception block is a collection of multiple network towers, where a tower is a
sequence of successive (standard) layers that generally contains at least a padding layer and one convolution or
pooling layer. Such towers can additionally contain a convolutional layer of kernel size 1x1 for information
compression (reduction of filter size), or batch normalisation layers.
After initialising the inception blocks via *InceptionModelBase*, one can add an arbitrary number of
individual inception blocks. The initialisation sets all counters for the internal naming conventions.
The inception model requires two dictionaries as inputs, specifying the convolutional and the pooling towers,
respectively. The convolutional dictionary contains one dictionary per tower, allowing different reduction filters,
kernel and filter sizes of the main convolution, and activation functions to be used.
See a description [here](https://towardsdatascience.com/a-simple-guide-to-the-versions-of-the-inception-network-7fc52b863202)
or take a look at the papers [Going Deeper with Convolutions (Szegedy et al., 2014)](https://arxiv.org/abs/1409.4842)
and [Network In Network (Lin et al., 2014)](https://arxiv.org/abs/1312.4400).
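
As a minimal sketch of how such a block might be assembled (the dictionary keys and the `inception_block` method name
follow the description above; treat the exact names and shapes as assumptions to verify against the MLAir source):

```python
import keras

from mlair.model_modules.inception_model import InceptionModelBase

# One sub-dictionary per convolutional tower; each tower can use its own
# reduction filter, kernel and filter size, and activation function.
conv_settings = {
    'tower_1': {'reduction_filter': 64, 'tower_kernel': (3, 1),
                'tower_filter': 64, 'activation': 'relu'},
    'tower_2': {'reduction_filter': 64, 'tower_kernel': (5, 1),
                'tower_filter': 64, 'activation': 'relu'},
}
# Settings for the pooling tower.
pool_settings = {'pool_kernel': (3, 1), 'tower_filter': 64, 'activation': 'relu'}

x_input = keras.layers.Input(shape=(32, 1, 3))
inception_net = InceptionModelBase()  # sets the counters for internal naming
x = inception_net.inception_block(x_input, conv_settings, pool_settings)
```

Calling `inception_block` again on `x` with new settings would stack a second block on top of the first.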
### Paddings
For some network layers like convolutions, it is common to pad the input data to prevent the dimensions from
shrinking. In classical image recognition tasks, zero padding is used most often. In the context of meteorology,
however, zero padding might create artificial effects at the boundaries. We therefore adopted the symmetric and
reflection padding layers from *TensorFlow* so that they can be used as *Keras* layers. The layers are named
*SymmetricPadding2D* and *ReflectionPadding2D*. Both layers require the *padding* size as input. We provide a helper
function to calculate the padding size for a given convolutional kernel size.
![pad1](./../../docs/_source/_plots/padding_example1.png)
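
A minimal sketch of this workflow (assuming the helper is exposed as `PadUtils.get_padding_for_same` in
`advanced_paddings`; check both names against the MLAir source):

```python
import keras

from mlair.model_modules.advanced_paddings import PadUtils, SymmetricPadding2D

kernel_size = (5, 1)
# Derive the padding size needed to keep the spatial dimensions
# unchanged ("same" behaviour) for the given kernel.
padding_size = PadUtils.get_padding_for_same(kernel_size)

x_input = keras.layers.Input(shape=(32, 1, 3))
x = SymmetricPadding2D(padding=padding_size)(x_input)  # mirror values at the borders
x = keras.layers.Conv2D(filters=16, kernel_size=kernel_size, activation='relu')(x)
```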
Additionally, we provide the wrapper class *Padding2D*, which combines symmetric, reflection and zero padding. This
class allows switching between the different padding types while keeping the overall model structure untouched.
![pad2](./../../docs/_source/_plots/padding_example2.png)
This figure shows an example of how to apply the wrapper Padding2D and specify the *padding_type* (e.g.
"SymmetricPadding2D" or "ReflectionPadding2D"); a corresponding code sketch follows the table below. The wrapper can
also handle other user-specific padding types. The following table lists all padding types that are currently
supported.
| padding layer (long name) | short name |
|---------------------------|------------|
| ReflectionPadding2D* | RefPad2D |
| SymmetricPadding2D* | SymPad2D |
| ZeroPadding2D** | ZeroPad2D |
\* implemented in MLAir, \*\* implemented in *Keras*
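
As announced above, a hedged sketch of using the wrapper (the call pattern, where *Padding2D* first receives the
*padding_type* and then instantiates the actual padding layer, is inferred from the figure's description; verify the
exact signature against the MLAir source):

```python
import keras

from mlair.model_modules.advanced_paddings import PadUtils, Padding2D

kernel_size = (3, 3)
padding_size = PadUtils.get_padding_for_same(kernel_size)

x_input = keras.layers.Input(shape=(32, 32, 3))
# Swapping 'SymmetricPadding2D' for 'RefPad2D' or 'ZeroPad2D' changes only
# the boundary treatment; the rest of the model stays untouched.
x = Padding2D('SymmetricPadding2D')(padding=padding_size, name='custom_pad')(x_input)
x = keras.layers.Conv2D(filters=16, kernel_size=kernel_size)(x)
```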