esde / machine-learning / MLAir · Commit 39a6459c

    adjusted docstrings

authored 5 years ago by lukas leufen
parent 08c461c4

Part of 3 merge requests: !125 "Release v0.10.0", !124 "Update Master to new version v0.10.0", !96 "Felix issue114 customise flatten tail"
Pipeline #35483 passed 5 years ago (stages: test, pages, deploy)
Showing 1 changed file: src/model_modules/flatten.py (11 additions, 7 deletions)
@@ -10,19 +10,24 @@ def get_activation(input_to_activate: keras.layers, activation: Union[Callable,
     """
     Apply activation on a given input layer.
-    This helper function is able to handle advanced keras activations as well as strings for standard activations
+    This helper function is able to handle advanced keras activations as well as strings for standard activations.

     :param input_to_activate: keras layer to apply activation on
     :param activation: activation to apply on `input_to_activate`. Can be a standard keras strings or activation layers
-    :param kwargs:
-    :return:
+    :param kwargs: keyword arguments used inside activation layer
+    :return: activation
+
+    .. code-block:: python
+
+        input_x = ... # your input data
+        x_in = keras.layer(<without activation>)(input_x)
+        # get activation via string
+        x_act_string = get_activation(x_in, 'relu')
+        # or get activation via layer callable
+        x_act_layer = get_activation(x_in, keras.layers.advanced_activations.ELU)
     """
     if isinstance(activation, str):
         name = kwargs.pop('name', None)
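The docstring above explains that `get_activation` accepts either a string naming a standard activation or a callable advanced-activation layer. A minimal, keras-free sketch of that string-vs-callable dispatch pattern (the lookup table and function name here are illustrative stand-ins, not MLAir's actual code):

```python
from typing import Callable, Union

def get_activation_sketch(value: float, activation: Union[Callable, str]) -> float:
    """Illustrative dispatch: strings select a standard activation by name,
    callables are applied directly (hypothetical helper, not MLAir's code)."""
    if isinstance(activation, str):
        # string path: look up a standard activation by its keras-style name
        table = {"relu": lambda x: max(0.0, x), "linear": lambda x: x}
        return table[activation](value)
    # callable path: advanced activations are applied as layer-like objects
    return activation(value)
```

In the real function, the string path forwards to a `keras.layers.Activation` lookup and the callable path instantiates the layer with the given kwargs; the branching logic is the same.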
@@ -42,7 +47,7 @@ def flatten_tail(input_x: keras.layers, inner_neurons: int, activation: Union[Ca
                  kernel_regularizer: keras.regularizers = None):
     """
-    Flatten output of convolutional layers
+    Flatten output of convolutional layers.

     :param input_x: Multidimensional keras layer (ConvLayer)
     :param output_neurons: Number of neurons in the last layer (must fit the shape of labels)
@@ -55,12 +60,12 @@ def flatten_tail(input_x: keras.layers, inner_neurons: int, activation: Union[Ca
     :param inner_neurons: Number of neurons in inner dense layer
     :param kernel_regularizer: regularizer to apply on conv and dense layers
-    :return:
+    :return: flatten branch with size n=output_neurons

     .. code-block:: python

         input_x = ... # your input data
-        conv_out = Conv2D(*args)(input_x) # your convolutional stack
+        conv_out = Conv2D(*args)(input_x) # your convolution stack
         out = flatten_tail(conv_out, inner_neurons=64, activation=keras.layers.advanced_activations.ELU,
                            output_neurons=4, output_activation='linear', reduction_filter=64,
@@ -69,7 +74,6 @@ def flatten_tail(input_x: keras.layers, inner_neurons: int, activation: Union[Ca
                            )
         model = keras.Model(inputs=input_x, outputs=[out])
     """
-    # compression layer
     if reduction_filter is None: