REFAC: clean-up bootstrap workflow
The current implementation of the bootstrap workflow works, but it is not well designed. Therefore, perform an intensive refactoring to clean up, separate, and simplify the code.
Evaluate the usage of the predict_generator method: data is loaded and processed, then the bootstrap forecasts are split and the labels are loaded again to "merge" them with the bootstrap forecasts. This is somewhat counter-intuitive. It may be better to loop over the generator, create separate predictions, and store them directly instead of collecting all forecasts first (see the sketch below).
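A minimal sketch of this idea, assuming a Keras model and an iterable data generator; the names `data_generator`, `forecast_path`, and the yielded `(station_name, inputs, labels)` tuples are hypothetical and only illustrate storing each prediction directly instead of collecting all forecasts:

```python
import os
import numpy as np


def create_bootstrap_forecasts(model, data_generator, forecast_path):
    """Loop over the generator and store each station's forecast directly.

    Hypothetical sketch: `data_generator` is assumed to yield
    (station_name, inputs, labels) tuples; adjust to the real interface.
    """
    os.makedirs(forecast_path, exist_ok=True)
    for station_name, inputs, labels in data_generator:
        # predict on the already loaded inputs (per station) instead of
        # collecting everything at once via model.predict_generator
        prediction = model.predict(inputs)
        # store prediction and labels together so no later re-loading and
        # "merge" step is required
        np.savez(os.path.join(forecast_path, f"bootstraps_{station_name}.npz"),
                 prediction=prediction, labels=np.asarray(labels))
```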
- try out model.predict instead of model.predict_generator -> using model.predict is slower
- split create_boot_straps into smaller code snippets (see the sketch after this list)
- add option to skip bootstrap creation (if nothing changed)
- add option to skip bootstrap skill score calculation (or merge it with the preceding step to avoid running the same loop twice)
- is it possible to parallelise the bootstrap / skill score creation? -> yes, but if this is requested, open a new issue
- documentation
- testing
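A hedged sketch of how create_boot_straps could be split into smaller steps with a skip option; all names here (`bootstrap_files_exist`, `run_bootstrap_workflow`, `forecast_path`, the passed-in callables) are hypothetical and only illustrate the intended structure:

```python
import glob
import os


def bootstrap_files_exist(forecast_path: str, number_of_bootstraps: int) -> bool:
    """Return True if enough bootstrap forecast files are already present.

    Hypothetical check: a real implementation could also compare file
    timestamps against the trained model to detect whether anything changed.
    """
    files = glob.glob(os.path.join(forecast_path, "bootstraps_*.npz"))
    return len(files) >= number_of_bootstraps


def run_bootstrap_workflow(create_forecasts, calculate_skill_scores,
                           forecast_path, number_of_bootstraps=20):
    """Orchestrate the bootstrap workflow in small, separate steps.

    `create_forecasts` and `calculate_skill_scores` are placeholders for the
    smaller functions that create_boot_straps would be split into.
    """
    if not bootstrap_files_exist(forecast_path, number_of_bootstraps):
        # step 1: create and store bootstrap forecasts (skipped if nothing changed)
        create_forecasts(forecast_path)
    # step 2: calculate skill scores right afterwards so the stations are
    # only looped over once
    return calculate_skill_scores(forecast_path)
```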
It is difficult to distinguish whether something has changed or not. Does train=False imply that no bootstraps should be calculated? Or maybe this step was not performed before and bootstraps should be created for this non-trainable network. Therefore, this decision is moved to the parameter
evaluate_bootstraps={True, False}
for clearer behaviour.
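A hedged sketch of how this parameter could decouple the bootstrap evaluation from the train flag; the class name `PostProcessing` and its structure are assumptions for illustration only:

```python
class PostProcessing:
    """Illustrative sketch only: the real class and run setup may differ."""

    def __init__(self, train=True, evaluate_bootstraps=True):
        # evaluate_bootstraps decides explicitly whether the bootstrap
        # workflow runs, independent of whether the network is trainable
        self.train = train
        self.evaluate_bootstraps = evaluate_bootstraps

    def run(self):
        if self.evaluate_bootstraps:
            self._run_bootstrap_workflow()

    def _run_bootstrap_workflow(self):
        # placeholder for bootstrap creation and skill score calculation
        print("creating bootstraps and calculating skill scores ...")


# usage: bootstraps are evaluated even for a non-trainable network
PostProcessing(train=False, evaluate_bootstraps=True).run()
```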