Non-distributed training

Implement non-distributed training before #16 (closed), so that there is running code on a single node with a single GPU.
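
As a rough illustration of the intended single-node setup, here is a minimal sketch of a single-GPU training loop. The issue does not name a framework, so this assumes PyTorch; the model, data, and hyperparameters are placeholders, not from the actual code base:

```python
# Minimal single-node, single-GPU workflow sketch (assumes PyTorch;
# model, data, and hyperparameters are placeholders).
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(10, 2).to(device)                   # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    inputs = torch.randn(32, 10, device=device)       # placeholder batch
    targets = torch.randint(0, 2, (32,), device=device)
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
```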

Note on the re-training tasks: the current implementation loads weights if they are available, but the training history is not loaded, so training restarts at epoch 0. There should be a way to truly resume training. Check how to store and load a model for this case (see the sketch below).
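
One common approach to resuming is to save the last epoch number and the optimizer state alongside the weights, and restore all three when continuing. A minimal sketch, again assuming PyTorch; `save_checkpoint`, `load_checkpoint`, and the file path are hypothetical names, not from the code base:

```python
# Checkpoint sketch for resumable training (assumes PyTorch;
# function names and file path are placeholders).
import torch

def save_checkpoint(model, optimizer, epoch, path="checkpoint.pt"):
    torch.save({
        "epoch": epoch,                              # last completed epoch
        "model_state": model.state_dict(),           # weights
        "optimizer_state": optimizer.state_dict(),   # e.g. momentum buffers
    }, path)

def load_checkpoint(model, optimizer, path="checkpoint.pt"):
    checkpoint = torch.load(path)
    model.load_state_dict(checkpoint["model_state"])
    optimizer.load_state_dict(checkpoint["optimizer_state"])
    return checkpoint["epoch"] + 1                   # epoch to resume from
```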

Tasks:

  • implement general workflow
  • implement re-training → moved to #30 (closed)
  • write tests