Merge branch 'michael_issue#110_parse_hyperparameters_to_dataset_instance' into develop
Changed files:
- video_prediction_tools/hparams/era5/convLSTM/model_hparams_template.json (2 additions, 2 deletions)
- video_prediction_tools/hparams/era5/mcnet/model_hparams_template.json (2 additions, 2 deletions)
- video_prediction_tools/hparams/era5/ours_vae_l1/model_hparams_template.json (1 addition, 1 deletion)
- video_prediction_tools/hparams/era5/savp/model_hparams_template.json (1 addition, 1 deletion)
- video_prediction_tools/hparams/era5/vae/model_hparams_template.json (2 additions, 2 deletions)
- video_prediction_tools/main_scripts/main_train_models.py (6 additions, 5 deletions)
-
@langguth1 Now I remember why we use
self.model_hparams_dict_load.update({"sequence_length": self.train_dataset.sequence_length})
: because we removed the hyperparameter "sequence_length" from the dataset, the default sequence_length from the TFRecord files is used. When training the model, we need to pass this information on to the model part, so we have to add it back. I think we revised this together :) Should I add a comment on it?
-
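To make the discussed behavior concrete, here is a minimal, self-contained sketch of the propagation step (the `Dataset` stand-in and `sync_sequence_length` helper are hypothetical names for illustration; only the `.update(...)` call mirrors the quoted line):

```python
class Dataset:
    """Stand-in for the training dataset wrapper; in the real code the
    sequence length would be read from the TFRecord metadata."""
    def __init__(self, sequence_length):
        self.sequence_length = sequence_length


def sync_sequence_length(model_hparams_dict, dataset):
    """Copy the dataset's sequence length back into the model hparams.

    "sequence_length" was removed from the dataset hyperparameters, so the
    trainer must re-inject the value before the model is built."""
    model_hparams_dict.update({"sequence_length": dataset.sequence_length})
    return model_hparams_dict


# "sequence_length" is absent here because it was dropped upstream.
hparams = {"batch_size": 8}
hparams = sync_sequence_length(hparams, Dataset(sequence_length=20))
print(hparams["sequence_length"])  # → 20
```

The key point is that the model-side hparams dict is mutated in place right before model construction, so the model always sees the sequence length that the TFRecords actually contain.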
Yes, I also remember now. I've also re-added it in the current working branch #116 (closed), which will (hopefully) make its way into the develop branch in a few minutes.
-
@gong1 I can also add a comment on this if you like. Just to avoid later confusion :-)
-
@langguth1 cool! Free to add comment there!