
Test Set Sample Uncertainty in PostProcessing

Test Set Sample Uncertainty via bootstrap method

To get a better understanding of the trained model's performance, the sample uncertainty is estimated on the test data using a bootstrap method.

Target

  • Estimate the sample uncertainty of the trained model
  • Estimate the sample uncertainty of the competitor models as well
  • Visualise the estimates

Working steps of the method

  • set the number of permutations (default = 1000) 🅰
  • set the block length of the permutation blocks (default = monthly) 🅰
  • set whether forecasts are harmonised before the error calculation or not (default = true) 🆎
  • divide the data into blocks along the time axis, with all stations together 🅱
  • get the total block count 🅰
  • calculate the error metric for each block (averaged over all stations and ahead steps within the block) 🅱
  • draw a random block with replacement 🅰
  • draw n = "total block count" times 🅰
  • calculate the average error metric for this single permutation 🅰
  • repeat this for the configured number of permutations and collect the error of each permutation (see the sketch after this list) 🅰
  • calculate overall statistical metrics (percentiles, ...) 🎱
  • store the overall statistical metrics on disk 🎱
  • visualise the overall statistical metrics with a box-and-whisker plot 🎱
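The core of the procedure is an ordinary block bootstrap over the pre-averaged block errors. The following is a minimal sketch of that loop, assuming the per-block errors are already available as a pandas Series indexed by block date; the function name `block_bootstrap_uncertainty`, the example error values, and the fixed seed are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np
import pandas as pd


def block_bootstrap_uncertainty(errors_per_block: pd.Series,
                                n_permutations: int = 1000,
                                seed: int = 0) -> pd.Series:
    """Estimate the sample uncertainty of a mean error metric with a block bootstrap.

    errors_per_block: one error value per time block (already averaged over
    all stations and ahead steps), indexed by the block date (e.g. "2016-10").
    """
    rng = np.random.default_rng(seed)
    n_blocks = len(errors_per_block)              # total block count
    values = errors_per_block.to_numpy()
    permutation_means = np.empty(n_permutations)
    for i in range(n_permutations):
        # draw "total block count" blocks with replacement ...
        sample = rng.choice(values, size=n_blocks, replace=True)
        # ... and average the error metric for this single permutation
        permutation_means[i] = sample.mean()
    return pd.Series(permutation_means, name="bootstrap_mean_error")


# Hypothetical usage: monthly-blocked error values of one model on the test set
monthly_errors = pd.Series(
    [0.91, 0.87, 1.05, 0.98, 1.10, 0.84],
    index=["2016-07", "2016-08", "2016-09", "2016-10", "2016-11", "2016-12"],
)
bootstrap = block_bootstrap_uncertainty(monthly_errors, n_permutations=1000)
# overall statistical metrics (percentiles, ...) for the box-and-whisker plot
print(bootstrap.quantile([0.05, 0.25, 0.50, 0.75, 0.95]))
```

Running this once per model (trained model and competitors) yields one distribution of permutation means per model, which can then be stored and compared in a single box-and-whisker plot.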

Design

  • error metric for each block: time (block date, e.g. "2016-10") x error value
  • average error metric for a single permutation: permutation number x error value (sketched below)
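A minimal sketch of these two structures, assuming they are held as pandas DataFrames with one column per model; the column names and values are illustrative assumptions, not the project's actual data.

```python
import pandas as pd

# error metric for each block: block date x error value, one column per model
block_errors = pd.DataFrame(
    {"trained_model": [0.91, 0.87, 1.05], "persistence": [1.20, 1.15, 1.31]},
    index=pd.Index(["2016-08", "2016-09", "2016-10"], name="block"),
)

# average error metric for each permutation: permutation number x error value
permutation_errors = pd.DataFrame(
    {"trained_model": [0.95, 0.93], "persistence": [1.22, 1.18]},
    index=pd.Index([0, 1], name="permutation"),
)
```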