* and, if needed, as a background paper on deep recurrent networks:<br>
Speech Recognition with Deep Recurrent Neural Networks<br>
Alex Graves, Abdel-rahman Mohamed, Geoffrey Hinton, 2013<br>
https://arxiv.org/abs/1303.5778
## Monday 20 January 10-11:30am
...
...
Simple but effective for improving accuracy in regression tasks
* RePr: Improved Training of Convolutional Filters <br>
Aaditya Prakash, James Storer, Dinei Florencio, Cha Zhang, Brandeis and Microsoft <br> https://arxiv.org/abs/1811.07275<br>
A training schedule using filter pruning and orthogonal reinitialization
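
As a reading aid, here is a minimal PyTorch sketch of such a prune-and-reinitialize training schedule. It is not the authors' exact procedure: filter importance is approximated here by the L2 norm of each filter (the paper ranks filters by an inter-filter orthogonality measure), and the prune ratio, model, and cycle lengths (`prune_filters`, `reinit_pruned`, `S1`/`S2`) are illustrative placeholders.

```python
import torch
import torch.nn as nn

def prune_filters(conv: nn.Conv2d, prune_ratio: float = 0.3) -> torch.Tensor:
    """Zero out the weakest filters of `conv` and return a boolean keep-mask."""
    with torch.no_grad():
        norms = conv.weight.flatten(1).norm(dim=1)       # one L2 norm per filter
        n_prune = int(prune_ratio * conv.out_channels)
        keep = torch.ones(conv.out_channels, dtype=torch.bool)
        keep[norms.argsort()[:n_prune]] = False          # mark the weakest filters
        conv.weight[~keep] = 0.0                         # drop them from the sub-network
    return keep

def reinit_pruned(conv: nn.Conv2d, keep: torch.Tensor) -> None:
    """Refill previously pruned filters with orthogonally initialized weights."""
    with torch.no_grad():
        fresh = torch.empty(int((~keep).sum()), conv.weight[0].numel())
        nn.init.orthogonal_(fresh)                       # rows are mutually orthogonal
        conv.weight[~keep] = fresh.view(-1, *conv.weight.shape[1:])

# Hypothetical model and schedule; in the paper these steps alternate with
# ordinary SGD training of the full network and of the pruned sub-network.
model = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10))
conv = model[0]
for cycle in range(3):
    # ... train the full network for S1 iterations ...
    keep = prune_filters(conv)
    # ... train the pruned sub-network for S2 iterations ...
    reinit_pruned(conv, keep)
```

Note that the sketch is simplified: keeping the pruned filters at zero during the sub-network training phase would additionally require masking their gradients.
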
### Done
* 20.1.2020 Martin Schultz<br>
**Speech Recognition**
* Deep Speech 2: End-to-End Speech Recognition in English and Mandarin, Amodei et al., 2015<br> https://arxiv.org/abs/1512.02595<br>
According to my (brief) search, this seems to represent the state of the art in speech recognition, discusses several topics relevant to time series analysis, and also highlights good use of HPC.
* Background paper about deep recurrent networks:<br>
Speech Recognition with Deep Recurrent Neural Networks<br>
Alex Graves, Abdel-rahman Mohamed, Geoffrey Hinton, 2013<br>
https://arxiv.org/abs/1303.5778
* ==> Journal Club 17 February 2020
* 17.12.2019 Joshua Scheidt
* Multi-Context Recurrent Neural Networks for Time Series Applications <br>
https://publications.waset.org/3524/pdf
* Global Sparse Momentum SGD for Pruning Very Deep Neural Networks <br>