JULAIN Talk: Thijs Vogels, 9 June 2022 (page authored May 10, 2022 by Susanne Wenzel)
PowerSGD can yield communication savings of > 100x and was used successfully to
Thijs is a PhD student at EPFL’s Machine Learning & Optimization Laboratory under Martin Jaggi.
He works on developing and understanding practical optimization algorithms for large-scale distributed training of deep learning models.
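The low-rank gradient compression behind PowerSGD (see the readings below) can be illustrated with a minimal sketch: instead of communicating a full gradient matrix, each worker sends the two factors of a rank-1 approximation obtained from a single power-iteration step. The function name, shapes, and warm-start handling here are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def compress_decompress(grad, q):
    # grad: 2-D gradient matrix M (n x m); q: warm-start vector (m,)
    # carried over from the previous optimization step.
    p = grad @ q                     # n-vector: M q
    p /= np.linalg.norm(p) + 1e-12   # normalize to get a rank-1 basis
    q_new = grad.T @ p               # m-vector: M^T p
    approx = np.outer(p, q_new)      # rank-1 approximation of M
    return approx, q_new             # q_new warm-starts the next step

rng = np.random.default_rng(0)
M = rng.standard_normal((256, 128))
q = rng.standard_normal(128)
approx, q = compress_decompress(M, q)

# Communicating p (256 floats) + q (128 floats) instead of all of M
# (256 * 128 = 32768 floats) shrinks this layer's message ~85x;
# higher-rank variants trade more communication for better fidelity.
```

In the full algorithm the approximation error is fed back into the next step (error feedback), which is what makes such aggressive compression viable in practice.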
### Readings:

* [RelaySum for Decentralized Deep Learning on Heterogeneous Data](https://arxiv.org/pdf/2110.04175.pdf), NeurIPS 2021
* [Practical Low-Rank Communication Compression in Decentralized Deep Learning](https://arxiv.org/pdf/2008.01425.pdf), NeurIPS 2020
* [PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization](https://arxiv.org/pdf/1905.13727.pdf), NeurIPS 2019