# JULAIN Talk: Thijs Vogels, 9 June 2022

*Authored May 10, 2022 by Susanne Wenzel*
PowerSGD can yield communication savings of > 100x and was used successfully to …
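PowerSGD achieves these savings by replacing each gradient matrix with a rank-r factorization computed via a single power-iteration step. The following is a minimal NumPy sketch of that core idea, not the authors' implementation: the function name, shapes, and rank are illustrative, and a real distributed version would additionally all-reduce the factors `p` and `q` across workers and keep an error-feedback buffer.

```python
import numpy as np

def powersgd_compress(grad, q_prev):
    """One PowerSGD-style round: rank-r approximation of a gradient
    matrix via a single power-iteration step (illustrative sketch)."""
    m = grad.reshape(grad.shape[0], -1)  # view gradient as a 2-D matrix
    p = m @ q_prev                       # (n x r) left factor
    p, _ = np.linalg.qr(p)               # orthonormalize columns of p
    q = m.T @ p                          # (m x r) right factor
    # workers would communicate p and q (n*r + m*r numbers)
    # instead of the full n*m gradient
    return p, q

# Usage: compress a 256x128 gradient to rank 4.
rng = np.random.default_rng(0)
grad = rng.standard_normal((256, 128))
q0 = rng.standard_normal((128, 4))     # warm-started from the previous step
p, q = powersgd_compress(grad, q0)
approx = p @ q.T                       # rank-4 reconstruction of grad
```

Since `p` has orthonormal columns, `p @ q.T = p @ p.T @ grad` is the projection of the gradient onto the subspace found by the power step; warm-starting `q0` from the previous iteration is what makes a single step sufficient in practice.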
Thijs is a PhD student at EPFL’s Machine Learning & Optimization Laboratory under Martin Jaggi.
He works on developing and understanding practical optimization algorithms for large-scale distributed training of deep learning models.
*Readings:*

* [RelaySum for Decentralized Deep Learning on Heterogeneous Data](https://arxiv.org/pdf/2110.04175.pdf), NeurIPS 2021
* [Practical Low-Rank Communication Compression in Decentralized Deep Learning](https://arxiv.org/pdf/2008.01425.pdf), NeurIPS 2020
* [PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization](https://arxiv.org/pdf/1905.13727.pdf), NeurIPS 2019
---