Convergence guarantees for gradient descent methods on optimization for non-convex function approximation (distributed neural network training).
Supervisor: Horváth Tomáš (ELTE IK, Telekom Innovation Laboratories)
Email: horvathtomi1976@gmail.com
Supervisors
- Kiss Péter (ELTE IK, Telekom Innovation Laboratories)
Project description
The problem arises in the so-called "federated learning" (FL) of neural networks. In practice, these models are trained almost exclusively with variants of gradient descent. When the gradients of the loss function over the function approximator (the NN) are unbiased, convergence analyses already exist. In FL, however, local gradients are computed over different data distributions, so the updates aggregated from them are biased, which makes this analysis very hard. (To the best of our knowledge, no meaningful results have been published for this problem.)
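A minimal sketch of where this bias enters, assuming a federated-averaging style training loop and a simple least-squares loss purely for illustration (the real project concerns non-convex NN losses; all names and parameters below are our own, not part of the project text):

```python
import numpy as np

def local_update(w, data, lr=0.1, local_steps=5):
    """Run several plain gradient-descent steps on one client's local data.

    Illustrative least-squares loss f_k(w) = ||X w - y||^2 / (2 n_k);
    the gradient is computed only over this client's data distribution.
    """
    X, y = data
    for _ in range(local_steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, client_data, lr=0.1, local_steps=5):
    """One round: each client trains locally, the server averages the results.

    With heterogeneous client data and local_steps > 1, this averaged update
    is a biased estimate of a global gradient step, which is exactly what
    breaks the standard unbiased-SGD convergence analyses mentioned above.
    """
    local_models = [local_update(w_global.copy(), d, lr, local_steps)
                    for d in client_data]
    return np.mean(local_models, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two clients with deliberately different data distributions (non-IID).
    clients = [
        (rng.normal(0, 1, (50, 3)), rng.normal(0, 1, 50)),
        (rng.normal(2, 1, (50, 3)), rng.normal(5, 1, 50)),
    ]
    w = np.zeros(3)
    for _ in range(20):
        w = fedavg_round(w, clients)
    print("FedAvg solution:", w)
```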
The goal of the project is to analyze whether convergence guarantees can be given in this setting, possibly by quantifying the divergence between the local data distributions.
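One way the goal could be formalised, purely as a sketch (the notation $F_k$, $\mathcal{D}_k$, $\zeta$ is our assumption and not taken from the project text): write the global objective as an average of client objectives, measure the heterogeneity of the local data through a gradient-dissimilarity bound, and ask for a bound on the expected gradient norm in terms of that quantity.

```latex
\[
  F(w) \;=\; \frac{1}{N}\sum_{k=1}^{N} F_k(w),
  \qquad
  F_k(w) \;=\; \mathbb{E}_{z\sim\mathcal{D}_k}\bigl[\ell(w;z)\bigr],
\]
\[
  \frac{1}{N}\sum_{k=1}^{N}\bigl\|\nabla F_k(w)-\nabla F(w)\bigr\|^2 \;\le\; \zeta^2
  \qquad\text{(bounded gradient dissimilarity between local datasets)},
\]
\[
  \text{goal: bound}\quad
  \min_{0\le t<T}\ \mathbb{E}\bigl\|\nabla F(w_t)\bigr\|^2
  \quad\text{in terms of } T,\ \text{the step size, and } \zeta .
\]
```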
Previous students
- Hoang Trung Hieu: Convergence guarantees for gradient descent methods on optimization for non-convex function approximation (distributed neural network training). (2020/21 semester 1, Project Work 1)
- Hoang Trung Hieu: Convergence guarantees for gradient descent methods on optimization for non-convex function approximation (distributed neural network training). (2020/21 semester 2, Project Work 2)