The problem of minimizing the sum of a large number of convex functions arises in a variety of applications. Incremental gradient, stochastic gradient, and aggregated gradient methods are popular choices for such problems because they do not require a full gradient evaluation at every iteration. In this paper, we analyze a generalization of the stochastic aggregated gradient method via an alternative technique based on the convergence of iterative methods for linear systems. The technique yields a short proof of the $O(\kappa^{-1})$ linear convergence rate in the quadratic case. We observe that the technique is rather restrictive in the general case, where it can yield only weaker results.
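The abstract refers to the stochastic aggregated gradient method for minimizing a finite sum of convex functions. The following is a minimal sketch of the standard SAG-style update on a synthetic quadratic sum (least-squares) problem; the data, step size, and iteration count are illustrative assumptions, not the paper's exact setting or its generalization of the method.

```python
import numpy as np

# Sketch of a stochastic aggregated gradient (SAG-style) iteration for
# f(x) = (1/n) * sum_i f_i(x) with quadratic components
# f_i(x) = 0.5 * (a_i^T x - b_i)^2. Synthetic data; illustrative parameters.

rng = np.random.default_rng(0)
n, d = 50, 5                          # number of components, dimension
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad_i(x, i):
    """Gradient of the i-th component f_i."""
    return (A[i] @ x - b[i]) * A[i]

x = np.zeros(d)
memory = np.array([grad_i(x, i) for i in range(n)])  # stored gradients
avg = memory.mean(axis=0)             # running average of stored gradients
L = np.max(np.sum(A**2, axis=1))      # bound on component Lipschitz constants
step = 1.0 / (16 * L)                 # conservative SAG-style step size

for _ in range(10000):
    i = rng.integers(n)
    g_new = grad_i(x, i)
    avg += (g_new - memory[i]) / n    # refresh the average in O(d) work
    memory[i] = g_new
    x -= step * avg                   # step along the aggregated gradient

x_star = np.linalg.lstsq(A, b, rcond=None)[0]  # least-squares minimizer
```

Only one component gradient is evaluated per iteration, while the step uses the average of all stored gradients; this is the feature that avoids a full gradient evaluation at every iteration.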
Unconstrained optimization, stochastic gradient, incremental methods
| Primary Language | English |
|---|---|
| Subjects | Mathematics |
| Section | Articles |
| Authors | |
| Publication Date | June 30, 2023 |
| Published in Issue | Year 2023, Volume 15, Issue 1 |