The problem of minimizing the sum of a large number of convex functions arises in many applications. Incremental gradient, stochastic gradient, and aggregated gradient methods are popular choices for such problems because they do not require a full gradient evaluation at every iteration. In this paper, we analyze a generalization of the stochastic aggregated gradient method via an alternative technique based on the convergence of iterative linear systems. The technique yields a short proof of the $O(\kappa^{-1})$ linear convergence rate in the quadratic case. We observe that the technique is rather restrictive in the general case, where it provides only weaker results.
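To make the setting concrete, the following is a minimal sketch of the stochastic aggregated gradient (SAG) iteration on a least-squares instance, the quadratic case discussed in the abstract. The problem data, step size, and variable names are illustrative assumptions, not taken from the paper; the update keeps a table of the most recent component gradients and steps along their running average.

```python
import numpy as np

# Illustrative sketch (not the paper's experiments): SAG on the quadratic
# problem min_x (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2.
rng = np.random.default_rng(0)
n, d = 50, 5
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad_i(x, i):
    # Gradient of the i-th component f_i(x) = 0.5 * (a_i^T x - b_i)^2.
    return A[i] * (A[i] @ x - b[i])

x = np.zeros(d)
table = np.zeros((n, d))   # stored component gradients y_i
avg = np.zeros(d)          # running average (1/n) * sum_i y_i
L = np.max(np.sum(A**2, axis=1))  # per-component Lipschitz constant
alpha = 1.0 / (16 * L)            # conservative SAG-style step size

for _ in range(20000):
    i = rng.integers(n)                # sample one component index
    g = grad_i(x, i)
    avg += (g - table[i]) / n          # refresh the aggregate in O(d)
    table[i] = g
    x -= alpha * avg                   # step along the aggregated gradient

x_star = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x - x_star))
```

Only one component gradient is evaluated per iteration, yet the iterate converges to the least-squares solution, which is the behavior whose rate the linear-systems technique analyzes.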
| Primary Language | English |
| --- | --- |
| Subjects | Mathematical Sciences |
| Journal Section | Articles |
| Publication Date | June 30, 2023 |
| Published in Issue | Year 2023 Volume: 15 Issue: 1 |