A comparison between upper bounds on performance of two consensus-based distributed optimization algorithms
Ion Matei and John S. Baras
submitted to Selected Topics in Signal Processing
In this paper we address the problem of multi-agent optimization for convex functions expressible as sums of convex functions. Each agent has access to only one function in the sum and can use only local information to update its current estimate of the optimal solution. We consider two consensus-based iterative algorithms, each combining a consensus step with a subgradient descent update. The main difference between the two algorithms is the order in which the consensus step and the subgradient descent update are performed. We show that first updating the current estimate in the direction of a subgradient and then executing the consensus step ensures better performance than executing the steps in the reverse order. In support of our analytical results, we also provide numerical simulations of the algorithms.
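As a rough illustration of the two orderings compared in the abstract, the following sketch runs both variants on a toy problem. It is an assumption-laden example, not the paper's setup: each of n agents holds a private quadratic f_i(x) = (1/2)(x - c_i)^2, so the sum is minimized at the mean of the c_i, and the consensus matrix W is the uniform averaging matrix of a complete graph. Variant A mixes estimates and then takes a subgradient step evaluated at the pre-mix estimate; variant B takes the subgradient step first and then mixes the results.

```python
import numpy as np

# Toy distributed setup (illustrative only): n agents, agent i holds
# f_i(x) = 0.5 * (x - c_i)^2, so sum_i f_i is minimized at mean(c).
np.random.seed(0)
n = 5
c = np.random.randn(n)          # private targets, one per agent
W = np.full((n, n), 1.0 / n)    # doubly stochastic consensus matrix (complete graph)
alpha = 0.05                    # constant step size
iters = 500

def grad(x):
    # Stacked per-agent gradients: agent i uses only its own f_i at x_i.
    return x - c

x_a = np.zeros(n)  # Variant A: consensus step first, then subgradient update
x_b = np.zeros(n)  # Variant B: subgradient update first, then consensus step
for _ in range(iters):
    # A: x_i <- sum_j w_ij x_j - alpha * g_i(x_i)
    x_a = W @ x_a - alpha * grad(x_a)
    # B: x_i <- sum_j w_ij (x_j - alpha * g_j(x_j))
    x_b = W @ (x_b - alpha * grad(x_b))

x_star = c.mean()
err_a = np.max(np.abs(x_a - x_star))
err_b = np.max(np.abs(x_b - x_star))
```

On this particular instance, variant B drives every agent to the optimum, while variant A settles at a fixed point offset from it by a term of order alpha, which is consistent with the ordering result the paper argues for; the toy example is of course no substitute for the paper's bounds.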