IEEE Trans Neural Netw Learn Syst. 2021 Mar;32(3):962-974. doi: 10.1109/TNNLS.2020.2979762. Epub 2021 Mar 1.
Gradient-based distributed learning in parameter server (PS) computing architectures is subject to random delays due to straggling worker nodes, as well as to possible communication bottlenecks between the PS and the workers. Solutions have recently been proposed to separately address these impairments based on the ideas of gradient coding (GC), worker grouping, and adaptive worker selection. This article provides a unified analysis of these techniques in terms of wall-clock time, communication load, and computational complexity. Furthermore, in order to combine the robustness to stragglers offered by GC and grouping with the communication and computation savings of adaptive selection, novel strategies, named lazily aggregated GC (LAGC) and grouped lazily aggregated gradient (G-LAG), are introduced. Analysis and results show that, for two representative distributions of the workers' computing times, G-LAG provides the best wall-clock time and communication performance while maintaining a low computational cost.