
More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server.

Authors

Ho Qirong, Cipar James, Cui Henggang, Kim Jin Kyu, Lee Seunghak, Gibbons Phillip B, Gibson Garth A, Ganger Gregory R, Xing Eric P

Affiliations

School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213.

Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213.

Publication

Adv Neural Inf Process Syst. 2013;2013:1223-1231.

Abstract

We propose a parameter server system for distributed ML, which follows a Stale Synchronous Parallel (SSP) model of computation that maximizes the time computational workers spend doing useful work on ML algorithms, while still providing correctness guarantees. The parameter server provides an easy-to-use shared interface for read/write access to an ML model's values (parameters and variables), and the SSP model allows distributed workers to read older, stale versions of these values from a local cache, instead of waiting to get them from a central storage. This significantly increases the proportion of time workers spend computing, as opposed to waiting. Furthermore, the SSP model ensures ML algorithm correctness by limiting the maximum age of the stale values. We provide a proof of correctness under SSP, as well as empirical results demonstrating that the SSP model achieves faster algorithm convergence on several different ML problems, compared to fully-synchronous and asynchronous schemes.
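The bounded-staleness rule the abstract describes is concrete enough to sketch. Below is a minimal, hypothetical Python illustration (the names SSPTable, read, update, and clock are our own, not the paper's actual API): a worker may keep computing with cached values as long as it is no more than a fixed number of clocks ahead of the slowest worker, which is exactly the condition that both keeps workers busy and bounds the age of the values they read.

```python
# A toy, single-process sketch of the Stale Synchronous Parallel (SSP)
# model described in the abstract. Illustrative only; not the authors' code.

import threading
from collections import defaultdict

class SSPTable:
    """Toy parameter server: workers may read values up to
    `staleness` clocks old before being forced to wait."""

    def __init__(self, num_workers, staleness):
        self.staleness = staleness
        self.clocks = [0] * num_workers   # per-worker iteration counters
        self.params = defaultdict(float)  # the shared model values
        self.cond = threading.Condition()

    def read(self, worker_id, key):
        # SSP guarantee: a worker at clock c is only blocked if it has
        # run more than `staleness` clocks ahead of the slowest worker;
        # otherwise it reads a possibly stale, but boundedly stale, value.
        with self.cond:
            while self.clocks[worker_id] - min(self.clocks) > self.staleness:
                self.cond.wait()
            return self.params[key]

    def update(self, key, delta):
        # Additive updates commute, so the order in which workers
        # apply them does not affect the final value.
        with self.cond:
            self.params[key] += delta

    def clock(self, worker_id):
        # Called once per worker iteration; advancing the slowest
        # worker's clock may unblock faster workers waiting in read().
        with self.cond:
            self.clocks[worker_id] += 1
            self.cond.notify_all()
```

With staleness 0 this degenerates to fully synchronous (bulk synchronous parallel) execution, and with unbounded staleness to a fully asynchronous scheme, which is why SSP interpolates between the two baselines the paper compares against.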


Similar Articles

1. A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.
   Sensors (Basel). 2017 Sep 21;17(10):2172. doi: 10.3390/s17102172.
2. Faster algorithms for RNA-folding using the Four-Russians method.
   Algorithms Mol Biol. 2014 Mar 6;9(1):5. doi: 10.1186/1748-7188-9-5.
3. Straggler-Aware Distributed Learning: Communication-Computation Latency Trade-Off.
   Entropy (Basel). 2020 May 13;22(5):544. doi: 10.3390/e22050544.
4. DisSAGD: A Distributed Parameter Update Scheme Based on Variance Reduction.
   Sensors (Basel). 2021 Jul 28;21(15):5124. doi: 10.3390/s21155124.
5. A distributed computing tool for generating neural simulation databases.
   Neural Comput. 2006 Dec;18(12):2923-7. doi: 10.1162/neco.2006.18.12.2923.
6. Fast 3D iterative image reconstruction for SPECT with rotating slat collimators.
   Phys Med Biol. 2009 Feb 7;54(3):715-29. doi: 10.1088/0031-9155/54/3/016. Epub 2009 Jan 9.
7. DiCoDiLe: Distributed Convolutional Dictionary Learning.
   IEEE Trans Pattern Anal Mach Intell. 2022 May;44(5):2426-2437. doi: 10.1109/TPAMI.2020.3039215. Epub 2022 Apr 1.
8. Distributed Nesterov Gradient and Heavy-Ball Double Accelerated Asynchronous Optimization.
   IEEE Trans Neural Netw Learn Syst. 2021 Dec;32(12):5723-5737. doi: 10.1109/TNNLS.2020.3027381. Epub 2021 Nov 30.

Cited By

1. Federated learning: Overview, strategies, applications, tools and future directions.
   Heliyon. 2024 Sep 20;10(19):e38137. doi: 10.1016/j.heliyon.2024.e38137. eCollection 2024 Oct 15.
2. ProteInfer, deep neural networks for protein functional inference.
   Elife. 2023 Feb 27;12:e80942. doi: 10.7554/eLife.80942.
3. Dynamic Allocation Method of Economic Information Integrated Data Based on Deep Learning Algorithm.
   Comput Intell Neurosci. 2022 May 16;2022:5494123. doi: 10.1155/2022/5494123. eCollection 2022.
4. Deploying and scaling distributed parallel deep neural networks on the Tianhe-3 prototype system.
   Sci Rep. 2021 Oct 12;11(1):20244. doi: 10.1038/s41598-021-98794-z.
5. Efficient Privacy-preserving Machine Learning in Hierarchical Distributed System.
   IEEE Trans Netw Sci Eng. 2019 Oct-Dec;6(4):599-612. doi: 10.1109/tnse.2018.2859420. Epub 2018 Jul 24.
6. A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.
   Sensors (Basel). 2017 Sep 21;17(10):2172. doi: 10.3390/s17102172.
