Parameter Server for Distributed Machine Learning
Authors
Abstract
We propose a parameter server framework for solving distributed machine learning problems. Both data and workload are distributed over client nodes, while server nodes maintain globally shared parameters, represented as sparse vectors and matrices. The framework manages asynchronous data communication between clients and servers, and supports flexible consistency models, elastic scalability, and fault tolerance. We present algorithms and theoretical analysis for challenging nonconvex and nonsmooth problems. To demonstrate the scalability of the proposed framework, we show experimental results on real data with billions of parameters.
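To make the architecture concrete, here is a minimal, illustrative sketch of the push/pull pattern the abstract describes: server nodes hold sparse shared parameters, and workers push gradient updates asynchronously and pull only the slices they need. All names (`ParameterServer`, `push`, `pull`, the learning rate) are hypothetical, not the paper's actual API.

```python
import threading
from collections import defaultdict

class ParameterServer:
    """Toy server node: holds globally shared parameters as a sparse
    key -> value map and applies asynchronously pushed gradient updates."""

    def __init__(self, lr=0.1):
        self.lr = lr
        self.params = defaultdict(float)  # sparse representation
        self.lock = threading.Lock()      # serialize concurrent pushes

    def push(self, grads):
        """A worker sends a sparse gradient; the server updates in place."""
        with self.lock:
            for key, g in grads.items():
                self.params[key] -= self.lr * g

    def pull(self, keys):
        """A worker fetches only the parameter slice it needs."""
        with self.lock:
            return {k: self.params[k] for k in keys}

def worker(ps, shard):
    """Toy worker: derives a fake sparse 'gradient' from its data shard
    and pushes it without waiting for other workers (asynchronous)."""
    grads = {key: float(value) for key, value in shard}
    ps.push(grads)

# Data is partitioned across workers; each worker touches only its keys.
ps = ParameterServer(lr=0.1)
shards = [[("w1", 1.0), ("w2", 2.0)], [("w1", 3.0)]]
threads = [threading.Thread(target=worker, args=(ps, s)) for s in shards]
for t in threads:
    t.start()
for t in threads:
    t.join()
result = ps.pull(["w1", "w2"])
```

The sparse map plus key-scoped `pull` is what lets the shared state reach billions of parameters: no worker ever materializes the full model, and the lock stands in for the server-side conflict handling a real system would do per key range.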
Similar Resources
Communication Efficient Distributed Machine Learning with the Parameter Server
This paper describes a third-generation parameter server framework for distributed machine learning. This framework offers two relaxations to balance system performance and algorithm efficiency. We propose a new algorithm that takes advantage of this framework to solve non-convex non-smooth problems with convergence guarantees. We present an in-depth analysis of two large scale machine learning...
A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors
In order to utilize the distributed characteristic of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. Our paper describes a reasonable parameter communication optimization strategy to balance the training overhead a...
FlexPS: Flexible Parallelism Control in Parameter Server Architecture
As a general abstraction for coordinating the distributed storage and access of model parameters, the parameter server (PS) architecture enables distributed machine learning to handle large datasets and high dimensional models. Many systems, such as Parameter Server and Petuum, have been developed based on the PS architecture and widely used in practice. However, none of these systems supports ...
Exploiting characteristics of machine learning applications for efficient parameter servers
Large scale machine learning has emerged as a primary computing activity in business, science, and services, attempting to extract insight from quantities of observation data. A machine learning task assumes a particular mathematical model will describe the observed training data and uses an algorithm to identify model parameter values that make it fit the input data most closely. That is, the c...
Distributed Optimization for Client-Server Architecture with Negative Gradient Weights
The availability of both massive datasets and computing resources has made machine learning and predictive analytics extremely pervasive. In this work we present a synchronous algorithm and architecture for distributed optimization motivated by privacy requirements posed by applications in machine learning. We present an algorithm for the recently proposed multiparameter-server architecture. We co...
Publication year: 2013