Cyclades: Conflict-free Asynchronous Machine Learning
Abstract
We present CYCLADES, a general framework for parallelizing stochastic optimization algorithms in a shared memory setting. CYCLADES is asynchronous during model updates and requires no memory locking mechanisms, similar to HOGWILD!-type algorithms. Unlike HOGWILD!, CYCLADES introduces no conflicts during parallel execution, and offers a black-box analysis for provable speedups across a large family of algorithms. Due to its inherent cache locality and conflict-free nature, our multi-core implementation of CYCLADES consistently outperforms HOGWILD!-type algorithms on sufficiently sparse datasets, leading to up to 40% speedup gains compared to HOGWILD!, and up to 5× gains over asynchronous implementations of variance reduction algorithms.
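To make the conflict-free grouping concrete, here is a minimal sketch (not the authors' code) of the core CYCLADES step: connect any two sampled updates that touch the same model coordinate and hand each resulting connected component to its own core, so that parallel execution never races on a coordinate. The batch representation (each update given as the set of coordinate indices it reads or writes) and the union-find helper are illustrative assumptions.

from collections import defaultdict

def find(parent, x):
    # Union-find lookup with path halving.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def conflict_free_components(batch):
    # batch: list of updates, each a set of model-coordinate indices it touches.
    parent = list(range(len(batch)))
    owner = {}  # coordinate -> first update seen touching it
    for u, coords in enumerate(batch):
        for c in coords:
            if c in owner:
                ru, rv = find(parent, u), find(parent, owner[c])
                parent[ru] = rv  # merge: u conflicts with an earlier update on coordinate c
            else:
                owner[c] = u
    groups = defaultdict(list)
    for u in range(len(batch)):
        groups[find(parent, u)].append(u)
    return list(groups.values())

# Updates 0 and 2 share coordinate 7, so they form one group that a single core
# processes serially; update 1 is conflict-free and can run on another core.
print(conflict_free_components([{3, 7}, {1, 5}, {7, 9}]))  # e.g. [[0, 2], [1]]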
Related resources
Removing redundant conflict value assignments in resolvent based nogood learning
Taking advantage of the popular Resolvent-based (Rslv) and Minimum conflict set (MCS) nogood learning, we propose two new techniques: Unique nogood First Resolvent-based (UFRslv) and Redundant conflict value assignment Free Resolvent-based (RFRslv) nogood learning. By removing conflict value assignments that are redundant, these two new nogood learning techniques can obtain shorter and more effici...
Fast Asynchronous Parallel Stochastic Gradient Descent: A Lock-Free Approach with Convergence Guarantee
Stochastic gradient descent (SGD) and its variants have become more and more popular in machine learning due to their efficiency and effectiveness. To handle large-scale problems, researchers have recently proposed several parallel SGD methods for multicore systems. However, existing parallel SGD methods cannot achieve satisfactory performance in real applications. In this paper, we propose a f...
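As a rough illustration of the lock-free pattern this line of work builds on (a sketch under an assumed least-squares setup, not this paper's actual method), several threads can update a shared weight vector in place with no synchronization:

import threading
import numpy as np

def worker(w, X, y, indices, lr):
    # Each thread writes the shared vector w in place with no locks; with sparse,
    # mostly non-overlapping updates the occasional race is tolerated, not prevented.
    for i in indices:
        grad = (X[i] @ w - y[i]) * X[i]   # gradient of 0.5 * (x_i . w - y_i)^2
        w -= lr * grad                    # unsynchronized in-place update

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 20))
w_true = rng.standard_normal(20)
y = X @ w_true
w = np.zeros(20)

# Four threads sweep disjoint sample index sets over the same shared parameters.
# (Python's GIL serializes much of the work; the point is the unsynchronized pattern.)
threads = [threading.Thread(target=worker, args=(w, X, y, range(t, 1000, 4), 1e-3))
           for t in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("parameter error:", np.linalg.norm(w - w_true))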
A lacuna in the theory of asynchronous Boltzmann machine learning
This note rectifies a logical gap in the derivation of the asynchronous Boltzmann machine learning algorithm.
Restricted cascade and wreath products of fuzzy finite switchboard state machines
A finite switchboard state machine is a specialized finite state machine, built by combining the concepts of switching state machines and commutative state machines. The main purpose of this paper is to give a specific algorithm for fuzzy finite switchboard state machines and also to investigate the concepts of switching relation, covering, restricted cascade products and wreath products of f...
The Convergence of Stochastic Gradient Descent in Asynchronous Shared Memory
Stochastic Gradient Descent (SGD) is a fundamental algorithm in machine learning, representing the optimization backbone for training several classic models, from regression to neural networks. Given the recent practical focus on distributed machine learning, significant work has been dedicated to the convergence properties of this algorithm under the inconsistent and noisy updates arising from...
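For reference (standard notation, not drawn from this particular paper), the sequential SGD step on f(w) = (1/n) \sum_i f_i(w) and its asynchronous shared-memory counterpart, where gradients are computed at a stale view of the parameters, can be written as:

w_{t+1} = w_t - \eta \, \nabla f_{i_t}(w_t), \qquad i_t \sim \mathrm{Uniform}\{1,\dots,n\}  (sequential)
w_{t+1} = w_t - \eta \, \nabla f_{i_t}(w_{t-\tau_t})  (asynchronous, with delay \tau_t \ge 0 from unsynchronized reads)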
Journal: CoRR
Volume: abs/1605.09721
Pages: -
Publication date: 2016