Efficient Distributed Learning with Sparsity
Authors
Abstract
We propose a novel, efficient approach for distributed sparse learning with observations randomly partitioned across machines. In each round of the proposed method, worker machines compute the gradient of the loss on local data and the master machine solves a shifted ℓ1-regularized loss minimization problem. After a number of communication rounds that scales only logarithmically with the number of machines, and is independent of the other problem parameters, the proposed approach provably matches the estimation error bound of centralized methods.
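To make the round structure concrete, here is a minimal NumPy sketch for the special case of sparse linear regression. The least-squares loss, the proximal-gradient inner solver, and all names and settings (soft_threshold, master_step, step, inner_iters, rounds) are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def local_grad(X, y, w):
    # Gradient of the average least-squares loss (1/2n)||Xw - y||^2 on one shard.
    return X.T @ (X @ w - y) / X.shape[0]

def master_step(X0, y0, w_prev, global_grad, lam, step=0.01, inner_iters=200):
    # Solve the shifted l1-regularized subproblem on the master's shard:
    # the local loss is shifted so its gradient at w_prev matches the global
    # gradient, then minimized with an l1 penalty via proximal gradient.
    shift = local_grad(X0, y0, w_prev) - global_grad
    w = w_prev.copy()
    for _ in range(inner_iters):
        g = local_grad(X0, y0, w) - shift
        w = soft_threshold(w - step * g, step * lam)
    return w

def distributed_sparse_fit(shards, lam, rounds=10):
    # shards: list of (X, y) pairs; shards[0] lives on the master machine.
    # Each outer round costs exactly one gradient aggregation, i.e. one
    # communication round.
    X0, y0 = shards[0]
    w = np.zeros(X0.shape[1])
    n = sum(X.shape[0] for X, _ in shards)
    for _ in range(rounds):
        global_grad = sum(X.shape[0] * local_grad(X, y, w) for X, y in shards) / n
        w = master_step(X0, y0, w, global_grad, lam)
    return w
```

Note that each outer round exchanges only gradient vectors; the expensive regularized solve happens locally on the master, which is why the communication rounds, not the local computation, are the quantity being bounded.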
Similar resources
Communication-efficient Algorithm for Distributed Sparse Learning via Two-way Truncation
We propose a communication- and computation-efficient algorithm for high-dimensional distributed sparse learning. At each iteration, local machines compute the gradient on local data and the master machine solves one shifted ℓ1-regularized minimization problem. The communication cost is reduced from a constant times the dimension for the state-of-the-art algorithm to a constant tim...
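The truncation idea can be illustrated independently of the solver: keeping only the k largest-magnitude coordinates bounds every message by O(k) index-value pairs. A minimal sketch, assuming top-k hard thresholding in both directions of communication; the helper names and message format are invented for illustration and are not the paper's exact scheme.

```python
import numpy as np

def truncate_top_k(v, k):
    # Keep the k largest-magnitude entries of v, zero out the rest.
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def communicate(local_grads, w_new, k):
    # Two-way truncation: workers send truncated gradients up to the master,
    # and the master broadcasts a truncated iterate back down, so each message
    # carries O(k) (index, value) pairs instead of a dense d-dimensional vector.
    up = [truncate_top_k(g, k) for g in local_grads]   # worker -> master
    down = truncate_top_k(w_new, k)                    # master -> workers
    return up, down
```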
Speech Enhancement using Adaptive Data-Based Dictionary Learning
In this paper, a speech enhancement method based on sparse representation of data frames is presented. Speech enhancement is one of the most widely applied areas of signal processing. The objective of a speech enhancement system is to improve either the intelligibility or the quality of speech signals. This process is carried out using speech signal processing techniques ...
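As one possible illustration of sparse-representation denoising (not this paper's specific method), scikit-learn's DictionaryLearning can learn atoms from the noisy frames themselves and reconstruct each frame from a sparse code. The frame length, hop, penalty values, and the omitted overlap-add resynthesis are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

def frame_signal(x, frame_len=256, hop=128):
    # Slice the signal into overlapping frames (one frame per row).
    n = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def denoise_frames(noisy_frames, n_atoms=64):
    # Learn a dictionary from the noisy frames themselves ("data-based"),
    # then reconstruct each frame from its sparse code; the l1 penalty in the
    # coding step drops small coefficients that mostly encode noise.
    dl = DictionaryLearning(n_components=n_atoms, alpha=1.0,
                            transform_algorithm='lasso_lars',
                            transform_alpha=1.0, max_iter=20)
    codes = dl.fit(noisy_frames).transform(noisy_frames)
    return codes @ dl.components_  # reconstructed (denoised) frames
```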
Exclusive Sparsity Norm Minimization with Random Groups via Cone Projection
Many practical applications such as gene expression analysis, multi-task learning, image recognition, signal processing, and medical data analysis pursue a sparse solution for feature selection and particularly favor nonzeros evenly distributed across different groups. The exclusive sparsity norm has been widely used to serve this purpose. However, it still lacks systematical stu...
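The regularizer itself is easy to state: the exclusive (ℓ1,2) sparsity norm sums, over groups, the squared ℓ1 norm of each group, so spreading nonzeros across groups is cheaper than concentrating them in one. A small sketch below; the grouping and test vector are illustrative, and the paper's cone-projection solver is not reproduced here.

```python
import numpy as np

def exclusive_sparsity_norm(w, groups):
    # Exclusive l_{1,2} regularizer: sum over groups of the squared l1 norm.
    # Squaring inside each group penalizes several large entries sharing a
    # group, so the minimizer spreads nonzeros evenly across groups.
    return sum(np.sum(np.abs(w[g])) ** 2 for g in groups)

w = np.array([1.0, 1.0, 0.0, 0.0])
same_group  = [np.array([0, 1]), np.array([2, 3])]  # both nonzeros in one group
split_group = [np.array([0, 2]), np.array([1, 3])]  # one nonzero per group

print(exclusive_sparsity_norm(w, same_group))   # (1 + 1)^2 = 4.0
print(exclusive_sparsity_norm(w, split_group))  # 1^2 + 1^2 = 2.0 (preferred)
```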
SparseNN: An Energy-Efficient Neural Network Accelerator Exploiting Input and Output Sparsity
Contemporary Deep Neural Networks (DNNs) contain millions of synaptic connections across tens to hundreds of layers. The large computation and memory requirements pose a challenge to hardware design. In this work, we leverage the intrinsic activation sparsity of DNNs to substantially reduce execution cycles and energy consumption. An end-to-end training algorithm is proposed to develop ...
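In software terms, exploiting activation sparsity amounts to skipping the columns of a weight matrix that multiply zero inputs. The sketch below is only an arithmetic analogue of what an accelerator like SparseNN gates in hardware; the function name is invented for illustration.

```python
import numpy as np

def sparse_layer_forward(W, a, b):
    # After ReLU, many activations in a are exactly zero. Accumulating only
    # the columns of W that multiply nonzero inputs skips the dead work a
    # dense matrix-vector product would spend on zeros.
    nz = np.flatnonzero(a)
    return W[:, nz] @ a[nz] + b
```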
On Optimizing Operator Fusion Plans for Large-Scale Machine Learning in SystemML
Many large-scale machine learning (ML) systems allow specifying custom ML algorithms by means of linear algebra programs and then automatically generate efficient execution plans. In this context, optimization opportunities for fused operators, i.e., fused chains of basic operators, are ubiquitous. These opportunities include (1) fewer materialized intermediates, (2) fewer scans of input d...
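A NumPy stand-in (not SystemML code) for one such opportunity: fusing an elementwise product with its aggregation avoids materializing the intermediate matrix and scans each input only once.

```python
import numpy as np

X = np.random.rand(1000, 1000)
Y = np.random.rand(1000, 1000)

# Unfused plan: the elementwise product is materialized as a full
# intermediate matrix, which the aggregation then scans again.
tmp = X * Y
s_unfused = tmp.sum()

# Fused plan: one pass over the inputs, no materialized intermediate.
s_fused = np.einsum('ij,ij->', X, Y)

assert np.isclose(s_unfused, s_fused)
```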
Publication date: 2017