Search results for: stochastic gradient descent learning

Number of results: 840759

Journal: IEEE Transactions on Visualization and Computer Graphics 2019

Journal: IEEE Transactions on Circuits and Systems for Video Technology 2022

Decentralized learning has gained great popularity as a way to improve efficiency and preserve data privacy. Each computing node makes an equal contribution to collaboratively learn a Deep Learning model. The elimination of centralized Parameter Servers (PS) can effectively address many issues such as privacy, performance bottlenecks, and single points of failure. However, how to achieve Byzantine Fault Tolerance in decen...
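As a rough illustration of the setting this snippet describes (not the paper's own method), the sketch below shows one decentralized round in which a node aggregates neighbor models with a coordinate-wise median, a common Byzantine-robust alternative to plain averaging, before taking a local SGD step; all names, shapes, and the least-squares loss are illustrative assumptions.

```python
# Hedged sketch (not the paper's method): one decentralized round in which a node
# replaces plain averaging of neighbor models with a coordinate-wise median, a
# common Byzantine-robust aggregation rule, then takes a local SGD step.
import numpy as np

def robust_aggregate(neighbor_models):
    """Coordinate-wise median over received models; tolerates a minority of outliers."""
    return np.median(np.stack(neighbor_models), axis=0)

def local_sgd_step(w, X, y, lr=0.1):
    """One SGD step on an illustrative least-squares loss for a linear model."""
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

rng = np.random.default_rng(0)
d = 3
honest = [rng.normal(scale=0.1, size=d) for _ in range(4)]
byzantine = [np.full(d, 1e6)]                       # one corrupted model
w = robust_aggregate(honest + byzantine)            # median ignores the outlier
X, y = rng.normal(size=(32, d)), rng.normal(size=32)
w = local_sgd_step(w, X, y)
print(w)
```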

Journal: CoRR 2015
Janis Keuper, Franz-Josef Pfreundt

Stochastic Gradient Descent (SGD) is the standard numerical method used to solve the core optimization problem for the vast majority of machine learning (ML) algorithms. In the context of large-scale learning, as utilized by many Big Data applications, efficient parallelization of SGD is a focus of active research. Recently, we were able to show that the asynchronous communication paradigm...
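To make the core update concrete, here is a minimal, self-contained mini-batch SGD loop on an illustrative least-squares objective; asynchronous parallel variants of the kind this snippet studies distribute exactly this update across workers. The data, dimensions, and hyperparameters are assumptions, not taken from the paper.

```python
# Minimal mini-batch SGD on an illustrative least-squares objective; parallel and
# asynchronous SGD variants distribute this same update across workers.
import numpy as np

def sgd(X, y, lr=0.05, epochs=20, batch=16, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch):
            b = order[start:start + batch]
            grad = 2.0 * X[b].T @ (X[b] @ w - y[b]) / len(b)  # stochastic gradient
            w -= lr * grad                                     # SGD update
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
w_true = np.arange(1.0, 6.0)
y = X @ w_true + 0.01 * rng.normal(size=200)
print(np.round(sgd(X, y), 2))   # close to w_true = [1, 2, 3, 4, 5]
```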

Journal: CoRR 2018
Yifan Wu, Barnabás Póczos, Aarti Singh

A major challenge in understanding the generalization of deep learning is to explain why (stochastic) gradient descent can exploit the network architecture to find solutions that have good generalization performance when using high capacity models. We find simple but realistic examples showing that this phenomenon exists even when learning linear classifiers — between two linear networks with t...
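A hedged toy illustration of the phenomenon described above (not the paper's exact setting): gradient descent on two parameterizations of the same linear predictor, a direct weight vector w versus an elementwise product w = u * v, typically converges to different interpolating solutions on underdetermined data, so the parameterization alone shapes which solution is found. The data and step sizes are illustrative assumptions.

```python
# Hedged toy illustration: gradient descent on two parameterizations of the same
# linear predictor can reach different zero-training-error solutions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 20))             # 5 examples, 20 features: many exact fits exist
y = rng.normal(size=5)

def grad_w(w):                            # gradient of (1/n) * ||Xw - y||^2
    return 2.0 * X.T @ (X @ w - y) / len(y)

# (1) direct parameterization
w = np.zeros(20)
for _ in range(5000):
    w -= 0.01 * grad_w(w)

# (2) product parameterization w = u * v (same function class, different dynamics)
u, v = np.full(20, 0.1), np.zeros(20)
for _ in range(20000):
    g = grad_w(u * v)
    u, v = u - 0.01 * g * v, v - 0.01 * g * u

print(np.linalg.norm(X @ w - y), np.linalg.norm(X @ (u * v) - y))  # both fit the data well
print(np.linalg.norm(w - u * v))                                   # but the solutions differ
```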

2015
Andrés Esteban Páez-Torres, Fabio A. González

The problem of efficiently applying a kernel-induced feature space factorization to large-scale data sets is addressed in this thesis. Kernel matrix factorization methods have shown good performance in solving machine learning and data analysis problems. However, the current growth in the amount of available information implies that these problems cannot be solved with conventional methods, due to their...
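One standard large-scale kernel factorization, shown here only to make the idea concrete and not necessarily the method developed in this thesis, is the Nyström approximation: approximate the kernel matrix as K ≈ C W^+ C^T using a small set of landmark points, which yields an explicit low-dimensional feature map that linear or SGD-based learners can consume. All data and parameters below are illustrative.

```python
# Hedged sketch of the Nyström approximation, a common large-scale kernel
# factorization (not necessarily the thesis's method): K ≈ C W^+ C^T with m landmarks.
import numpy as np

def rbf(A, B, gamma=0.5):
    """RBF kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystroem_features(X, landmarks, gamma=0.5):
    """Map X to explicit features Z such that Z @ Z.T approximates K(X, X)."""
    C = rbf(X, landmarks, gamma)               # n x m cross-kernel
    W = rbf(landmarks, landmarks, gamma)        # m x m landmark kernel
    vals, vecs = np.linalg.eigh(W)
    vals = np.clip(vals, 1e-12, None)
    W_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return C @ W_inv_sqrt                       # n x m feature map

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
landmarks = X[rng.choice(500, size=50, replace=False)]
Z = nystroem_features(X, landmarks)
print(np.abs(Z @ Z.T - rbf(X, X)).mean())       # mean absolute approximation error
```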

Journal: IEEE Transactions on Automatic Control 2013

Journal: CoRR 2017
Hiroyuki Kasai

We consider the problem of finding the minimizer of a function f : R^d → R of the form min_w f(w) = (1/n) ∑_{i=1}^{n} f_i(w). This problem has been studied intensively in recent years in the machine learning research field. One typical but promising approach for large-scale data is the stochastic optimization algorithm. SGDLibrary is a flexible, extensible and efficient pure-Matlab library of a collection of stochast...
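The finite-sum objective above is what stochastic optimization solvers of this kind target. Below is a hedged Python sketch of one representative solver family, SVRG-style variance reduction; note that SGDLibrary itself is a MATLAB library, so nothing here reproduces its API, and the least-squares f_i is purely illustrative.

```python
# Hedged sketch of SVRG-style variance-reduced SGD for min_w (1/n) * sum_i f_i(w);
# the least-squares f_i and all hyperparameters are illustrative assumptions.
import numpy as np

def svrg(X, y, lr=0.01, outer=15, inner=600, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    grad_i = lambda w, i: 2.0 * X[i] * (X[i] @ w - y[i])    # gradient of f_i
    w = np.zeros(d)
    for _ in range(outer):
        w_snap = w.copy()
        mu = 2.0 * X.T @ (X @ w_snap - y) / n                # full gradient at snapshot
        for _ in range(inner):
            i = rng.integers(n)
            g = grad_i(w, i) - grad_i(w_snap, i) + mu        # variance-reduced gradient
            w -= lr * g
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
w_true = np.ones(5)
y = X @ w_true + 0.01 * rng.normal(size=300)
print(np.round(svrg(X, y), 3))   # approximately recovers w_true = [1, 1, 1, 1, 1]
```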

Journal: CoRR 2015
Andrew J. R. Simpson

Effective regularisation during training can mean the difference between success and failure for deep neural networks. Recently, dither has been suggested as an alternative to dropout for regularisation during batch-averaged stochastic gradient descent (SGD). In this article, we show that these methods fail without batch averaging and we introduce a new, parallel regularisation method that may be ...
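As a generic illustration of dither as a regulariser (the exact amplitude, placement, and batching used in the article above are not specified in this snippet), the sketch below adds small uniform noise to each training example inside a batch-averaged SGD step.

```python
# Hedged sketch: "dither" applied as small uniform noise added to each training
# example inside a batch-averaged SGD step; amplitude and placement are assumptions.
import numpy as np

def sgd_step_with_dither(w, X_batch, y_batch, lr=0.05, dither_amp=0.1, rng=None):
    rng = rng or np.random.default_rng()
    noise = rng.uniform(-dither_amp, dither_amp, size=X_batch.shape)
    X_dithered = X_batch + noise                               # per-example dither
    grad = 2.0 * X_dithered.T @ (X_dithered @ w - y_batch) / len(y_batch)
    return w - lr * grad                                       # batch-averaged SGD update

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))
w_true = rng.normal(size=8)
y = X @ w_true
w = np.zeros(8)
for epoch in range(50):
    for start in range(0, 256, 32):
        w = sgd_step_with_dither(w, X[start:start + 32], y[start:start + 32], rng=rng)
print(np.round(w, 2))   # close to w_true; the dither acts as a mild regulariser
```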

2012
Arun Rajkumar, Shivani Agarwal

We consider the problem of developing privacy-preserving machine learning algorithms in a distributed multiparty setting. Here different parties own different parts of a data set, and the goal is to learn a classifier from the entire data set without any party revealing any information about the individual data points it owns. Pathak et al. [7] recently proposed a solution to this problem in whic...
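A common pattern for this multiparty setting, sketched below with fully illustrative assumptions and not reproduced from reference [7], is that each party trains a classifier only on its own share of the data and just a noise-perturbed average of the local models is ever released.

```python
# Hedged sketch of a common multiparty pattern: each party trains a linear
# classifier locally; only a Laplace-noise-perturbed average of the models is
# released. Model, training, and noise scale are illustrative, not the protocol of [7].
import numpy as np

def train_local_logreg(X, y, lr=0.1, steps=500):
    """Plain gradient descent on logistic loss; labels y are in {-1, +1}."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        margins = y * (X @ w)
        grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / len(y)
        w -= lr * grad
    return w

def private_aggregate(local_models, noise_scale=0.1, seed=1):
    """Average the local models and add Laplace noise before releasing."""
    rng = np.random.default_rng(seed)
    avg = np.mean(local_models, axis=0)
    return avg + rng.laplace(scale=noise_scale, size=avg.shape)

rng = np.random.default_rng(2)
w_true = np.array([1.0, -2.0, 0.5])
parties = []
for p in range(3):                                    # three parties, disjoint data
    Xp = rng.normal(size=(200, 3))
    yp = np.sign(Xp @ w_true + 0.1 * rng.normal(size=200))
    parties.append(train_local_logreg(Xp, yp))
w_released = private_aggregate(parties)
print(np.round(w_released, 2))   # a usable joint model; individual points never leave a party
```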

2016
Trung Le, Vu Nguyen, Tu Dinh Nguyen, Dinh Q. Phung

One of the most challenging problems in kernel online learning is to bound the model size. Budgeted kernel online learning addresses this issue by bounding the model size to a predefined budget. However, determining an appropriate value for such a predefined budget is arduous. In this paper, we propose the Nonparametric Budgeted Stochastic Gradient Descent that allows the model size to automatica...
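For context, the sketch below implements the fixed-budget baseline the abstract refers to: kernelised online SGD with a hinge loss, where the model size is capped by removing the support vector with the smallest coefficient once a predefined budget is exceeded. The paper's nonparametric budget mechanism is not reproduced here, and all hyperparameters are assumptions.

```python
# Hedged sketch of fixed-budget kernelised online SGD (hinge loss, RBF kernel,
# removal-based budget maintenance); the paper's own mechanism is not reproduced.
import numpy as np

def rbf(x, z, gamma=0.5):
    return np.exp(-gamma * np.sum((x - z) ** 2))

def budgeted_kernel_sgd(stream, budget=20, lr=0.1, lam=0.01, gamma=0.5):
    support, alpha = [], []                                 # support vectors and coefficients
    for x, y in stream:
        f = sum(a * rbf(x, s, gamma) for a, s in zip(alpha, support))
        alpha = [a * (1.0 - lr * lam) for a in alpha]       # shrinkage from the regulariser
        if y * f < 1.0:                                     # hinge-loss margin violation
            support.append(x)
            alpha.append(lr * y)
        if len(support) > budget:                           # budget maintenance by removal
            j = int(np.argmin(np.abs(alpha)))               # drop smallest-|alpha| support vector
            support.pop(j)
            alpha.pop(j)
    return support, alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = np.where(np.linalg.norm(X, axis=1) < 1.2, 1.0, -1.0)    # nonlinear concept
support, alpha = budgeted_kernel_sgd(zip(X, y))
print(len(support))                                          # model size stays bounded by the budget
```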

[Chart: number of search results per year]