On empirical stochastic regularization

Authors

Abstract

Similar resources

Stochastic Weighted Function Norm Regularization

Deep neural networks (DNNs) have become increasingly important due to their excellent empirical performance on a wide range of problems. However, regularization is generally achieved by indirect means, largely due to the complex set of functions defined by a network and the difficulty in measuring function complexity. There exists no method in the literature for additive regularization based on...
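The snippet above breaks off before the method is stated, so the following is only a rough sketch, inferred from the title, of what an additive, stochastically estimated function-norm penalty could look like: a Monte Carlo estimate of a weighted L2 function norm E_{x~mu}[||f(x)||^2] added to the task loss. The sampling distribution, the weight lam, and the helper function_norm_penalty are illustrative assumptions, not the authors' construction.

```python
import torch
import torch.nn as nn

def function_norm_penalty(model, x_samples):
    # Monte Carlo estimate of a weighted L2 function norm,
    # E_{x ~ mu}[ ||f(x)||^2 ], with x_samples drawn from the weighting
    # measure mu (e.g. unlabeled inputs or a reference distribution).
    out = model(x_samples)
    return out.pow(2).sum(dim=1).mean()

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
x_unlabeled = torch.randn(64, 10)   # stand-in for samples from mu
lam = 1e-3
penalty = lam * function_norm_penalty(model, x_unlabeled)
# Illustrative total objective: task_loss + penalty, minimized by SGD.
```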

Stochastic Bandit Based on Empirical Moments

In the multi-armed bandit problem, a gambler chooses which arm of a slot machine to pull, trading off exploration against exploitation. We study the stochastic bandit problem in which each arm has a reward distribution supported on [0, 1]. For this model there exists a policy that achieves the theoretical bound asymptotically. However, the optimal policy requires a computation of a conve...
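The paper's own moment-based policy is not visible in the truncated snippet, so the sketch below only illustrates the setting with a generic variance-aware index in the spirit of UCB-V, which likewise builds its index from empirical first and second moments. The function names, the constant zeta, and the Bernoulli arms are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def ucb_v_index(rewards, t, zeta=1.2):
    # Index for one arm built from its empirical moments (UCB-V style):
    # empirical mean plus a variance-based exploration bonus.
    n = len(rewards)
    mean, var = np.mean(rewards), np.var(rewards)
    return mean + np.sqrt(2.0 * var * zeta * np.log(t) / n) + 3.0 * zeta * np.log(t) / n

def play(arms, horizon=10_000):
    # arms: callables returning a reward in [0, 1]; pull each arm once,
    # then always pull the arm with the largest moment-based index.
    history = [[arm()] for arm in arms]
    for t in range(len(arms) + 1, horizon + 1):
        best = int(np.argmax([ucb_v_index(h, t) for h in history]))
        history[best].append(arms[best]())
    return [len(h) for h in history]

# Three Bernoulli arms; the pull counts should concentrate on p = 0.7.
arms = [lambda p=p: float(np.random.binomial(1, p)) for p in (0.3, 0.5, 0.7)]
print(play(arms))
```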

Stochastic Regularization and Boosting for Bioinformatics

Boosting techniques are extremely popular in many domains where one wishes to learn effective classification functions and identify the most relevant variables at the same time. However, in general, these techniques do not perform well on bioinformatics data sets. One known reason that makes bioinformatics data sets particularly problematic for boosting is that they typically contain very few t...

Stochastic Cubic Regularization for Fast Nonconvex Optimization

This paper proposes a stochastic variant of a classic algorithm, the cubic-regularized Newton method [Nesterov and Polyak, 2006]. The proposed algorithm efficiently escapes saddle points and finds approximate local minima for general smooth, nonconvex functions in only Õ(ε^{-3.5}) stochastic gradient and stochastic Hessian-vector product evaluations. The latter can be computed as efficiently as sto...
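For reference, the classic cubic-regularized Newton step of Nesterov and Polyak [2006], which the stochastic variant approximates with sampled gradients and Hessian-vector products, solves at each iterate x_k:

s_{k} \in \arg\min_{s \in \mathbb{R}^{d}} \ \langle \nabla f(x_{k}), s \rangle
      + \tfrac{1}{2} \langle s, \nabla^{2} f(x_{k})\, s \rangle
      + \tfrac{\rho}{6} \lVert s \rVert^{3},
\qquad x_{k+1} = x_{k} + s_{k},

where \rho upper-bounds the Lipschitz constant of the Hessian. In the stochastic variant, \nabla f(x_k) and the products \nabla^{2} f(x_k)\, s are replaced by minibatch estimates, which is what keeps the per-iteration cost at the level of stochastic gradients.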

Accelerated Stochastic Gradient Method for Composite Regularization

Regularized risk minimization often involves nonsmooth optimization. This can be particularly challenging when the regularizer is a sum of simpler regularizers, as in the overlapping group lasso. Very recently, this has been alleviated by using the proximal average, in which an implicitly nonsmooth function is employed to approximate the composite regularizer. In this paper, we propose a novel extens...
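The snippet is truncated before the paper's own extension is described, so the following is only a minimal sketch of the proximal-average idea it builds on: when the composite regularizer is an average of simple terms, the proximal step of the proximal average is just the average of the individual, cheap, closed-form proximal steps. The specific regularizers (L1 plus a non-overlapping group penalty, chosen for brevity even though overlapping groups are the motivating case), the equal weights, and the function names are illustrative assumptions.

```python
import numpy as np

def prox_l1(x, lam):
    # Proximal operator of lam * ||x||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_group(x, lam, groups):
    # Proximal operator of lam * sum_g ||x_g||_2 for non-overlapping groups.
    out = x.copy()
    for g in groups:
        norm = np.linalg.norm(x[g])
        out[g] = 0.0 if norm == 0.0 else max(0.0, 1.0 - lam / norm) * x[g]
    return out

def prox_of_proximal_average(x, lam, groups):
    # Key fact exploited by proximal-average methods: the prox of the
    # proximal average of several regularizers equals the (weighted)
    # average of their individual proxes -- here, equal weights over two terms.
    return 0.5 * (prox_l1(x, lam) + prox_group(x, lam, groups))

x = np.array([3.0, -0.2, 1.5, 0.1])
print(prox_of_proximal_average(x, lam=0.5, groups=[[0, 1], [2, 3]]))
```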

Journal

Journal title: Banach Center Publications

Year: 1984

ISSN: 0137-6934, 1730-6299

DOI: 10.4064/-13-1-313-317