Search results for: risk minimization

Number of results: 973401

Journal: :CoRR 2015
Dominik Csiba, Peter Richtárik

In this work we develop a new algorithm for regularized empirical risk minimization. Our method extends recent techniques of Shalev-Shwartz (2015), which enable a dual-free analysis of SDCA, to arbitrary mini-batching schemes. Moreover, our method better utilizes the information in the data defining the ERM problem. For convex loss functions, our complexity results match those of Q...

Journal: :CoRR 2015
Kazuto Fukuchi, Jun Sakuma

Fairness-aware learning is a novel framework for classification tasks. Like regular empirical risk minimization (ERM), it aims to learn a classifier with a low error rate, while also requiring the predictions of the classifier to be independent of sensitive features such as gender, religion, race, and ethnicity. Existing methods can achieve low dependencies on given samples, but this is n...

Journal: :CoRR 2012
John C. Duchi, Lester W. Mackey, Michael I. Jordan

We consider the predictive problem of supervised ranking, where the task is to rank sets of candidate items returned in response to queries. Although there exist statistical procedures that come with guarantees of consistency in this setting, these procedures require that individuals provide a complete ranking of all items, which is rarely feasible in practice. Instead, individuals routinely pr...

Journal: :Journal of Machine Learning Research 2010
Ming Yuan, Marten H. Wegkamp

In this paper, we investigate the problem of binary classification with a reject option, in which one can withhold the decision of classifying an observation at a cost lower than that of misclassification. Since the natural loss function is non-convex, so that empirical risk minimization easily becomes infeasible, the paper proposes minimizing convex risks based on surrogate convex loss functions...
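The population-level target behind this abstract can be made concrete with Chow's classical rule for classification with rejection. The sketch below is not the paper's surrogate-loss method; it assumes a rejection cost d in (0, 1/2), a misclassification cost of 1, and access to eta = P(Y = 1 | X = x), and the helper name `decide` is ours:

```python
# Bayes rule for binary classification with a reject option (Chow's rule).
# Assumptions: rejection costs d < 1/2, misclassification costs 1,
# and eta is the conditional probability P(Y = 1 | X = x).

def decide(eta, d):
    """Return +1 or -1 when confident enough, 0 (reject) otherwise.

    Rejecting costs d, so it is optimal exactly when both error
    probabilities eta and 1 - eta exceed d, i.e. when d < eta < 1 - d.
    """
    if eta >= 1 - d:
        return 1   # predict +1: error probability 1 - eta <= d
    if eta <= d:
        return -1  # predict -1: error probability eta <= d
    return 0       # reject: pay d instead of a larger error probability
```

Since the corresponding empirical 0/1-with-reject risk is non-convex, minimizing it directly is infeasible, which is precisely why the paper turns to convex surrogate losses.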

2015

For completeness, in this section we derive the dual (5) of the problem of computing the proximal operator for the ERM objective (3).

2011
Maxim Raginsky

1. An abstract framework for ERM To study ERM in a general framework, we will adopt a simplified notation often used in the literature. We have a space Z and a class F of functions f : Z → [0, 1]. Let P(Z) denote the space of all probability distributions on Z. For each sample size n, the training data are in the form of an n-tuple Z^n = (Z_1, ..., Z_n) of Z-valued random variables drawn accordi...
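Under this framework, when the class F is finite, empirical risk minimization reduces to an argmin over average losses. A minimal sketch, assuming the losses are already encoded as functions f : Z → [0, 1] as in the abstract (the helper name `erm` is ours):

```python
# ERM over a finite class of loss functions f : Z -> [0, 1].
# Assumption for this sketch: the class is finite, so the argmin
# is a direct scan over hypotheses.

def erm(hypotheses, sample):
    """Return the f in `hypotheses` minimizing (1/n) * sum_i f(Z_i)."""
    def empirical_risk(f):
        return sum(f(z) for z in sample) / len(sample)
    return min(hypotheses, key=empirical_risk)
```

For example, with Z = [0, 1] and two clipped-distance losses f1(z) = min(|z - 0.1|, 1) and f2(z) = min(|z - 0.9|, 1), a sample concentrated near 0.1 makes `erm` return f1.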

2017
Vasilis Syrgkanis

We introduce a new sample complexity measure, which we refer to as the split-sample growth rate. For any hypothesis space H and any sample S of size m, the split-sample growth rate τ̂H(m) counts how many different hypotheses empirical risk minimization can output on sub-samples of S of size m/2. We show that the expected generalization error is upper bounded by O ( √
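For a finite hypothesis class, the count τ̂H(m) can be computed directly by running ERM on every size-m/2 sub-sample. This brute-force sketch is exponential in m and only meant to make the definition concrete; the names `erm_index` and `split_sample_growth` are ours, and ties are broken by list order:

```python
from itertools import combinations

def erm_index(H, loss, subsample):
    """Index of the empirical-risk minimizer in H (first index wins ties)."""
    risks = [sum(loss(h, z) for z in subsample) / len(subsample) for h in H]
    return min(range(len(H)), key=risks.__getitem__)

def split_sample_growth(H, loss, S):
    """Count distinct ERM outputs over all sub-samples of S of size m/2."""
    m = len(S)
    outputs = {erm_index(H, loss, sub) for sub in combinations(S, m // 2)}
    return len(outputs)
```

With 0/1 loss and a few threshold classifiers, for instance, the set of reachable ERM outputs is typically much smaller than the number of sub-samples, which is what makes the measure useful.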

2007
Guillaume Lecué

Let F be a set of M classification procedures with values in [−1, 1]. Given a loss function, we want to construct a procedure which mimics, at the best possible rate, the best procedure in F. This fastest rate is called the optimal rate of aggregation. Considering a continuous scale of loss functions with various types of convexity, we prove that optimal rates of aggregation can be either ((log M)/n)...

2009
Ingo Steinwart, Andreas Christmann

We prove an oracle inequality for generic regularized empirical risk minimization algorithms learning from α-mixing processes. To illustrate this oracle inequality, we use it to derive learning rates for some learning methods including least squares SVMs. Since the proof of the oracle inequality uses recent localization ideas developed for independent and identically distributed (i.i.d.) proces...

2007
Alexandre B. Tsybakov

It has been recently shown that, under the margin (or low noise) assumption, there exist classifiers attaining fast rates of convergence of the excess Bayes risk, that is, rates faster than n^(-1/2). The work on this subject has suggested the following two conjectures: (i) the best achievable fast rate is of the order n^(-1), and (ii) the plug-in classifiers generally converge more slowly than the cl...
