Search results for: risk minimization
Number of results: 973,401
We study large-scale classification problems in changing environments where a small part of the dataset is modified, and the effect of the data modification must be quickly incorporated into the classifier. When the entire dataset is large, even if the amount of the data modification is fairly small, the computational cost of re-training the classifier would be prohibitively large. In this pape...
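The truncated snippet does not name this paper's algorithm, so the following is only a minimal sketch of the general idea it describes: reusing the previous solution as a warm start instead of retraining from scratch after a small data modification. The use of scikit-learn's SGDClassifier is my choice for illustration, not the paper's method.

```python
# Hedged sketch: warm-start retraining after a small data modification.
# SGDClassifier with warm_start=True reuses the previous coefficients,
# so the refit after a small label change converges far faster than
# training from a cold start on the full dataset.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 20))
y = (X[:, 0] + 0.1 * rng.normal(size=100_000) > 0).astype(int)

clf = SGDClassifier(loss="log_loss", warm_start=True, max_iter=5, tol=None)
clf.fit(X, y)            # initial (expensive) training pass

y[:50] = 1 - y[:50]      # a small part of the dataset is modified
clf.fit(X, y)            # warm-started refit incorporates the change
```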
We study the recovery of sparse signals from underdetermined linear measurements when a potentially erroneous support estimate is available. Our results are twofold. First, we derive necessary and sufficient conditions for signal recovery from compressively sampled measurements using weighted ℓ1-norm minimization. These conditions, which depend on the choice of weights as well as the size and ac...
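For reference, the standard weighted ℓ1-norm minimization problem from this literature is shown below; the symbols A, y, ε, ω, and T̃ are my notation, not taken from the paper.

```latex
\min_{x \in \mathbb{R}^n} \; \sum_{i=1}^{n} w_i |x_i|
\quad \text{subject to} \quad \|Ax - y\|_2 \le \epsilon,
\qquad
w_i =
\begin{cases}
\omega \in [0,1], & i \in \widetilde{T},\\
1, & i \notin \widetilde{T},
\end{cases}
```

where T̃ is the support estimate: indices believed to carry signal are penalized less (ω < 1), and the quality of the recovery then depends on how accurate T̃ is.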
We introduce a framework for class noise, in which most of the known class noise models for the PAC setting can be formulated. Within this framework, we study properties of noise models that enable learning of concept classes of finite VC-dimension with the Empirical Risk Minimization (ERM) strategy. We introduce simple noise models for which classical ERM is not successful. Aiming at a more ge...
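For context, the classical ERM strategy the abstract analyzes is, in its 0-1-loss form (a textbook definition, not reproduced from the paper):

```latex
\hat{h}_{\mathrm{ERM}} \;=\; \arg\min_{h \in \mathcal{C}} \; \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\{h(x_i) \neq \tilde{y}_i\},
```

where ỹ_i are the possibly noise-corrupted labels; the abstract's point is that under some class-noise models this plain rule fails even for classes of finite VC-dimension.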
Abstract: Model selection is often performed by empirical risk minimization. The quality of selection in a given situation can be assessed by risk bounds, which require assumptions on both the margin and the tails of the losses used. Starting with examples from the three basic estimation problems: regression, classification, and density estimation, we formulate risk bounds for empirical risk minimiz...
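As an illustration of the kind of bound meant here, a textbook Hoeffding-plus-union-bound result for a finite class and losses bounded in [0, 1] (not the paper's own bound) reads:

```latex
\Pr\!\left[\, \forall h \in \mathcal{F}: \;\; R(h) \;\le\; \widehat{R}_n(h) + \sqrt{\frac{\ln|\mathcal{F}| + \ln(2/\delta)}{2n}} \,\right] \;\ge\; 1 - \delta .
```

The margin and tail assumptions the abstract mentions are what allow sharper rates than this generic √(1/n) behavior, or any rate at all when losses are unbounded.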
We study links and differences between structural risk minimization and Bayesian learning. We try to find which prior is assumed in each of the following algorithms, which have been shown to be equivalent to a Bayesian approach: SVMs, backpropagation, RBF, and 1-NN. Finally, we propose new versions of Bayes Point Machines.
The Discrepancy Method is a constructive method for proving upper bounds that has received a lot of attention in recent years. In this paper we revisit a few important results, and show how it can be applied to problems in Machine Learning such as Empirical Risk Minimization and Risk Estimation by exploiting connections with combinatorial dimension theory.
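For context, the quantity the Discrepancy Method controls is the combinatorial discrepancy of a set system (V, S); the standard definition, in my notation, is:

```latex
\operatorname{disc}(\mathcal{S}) \;=\; \min_{\chi : V \to \{-1,+1\}} \; \max_{S \in \mathcal{S}} \Big|\, \sum_{v \in S} \chi(v) \,\Big| ,
```

i.e. the best achievable imbalance, over two-colorings of the ground set, of the worst set in the system.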
The Structural Risk Minimization principle allows estimating the generalization ability of a learned hypothesis by measuring the complexity of the entire hypothesis class. Two of the most recent and effective complexity measures are the Rademacher Complexity and the Maximal Discrepancy, which have been applied to the derivation of generalization bounds for kernel classifiers. In this work, we e...
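The two complexity measures named in the abstract have standard empirical definitions, stated here for reference with my notation (σ_i are i.i.d. uniform ±1 signs, and n is assumed even for the discrepancy):

```latex
\widehat{\mathfrak{R}}_n(\mathcal{F})
  = \mathbb{E}_{\sigma}\!\left[\sup_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^{n} \sigma_i f(z_i)\right],
\qquad
\widehat{\Delta}_n(\mathcal{F})
  = \sup_{f \in \mathcal{F}} \frac{2}{n}\left(\sum_{i=1}^{n/2} f(z_i) \;-\; \sum_{i=n/2+1}^{n} f(z_i)\right).
```

Both are computable from the sample itself, which is what makes them usable in data-dependent generalization bounds for kernel classifiers.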
Detector design requires substantial knowledge of the statistical properties of the observations, conditional on the competing hypotheses H0 and H1. However, many applications involve complex phenomena for which little a priori information is available. Several methods for designing time-frequency-based (TF) receivers from labeled training data have been proposed. Unfortunately, the resulting detectors ...
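For contrast, when the conditional densities are known the classical optimum is the likelihood-ratio test; the abstract's setting is precisely the one where p(x | H_i) is unavailable and the statistic must instead be learned from labeled data:

```latex
\Lambda(x) \;=\; \frac{p(x \mid H_1)}{p(x \mid H_0)} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \gamma .
```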
Recall that in Theorem 2.1, we analyzed empirical risk minimization with a finite hypothesis class F, i.e., |F| < +∞. Here, as in Theorem 4.1, we will prove results for a possibly infinite hypothesis class F, generalizing the previous results. In this lecture, we have z ∈ Z, and every hypothesis h ∈ F is a map h : Z → R. This setting is more general than for predic...
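In the notation of the snippet, the setting can be formalized as follows (formalization mine, consistent with the text; the theorem numbers refer to the lecture notes themselves):

```latex
R(h) = \mathbb{E}_{z \sim P}\,[h(z)],
\qquad
\widehat{R}_n(h) = \frac{1}{n}\sum_{i=1}^{n} h(z_i),
\qquad
\hat{h} = \arg\min_{h \in \mathcal{F}} \widehat{R}_n(h).
```

Prediction is recovered as the special case z = (x, y) with h(z) = ℓ(g(x), y) for a predictor g, which is why this setting is strictly more general.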
Recent work introduced loss functions which measure the error of a prediction based on multiple simultaneous observations or outcomes. In this paper, we explore the theoretical and practical questions that arise when using such multi-observation losses for regression on data sets of (x, y) pairs. When a loss depends on only one observation, the average empirical loss decomposes by applying the ...
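The truncated sentence points to a decomposition that holds for single-observation losses but not for multi-observation ones. Below is a minimal sketch of that distinction; the 2-observation loss used here is purely illustrative and is not recoverable from the snippet as the paper's own.

```python
# Single- vs multi-observation empirical losses.
# The particular 2-observation loss is an illustrative example only.
from itertools import combinations
import numpy as np

def single_obs_risk(pred, ys, loss):
    # Decomposes over observations: one independent term per outcome y.
    return np.mean([loss(pred, y) for y in ys])

def multi_obs_risk(pred, ys, loss2):
    # Does NOT decompose: each term couples two simultaneous outcomes.
    pairs = list(combinations(ys, 2))
    return np.mean([loss2(pred, y1, y2) for y1, y2 in pairs])

ys = np.array([1.0, 2.0, 4.0])
print(single_obs_risk(2.0, ys, lambda p, y: (p - y) ** 2))
print(multi_obs_risk(2.0, ys, lambda p, y1, y2: (p - y1) * (p - y2)))
```

Note that for i.i.d. outcomes the illustrative pair loss satisfies E[(p − Y₁)(p − Y₂)] = (p − E[Y])², so it targets the mean while ignoring the variance term that the squared single-observation loss carries.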