Search results for: risk minimization

Number of results: 973,401

2015
David B. Smith Vibhav Gogate

Recently, there has been growing interest in systematic search-based and importance sampling-based lifted inference algorithms for statistical relational models (SRMs). These lifted algorithms achieve significant complexity reductions over their propositional counterparts by using lifting rules that leverage symmetries in the relational representation. One drawback of these algorithms is that t...

2009
Vincenzo Auletta Paolo Penna Giuseppe Persiano

Algorithmic mechanism design considers distributed settings where the participants, termed agents, cannot be assumed to follow the protocol but rather their own interests. The protocol can be regarded as an algorithm augmented with a suitable payment rule and the desired condition is termed truthfulness, meaning that it is never convenient for an agent to report false information. Motivated by ...
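For context, the truthfulness condition the snippet refers to is typically stated as follows; the notation (true types t_i, reported types b_i, allocation rule A, payment rule p, agent utility u_i) is ours, not the authors':

    u_i\bigl(A(t_i, b_{-i}),\, p_i(t_i, b_{-i});\, t_i\bigr) \;\ge\; u_i\bigl(A(b_i, b_{-i}),\, p_i(b_i, b_{-i});\, t_i\bigr) \quad \forall\, i,\ t_i,\ b_i,\ b_{-i},

i.e. no agent can ever gain by reporting false information, whatever the other agents report.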

Journal: :Journal of Logic, Language and Information 2004
Martin Jansche

Local deterministic string-to-string transductions are generalizations of morphisms on free monoids. Learning local transductions reduces to inference of monoid morphisms. However, learning a restricted class of morphisms, the so-called fine morphisms, is an intractable problem, because the decision version of the empirical risk minimization problem contains an NP-complete subproblem.

Journal: :CoRR 2017
Olivier Bachem Mario Lucic S. Hamed Hassani Andreas Krause

Uniform deviation bounds limit the difference between a model’s expected loss and its loss on an empirical sample uniformly for all models in a learning problem. As such, they are a critical component to empirical risk minimization. In this paper, we provide a novel framework to obtain uniform deviation bounds for loss functions which are unbounded. In our main application, this allows us to ob...
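In symbols (our notation, not necessarily the paper's), a uniform deviation bound states that, with probability at least $1-\delta$ over an i.i.d. sample of size $n$,

    \sup_{h \in \mathcal{H}} \bigl| R(h) - \hat{R}_n(h) \bigr| \;\le\; \varepsilon(n, \mathcal{H}, \delta),

where $R(h)$ is the expected loss and $\hat{R}_n(h)$ the empirical loss; such a bound immediately controls the excess risk of the empirical risk minimizer, which is why it is a critical ingredient of ERM analyses.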

Journal: :IEEE Trans. Information Theory 1998
John Shawe-Taylor Peter L. Bartlett Robert C. Williamson Martin Anthony

The paper introduces some generalizations of Vapnik’s method of structural risk minimisation (SRM). As well as making explicit some of the details on SRM, it provides a result that allows one to trade off errors on the training sample against improved generalization performance. It then considers the more general case when the hierarchy of classes is chosen in response to the data. A result is ...
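As a rough sketch of the principle (our notation, not the paper's), SRM works over a nested hierarchy $\mathcal{H}_1 \subseteq \mathcal{H}_2 \subseteq \cdots$ and selects

    \hat{h} \;=\; \arg\min_{k \ge 1} \; \min_{h \in \mathcal{H}_k} \Bigl[ \hat{R}_n(h) + \mathrm{pen}(n, k, \delta) \Bigr],

trading the error on the training sample against a complexity penalty that grows with $k$, which is the kind of trade-off the abstract refers to.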

Journal: :CoRR 2017
Hantao Yao Shiliang Zhang Yongdong Zhang Jintao Li Qi Tian

Learning discriminative representations for unseen person images is critical for person Re-Identification (ReID). Most current approaches learn deep representations in classification tasks, which essentially minimize the empirical classification risk on the training set. As shown in our experiments, such representations commonly focus on several body parts discriminative to the training set,...

1998
Thore Graepel Ralf Herbrich Peter Bollmann-Sdorra Klaus Obermayer

We investigate the problem of learning a classification task on data represented in terms of their pairwise proximities. This representation does not refer to an explicit feature representation of the data items and is thus more general than the standard approach of using Euclidean feature vectors, from which pairwise proximities can always be calculated. Our first approach is based on a combin...
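One common way to train a classifier directly from pairwise proximities, shown here purely as an illustrative sketch (scikit-learn's precomputed-kernel interface, not the authors' method), is to feed the proximity matrix to a kernel classifier:

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative sketch only: learn a classifier from an n x n proximity matrix,
# with no explicit feature representation of the individual items.
rng = np.random.default_rng(0)
X_hidden = rng.normal(size=(60, 5))            # used only to fabricate toy proximities
y = (X_hidden[:, 0] > 0).astype(int)           # toy binary labels

# Pairwise similarities between all training items (here an RBF similarity).
sq_dists = ((X_hidden[:, None, :] - X_hidden[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-sq_dists)                          # n x n proximity (Gram) matrix

clf = SVC(kernel="precomputed")                # classifier consumes the matrix directly
clf.fit(K, y)
print("training accuracy:", clf.score(K, y))
```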

2009
José Luis Montaña César Luis Alonso Cruz E. Borges José Luis Crespo

We discuss here an empirical comparison between model selection methods based on Linear Genetic Programming. Two statistical methods are compared: model selection based on Empirical Risk Minimization (ERM) and model selection based on Structural Risk Minimization (SRM). For this purpose we have identified the main components which determine the capacity of some linear structures as classifiers sh...
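To make the contrast concrete, here is a small toy sketch (ordinary polynomial classifiers, not the paper's Linear Genetic Programming setup) of ERM-style versus SRM-style model selection; the square-root penalty is a placeholder for a genuine capacity term:

```python
import numpy as np

# Toy sketch: compare ERM-based and SRM-style model selection over
# polynomial classifiers of increasing degree (i.e. increasing capacity).
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=40)
y = np.sign(np.sin(3 * x) + 0.3 * rng.normal(size=x.size))

def training_error(degree):
    # Least-squares polynomial fit to the labels, thresholded at zero.
    coeffs = np.polyfit(x, y, degree)
    return np.mean(np.sign(np.polyval(coeffs, x)) != y)

degrees = range(1, 15)
erm_pick = min(degrees, key=training_error)                # smallest empirical risk
srm_pick = min(degrees, key=lambda d: training_error(d)
               + np.sqrt((d + 1) / x.size))                # empirical risk + capacity penalty
print("ERM selects degree", erm_pick, "| SRM-style selects degree", srm_pick)
```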

2014
Yue Wang

We have previously seen how sieve estimators give rise to rates of convergence to the Bayes risk by performing empirical risk minimization over H_{k(n)}, where (H_k)_{k ≥ 1} is an increasing sequence of sets of classifiers, and k(n) → ∞. However, the rate of convergence depends on k(n). Usually this rate is chosen to minimize the worst-case rate over all distributions of interest. However, it would be...
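In symbols (our notation), the sieve estimator is

    \hat{h}_n \;=\; \arg\min_{h \in \mathcal{H}_{k(n)}} \hat{R}_n(h),

and the rate at which its risk $R(\hat{h}_n)$ approaches the Bayes risk depends on how fast $k(n)$ grows; picking $k(n)$ to minimize the worst-case rate over all distributions of interest is the conservative choice the snippet describes.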

2004
Hyunjung Shin Sungzoon Cho

Support Vector Machine (SVM) employs Structural Risk Minimization (SRM) principle to generalize better than conventional machine learning methods employing the traditional Empirical Risk Minimization (ERM) principle. When applying SVM to response modeling in direct marketing, however, one has to deal with the practical difficulties: large training data, class imbalance and binary SVM output. Th...
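A minimal sketch, assuming scikit-learn and not reproducing the authors' procedure, of how the three practical difficulties are commonly addressed: subsample the large training set, reweight classes against imbalance, and rank prospects by the SVM's real-valued decision score rather than its binary output:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Illustrative sketch only (not the paper's method).
rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 10))
y = (rng.uniform(size=n) < 0.05).astype(int)        # ~5% responders: class imbalance

# Large training data: fit on a stratified subsample.
X_sub, _, y_sub, _ = train_test_split(X, y, train_size=1000, stratify=y, random_state=0)

# Class imbalance: reweight the minority (responder) class.
clf = SVC(kernel="rbf", class_weight="balanced")
clf.fit(X_sub, y_sub)

# Binary SVM output: rank customers by decision score instead of the hard label.
scores = clf.decision_function(X)
top_prospects = np.argsort(scores)[::-1][:100]      # 100 highest-scoring customers
print(top_prospects[:10])
```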

[Chart: number of search results per year]