Search results for: loss functions

Number of results: 911381

2007
Krzysztof Dembczynski, Salvatore Greco, Wojciech Kotlowski, Roman Slowinski

In this paper, we present the relationship between loss functions and confirmation measures. We show that population minimizers for weighted loss functions correspond to confirmation measures. This result can be used in the construction of machine learning methods, in particular ensemble methods.
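
A minimal sketch of the population-minimizer idea for a weighted 0-1 loss (an illustration only; the weights and the correspondence with confirmation measures are not taken from the paper): the pointwise optimal prediction thresholds the conditional class probability at a ratio of the weights.

    # Python sketch: Bayes-optimal (population-minimizer) decision under a
    # weighted 0-1 loss. w_fn penalises false negatives, w_fp false positives;
    # both weights are illustrative, not from the paper.
    def bayes_decision(p_pos, w_fn=2.0, w_fp=1.0):
        # Expected loss of predicting 0 is w_fn * p_pos; of predicting 1 it is
        # w_fp * (1 - p_pos). Predict 1 when that is the smaller of the two.
        return int(w_fn * p_pos > w_fp * (1.0 - p_pos))

    for p in (0.2, 0.4, 0.6, 0.8):
        print(p, bayes_decision(p))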

2003
José Pablo Arias, Javier Hernández, Jacinto Martín, A. Suárez, Alfonso Suárez-Llorens

Bayes decision problems require subjective elicitation of the inputs: beliefs and preferences. Sometimes, elicitation methods may not perfectly represent the Decision Maker's judgements. Several foundations propose to overcome this problem using robust approaches. In these models, beliefs are modelled by a class of probability distributions and preferences by a class of loss functions. Thus, the...
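
As a loose illustration of the robust setup described above (the distributions, losses, and actions below are invented for the sketch, not taken from the paper), one can scan a grid of actions and pick the one minimizing the worst-case expected loss over both classes:

    import numpy as np

    # Beliefs: a small class of candidate distributions (Monte Carlo samples).
    # Preferences: a small class of loss functions. A robust (minimax) choice
    # minimises the worst-case expected loss over both classes.
    rng = np.random.default_rng(0)
    beliefs = {"prior_a": rng.normal(0.0, 1.0, 10_000),
               "prior_b": rng.normal(0.5, 1.5, 10_000)}
    losses = {"squared": lambda theta, a: (theta - a) ** 2,
              "absolute": lambda theta, a: np.abs(theta - a)}

    actions = np.linspace(-1.0, 1.5, 26)
    worst = [max(loss(theta, a).mean()
                 for loss in losses.values() for theta in beliefs.values())
             for a in actions]
    print("minimax action:", actions[int(np.argmin(worst))])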

Journal: Neural Computation, 2004
Lorenzo Rosasco, Ernesto De Vito, Andrea Caponnetto, Michele Piana, Alessandro Verri

In this letter, we investigate the impact of choosing different loss functions from the viewpoint of statistical learning theory. We introduce a convexity assumption, which is met by all loss functions commonly used in the literature, and study how the bound on the estimation error changes with the loss. We also derive a general result on the minimizer of the expected risk for a convex loss fun...
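
For concreteness, here is a small sketch (not from the letter) evaluating a few of the convex losses common in the literature, written as functions of the margin m = y * f(x):

    import numpy as np

    # Hinge, logistic, and squared loss as functions of the margin m = y*f(x);
    # all three are convex in m, which is the assumption studied above.
    margins = np.linspace(-2.0, 2.0, 9)
    losses = {
        "hinge":    np.maximum(0.0, 1.0 - margins),
        "logistic": np.log1p(np.exp(-margins)),
        "squared":  (1.0 - margins) ** 2,
    }
    for name, vals in losses.items():
        print(f"{name:8s}", np.round(vals, 2))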

Journal: CoRR, 2016
Pritish Mohapatra, Michal Rolinek, C. V. Jawahar, Vladimir Kolmogorov, M. Pawan Kumar

The accuracy of information retrieval systems is often measured using complex non-decomposable loss functions such as the average precision (AP) or the normalized discounted cumulative gain (NDCG). Given a set of positive (relevant) and negative (non-relevant) samples, the parameters of a retrieval system can be estimated using a rank SVM framework, which minimizes a regularized convex upper bo...
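
Since AP is central to the abstract, here is a short, self-contained way to compute it for one ranked list (a toy helper, not the rank-SVM objective from the paper):

    import numpy as np

    # Average precision: the mean of precision@k taken over the ranks k at
    # which the relevant (positive) items appear.
    def average_precision(labels, scores):
        order = np.argsort(-np.asarray(scores, dtype=float))
        rel = np.asarray(labels)[order]
        ranks = np.arange(1, len(rel) + 1)
        hits = np.cumsum(rel)
        return (hits[rel == 1] / ranks[rel == 1]).mean()

    # Three relevant and two non-relevant samples, ranked by score.
    print(average_precision([1, 0, 1, 0, 1], [0.9, 0.8, 0.7, 0.4, 0.2]))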

Journal: Oper. Res. Lett., 2003
Krishnan Kumaran, Michel Mandjes, Alexander L. Stolyar

We show that the fluid loss ratio in a fluid queue with finite buffer and constant link capacity is always a jointly convex function of the buffer size and the link capacity. This generalizes prior work [6], which shows convexity of the trade-off for a large number of i.i.d. multiplexed sources, using the large deviations rate function as an approximation for fluid loss. Our approach also leads to a simpler proof of the prior result...
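
A quick numerical illustration of the claimed joint convexity (a simulation sketch with invented parameters, not the paper's proof): simulate the fluid queue at two buffer/capacity pairs and check the midpoint inequality.

    import numpy as np

    # Discrete-time fluid queue: buffer b, constant capacity c, random input
    # rates; fluid arriving to a full buffer is lost.
    def fluid_loss_ratio(rates, b, c):
        q, lost = 0.0, 0.0
        for r in rates:
            q = max(q + r - c, 0.0)
            if q > b:
                lost += q - b
                q = b
        return lost / rates.sum()

    rng = np.random.default_rng(1)
    rates = rng.exponential(1.0, 50_000)
    pairs = [(1.0, 1.1), (3.0, 1.5)]
    mid = tuple((u + v) / 2 for u, v in zip(*pairs))
    vals = [fluid_loss_ratio(rates, b, c) for b, c in pairs + [mid]]
    print("midpoint value", vals[2], "<= average of endpoints:",
          vals[2] <= (vals[0] + vals[1]) / 2)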

2014
Ofer Dekel, Jian Ding, Tomer Koren, Yuval Peres

We study a new class of online learning problems where each of the online algorithm’s actions is assigned an adversarial value, and the loss of the algorithm at each step is a known and deterministic function of the values assigned to its recent actions. This class includes problems where the algorithm’s loss is the minimum over the recent adversarial values, the maximum over the recent values,...
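
To make the loss structure concrete, here is a toy version of one instance the abstract mentions, loss equal to the minimum over the values of the recent actions (the values, actions, and memory length below are made up):

    # Each action has an adversarial value; the loss at step t is a known,
    # deterministic function (here: the minimum) of the values of the last m
    # actions taken by the algorithm.
    values = {"a": 0.7, "b": 0.2, "c": 0.9}
    actions = ["a", "b", "a", "c", "c"]
    m = 2
    for t in range(m - 1, len(actions)):
        recent = [values[x] for x in actions[t - m + 1 : t + 1]]
        print("step", t, "loss", min(recent))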

2017
Aditya Krishna Menon, Young Lee

Temporal point processes are a statistical framework for modelling the times at which events of interest occur. The Hawkes process is a well-studied instance of this framework that captures self-exciting behaviour, wherein the occurrence of one event increases the likelihood of future events. Such processes have been successfully applied to model phenomena ranging from earthquakes to behaviour ...
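
A compact sketch of the self-exciting intensity that defines a Hawkes process with an exponential kernel (parameter values are illustrative, not from the paper):

    import numpy as np

    # Conditional intensity lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta*(t - t_i)):
    # each past event temporarily raises the rate of future events.
    def hawkes_intensity(t, events, mu=0.5, alpha=0.8, beta=1.5):
        past = np.array([ti for ti in events if ti < t])
        return mu + alpha * np.exp(-beta * (t - past)).sum()

    events = [1.0, 1.3, 4.0]
    for t in (0.5, 1.5, 2.0, 5.0):
        print(t, round(float(hawkes_intensity(t, events)), 3))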

2001
Michel Denuit, Jan Dhaene

This paper focuses on techniques for constructing Bonus-Malus systems in third-party liability automobile insurance. Specifically, the article presents a practical method for constructing optimal Bonus-Malus scales with reasonable penalties that can be commercially implemented. For this purpose, the symmetry between the overcharges and the undercharges reflected in the usual quadratic loss func...
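
The symmetry mentioned above is easy to see numerically; the sketch also shows an asymmetric alternative (the Linex loss, a common asymmetric choice used here purely for contrast, not necessarily the paper's):

    import numpy as np

    # Quadratic loss penalises an overcharge and an undercharge of equal size
    # identically; the Linex loss does not.
    def quadratic(err):
        return err ** 2

    def linex(err, a=0.05):
        return np.exp(a * err) - a * err - 1.0

    for err in (-20.0, 20.0):  # undercharge vs overcharge of the same size
        print(err, quadratic(err), round(float(linex(err)), 3))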

Thesis, 1391 (Solar Hijri, i.e. 2012-13)

Insurers have always been concerned about the losses on the policies they cover, and they look for methods to model past loss data with the aim of making an optimal decision. In this research, phase-type distributions are introduced for modelling loss data, covering the relevant statistical inference and the use of the EM algorithm to estimate the distribution parameters. Finally, the possibility of using this distribution in modelling grouped data ...
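
A small sketch of the distribution family mentioned above (illustrative parameters; the EM fitting procedure itself is not reproduced): a phase-type distribution is the absorption time of a Markov chain, and its survival function is alpha * expm(T t) * 1.

    import numpy as np
    from scipy.linalg import expm

    # Initial distribution over transient states and sub-generator matrix T
    # (made-up values). Survival function: S(t) = alpha @ expm(T*t) @ ones.
    alpha = np.array([1.0, 0.0])
    T = np.array([[-2.0, 1.0],
                  [0.0, -3.0]])
    ones = np.ones(2)

    for t in (0.5, 1.0, 2.0):
        print(t, round(float(alpha @ expm(T * t) @ ones), 4))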

2016
John Duchi

Building off of our interpretations of supervised learning as (1) choosing a representation for our problem, (2) choosing a loss function, and (3) minimizing the loss, let us consider a slightly more general formulation for supervised learning. In the supervised learning settings we have considered thus far, we have input data x ∈ R^n and targets y from a space Y. In linear regression, this corr...
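
As a minimal sketch of that three-step recipe (synthetic data, linear representation, squared loss), the empirical risk can be minimized in closed form:

    import numpy as np

    # (1) representation f(x) = w @ x, (2) squared loss, (3) minimize the
    # empirical risk; for least squares this has a closed-form solution.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))                    # inputs x in R^n, n = 3
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

    w, *_ = np.linalg.lstsq(X, y, rcond=None)        # argmin_w ||Xw - y||^2
    print("learned weights:", np.round(w, 2))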
