Search results for: quadratic loss function

Number of results: 1,596,512

2011
Maxim Raginsky

Regression with quadratic loss is another basic problem studied in statistical learning theory. We have a random couple Z = (X, Y), where, as before, X is an R^d-valued feature vector (or input vector) and Y is the real-valued response (or output). We assume that the unknown joint distribution P = P_Z = P_{XY} of (X, Y) belongs to some class P of probability distributions over R^d × R. The learning...
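
For orientation, the standard identities for this setting (the symbols R and f* are introduced here, not quoted from the paper): under quadratic loss the risk of a predictor f and its minimizer over all measurable functions are

$$ R(f) = \mathbb{E}\big[(Y - f(X))^2\big], \qquad f^*(x) = \mathbb{E}[Y \mid X = x], $$

and the excess risk satisfies $R(f) - R(f^*) = \mathbb{E}\big[(f(X) - f^*(X))^2\big]$, which is why quadratic-loss regression amounts to approximating the regression function f*.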

Journal: Neural Computation, 1997
David Wolpert

This paper presents a Bayesian additive “correction” to the familiar quadratic loss bias-plus-variance formula. It then discusses some other loss-function-specific aspects of supervised learning. It ends by presenting a version of the bias-plus-variance formula appropriate for log loss, and then the Bayesian additive correction to that formula. Both the quadratic loss and log loss correction ter...
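
For reference, the familiar quadratic-loss bias-plus-variance formula the paper starts from can be written as follows (standard statement; the notation is supplied here, not quoted from the truncated abstract):

$$ \mathbb{E}_{D}\big[(\hat{y}_D(x) - \mathbb{E}[y \mid x])^2\big] = \big(\mathbb{E}_D[\hat{y}_D(x)] - \mathbb{E}[y \mid x]\big)^2 + \mathrm{Var}_D\big(\hat{y}_D(x)\big), $$

where D ranges over training sets and $\hat{y}_D$ is the learned predictor; adding the irreducible noise $\mathrm{Var}(y \mid x)$ gives the expected quadratic loss itself.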

2010
R. T. Rockafellar

Problems are considered in which an objective function expressible as a max of finitely many C^2 functions, or more generally as the composition of a piecewise linear-quadratic function with a C^2 mapping, is minimized subject to finitely many C^2 constraints. The essential objective function in such a problem, which is the sum of the given objective and the indicator of the constraints, is shown ...
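
In symbols (notation supplied here; the abstract's own definitions are truncated), the problems have the composite form

$$ \min_{x} \; g(F(x)) \quad \text{subject to} \quad x \in C, $$

with F a C^2 mapping and g piecewise linear-quadratic; the max-of-C^2-functions case corresponds to $g(u) = \max_i u_i$ with $F = (f_1, \dots, f_m)$. The essential objective referred to above is then $f(x) = g(F(x)) + \delta_C(x)$, where $\delta_C$ is the indicator of the constraint set C (zero on C, $+\infty$ outside).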

Journal: CoRR, 2010
Ahed Hindawi, Ludovic Rifford, Jean-Baptiste Pomet

We study the optimal transport problem in the Euclidean space where the cost function is given by the value function associated with a Linear Quadratic minimization problem. Under appropriate assumptions, we generalize Brenier’s theorem, proving existence and uniqueness of an optimal transport map. In the controllable case, we show that the optimal transport map has to be the gradient of a conve...
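
For context, the classical result being generalized (standard statement, not quoted from the paper): with the quadratic cost on R^d and an absolutely continuous source measure, Brenier's theorem gives a unique optimal transport map of gradient form,

$$ c(x, y) = \tfrac{1}{2}\,|x - y|^2 \quad \Longrightarrow \quad T = \nabla \varphi \ \text{ for some convex } \varphi . $$

The abstract replaces $\tfrac{1}{2}|x-y|^2$ by the value function of a linear-quadratic optimal control problem and establishes an analogous structure for the optimal map.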

2017
Jiakun Liu, Neil Trudinger, Xu-Jia Wang

In this paper we study local properties of cost and potential functions in optimal transportation. We prove that in a proper normalization process, the cost function is uniformly smooth and converges locally smoothly to a quadratic cost x · y, while the potential function converges to a quadratic function. As applications we obtain the interior W^{2,p} e...

2016
Frédéric Koriche, Daniel Le Berre, Emmanuel Lonca, Pierre Marquis

Minimizing a cost function under a set of combinatorial constraints is a fundamental, yet challenging problem in AI. Fortunately, in various real-world applications, the set of constraints describing the problem structure is much less susceptible to change over time than the cost function capturing the user's preferences. In such situations, compiling the set of feasible solutions during an offline...

1998
Jeffrey A. Fessler, Hakan Erdoğan

We present a new algorithm for penalized-likelihood emission image reconstruction. The algorithm monotonically increases the objective function, converges globally to the unique maximizer, and easily accommodates the nonnegativity constraint and nonquadratic but convex penalty functions. The algorithm is based on finding paraboloidal surrogate functions for the log-likelihood at each iteration:...
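
The monotone-increase property of such surrogate methods follows from the usual optimization-transfer argument (notation introduced here, not quoted from the paper): if at iterate $\theta^{(n)}$ the paraboloidal surrogate $\phi(\cdot\,;\theta^{(n)})$ minorizes the penalized-likelihood objective $\Phi$ and touches it at $\theta^{(n)}$,

$$ \phi(\theta;\theta^{(n)}) \le \Phi(\theta) \ \ \forall\,\theta \ge 0, \qquad \phi(\theta^{(n)};\theta^{(n)}) = \Phi(\theta^{(n)}), $$

then the update $\theta^{(n+1)} = \arg\max_{\theta \ge 0} \phi(\theta;\theta^{(n)})$ satisfies $\Phi(\theta^{(n+1)}) \ge \phi(\theta^{(n+1)};\theta^{(n)}) \ge \phi(\theta^{(n)};\theta^{(n)}) = \Phi(\theta^{(n)})$, so the objective never decreases.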

Journal: Automatica, 2017
Giorgio Battistelli, Luigi Chisci, Stefano Gherardini

The paper addresses state estimation for linear discrete-time systems with binary (threshold) measurements. A Moving Horizon Estimation (MHE) approach is followed and different estimators, characterized by two different choices of the cost function to be minimized and/or by the possible inclusion of constraints, are proposed. Specifically, the cost function is either quadratic, when only the in...
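
As an illustration of the purely quadratic variant of such a horizon cost, here is a minimal sketch only: it assumes real-valued (not binary) measurements, a noise-free state transition, and an ordinary least-squares solver, and every name in it (mhe_quadratic, w_prior, the toy system) is introduced here rather than taken from the paper.

import numpy as np

def mhe_quadratic(A, C, y_window, x_prior, w_prior=1.0):
    """Illustrative moving-horizon estimate with a quadratic cost:
    minimize  w_prior*||x0 - x_prior||^2 + sum_k ||y_k - C A^k x0||^2  over x0,
    then propagate x0 to the end of the window (process noise is ignored here)."""
    n = A.shape[0]
    rows = [np.sqrt(w_prior) * np.eye(n)]                  # prior (arrival-cost-like) term
    rhs = [np.sqrt(w_prior) * np.asarray(x_prior, dtype=float)]
    Ak = np.eye(n)
    for yk in y_window:                                    # measurement terms y_k ~ C A^k x0
        rows.append(C @ Ak)
        rhs.append(np.atleast_1d(yk))
        Ak = A @ Ak
    H, b = np.vstack(rows), np.concatenate(rhs)
    x0_hat, *_ = np.linalg.lstsq(H, b, rcond=None)
    return np.linalg.matrix_power(A, len(y_window)) @ x0_hat

# toy usage: scalar random walk observed directly over a 3-sample window
A = np.array([[1.0]]); C = np.array([[1.0]])
print(mhe_quadratic(A, C, [0.9, 1.1, 1.0], x_prior=[0.0], w_prior=0.1))

In the paper's binary-measurement setting the quadratic measurement terms are replaced or augmented by terms and constraints encoding the threshold information; the sketch above only shows the window-wise least-squares structure of the quadratic case.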

Journal: Comp. Opt. and Appl., 2012
M. J. D. Powell

We consider iterative trust region algorithms for the unconstrained minimization of an objective function F(x), x ∈ R^n, when F is differentiable but no derivatives are available, and when each model of F is a linear or a quadratic polynomial. The models interpolate F at n+1 points, which defines them uniquely when they are linear polynomials. In the quadratic case, second derivatives of the model...
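
A minimal sketch of the linear-model case mentioned above (illustrative only: the function names and the toy objective are supplied here, and Powell's actual algorithms use quadratic models with carefully managed interpolation sets):

import numpy as np

def linear_interpolation_model(points, fvals):
    """Fit m(x) = c + g.x through n+1 interpolation points in R^n
    (uniquely determined when the points are affinely independent)."""
    P = np.asarray(points, dtype=float)              # shape (n+1, n)
    f = np.asarray(fvals, dtype=float)               # shape (n+1,)
    A = np.hstack([np.ones((P.shape[0], 1)), P])     # rows [1, y_i]
    coeffs = np.linalg.solve(A, f)
    return coeffs[0], coeffs[1:]                     # constant c, gradient g

def trust_region_step(g, delta):
    """Minimizer of the linear model over the ball ||s|| <= delta."""
    norm = np.linalg.norm(g)
    return np.zeros_like(g) if norm == 0.0 else -delta * g / norm

# toy usage: build a model of F(x) = x1^2 + x2^2 from 3 sample points near the origin
F = lambda x: x[0]**2 + x[1]**2
pts = [[0.0, 0.0], [0.5, 0.0], [0.0, 0.5]]
c, g = linear_interpolation_model(pts, [F(p) for p in pts])
print(trust_region_step(g, delta=0.1))

In the quadratic case the model has (n+1)(n+2)/2 degrees of freedom, so n+1 interpolation points no longer determine it uniquely and additional conditions on the model's second derivatives are required.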

2011
Vladimir Karamychev, Bauke Visser

This paper analyses the optimal combination of costly and costless messages that a Sender uses in a signaling game if he is able to choose among all equilibrium communication strategies. We provide a complete characterization of the equilibrium that maximizes the Sender’s ex ante expected utility in the case of uniformly distributed types and quadratic loss functions. First, the Sender often wants ...
