Inverse Optimization of Convex Risk Functions

Authors

Abstract

The theory of convex risk functions has now been well established as the basis for identifying the families of functions that should be used in risk-averse optimization problems. Despite its theoretical appeal, the implementation of a convex risk function remains difficult, because there is little guidance regarding how such a function should be chosen so that it also represents the decision maker's subjective risk preference. In this paper, we address this issue through the lens of inverse optimization. Specifically, given solution data from some (forward) risk-averse optimization problems (i.e., risk minimization problems with known constraints), we develop an inverse optimization framework that generates a risk function that renders the solutions optimal for the forward problems. The framework incorporates the well-known properties of convex risk functions (namely, monotonicity, convexity, translation invariance, and law invariance) as general information about candidate risk functions, and feedback from individuals (which may include an initial estimate of the risk function and pairwise comparisons among random losses) as more specific information. Our framework is particularly novel in that, unlike classical inverse optimization, it does not require making any parametric assumption about the risk function (i.e., it is nonparametric). We show that the resulting inverse optimization problems can be reformulated as convex programs and are polynomially solvable if the corresponding forward problems are polynomially solvable. We illustrate the imputed risk functions in a portfolio selection problem and demonstrate their practical value using real-life data. This paper was accepted by Yinyu Ye, optimization.
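For reference, the four properties named above are the standard axioms of convex risk functions in the literature, stated here for random losses Z and Z'. Sign conventions differ across papers, so treat this as one common formulation rather than the paper's exact definitions:

```latex
% Standard axioms of a convex risk function \rho on random losses
% (loss-based sign convention; some papers state them for payoffs instead).
\begin{align*}
\text{Monotonicity:}           \quad & Z \le Z' \;\Longrightarrow\; \rho(Z) \le \rho(Z') \\
\text{Convexity:}              \quad & \rho\big(\lambda Z + (1-\lambda) Z'\big) \le \lambda\rho(Z) + (1-\lambda)\rho(Z'), \quad \lambda \in [0,1] \\
\text{Translation invariance:} \quad & \rho(Z + c) = \rho(Z) + c \quad \text{for every constant } c \\
\text{Law invariance:}         \quad & Z \stackrel{d}{=} Z' \;\Longrightarrow\; \rho(Z) = \rho(Z')
\end{align*}
```

The forward problems referred to above are risk minimization problems with known constraints. The sketch below is a minimal, hypothetical instance of such a forward problem: minimizing CVaR (a standard convex risk function) over long-only portfolio weights, written with cvxpy. The scenario returns, the confidence level beta, and all variable names are illustrative assumptions rather than data or notation from the paper, and the paper's actual contribution (imputing the risk function from observed solutions) is not implemented here.

```python
# A minimal sketch of a *forward* risk-averse portfolio problem, assuming CVaR as
# the convex risk function. All data below are synthetic and purely illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
S, n = 500, 5                             # number of return scenarios, number of assets
R = rng.normal(0.001, 0.02, size=(S, n))  # hypothetical scenario returns

beta = 0.95                # CVaR confidence level (assumed)
w = cp.Variable(n)         # portfolio weights: the forward problem's decision
t = cp.Variable()          # auxiliary VaR-like variable (Rockafellar-Uryasev form)

losses = -R @ w            # per-scenario portfolio losses
cvar = t + cp.sum(cp.pos(losses - t)) / ((1 - beta) * S)

prob = cp.Problem(cp.Minimize(cvar), [cp.sum(w) == 1, w >= 0])
prob.solve()
print("optimal weights:", np.round(w.value, 3))
print("minimized CVaR :", round(float(cvar.value), 4))
```

In the inverse setting described above, optimal solutions of problems like this one would be the observed data, and the unknown to be recovered is the risk function itself.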

Related Articles

Optimization of Convex Risk Functions

We consider optimization problems involving convex risk functions. By employing techniques of convex analysis and optimization theory in vector spaces of measurable functions we develop new representation theorems for risk models, and optimality and duality theory for problems with convex risk functions.
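Representation theorems of the kind mentioned here generally take a dual form that is standard in the convex risk literature; the version below is a generic statement under assumed loss-based conventions, not a quotation of this paper's result:

```latex
% Generic dual representation of a convex risk function \rho on losses Z:
% \mathcal{A} is a set of probability measures (or densities) and \alpha is a
% convex penalty; the exact space and conventions depend on the setting.
\rho(Z) \;=\; \sup_{\mu \in \mathcal{A}} \Big\{ \mathbb{E}_{\mu}[Z] - \alpha(\mu) \Big\}
```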

Beyond Convex Optimization: Star-Convex Functions

We introduce a polynomial time algorithm for optimizing the class of star-convex functions, under no Lipschitz or other smoothness assumptions whatsoever, and no restrictions except exponential boundedness on a region about the origin, and Lebesgue measurability. The algorithm’s performance is polynomial in the requested number of digits of accuracy and the dimension of the search domain. This ...

Distributed Optimization of Convex Sum of Non-Convex Functions

We present a distributed solution for optimizing a convex function composed of several non-convex functions. Each non-convex function is privately stored with an agent, while the agents communicate with neighbors to form a network. We show that the coupled consensus and projected gradient descent algorithm proposed in [1] can optimize a convex sum of non-convex functions under an additional assumption o...

Multi-scale exploration of convex functions and bandit convex optimization

We construct a new map from a convex function to a distribution on its domain, with the property that this distribution is a multi-scale exploration of the function. We use this map to solve a decade-old open problem in adversarial bandit convex optimization by showing that the minimax regret for this problem is Õ(poly(n)√T), where n is the dimension and T the number of rounds. This bound is ...

Characterizations of Convex Vector Functions and Optimization

In this paper we characterize nonsmooth convex vector functions by first and second order generalized derivatives. We also prove optimality conditions for convex vector problems involving nonsmooth data.

Journal

Journal: Management Science

Year: 2021

ISSN: 0025-1909, 1526-5501

DOI: https://doi.org/10.1287/mnsc.2020.3851