Learning-Based Robust Optimization: Procedures and Statistical Guarantees
Authors
L. Jeff Hong, Zhiyuan Huang, Henry Lam
Abstract
Robust optimization (RO) is a common approach to tractably obtain safeguarding solutions for optimization problems with uncertain constraints. In this paper, we study a statistical framework to integrate data into RO, based on learning a prediction set using (combinations of) geometric shapes that are compatible with established RO tools, and a simple data-splitting validation step that achieves finite-sample, nonparametric guarantees on feasibility. We demonstrate how the required sample size to achieve feasibility at a given confidence level is independent of the dimensions of both the decision space and the probability space governing the stochasticity, and we discuss some approaches to improve objective performance while maintaining these dimension-free guarantees. This paper was accepted by Yinyu Ye, optimization.
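To make the data-splitting validation step concrete, here is a minimal sketch of a learn-then-calibrate procedure of this flavor. It is an illustration under stated assumptions, not the paper's exact algorithm: the prediction set is assumed ellipsoidal, its shape is learned from one half of the data, and its size is calibrated on the other half by picking the order statistic whose index satisfies a binomial tail condition, which is what yields a finite-sample, nonparametric coverage guarantee P(coverage >= 1 - eps) >= 1 - delta. The function name `calibrated_ellipsoid` and all parameter choices are illustrative.

```python
import numpy as np
from scipy import stats

def calibrated_ellipsoid(data, eps=0.05, delta=0.05, seed=None):
    """Illustrative learn-then-calibrate prediction set (a sketch, not the
    paper's exact procedure). Returns (mean, inv_cov, radius) defining the
    set {x : (x - mean)' inv_cov (x - mean) <= radius}."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    idx = rng.permutation(len(data))
    half = len(data) // 2
    train, valid = data[idx[:half]], data[idx[half:]]

    # Step 1: learn the shape on one split (ellipsoid from mean/covariance).
    mean = train.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(train, rowvar=False))

    # Step 2: calibrate the size on the held-out split. Taking the k-th
    # smallest (squared) Mahalanobis distance gives
    # P(coverage >= 1 - eps) >= 1 - delta whenever k is the smallest index
    # with P(Binomial(n, 1 - eps) >= k) <= delta.
    d = valid - mean
    t = np.sort(np.einsum('ij,jk,ik->i', d, inv_cov, d))
    n = len(t)
    ks = np.arange(1, n + 1)
    ok = stats.binom.sf(ks - 1, n, 1 - eps) <= delta  # P(Bin >= k) <= delta
    if not ok.any():
        raise ValueError("validation split too small for this eps, delta")
    k = ks[ok][0]
    return mean, inv_cov, t[k - 1]

# Usage: calibrate a 95%-coverage set at 95% confidence on synthetic data.
X = np.random.default_rng(0).normal(size=(2000, 2))
mu, A, r = calibrated_ellipsoid(X, eps=0.05, delta=0.05, seed=1)
```

Note that the binomial condition involves only the validation-set size n, eps, and delta, so the sample size needed for the feasibility guarantee depends on neither the decision dimension nor the data dimension, matching the dimension-free claim above.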
Similar Resources
Robust portfolio optimization with derivative insurance guarantees
Robust portfolio optimization aims to maximize the worst-case portfolio return given that the asset returns are allowed to vary within a prescribed uncertainty set. If the uncertainty set is not too large, the resulting portfolio performs well under normal market conditions. However, its performance may substantially degrade in the presence of market crashes, that is, if the asset returns mater...
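As a concrete instance of the worst-case objective described above, when the uncertainty set is an ellipsoid {r : (r - mu)' inv(Sigma) (r - mu) <= kappa^2}, the inner minimization over returns has the closed form w'mu - kappa * sqrt(w' Sigma w). The sketch below evaluates this for a fixed portfolio; the numbers and the ellipsoidal set are illustrative assumptions, not the setup of the cited paper.

```python
import numpy as np

def worst_case_return(w, mu, sigma, kappa):
    """Worst-case return of portfolio w when asset returns range over the
    ellipsoid {r : (r - mu)' inv(sigma) (r - mu) <= kappa**2}.
    Closed form: w'mu - kappa * sqrt(w' sigma w). (Illustrative sketch.)"""
    w, mu = np.asarray(w, float), np.asarray(mu, float)
    return float(w @ mu - kappa * np.sqrt(w @ np.asarray(sigma, float) @ w))

# Example: equal-weight portfolio over three assets (hypothetical numbers).
w = np.array([1/3, 1/3, 1/3])
mu = np.array([0.06, 0.05, 0.04])
sigma = np.diag([0.04, 0.02, 0.01])
print(worst_case_return(w, mu, sigma, kappa=1.0))  # nominal 0.05 minus a risk penalty
```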
Multiple Optimality Guarantees in Statistical Learning
Multiple Optimality Guarantees in Statistical Learning, by John C. Duchi. Doctor of Philosophy in Computer Science and the Designated Emphasis in Communication, Computation, and Statistics, University of California, Berkeley. Professor Michael I. Jordan, Co-chair; Professor Martin J. Wainwright, Co-chair. Classically, the performance of estimators in statistical learning problems is measured in terms ...
Incremental Learning-to-Learn with Statistical Guarantees
In learning-to-learn the goal is to infer a learning algorithm that works well on a class of tasks sampled from an unknown meta distribution. In contrast to previous work on batch learning-to-learn, we consider a scenario where tasks are presented sequentially and the algorithm needs to adapt incrementally to improve its performance on future tasks. Key to this setting is for the algorithm to ra...
Adaptive Learning with Robust Generalization Guarantees
The traditional notion of generalization—i.e., learning a hypothesis whose empirical error is close to its true error—is surprisingly brittle. As has recently been noted [DFH+15b], even if several algorithms have this guarantee in isolation, the guarantee need not hold if the algorithms are composed adaptively. In this paper, we study three notions of generalization—increasing in strength—that...
Journal: Management Science
Year: 2021
ISSN: 0025-1909, 1526-5501
DOI: https://doi.org/10.1287/mnsc.2020.3640