PAC-Bayes with Minimax for Confidence-Rated Transduction

Authors

  • Akshay Balsubramani
  • Yoav Freund
Abstract

We consider using an ensemble of binary classifiers for transductive prediction, when unlabeled test data are known in advance. We derive minimax optimal rules for confidence-rated prediction in this setting. By using PAC-Bayes analysis on these rules, we obtain data-dependent performance guarantees without distributional assumptions on the data. Our analysis techniques are readily extended to a setting in which the predictor is allowed to abstain.
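As a concrete illustration of the setting, the sketch below is a minimal, hypothetical Python/NumPy example of confidence-rated transductive prediction with abstention: an ensemble's ±1 predictions on the unlabeled test set are combined by a weighted vote, and the rule abstains when the vote margin is small. The weighted-vote rule, the weight vector, the threshold `tau`, and the function name are assumptions of this sketch, not the minimax-optimal rule derived in the paper.

```python
import numpy as np

# Minimal sketch of confidence-rated transductive prediction with an ensemble
# of binary classifiers. This is an illustrative stand-in, NOT the paper's
# minimax-optimal rule; the weights and threshold below are assumed.

def predict_with_abstention(votes, weights, tau=0.2):
    """votes:   (n_test, n_classifiers) array of +/-1 predictions on the
                unlabeled test set (known in advance, i.e. transductive).
       weights: nonnegative ensemble weights, e.g. a PAC-Bayes posterior.
       tau:     abstention threshold on the absolute weighted margin (assumed).

       Returns +1/-1 predictions, or 0 where the rule abstains."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()        # normalize to a distribution
    margin = votes @ weights                 # weighted vote in [-1, 1]
    preds = np.sign(margin)
    preds[np.abs(margin) < tau] = 0          # abstain on low-confidence points
    return preds

# Toy usage: 3 classifiers, 4 unlabeled test points.
votes = np.array([[+1, +1, -1],
                  [-1, -1, -1],
                  [+1, -1, -1],
                  [+1, +1, +1]])
print(predict_with_abstention(votes, weights=[0.5, 0.3, 0.2]))
# -> [ 1. -1.  0.  1.]  (third point falls below the margin threshold)
```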

Related references

Minimax Estimator of a Lower Bounded Parameter of a Discrete Distribution under a Squared Log Error Loss Function

The problem of estimating the parameter θ, when it is restricted to a lower-bounded interval, is considered for a class of discrete distributions including the Binomial, Negative Binomial, and discrete Weibull. We give necessary and sufficient conditions under which the Bayes estimator of θ with respect to a two-point boundary-supported prior is minimax under the squared log error loss function...

Tight Bounds for the Expected Risk of Linear Classifiers and PAC-Bayes Finite-Sample Guarantees

We analyze the expected risk of linear classifiers for a fixed weight vector in the “minimax” setting. That is, we analyze the worst-case risk among all data distributions with a given mean and covariance. We provide a simpler proof of the tight polynomial-tail bound for general random variables. For sub-Gaussian random variables, we derive a novel tight exponential-tail bound. We also provide n...

Truncated Linear Minimax Estimator of a Power of the Scale Parameter in a Lower-Bounded Parameter Space

Minimax estimation problems with a restricted parameter space have received increasing interest within the last two decades. Some authors derived minimax and admissible estimators of bounded parameters under squared error loss and scale-invariant squared error loss. In some truncated estimation problems the most natural estimator to be considered is the truncated version of a classic...

Resolution, Nonlinearity, Constraints, Confidence, Design, Minimax and Bayes

Of those things that can be estimated well in an inverse problem, which are best to estimate? Backus-Gilbert resolution theory answers a version of this question for linear (or linearized) inverse problems in Hilbert spaces with additive zero-mean errors with known, finite covariance, and no constraints on the unknown other than the data. This paper extends Backus-Gilbert resolution: it defines...

Invariant Empirical Bayes Confidence Interval for Mean Vector of Normal Distribution and its Generalization for Exponential Family

Based on a given Bayesian model of a multivariate normal with known variance matrix, we find an empirical Bayes confidence interval for the components of the mean vector, which have normal distributions. We obtain this empirical Bayes confidence interval in a form conditional on an ancillary statistic. In both cases (i.e., the conditional and unconditional empirical Bayes confidence intervals), the empiri...

Journal:
  • CoRR

Volume abs/1501.03838  Issue

Pages  -

Publication date 2015