Localized Complexities for Transductive Learning
Authors
Abstract
We show two novel concentration inequalities for suprema of empirical processes when sampling without replacement, both of which take the variance of the functions into account. While these inequalities may have broad applications in learning theory in general, we exemplify their significance by studying the transductive setting of learning theory, for which we provide the first excess risk bounds based on the localized complexity of the hypothesis class; these bounds can yield fast rates of convergence in the transductive learning setting as well. We give a preliminary analysis of the localized complexities for the prominent case of kernel classes.
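For orientation only, the classical i.i.d. counterpart of such a variance-sensitive (Bernstein-type) bound for suprema of empirical processes is Bousquet's version of Talagrand's inequality; the paper's contribution concerns analogous inequalities for sampling without replacement, whose exact statements and constants differ from the sketch below. If the X_i are i.i.d., every f in the class F satisfies E f(X_i) = 0 and |f| <= b almost surely, and sigma^2 >= sup_f Var f(X_i), then for Z = \sup_{f \in \mathcal{F}} \sum_{i=1}^{n} f(X_i) and every t > 0,

\[
\Pr\Big( Z \;\ge\; \mathbb{E}Z \;+\; \sqrt{2t\,\big(n\sigma^2 + 2b\,\mathbb{E}Z\big)} \;+\; \tfrac{bt}{3} \Big) \;\le\; e^{-t}.
\]

The middle term grows with the variance sigma^2 rather than only with the range b, which is what makes localized (fast-rate) analyses possible.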
Similar resources
Transductive Rademacher Complexities for Learning Over a Graph
Recent investigations [12, 2, 8, 5, 6] and [11, 9] indicate the use of a probabilistic ('learning') perspective for tasks defined on a single graph, as opposed to the traditional algorithmic ('computational') point of view. This note discusses the use of Rademacher complexities in this setting, and illustrates the use of Kruskal's algorithm for transductive inference based on a nearest neighbo...
Relax and Localize: From Value to Algorithms
We show a principled way of deriving online learning algorithms from a minimax analysis. Various upper bounds on the minimax value, previously thought to be non-constructive, are shown to yield algorithms. This allows us to seamlessly recover known methods and to derive new ones. Our framework also captures such "unorthodox" methods as Follow the Perturbed Leader and the R^2 forecaster. We emphas...
Transductive Rademacher Complexity and Its Applications
We develop a technique for deriving data-dependent error bounds for transductive learning algorithms based on transductive Rademacher complexity. Our technique is based on a novel general error bound for transduction in terms of transductive Rademacher complexity, together with a novel bounding technique for Rademacher averages for particular algorithms, in terms of their “unlabeled-labeled” re...
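For reference, a commonly used form of transductive Rademacher complexity (following El-Yaniv and Pechyony) is sketched below; the normalization used in the cited paper may differ. With m labeled and u unlabeled points, and with V a subset of R^{m+u} collecting the output vectors of the hypotheses on the full sample,

\[
\mathfrak{R}_{m+u}(\mathcal{V}, p) \;=\; \Big(\tfrac{1}{m} + \tfrac{1}{u}\Big)\, \mathbb{E}_{\sigma}\Big[\, \sup_{v \in \mathcal{V}} \; \sigma \cdot v \,\Big],
\]

where each sigma_i independently takes the values +1 and -1 with probability p and 0 with probability 1 - 2p, a standard choice being p = mu/(m+u)^2.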
Transductive Gaussian Process Regression with Automatic Model Selection
In contrast to the standard inductive inference setting of predictive machine learning, in real-world learning problems the test instances are often already available at training time. Transductive inference tries to improve the predictive accuracy of learning algorithms by making use of the information contained in these test instances. Although this description of transductive inference appli...
Stable Transductive Learning
We develop a new error bound for transductive learning algorithms. The slack term in the new bound is a function of a relaxed notion of transductive stability, which measures the sensitivity of the algorithm to most pairwise exchanges of training and test set points. Our bound is based on a novel concentration inequality for symmetric functions of permutations. We also present a simple sampling...
Publication date: 2014