Separating Distribution-Free and Mistake-Bound Learning Models over the Boolean Domain

Author

  • Avrim Blum
Abstract

Two of the most commonly used models in computational learning theory are the distribution-free model in which examples are chosen from a fixed but arbitrary distribution, and the absolute mistake-bound model in which examples are presented in an arbitrary order. Over the Boolean domain {0, 1}, it is known that if the learner is allowed unlimited computational resources then any concept class learnable in one model is also learnable in the other. In addition, any polynomial-time learning algorithm for a concept class in the mistake-bound model can be transformed into one that learns the class in the distribution-free model. This paper shows that if one-way functions exist, then the mistake-bound model is strictly harder than the distribution-free model for polynomial-time learning. Specifically, given a one-way function, we show how to create a concept class over {0, 1} that is learnable in polynomial time in the distribution-free model, but not in the absolute mistake-bound model. In addition, the concept class remains hard to learn in the mistake-bound model even if the learner is allowed a polynomial number of membership queries. The concepts considered are based upon the Goldreich, Goldwasser and Micali random function construction [9] and involve creating the following new cryptographic object: an exponentially long sequence of strings σ1, σ2, . . . , σr over {0, 1} that is hard to compute in one direction (given σi one cannot compute σj for j < i) but is easy to compute and even make random-access jumps in the other direction (given σi and j > i one can compute σj , even if j is exponentially larger than i). Similar sequences considered previously [6, 7] did not allow random-access jumps forward without knowledge of a seed allowing one to compute backwards as well.
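The backward-hard, forward-easy sequences described above can be illustrated with the simpler hash-chain style construction the abstract attributes to prior work [6, 7]: define σ_i = H^i(seed) for a one-way function H, so anyone holding σ_i can step forward to σ_j for j > i, while going backward would require inverting H. The sketch below uses SHA-256 as an illustrative stand-in for a one-way function; note that forward jumps here cost j − i steps, whereas the paper's GGM-based construction additionally supports cheap random-access jumps exponentially far forward.

```python
import hashlib

def H(x: bytes) -> bytes:
    """One-way step; SHA-256 stands in for a generic one-way function."""
    return hashlib.sha256(x).digest()

def sigma(seed: bytes, i: int) -> bytes:
    """sigma_i = H^i(seed): the i-th string in the sequence."""
    s = seed
    for _ in range(i):
        s = H(s)
    return s

def jump_forward(sigma_i: bytes, steps: int) -> bytes:
    """Given sigma_i alone (no seed), compute sigma_{i+steps}."""
    s = sigma_i
    for _ in range(steps):
        s = H(s)
    return s

seed = b"example-seed"   # hypothetical seed for illustration
s3 = sigma(seed, 3)
# Forward is easy: sigma_5 follows from sigma_3 by two applications of H.
assert jump_forward(s3, 2) == sigma(seed, 5)
# Backward (recovering sigma_2 from sigma_3) would require inverting H.
```

This captures only the one-way ordering of the sequence; the paper's contribution is precisely that its construction allows random-access forward jumps without the linear cost of iterating H.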


Similar papers

Attribute-Efficient Learning and Weight-Degree Tradeoffs for Polynomial Threshold Functions

We study the challenging problem of learning decision lists attribute-efficiently, giving both positive and negative results. Our main positive result is a new tradeoff between the running time and mistake bound for learning length-k decision lists over n Boolean variables. When the allowed running time is relatively high, our new mistake bound improves significantly on the mistake bound of the...


Adaptive Regularization for Similarity Measures

Algorithms for learning distributions over weight-vectors, such as AROW (Crammer et al., 2009), were recently shown empirically to achieve state-of-the-art performance on various problems, with strong theoretical guarantees. Extending these algorithms to matrix models poses challenges, since the number of free parameters in the covariance of the distribution scales as n with the dimension n of the...


Learning Parities in the Mistake-Bound model

We study the problem of learning parity functions that depend on at most k variables (k-parities) attribute-efficiently in the mistake-bound model. We design a simple, deterministic, polynomial-time algorithm for learning k-parities with mistake bound O(n^{1−1/k}). This is the first polynomial-time algorithm to learn ω(1)-parities in the mistake-bound model with mistake bound o(n). Using the stan...


Image alignment via kernelized feature learning

Machine learning is an application of artificial intelligence that is able to automatically learn and improve from experience without being explicitly programmed. The primary assumption of most machine learning algorithms is that the training set (source domain) and the test set (target domain) follow the same probability distribution. However, in most of the real-world application...



Journal title:

Volume   Issue

Pages  -

Publication date 1990