6.896 Probability and Computation (2005)
Notes for Lecture 18
Scribe: Constantinos Daskalakis

1 Basic Definitions
In the previous lecture we defined the notion of a randomness extractor as follows.

Definition 1 A function Ext : {0,1}^n × {0,1}^d → {0,1}^m is a (k, ε)-extractor if, for every random variable X with range {0,1}^n (which we will call a source) of min-entropy H∞(X) ≥ k, the following holds for the statistical difference of Ext(X, U_d) and U_m (here U_r denotes the uniform distribution on {0,1}^r):

    ||Ext(X, U_d) − U_m||_SD ≤ ε,

where:

· the min-entropy of a random variable X with finite range A is defined as H∞(X) = min_{a ∈ A} log₂(1 / Pr[X = a]);

· the statistical difference ||·||_SD of two random variables Y and Z with finite range A is defined as ||Y − Z||_SD = max_{T ⊆ A} |Pr[Y ∈ T] − Pr[Z ∈ T]|.

The general goal is to keep the "high-quality" input randomness (parameter d of the model) as small as possible, while keeping the output randomness (parameter m) as close to the sum k + d as possible. Last time we showed negative results asserting that the best tradeoff we can hope for is of the flavor

    m = k + d − 2 log(1/ε) + Θ(1)   and   d = log(n − k) + 2 log(1/ε) − Θ(1).

Today we will give a construction of randomness extractors, but first we motivate our pursuit with an interesting application. Suppose that A_L is a randomized algorithm for a language L ⊆ {0,1}^l which uses m bits of randomness and has error probability ≤ 1/4. A common way to boost the probability of success of A_L is to execute it t times on independent randomness and output the majority answer. In this case t · m random bits are needed, and the probability of error is bounded by e^{−Ω(t)}.
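To make the two definitions and the majority-vote amplification concrete, here is a small Python sketch; the toy source X and the helper names min_entropy, statistical_difference, and majority_error are illustrative choices, not part of the notes.

```python
import math
from math import comb

def min_entropy(dist):
    """H_inf(X) = min over a of log2(1 / Pr[X = a]), i.e. -log2 of the largest probability."""
    return -math.log2(max(dist.values()))

def statistical_difference(p, q):
    """||Y - Z||_SD = max over events T of |Pr[Y in T] - Pr[Z in T]|.
    The maximizing T is the set where p exceeds q, so this equals half the L1 distance."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(a, 0.0) - q.get(a, 0.0)) for a in support)

# Toy source over 2 bits in which the outcome '11' never occurs.
X = {"00": 1 / 3, "01": 1 / 3, "10": 1 / 3}
U2 = {s: 1 / 4 for s in ("00", "01", "10", "11")}

print(min_entropy(X))                 # log2(3) ~ 1.585, less than the full 2 bits
print(statistical_difference(X, U2))  # 0.25: X is noticeably far from uniform

def majority_error(t, p=1 / 4):
    """Exact failure probability of majority vote over t independent runs,
    each erring independently with probability p (t odd, so no ties)."""
    assert t % 2 == 1
    return sum(comb(t, k) * p ** k * (1 - p) ** (t - k)
               for k in range((t + 1) // 2, t + 1))

# Error decays exponentially in t, at the cost of t * m random bits.
print(majority_error(1), majority_error(11), majority_error(21))
```

Computing the statistical difference as half the L1 distance uses the standard identity: the event T = {a : Pr[Y = a] > Pr[Z = a]} attains the maximum in the definition.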
NOTE: The content of these notes has not been formally reviewed by the lecturer. It is recommended that they are read critically.