EECS Joint Colloquium Distinguished Lecture Series
Prof. Luca Trevisan
CS Department, Columbia University
Wednesday, February 16, 2000
Hewlett Packard Auditorium, 306 Soda Hall
Randomization is a very powerful tool in the design of algorithms and data structures. Efficient randomized algorithms are known for primality testing (useful in cryptography), the factorization of polynomials (useful in coding theory), the approximation of "counting" problems, and other important applications, yet no efficient deterministic algorithm is known for these problems.
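As a concrete illustration of such a randomized algorithm, here is a sketch of the Miller-Rabin primality test: each random base either proves the input composite or leaves it "probably prime", and repeated independent trials drive the error probability down exponentially. (The error bound in the docstring is the standard one for this test; the function name and round count are illustrative choices.)

```python
import random

def is_probably_prime(n, rounds=20):
    """Miller-Rabin randomized primality test.

    Always returns True when n is prime; returns True for a
    composite n with probability at most 4**(-rounds).
    """
    if n < 2:
        return False
    # Handle small primes and obvious composites directly.
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)   # random base: this is where randomness enters
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # a is a witness that n is composite
    return True
```

Since the talk was given, a deterministic polynomial-time primality test (AKS, 2002) has been found, but the randomized test above remains far faster in practice.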
On the other hand, the use of randomized algorithms requires the generation of random bits, a task that can be complex, costly, and slow. Physical sources of randomness typically produce random bits at a slow rate, and they do not guarantee uniform and independent random bits; rather, their output exhibits biases and dependencies.
This motivates two general questions. The first question is how to "purify" the randomness coming from an imperfect source, i.e., how to efficiently convert a distribution that contains some entropy (but is biased and far from uniform) into an almost uniform distribution. This problem is solved by procedures called "randomness extractors".
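The simplest instance of this idea is von Neumann's classical trick, sketched below. It only handles a restricted kind of imperfect source (independent flips of a coin with unknown bias, not the correlated weak sources that modern extractors address), but it shows the shape of the problem: biased bits in, perfectly unbiased bits out.

```python
def von_neumann_extract(bits):
    """Von Neumann's extractor for an independent biased coin.

    Read the input in pairs.  If the coin shows heads with
    unknown probability p, then the pairs (0,1) and (1,0) each
    occur with probability p*(1-p), so emitting the first bit of
    every unequal pair yields perfectly unbiased output; equal
    pairs carry no usable randomness and are discarded.
    """
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)   # (0,1) -> 0, (1,0) -> 1
    return out
```

The price is rate: on average only p(1-p) output bits per input pair, and the trick fails entirely if successive flips are correlated, which is exactly the gap that general randomness extractors close.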
The other question is how to run a randomized algorithm that requires a certain (large) number of random bits by using a source that only outputs a smaller number of random bits. This problem is solved by procedures called "pseudorandom generators" that stretch a short random input into a much larger output that still "looks random" to computationally bounded observers.
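The interface of such a generator can be illustrated as follows. The sketch below uses Python's built-in Mersenne Twister as the stretching procedure; it is emphatically not a pseudorandom generator in the complexity-theoretic sense of the talk (efficient observers can distinguish its output from uniform), but the input/output contract is the same: a few truly random seed bits in, many "random-looking" bits out.

```python
import random

def stretch(seed, out_len):
    """Illustration of the PRG interface only.

    A short truly random seed deterministically fixes the entire
    output stream; the algorithm consuming these bits never sees
    the difference in length.  A genuine pseudorandom generator
    must additionally fool every computationally bounded observer,
    which the Mersenne Twister used here does not.
    """
    rng = random.Random(seed)                 # state determined entirely by the seed
    return [rng.getrandbits(1) for _ in range(out_len)]
```

A randomized algorithm that needs, say, a million random bits can then be run with a seed of a few hundred bits: feed `stretch(seed, 10**6)` to the algorithm in place of fresh coin flips.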
Research in these two directions proceeded independently for twenty years.
Eventually, in 1998 I discovered an unsuspected connection: every pseudorandom generator construction of a certain kind, when properly reinterpreted, is also a randomness extractor. As it turned out, known constructions of pseudorandom generators, when seen through this connection, give randomness extractors that are simpler and more efficient (they extract more randomness from weaker random sources) than previously known randomness extractors. This result stimulated a renewed interest in both problems, which are now essentially unified. Two groups of researchers (Raz, Reingold and Vadhan; and Impagliazzo, Shaltiel and Wigderson) have recently designed improved randomness extractors by means of improved pseudorandom generator constructions.
In this talk I will describe the problems of pseudorandom generation and randomness extraction, their history, and some additional results of mine in both directions. I will then present the connection and its applications.