On Approximating the Entropy of Polynomial Mappings
- Zeev Dvir,
- Dan Gutfreund,
- Guy Rothblum,
- Salil Vadhan
Proceedings of the Second Symposium on Innovations in Computer Science (ICS 2011)
Published by Tsinghua University Press
We investigate the complexity of the following computational problem:

Polynomial Entropy Approximation (PEA): Given a low-degree polynomial mapping p : F^n → F^m, where F is a finite field, approximate the output entropy H(p(U_n)), where U_n is the uniform distribution on F^n and H may be any of several entropy measures.

We show:

- Approximating the Shannon entropy of degree-3 polynomials p : F_2^n → F_2^m over F_2 to within an additive constant (or even n^{0.9}) is complete for SZKP_L, the class of problems having statistical zero-knowledge proofs where the honest verifier and its simulator are computable in logarithmic space. (SZKP_L contains most of the natural problems known to be in the full class SZKP.)
- For prime fields F ≠ F_2 and homogeneous quadratic polynomials p : F^n → F^m, there is a probabilistic polynomial-time algorithm that distinguishes the case that p(U_n) has entropy smaller than k from the case that p(U_n) has min-entropy (or even Rényi entropy) greater than (2 + o(1))k.
- For degree-d polynomials p : F_2^n → F_2^m, there is a polynomial-time algorithm that distinguishes the case that p(U_n) has max-entropy smaller than k (where the max-entropy of a random variable is the logarithm of its support size) from the case that p(U_n) has max-entropy at least (1 + o(1))·kd (for fixed d and large k).
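To make the quantities in PEA concrete, the following sketch brute-forces the output distribution of a tiny polynomial map over F_2 and computes the three entropy measures mentioned above. The specific degree-3 map p below is a hypothetical example chosen for illustration; it is not taken from the paper, and brute force is of course infeasible at the input sizes the hardness results concern.

```python
from collections import Counter
from itertools import product
from math import log2

# Hypothetical degree-3 polynomial map p : F_2^4 -> F_2^2 (illustration only).
# Over F_2, multiplication is AND and addition is XOR.
def p(x):
    x1, x2, x3, x4 = x
    return (
        (x1 * x2 * x3) ^ x4,  # a degree-3 monomial plus a linear term
        (x2 * x3 * x4) ^ x1,
    )

n = 4
# Output distribution of p(U_n): enumerate all 2^n inputs.
counts = Counter(p(x) for x in product((0, 1), repeat=n))
probs = [c / 2**n for c in counts.values()]

shannon = -sum(q * log2(q) for q in probs)  # Shannon entropy H(p(U_n))
min_entropy = -log2(max(probs))             # min-entropy: -log of the heaviest atom
max_entropy = log2(len(counts))             # max-entropy: log of the support size

print(f"Shannon: {shannon:.3f}, min: {min_entropy:.3f}, max: {max_entropy:.3f}")
```

The printed values illustrate the general ordering min-entropy ≤ Shannon entropy ≤ max-entropy; the gaps between these measures are exactly what the distinguishing algorithms in the second and third results exploit.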