Theory of Computing
-------------------
Title : A Chasm Between Identity and Equivalence Testing with Conditional Queries
Authors : Jayadev Acharya, Clement L. Canonne, and Gautam Kamath
Volume : 14
Number : 19
Pages : 1-46
URL : https://theoryofcomputing.org/articles/v014a019
Abstract
--------
A recent model for property testing of probability distributions
(Chakraborty et al., ITCS 2013; Canonne et al., SICOMP 2015) enables
tremendous savings in the sample complexity of testing algorithms, by
allowing them to condition the sampling on subsets of the domain. In
particular, Canonne, Ron, and Servedio (SICOMP 2015) showed that, in
this setting, testing identity of an unknown distribution $\D$ (i.e.,
whether $\D=\D^*$ for an explicitly known $\D^*$) can be done
with a _constant_ number of queries (i.e., samples), independent of the
support size $n$ -- in contrast to the $\bigOmega{\sqrt{n}}$ samples
required in the standard sampling model. However, it was unclear whether the same
stark contrast exists for the case of testing equivalence, where
_both_ distributions are unknown. Indeed, while Canonne et al.
established a $\poly(\log n)$-query upper bound for equivalence
testing, since brought down to $\tildeO{\log\log n}$ by
Falahatgar et al. (COLT 2015), whether any dependence on the domain
size $n$ is necessary remained open, and was explicitly posed by
Fischer at the Bertinoro Workshop on Sublinear Algorithms (2014).
In this article,
we answer the question in the affirmative, showing that any testing
algorithm for equivalence must make $\bigOmega{\sqrt{\log\log n}}$
queries in the conditional sampling model. Interestingly, this
demonstrates a gap between identity and equivalence testing, absent in
the standard sampling model (where both problems have sampling
complexity $n^{\Theta(1)}$).
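The conditional sampling model underlying these results can be illustrated with a small sketch. The oracle below is our own minimal rendering of the model, not code from the paper: given a distribution over a finite domain (represented here, for illustration, as a dictionary of probabilities) and a query subset $S$ of the domain, it returns one sample distributed according to the distribution restricted to $S$. Querying $S = $ the whole domain recovers the standard sampling oracle.

```python
import random

def cond_sample(dist, subset):
    """Illustrative COND oracle: draw one sample from `dist`
    conditioned on `subset`.

    dist:   dict mapping domain elements to their probabilities.
    subset: iterable of domain elements; must carry positive mass.
    """
    # Total probability mass that `dist` places on the query set.
    mass = sum(dist.get(x, 0.0) for x in subset)
    if mass == 0.0:
        raise ValueError("conditioning set has zero probability mass")
    # Inverse-transform sampling over the restricted distribution.
    r = random.uniform(0.0, mass)
    acc = 0.0
    for x in subset:
        acc += dist.get(x, 0.0)
        if r <= acc:
            return x
    return x  # guard against floating-point rounding at the boundary
```

For example, conditioning on a two-element set $\{i, j\}$ yields a coin whose bias compares the masses of $i$ and $j$ directly; this kind of pairwise comparison is what lets conditional-query testers sidestep the $n^{\Theta(1)}$ sample complexity of the standard model.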
We also obtain results on the query complexity of uniformity testing
and support-size estimation with conditional samples. In particular,
we answer a question of Chakraborty et al. (ITCS 2013), showing that
_non-adaptive_ uniformity testing indeed requires $\bigOmega{\log n}$
queries in the conditional model. This is an exponential improvement
on their previous lower bound of $\bigOmega{\log\log n}$, and matches
up to polynomial factors their $\poly(\log n)$ upper bound. For the
problem of support-size estimation, we provide both adaptive and
non-adaptive algorithms, with query complexities $\poly(\log\log n)$ and
$\poly(\log n)$, respectively, and complement them with a lower bound
of $\bigOmega{\log n}$ conditional queries for non-adaptive algorithms.
An extended abstract of this paper, with only some of the proofs
and a subset of the results on non-adaptive algorithms, appeared
in the Proceedings of the 2015 Conference on Approximation,
Randomization, and Combinatorial Optimization. Algorithms and
Techniques (APPROX/RANDOM).