Paper Title
Adaptivity Gaps for the Stochastic Boolean Function Evaluation Problem
Paper Authors
Paper Abstract
We consider the Stochastic Boolean Function Evaluation (SBFE) problem, where the task is to efficiently evaluate a known Boolean function $f$ on an unknown bit string $x$ of length $n$. We determine $f(x)$ by sequentially testing the variables of $x$, each of which is associated with a cost of testing and an independent probability of being true. If a strategy for solving the problem is adaptive, in the sense that its next test can depend on the outcomes of previous tests, it has lower expected cost but may take up to exponential space to store. In contrast, a non-adaptive strategy may have higher expected cost but can be stored in linear space and benefit from parallel resources. The adaptivity gap, the ratio between the expected costs of the optimal non-adaptive and optimal adaptive strategies, is a measure of the benefit of adaptivity. We present lower bounds on the adaptivity gap for the SBFE problem for popular classes of Boolean functions, including read-once DNF formulas, read-once formulas, and general DNFs. Our bounds range from $\Omega(\log n)$ to $\Omega(n/\log n)$, contrasting with recent $O(1)$ gaps shown for symmetric functions and linear threshold functions.
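To make the quantities being compared concrete, the following is a minimal illustrative sketch, not taken from the paper: it brute-forces the expected cost of the best fixed test order (non-adaptive) and of the optimal adaptive strategy for a toy SBFE instance, and reports their ratio, i.e., the adaptivity gap of that instance. The helper names (`determined`, `fixed_order_cost`, `optimal_adaptive_cost`) and the example costs, probabilities, and read-once DNF are assumptions made purely for illustration.

```python
# Illustrative sketch: brute-force adaptivity gap of a tiny SBFE instance.
# Variable i costs costs[i] to test and is independently true with prob. probs[i].
from itertools import permutations, product

def determined(partial, f, n):
    """Return (True, value) if f is forced by the partial assignment {index: bit}."""
    seen = set()
    for completion in product([0, 1], repeat=n):
        bits = tuple(partial.get(i, completion[i]) for i in range(n))
        seen.add(f(bits))
        if len(seen) > 1:
            return False, None
    return True, seen.pop()

def fixed_order_cost(order, costs, probs, f, n):
    """Expected cost of a non-adaptive strategy: test in `order`, stop once f is forced."""
    total = 0.0
    for bits in product([0, 1], repeat=n):          # the unknown true string x
        weight = 1.0
        for i, b in enumerate(bits):
            weight *= probs[i] if b else 1 - probs[i]
        partial, cost = {}, 0.0
        for i in order:
            if determined(partial, f, n)[0]:
                break
            partial[i] = bits[i]
            cost += costs[i]
        total += weight * cost
    return total

def optimal_adaptive_cost(costs, probs, f, n):
    """Minimum expected cost over all adaptive strategies (exponential-time DP)."""
    memo = {}
    def solve(state):                               # state entries: 0, 1, or None
        if state in memo:
            return memo[state]
        partial = {i: v for i, v in enumerate(state) if v is not None}
        if determined(partial, f, n)[0]:
            memo[state] = 0.0
            return 0.0
        best = float("inf")
        for i in range(n):
            if state[i] is None:
                one = solve(state[:i] + (1,) + state[i + 1:])
                zero = solve(state[:i] + (0,) + state[i + 1:])
                best = min(best, costs[i] + probs[i] * one + (1 - probs[i]) * zero)
        memo[state] = best
        return best
    return solve((None,) * n)

if __name__ == "__main__":
    # Hypothetical read-once DNF on 4 variables: (x0 AND x1) OR (x2 AND x3).
    def f(x):
        return (x[0] and x[1]) or (x[2] and x[3])
    n, costs, probs = 4, [1.0, 3.0, 1.0, 3.0], [0.9, 0.5, 0.9, 0.5]
    best_nonadaptive = min(fixed_order_cost(p, costs, probs, f, n)
                           for p in permutations(range(n)))
    best_adaptive = optimal_adaptive_cost(costs, probs, f, n)
    print("optimal non-adaptive:", best_nonadaptive)
    print("optimal adaptive:    ", best_adaptive)
    print("adaptivity gap:      ", best_nonadaptive / best_adaptive)
```

Both searches take time exponential in $n$ and are only meant to make the definitions concrete; the paper's results concern asymptotic lower bounds on the gap, not this kind of exhaustive computation.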