Paper Title

Continuous LWE is as Hard as LWE & Applications to Learning Gaussian Mixtures

Paper Authors

Aparna Gupte, Neekon Vafa, Vinod Vaikuntanathan

Paper Abstract

We show direct and conceptually simple reductions between the classical learning with errors (LWE) problem and its continuous analog, CLWE (Bruna, Regev, Song and Tang, STOC 2021). This allows us to bring to bear the powerful machinery of LWE-based cryptography to the applications of CLWE. For example, we obtain the hardness of CLWE under the classical worst-case hardness of the gap shortest vector problem. Previously, this was known only under quantum worst-case hardness of lattice problems. More broadly, with our reductions between the two problems, any future developments to LWE will also apply to CLWE and its downstream applications. As a concrete application, we show an improved hardness result for density estimation for mixtures of Gaussians. In this computational problem, given sample access to a mixture of Gaussians, the goal is to output a function that estimates the density function of the mixture. Under the (plausible and widely believed) exponential hardness of the classical LWE problem, we show that Gaussian mixture density estimation in $\mathbb{R}^n$ with roughly $\log n$ Gaussian components given $\mathsf{poly}(n)$ samples requires time quasi-polynomial in $n$. Under the (conservative) polynomial hardness of LWE, we show hardness of density estimation for $n^\varepsilon$ Gaussians for any constant $\varepsilon > 0$, which improves on Bruna, Regev, Song and Tang (STOC 2021), who show hardness for at least $\sqrt{n}$ Gaussians under polynomial (quantum) hardness assumptions. Our key technical tool is a reduction from classical LWE to LWE with $k$-sparse secrets where the multiplicative increase in the noise is only $O(\sqrt{k})$, independent of the ambient dimension $n$.
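To make the two problems in the abstract concrete, the following is a minimal illustrative sketch (not the paper's reduction) of how one sample of each distribution is generated: a classical LWE sample $(a, b)$ with $b = \langle a, s\rangle + e \bmod q$ over $\mathbb{Z}_q^n$, and a CLWE sample $(y, z)$ with $z = \gamma\langle y, w\rangle + e' \bmod 1$, where $w$ is a secret unit vector and $y$ is Gaussian, following the Bruna-Regev-Song-Tang formulation. The toy parameters (`n`, `q`, `gamma`, `beta`) are chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 16, 257                              # toy dimension and modulus

# --- Classical LWE sample: (a, b) with b = <a, s> + e  (mod q) ---
s = rng.integers(0, q, size=n)              # secret vector in Z_q^n
a = rng.integers(0, q, size=n)              # uniform public vector
e = int(round(rng.normal(0, 2.0)))          # small (discretized) Gaussian error
b = (int(a @ s) + e) % q

# --- CLWE sample: (y, z) with z = gamma * <y, w> + e'  (mod 1) ---
gamma, beta = 2.0, 0.01                     # illustrative CLWE parameters
w = rng.normal(size=n)
w /= np.linalg.norm(w)                      # secret is a unit vector in R^n
y = rng.normal(size=n)                      # public standard Gaussian vector
e_cont = rng.normal(0, beta)                # continuous Gaussian noise
z = (gamma * float(y @ w) + e_cont) % 1.0   # label lives on the torus [0, 1)

print(b, round(z, 4))
```

The structural parallel — a linear function of a secret, plus small noise, reduced modulo the ambient group ($\mathbb{Z}_q$ vs. the torus $\mathbb{R}/\mathbb{Z}$) — is what the paper's reductions exploit to transfer hardness between the two problems.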
