Paper Title

LeadCache: Regret-Optimal Caching in Networks

Paper Authors

Debjit Paria, Abhishek Sinha

Paper Abstract

We consider an online prediction problem in the context of network caching. Assume that multiple users are connected to several caches via a bipartite network. At any time slot, each user may request an arbitrary file chosen from a large catalog. A user's request at a slot is met if the requested file is cached in at least one of the caches connected to the user. Our objective is to predict, prefetch, and optimally distribute the files on the caches at each slot to maximize the total number of cache hits. The problem is non-trivial due to the non-convex and non-smooth nature of the objective function. In this paper, we propose $\texttt{LeadCache}$ - an efficient online caching policy based on the Follow-the-Perturbed-Leader paradigm. We show that $\texttt{LeadCache}$ is regret-optimal up to a factor of $\tilde{O}(n^{3/8}),$ where $n$ is the number of users. We design two efficient implementations of the $\texttt{LeadCache}$ policy, one based on Pipage rounding and the other based on Madow's sampling, each of which makes precisely one call to an LP-solver per iteration. Furthermore, with a Strong-Law-type assumption, we show that the total number of file fetches under $\texttt{LeadCache}$ remains almost surely finite over an infinite horizon. Finally, we derive an approximately tight regret lower bound using results from graph coloring. We conclude that the learning-based $\texttt{LeadCache}$ policy decisively outperforms the state-of-the-art caching policies both theoretically and empirically.
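To make the two algorithmic ingredients named in the abstract more concrete, the sketch below is an illustration under simplifying assumptions, not the paper's networked $\texttt{LeadCache}$ implementation. It shows (i) a Follow-the-Perturbed-Leader step for a single cache, which perturbs the cumulative request counts with Gaussian noise and caches the top-$C$ files, and (ii) Madow's systematic sampling, which rounds a fractional inclusion vector with integer sum into a configuration of exactly that many files while matching the per-file marginals. The single-cache simplification, the function names, and the learning-rate parameter `eta` are assumptions made purely for illustration.

```python
import numpy as np


def ftpl_single_cache(cum_counts: np.ndarray, capacity: int, eta: float,
                      rng: np.random.Generator) -> np.ndarray:
    """Simplified Follow-the-Perturbed-Leader step for ONE cache
    (illustration only, not the bipartite-network LeadCache policy):
    perturb the cumulative request counts with Gaussian noise and
    cache the `capacity` files with the largest perturbed counts."""
    perturbed = cum_counts + eta * rng.standard_normal(cum_counts.shape)
    top = np.argsort(-perturbed)[:capacity]
    config = np.zeros_like(cum_counts, dtype=float)
    config[top] = 1.0
    return config


def madow_sample(inclusion_probs: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Madow's systematic sampling: given marginals p_i in [0, 1] that
    sum to an integer k, return a 0/1 vector with exactly k ones such
    that item i is selected with probability p_i."""
    k = int(round(inclusion_probs.sum()))
    cum = np.concatenate(([0.0], np.cumsum(inclusion_probs)))
    u = rng.uniform()  # a single uniform draw drives all k selections
    picks = np.zeros(len(inclusion_probs))
    for j in range(k):
        # select the unique i with cum[i] <= u + j < cum[i + 1]
        i = np.searchsorted(cum, u + j, side="right") - 1
        picks[i] = 1.0
    return picks


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    counts = np.array([5.0, 2.0, 9.0, 1.0])          # hypothetical cumulative requests
    print(ftpl_single_cache(counts, capacity=2, eta=1.0, rng=rng))
    print(madow_sample(np.array([0.5, 0.5, 0.75, 0.25]), rng=rng))
```

A usage note on the rounding step: because Madow's sampling preserves the marginals of the fractional solution, its expected number of cache hits matches that of the fractional configuration, which is why a single LP-solver call per iteration followed by such a rounding step suffices in the scheme described above.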
