Paper Title

PA-Cache: Evolving Learning-Based Popularity-Aware Content Caching in Edge Networks

Paper Authors

Qilin Fan, Xiuhua Li, Jian Li, Qiang He, Kai Wang, Junhao Wen

Paper Abstract

As ubiquitous and personalized services grow rapidly, an increasingly large amount of traffic is generated over the network by massive numbers of mobile devices. As a result, content caching is gradually extending to network edges to provide low-latency services, improve quality of service, and reduce redundant data traffic. Compared to conventional content delivery networks, caches in edge networks have smaller sizes and usually have to accommodate more bursty requests. In this paper, we propose an evolving learning-based content caching policy for edge networks, named PA-Cache. It adaptively learns time-varying content popularity and determines which contents should be replaced when the cache is full. Unlike conventional deep neural networks (DNNs), which learn a fine-tuned but possibly outdated or biased prediction model from the entire training dataset at high computational cost, PA-Cache weighs a large set of content features and trains a multi-layer recurrent neural network from shallow to deep as more requests arrive over time. We extensively evaluate the performance of our proposed PA-Cache on real-world traces from a large online video-on-demand service provider. The results show that PA-Cache outperforms existing popular caching algorithms and approximates the optimal algorithm with only a 3.8% performance gap when the cache percentage is 1.0%. PA-Cache also significantly reduces the computational cost compared to conventional DNN-based approaches.
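At its core, the policy described in the abstract scores each cached content by its predicted popularity and, on a miss with a full cache, evicts the content with the lowest score. The following is a minimal Python sketch of that eviction loop under stated assumptions: the class name, method names, and the exponentially decayed request counter used as the popularity estimate are all illustrative stand-ins, not the paper's method; PA-Cache itself obtains the score from a multi-layer recurrent neural network trained on a large set of content features.

```python
# Illustrative sketch of popularity-aware eviction (NOT the paper's exact
# PA-Cache algorithm). The decayed request count below is a hypothetical
# stand-in for the learned multi-layer recurrent popularity predictor.
import time


class PopularityAwareCache:
    def __init__(self, capacity, decay=0.9):
        self.capacity = capacity   # maximum number of cached contents
        self.decay = decay         # per-second decay of old request weight
        self.store = {}            # content_id -> cached payload
        self.score = {}            # content_id -> popularity estimate
        self.last_seen = {}        # content_id -> time of last request

    def _update_popularity(self, content_id, now):
        # Decay the previous score by the elapsed time, then count this request.
        elapsed = now - self.last_seen.get(content_id, now)
        old = self.score.get(content_id, 0.0)
        self.score[content_id] = old * (self.decay ** elapsed) + 1.0
        self.last_seen[content_id] = now

    def request(self, content_id, fetch_from_origin):
        """Return the requested content, caching it on a miss and evicting
        the least popular cached content when the cache is full."""
        now = time.time()
        self._update_popularity(content_id, now)
        if content_id in self.store:               # cache hit
            return self.store[content_id]
        payload = fetch_from_origin(content_id)    # cache miss
        if len(self.store) >= self.capacity:
            # Evict the cached content with the lowest popularity estimate;
            # PA-Cache makes this choice with its learned popularity scores.
            victim = min(self.store, key=lambda cid: self.score.get(cid, 0.0))
            del self.store[victim]
        self.store[content_id] = payload
        return payload
```

For example, `cache = PopularityAwareCache(capacity=2)` followed by a stream of `cache.request(cid, origin_fn)` calls keeps the two contents with the highest decayed request counts; in PA-Cache itself, the score would instead come from the recurrent popularity model evaluated on each content's features as it evolves over time.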
