Paper Title
FORLORN: A Framework for Comparing Offline Methods and Reinforcement Learning for Optimization of RAN Parameters
Paper Authors
Paper Abstract
The growing complexity and capacity demands for mobile networks necessitate innovative techniques for optimizing resource usage. Meanwhile, recent breakthroughs have brought Reinforcement Learning (RL) into the domain of continuous control of real-world systems. As a step towards RL-based network control, this paper introduces a new framework for benchmarking the performance of an RL agent in network environments simulated with ns-3. Within this framework, we demonstrate that an RL agent without domain-specific knowledge can learn how to efficiently adjust Radio Access Network (RAN) parameters to match offline optimization in static scenarios, while also adapting on the fly in dynamic scenarios, in order to improve the overall user experience. Our proposed framework may serve as a foundation for further work in developing workflows for designing RL-based RAN control algorithms.
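The abstract describes an agent/environment loop in which an RL agent with no domain-specific knowledge tunes a RAN parameter against a simulated network. The sketch below illustrates that loop in miniature; it is not the paper's framework. `RanEnv` is a hypothetical stub standing in for the ns-3 simulation (its states, actions, and reward shape are invented for illustration), and the agent is plain tabular Q-learning.

```python
import random

class RanEnv:
    """Toy stand-in for an ns-3 RAN scenario: reward peaks when the
    chosen parameter setting matches the hidden optimum for the
    current load level. (Hypothetical; not the paper's environment.)"""
    ACTIONS = [0, 1, 2, 3]  # candidate parameter settings

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.load = self.rng.randrange(2)  # 0 = low load, 1 = high load

    def reset(self):
        self.load = self.rng.randrange(2)
        return self.load

    def step(self, action):
        optimal = 1 if self.load == 0 else 3  # hidden load->setting map
        reward = -abs(action - optimal)       # 0 is the best reward
        return self.reset(), reward           # load changes each step

def train(episodes=2000, alpha=0.1, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over (load, action) pairs;
    the agent starts with no knowledge of the load->setting mapping."""
    env, rng = RanEnv(seed), random.Random(seed)
    q = {(s, a): 0.0 for s in (0, 1) for a in RanEnv.ACTIONS}
    state = env.reset()
    for _ in range(episodes):
        if rng.random() < eps:
            action = rng.choice(RanEnv.ACTIONS)       # explore
        else:                                          # exploit
            action = max(RanEnv.ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward = env.step(action)
        q[(state, action)] += alpha * (reward - q[(state, action)])
        state = next_state
    return q

q = train()
# Greedy policy extracted from the learned Q-table, per load level.
best = {s: max(RanEnv.ACTIONS, key=lambda a: q[(s, a)]) for s in (0, 1)}
print(best)  # the agent recovers the hidden optimum for each load level
```

In the paper's setting the stub environment would be replaced by an ns-3 simulation exposing RAN parameters and user-experience metrics, but the control loop has the same shape: observe, act, receive a reward, update the policy.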