Paper Title
Risk-Sensitive Reinforcement Learning: Near-Optimal Risk-Sample Tradeoff in Regret
Paper Authors
Paper Abstract
We study risk-sensitive reinforcement learning in episodic Markov decision processes with unknown transition kernels, where the goal is to optimize the total reward under the risk measure of exponential utility. We propose two provably efficient model-free algorithms, Risk-Sensitive Value Iteration (RSVI) and Risk-Sensitive Q-learning (RSQ). These algorithms implement a form of risk-sensitive optimism in the face of uncertainty, which adapts to both risk-seeking and risk-averse modes of exploration. We prove that RSVI attains an $\tilde{O}\big(\lambda(|\beta| H^2) \cdot \sqrt{H^{3} S^{2} A T} \big)$ regret, while RSQ attains an $\tilde{O}\big(\lambda(|\beta| H^2) \cdot \sqrt{H^{4} S A T} \big)$ regret, where $\lambda(u) = (e^{3u}-1)/u$ for $u>0$. In the above, $\beta$ is the risk parameter of the exponential utility function, $S$ the number of states, $A$ the number of actions, $T$ the total number of timesteps, and $H$ the episode length. On the flip side, we establish a regret lower bound showing that the exponential dependence on $|\beta|$ and $H$ is unavoidable for any algorithm with an $\tilde{O}(\sqrt{T})$ regret (even when the risk objective is on the same scale as the original reward), thus certifying the near-optimality of the proposed algorithms. Our results demonstrate that incorporating risk awareness into reinforcement learning necessitates an exponential cost in $|\beta|$ and $H$, which quantifies the fundamental tradeoff between risk sensitivity (related to aleatoric uncertainty) and sample efficiency (related to epistemic uncertainty). To the best of our knowledge, this is the first regret analysis of risk-sensitive reinforcement learning with the exponential utility.
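For concreteness, the exponential-utility (entropic risk) objective referred to above is conventionally written as follows; the abstract does not spell it out, so the display below should be read as the standard formulation from the risk-sensitive RL literature rather than a verbatim statement from the paper:

$$ V^{\pi} \;=\; \frac{1}{\beta}\,\log \mathbb{E}^{\pi}\Big[\exp\Big(\beta \sum_{h=1}^{H} r_h\Big)\Big], $$

where $\beta > 0$ corresponds to risk-seeking behavior, $\beta < 0$ to risk-averse behavior, and letting $\beta \to 0$ recovers the risk-neutral objective $\mathbb{E}^{\pi}\big[\sum_{h=1}^{H} r_h\big]$. Consistently, the prefactor in the regret bounds satisfies $\lambda(u) = (e^{3u}-1)/u \to 3$ as $u \to 0$, so for small $|\beta| H^2$ the bounds reduce to $\sqrt{T}$-type rates with only polynomial dependence on $H$, $S$, and $A$, while for large $|\beta| H^2$ the prefactor grows exponentially, which is precisely the risk-sample tradeoff certified by the lower bound.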