Paper Title
Two-Timescale Stochastic Gradient Descent in Continuous Time with Applications to Joint Online Parameter Estimation and Optimal Sensor Placement
Paper Authors
Paper Abstract
In this paper, we establish the almost sure convergence of two-timescale stochastic gradient descent algorithms in continuous time under general noise and stability conditions, extending well-known results in discrete time. We analyse algorithms with additive noise and those with non-additive noise. In the non-additive case, our analysis is carried out under the assumption that the noise is a continuous-time Markov process, controlled by the algorithm states. The algorithms we consider can be applied to a broad class of bilevel optimisation problems. We study one such problem in detail, namely, the problem of joint online parameter estimation and optimal sensor placement for a partially observed diffusion process. We demonstrate how this can be formulated as a bilevel optimisation problem, and propose a solution in the form of a continuous-time, two-timescale, stochastic gradient descent algorithm. Furthermore, under suitable conditions on the latent signal, the filter, and the filter derivatives, we establish almost sure convergence of the online parameter estimates and optimal sensor placements to the stationary points of the asymptotic log-likelihood and asymptotic filter covariance, respectively. We also provide numerical examples, illustrating the application of the proposed methodology to a partially observed Beneš equation, and a partially observed stochastic advection-diffusion equation.
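To give a concrete feel for the kind of algorithm the abstract describes, the following is a minimal sketch of a two-timescale stochastic gradient descent scheme with additive noise, obtained by an Euler–Maruyama-style discretisation of the continuous-time dynamics. The toy bilevel objectives, the step-size exponents, and the noise scale are illustrative assumptions chosen for this sketch; they are not taken from the paper, which treats a far more general setting (including the filtering application).

```python
import numpy as np

# Toy bilevel problem (illustrative assumption, not from the paper):
#   inner (fast):  min_x     (x - theta)^2 / 2   ->  x*(theta) = theta
#   outer (slow):  min_theta (x*(theta) - 1)^2 / 2  ->  theta* = 1
rng = np.random.default_rng(0)

theta, x = 5.0, -3.0   # slow and fast iterates, arbitrary initial values
sigma = 0.1            # additive noise scale
n_steps = 50_000

for n in range(1, n_steps + 1):
    beta = n ** -0.6    # fast step size
    gamma = n ** -0.9   # slow step size; gamma / beta -> 0 separates timescales
    # Fast iterate tracks the inner minimiser x*(theta) = theta.
    x += -beta * (x - theta) + sigma * np.sqrt(beta) * rng.standard_normal()
    # Slow iterate descends the outer objective, evaluated at the current
    # fast iterate (quasi-static approximation of the inner problem).
    theta += -gamma * (x - 1.0) + sigma * np.sqrt(gamma) * rng.standard_normal()

print(theta, x)  # both iterates approach the stationary point theta* = 1
```

The essential design point is the ratio of step sizes: the fast iterate sees an effectively frozen slow iterate and converges to the inner minimiser, while the slow iterate descends the outer objective along that quasi-stationary trajectory; the paper's contribution is establishing almost sure convergence of the continuous-time analogue of such schemes under general noise and stability conditions.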