Title
On convergence and optimality of maximum-likelihood APA
Authors
Abstract
The affine projection algorithm (APA) is a well-known algorithm in adaptive filtering applications such as acoustic echo cancellation. APA relies on three parameters: $P$ (projection order), $\mu$ (step size) and $\delta$ (regularization parameter). It is known that running APA with a fixed set of parameters leads to a tradeoff between convergence speed and accuracy. Therefore, various methods for adaptively setting the parameters have been proposed in the literature. Inspired by maximum-likelihood (ML) estimation, we derive a new ML-based approach for adaptively setting the parameters of APA, which we refer to as ML-APA. For memoryless Gaussian inputs, we fully characterize the expected misalignment error of ML-APA as a function of the iteration number and show that it converges to zero as $O({1\over t})$. We further prove that the achieved error is asymptotically optimal. ML-APA updates its estimate once every $P$ samples. We also propose incremental ML-APA (IML-APA), which updates the coefficients at every time step and outperforms ML-APA in our simulation results. Our simulation results verify the analytical conclusions for memoryless inputs and show that the new algorithms also perform well for strongly correlated input signals.
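For reference, the classical fixed-parameter APA recursion that the abstract builds on can be sketched as below. This is a minimal NumPy illustration of the standard update with fixed $P$, $\mu$ and $\delta$, not the paper's ML-APA or IML-APA; the function name apa_update, the toy dimensions, and the noiseless system-identification setup are our own illustrative assumptions.

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-3):
    """One affine projection step with fixed parameters.

    w     : (L,)   current filter coefficient estimate
    X     : (L, P) columns are the P most recent input regressor vectors
    d     : (P,)   corresponding desired (reference) samples
    mu    : step size
    delta : regularization parameter
    """
    e = d - X.T @ w                              # a-priori errors for the P constraints
    G = X.T @ X + delta * np.eye(X.shape[1])     # regularized Gram matrix (P x P)
    w = w + mu * X @ np.linalg.solve(G, e)       # projection-based coefficient update
    return w, e

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    L, P, N = 16, 4, 2000
    w_true = rng.standard_normal(L)              # unknown system to identify
    x = rng.standard_normal(N + L)               # white Gaussian input (memoryless case)
    w = np.zeros(L)
    for t in range(L + P, N):
        # column k holds the regressor [x(t-k), x(t-k-1), ..., x(t-k-L+1)]
        X = np.column_stack([x[t - k - L + 1 : t - k + 1][::-1] for k in range(P)])
        d = X.T @ w_true                         # noiseless desired signal for simplicity
        w, _ = apa_update(w, X, d)
    print("relative misalignment:", np.linalg.norm(w - w_true) / np.linalg.norm(w_true))
```

With fixed mu and delta this sketch exhibits the speed-versus-accuracy tradeoff the abstract describes; the paper's contribution is to set these parameters adaptively via ML estimation.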