Paper Title

Adaptive estimation for the nonparametric bivariate additive model in random design with long-memory dependent errors

Paper Authors

Rida Benhaddou and Qing Liu

Paper Abstract

We investigate nonparametric bivariate additive regression estimation in a random design with long-memory errors and construct adaptive thresholding estimators based on wavelet series. The proposed approach achieves asymptotically near-optimal convergence rates when the unknown function and its univariate additive components belong to a Besov space. We consider the problem under two noise structures: (1) homoskedastic Gaussian long-memory errors and (2) heteroskedastic Gaussian long-memory errors. In the homoskedastic long-memory case, the estimator is completely adaptive with respect to the long-memory parameter. In the heteroskedastic long-memory case, the estimator may not be adaptive with respect to the long-memory parameter unless the heteroskedasticity is of polynomial form. In either case, the convergence rates depend on the long-memory parameter only when the long memory is strong enough; otherwise, the rates are identical to those under i.i.d. errors. The proposed approach extends to the general $r$-dimensional additive case, with $r>2$, and the corresponding convergence rates are free from the curse of dimensionality.
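The kind of estimator the abstract describes can be pictured with a small simulation. The sketch below is not the authors' procedure: it fits one univariate component of a bivariate additive model $y_i = f_1(x_{1i}) + f_2(x_{2i}) + \varepsilon_i$ by binning the responses on a dyadic grid, applying a discrete wavelet transform, and hard-thresholding the empirical coefficients. The choice of wavelet (`db4`), the universal threshold, the binning step, and the use of i.i.d. Gaussian noise in place of a long-memory error process are all illustrative assumptions; a Python environment with NumPy and PyWavelets is assumed.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)

# --- Simulate a bivariate additive model on a random design (illustrative) ---
n = 4096
x1, x2 = rng.uniform(size=n), rng.uniform(size=n)
f1 = lambda t: np.sin(2 * np.pi * t)                 # hypothetical component f_1
f2 = lambda t: 4 * (t - 0.5) ** 2                    # hypothetical component f_2
y = f1(x1) + f2(x2) + 0.3 * rng.standard_normal(n)   # i.i.d. noise stands in for long memory

# --- Estimate f_1 by wavelet hard thresholding (sketch) ---
# Bin the responses on an equispaced dyadic grid in x1 (a crude surrogate for the
# paper's random-design treatment), transform, threshold, and invert.
m = 256                                              # dyadic grid size
bins = np.minimum((x1 * m).astype(int), m - 1)
counts = np.maximum(np.bincount(bins, minlength=m), 1)
binned = np.bincount(bins, weights=y, minlength=m) / counts

coeffs = pywt.wavedec(binned, "db4", mode="periodization")
sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # robust noise-scale estimate
lam = sigma * np.sqrt(2 * np.log(m))                 # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, lam, mode="hard") for c in coeffs[1:]]
f1_hat = pywt.waverec(coeffs, "db4", mode="periodization")  # estimate of f_1 up to a constant
```

In the paper's setting the threshold would also need to reflect the long-memory parameter when the dependence is strong, which is precisely where the adaptivity results stated in the abstract come in.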
