Paper Title

Pathwise CVA Regressions With Oversimulated Defaults

Authors

Lokman Abbas-Turki, Stéphane Crépey, Bouazza Saadeddine

Abstract

We consider the computation by simulation and neural net regression of conditional expectations, or more general elicitable statistics, of functionals of processes $(X, Y)$. Here an exogenous component $Y$ (Markov by itself) is time-consuming to simulate, while the endogenous component $X$ (jointly Markov with $Y$) is quick to simulate given $Y$, but is responsible for most of the variance of the simulated payoff. To address the related variance issue, we introduce a conditionally independent, hierarchical simulation scheme, where several paths of $X$ are simulated for each simulated path of $Y$. We analyze the statistical convergence of the regression learning scheme based on such block-dependent data. We derive heuristics on the number of paths of $Y$ and, for each of them, of $X$, that should be simulated. The resulting algorithm is implemented on a graphics processing unit (GPU) combining Python/CUDA and learning with PyTorch. A CVA case study with a nested Monte Carlo benchmark shows that the hierarchical simulation technique is key to the success of the learning approach.
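
For concreteness, the following is a minimal, self-contained sketch (not the authors' implementation) of the hierarchical simulation idea described in the abstract: a few inner paths of $X$ are drawn for each outer path of $Y$, and a small PyTorch network is regressed on the resulting block-dependent samples to approximate a conditional expectation. The dynamics, payoff, feature choice, and sample sizes below are toy assumptions for illustration only.

```python
# Minimal sketch of the hierarchical, conditionally independent simulation
# scheme described above (toy dynamics and payoff, NOT the paper's CVA model):
# M outer paths of the slow exogenous factor Y, K inner paths of the fast
# endogenous component X per Y path, then a neural-net regression on the
# block-dependent samples to approximate E[payoff | Y_T].
import torch

torch.manual_seed(0)

M, K, n_steps = 512, 16, 50    # outer paths, inner paths per outer path, time steps
dt = 1.0 / n_steps

# Toy exogenous factor Y: a Brownian-driven state (expensive to simulate in practice).
Y = (torch.randn(M, n_steps) * dt**0.5).cumsum(dim=1)
Y_T = Y[:, -1]                                     # regressor: terminal value of Y

# Toy endogenous component X: cheap to simulate given Y, and the dominant
# source of payoff variance; K conditionally independent copies per Y path.
dW_X = torch.randn(M, K, n_steps) * dt**0.5
X = (Y.unsqueeze(1) * dt + dW_X).cumsum(dim=2)     # X's drift depends on Y's path

# Block-dependent training set: the K samples sharing an outer path of Y
# share the same feature value but have conditionally independent targets.
payoff = torch.relu(X[:, :, -1]).reshape(-1, 1)            # shape (M*K, 1)
features = Y_T.repeat_interleave(K).unsqueeze(1)           # shape (M*K, 1)

# Small feed-forward network regressing the payoff on Y_T, i.e. learning
# an approximation of the conditional expectation E[payoff | Y_T].
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for epoch in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(net(features), payoff)
    loss.backward()
    opt.step()

print(f"final training MSE: {loss.item():.6f}")
```

In this sketch the split between M and K plays the role of the paper's heuristics on how many paths of $Y$, and of $X$ per $Y$, to simulate; the values used here are arbitrary.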
