Paper Title

Robustness Certification of Generative Models

Authors

Matthew Mirman, Timon Gehr, Martin Vechev

Abstract

Generative neural networks can be used to specify continuous transformations between images via latent-space interpolation. However, certifying that all images captured by the resulting path in the image manifold satisfy a given property can be very challenging. This is because this set is highly non-convex, thwarting existing scalable robustness analysis methods, which are often based on convex relaxations. We present ApproxLine, a scalable certification method that successfully verifies non-trivial specifications involving generative models and classifiers. ApproxLine can provide both sound deterministic and probabilistic guarantees, by capturing either infinite non-convex sets of neural network activation vectors or distributions over such sets. We show that ApproxLine is practically useful and can verify interesting interpolations in the network's latent space.
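To make the setting concrete, the sketch below illustrates the latent-space interpolation the abstract describes. The decoder and classifier here are tiny placeholder linear maps (not the paper's networks), and sampling finitely many points along the path only *tests* a property; ApproxLine's contribution is to certify it for the entire infinite segment, which this sketch does not do.

```python
import numpy as np

# Hypothetical stand-ins for a generative model g and a classifier f;
# these small random linear maps are placeholders, not the paper's models.
rng = np.random.default_rng(0)
W_dec = rng.standard_normal((8, 4))   # "decoder": 4-d latent -> 8-d "image"
W_clf = rng.standard_normal((3, 8))   # "classifier": image -> 3 class scores

def decode(z):
    return np.tanh(W_dec @ z)

def classify(x):
    return int(np.argmax(W_clf @ x))

# Two latent codes; the interpolation path is z(t) = (1 - t) * z0 + t * z1.
z0 = rng.standard_normal(4)
z1 = rng.standard_normal(4)

# Empirically check whether the predicted label stays constant along the path.
# This finite sampling is only a test; certification must cover all t in [0, 1].
labels = {classify(decode((1 - t) * z0 + t * z1)) for t in np.linspace(0, 1, 101)}
consistent = len(labels) == 1
```

A set with more than one label would show that the classifier's decision changes somewhere along the image-manifold path; a certification method must prove the complement over the whole non-convex set, not just at sampled points.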
