Paper Title

Improving Robustness of Deep-Learning-Based Image Reconstruction

Authors

Ankit Raj, Yoram Bresler, Bo Li

Abstract

Deep-learning-based methods for different applications have been shown vulnerable to adversarial examples. These examples make deployment of such models in safety-critical tasks questionable. Use of deep neural networks as inverse problem solvers has generated much excitement for medical imaging including CT and MRI, but recently a similar vulnerability has also been demonstrated for these tasks. We show that for such inverse problem solvers, one should analyze and study the effect of adversaries in the measurement-space, instead of the signal-space as in previous work. In this paper, we propose to modify the training strategy of end-to-end deep-learning-based inverse problem solvers to improve robustness. We introduce an auxiliary network to generate adversarial examples, which is used in a min-max formulation to build robust image reconstruction networks. Theoretically, we show for a linear reconstruction scheme the min-max formulation results in a singular-value(s) filter regularized solution, which suppresses the effect of adversarial examples occurring because of ill-conditioning in the measurement matrix. We find that a linear network using the proposed min-max learning scheme indeed converges to the same solution. In addition, for non-linear Compressed Sensing (CS) reconstruction using deep networks, we show significant improvement in robustness using the proposed approach over other methods. We complement the theory by experiments for CS on two different datasets and evaluate the effect of increasing perturbations on trained networks. We find the behavior for ill-conditioned and well-conditioned measurement matrices to be qualitatively different.
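To make the training strategy described above concrete, the sketch below illustrates one way a min-max scheme of this kind can be set up: an auxiliary network proposes measurement-space perturbations delta (bounded to an epsilon-ball), and the reconstruction network is trained to minimize error on both clean and perturbed measurements, roughly min over theta of max over bounded delta of E||f_theta(Ax + delta) - x||^2. Everything here is an illustrative assumption rather than the authors' implementation: the PyTorch framework, the fully connected architectures, the random Gaussian measurement matrix A, and the constants epsilon and lam are all placeholders.

```python
import torch
import torch.nn as nn

# Minimal sketch of min-max training with an auxiliary adversarial network
# acting in measurement space. Architectures and constants are placeholders,
# not the authors' actual models or hyperparameters.

n, m, epsilon, lam = 256, 64, 0.05, 1.0

A = torch.randn(m, n) / m ** 0.5          # fixed measurement matrix: y = A x

recon = nn.Sequential(nn.Linear(m, 512), nn.ReLU(), nn.Linear(512, n))
adv_gen = nn.Sequential(nn.Linear(m, 128), nn.ReLU(), nn.Linear(128, m))

opt_recon = torch.optim.Adam(recon.parameters(), lr=1e-4)
opt_adv = torch.optim.Adam(adv_gen.parameters(), lr=1e-4)
mse = nn.MSELoss()

def perturb(y):
    # Auxiliary network proposes a measurement-space perturbation,
    # projected onto an epsilon-ball so it stays bounded.
    delta = adv_gen(y)
    return epsilon * delta / (delta.norm(dim=1, keepdim=True) + 1e-12)

for step in range(10000):
    x = torch.randn(32, n)                # stand-in for training images
    y = x @ A.T                           # clean measurements

    # Inner maximization: the adversary tries to increase reconstruction error.
    delta = perturb(y)
    adv_loss = -mse(recon(y + delta), x)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # Outer minimization: the reconstruction network is trained on clean
    # and adversarially perturbed measurements.
    delta = perturb(y).detach()
    loss = mse(recon(y), x) + lam * mse(recon(y + delta), x)
    opt_recon.zero_grad()
    loss.backward()
    opt_recon.step()
```

For a purely linear reconstruction network, the abstract states that this kind of min-max training converges to a singular-value-filtered solution, i.e., one that damps the directions in which an ill-conditioned measurement matrix amplifies measurement-space perturbations.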
