Paper Title

Unsupervised Medical Image Translation with Adversarial Diffusion Models

Paper Authors

Muzaffer Özbey, Onat Dalmaz, Salman UH Dar, Hasan A Bedel, Şaban Özturk, Alper Güngör, Tolga Çukur

Paper Abstract

Imputation of missing images via source-to-target modality translation can improve diversity in medical imaging protocols. A pervasive approach for synthesizing target images involves one-shot mapping through generative adversarial networks (GANs). Yet, GAN models that implicitly characterize the image distribution can suffer from limited sample fidelity. Here, we propose a novel method based on adversarial diffusion modeling, SynDiff, for improved performance in medical image translation. To capture a direct correlate of the image distribution, SynDiff leverages a conditional diffusion process that progressively maps noise and source images onto the target image. For fast and accurate image sampling during inference, large diffusion steps are taken with adversarial projections in the reverse diffusion direction. To enable training on unpaired datasets, a cycle-consistent architecture is devised with coupled diffusive and non-diffusive modules that bilaterally translate between two modalities. Extensive assessments are reported on the utility of SynDiff against competing GAN and diffusion models in multi-contrast MRI and MRI-CT translation. Our demonstrations indicate that SynDiff offers quantitatively and qualitatively superior performance against competing baselines.
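
The abstract outlines three mechanisms: a conditional diffusion process over the target image, adversarial projections that permit large reverse-diffusion steps, and a cycle-consistent pairing of diffusive and non-diffusive modules for unpaired training. The PyTorch sketch below is a minimal illustration of how these pieces could fit together in a single training step; every module, loss weight, and the cosine-style noise schedule here are simplifying assumptions made for illustration, not the authors' SynDiff implementation, and only one translation direction is shown.

```python
# Minimal PyTorch sketch (illustrative assumptions throughout; not the authors' code):
# - q_sample: forward diffusion of the target image at a coarse, large step
# - TinyUNet: conditional generator predicting the clean target from (noisy target, source, t)
# - TinyDisc: discriminator that makes the large reverse-diffusion step adversarial
# - non_diffusive: stand-in translator supplying a cycle-consistency signal for unpaired data
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet(nn.Module):
    """Stand-in conditional denoiser (a real implementation would be a full U-Net)."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x_t, source, t_frac):
        # Broadcast the scalar timestep as an extra channel (a deliberate simplification).
        t_map = t_frac.view(-1, 1, 1, 1).expand_as(x_t)
        return self.net(torch.cat([x_t, source, t_map], dim=1))

class TinyDisc(nn.Module):
    """Stand-in discriminator over (denoised estimate, source) pairs."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 1, 4, stride=2, padding=1),
        )

    def forward(self, x, source):
        return self.net(torch.cat([x, source], dim=1)).mean(dim=(1, 2, 3))

def q_sample(x0, t_frac, noise):
    """Forward diffusion at a coarse step: blend the clean image with noise.
    A cosine-style schedule stands in for whatever schedule the paper uses."""
    alpha = torch.cos(t_frac * torch.pi / 2).view(-1, 1, 1, 1)
    return alpha * x0 + (1 - alpha ** 2).sqrt() * noise

# Toy unpaired batches standing in for source (e.g. T1 MRI) and target (e.g. T2 MRI) slices.
src, tgt = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
gen, disc = TinyUNet(), TinyDisc()
non_diffusive = nn.Sequential(  # crude target-to-source translator for the cycle term
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.Conv2d(32, 1, 3, padding=1))
opt_g = torch.optim.Adam(list(gen.parameters()) + list(non_diffusive.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)

# One illustrative training step using a single *large* diffusion step.
t_frac = torch.full((src.size(0),), 0.75)            # coarse timestep in (0, 1]
x_t = q_sample(tgt, t_frac, torch.randn_like(tgt))   # heavily noised target image
x0_hat = gen(x_t, src, t_frac)                       # adversarial reverse projection

# Discriminator update: real clean targets vs. the generator's large-step estimates.
d_loss = F.softplus(-disc(tgt, src)).mean() + F.softplus(disc(x0_hat.detach(), src)).mean()
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator update: adversarial term plus a cycle-consistency term back to the source.
adv = F.softplus(-disc(x0_hat, src)).mean()
cyc = F.l1_loss(non_diffusive(x0_hat), src)
g_loss = adv + 10.0 * cyc                            # 10.0 is an arbitrary illustrative weight
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(f"d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")
```

In the full method described by the abstract, both translation directions are trained jointly through coupled diffusive and non-diffusive modules, and inference runs a small number of large, adversarially learned reverse steps rather than many small denoising steps.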
