Paper Title

Disentangling A Single MR Modality

Paper Authors

Lianrui Zuo, Yihao Liu, Yuan Xue, Shuo Han, Murat Bilgel, Susan M. Resnick, Jerry L. Prince, Aaron Carass

Paper Abstract

Disentangling anatomical and contrast information from medical images has gained attention recently, demonstrating benefits for various image analysis tasks. Current methods learn disentangled representations using either paired multi-modal images with the same underlying anatomy or auxiliary labels (e.g., manual delineations) to provide inductive bias for disentanglement. However, these requirements can significantly increase the time and cost of data collection and limit the applicability of these methods when such data are not available. Moreover, these methods generally do not guarantee disentanglement. In this paper, we present a novel framework that learns theoretically and practically superior disentanglement from single-modality magnetic resonance images. We also propose a new information-based metric to quantitatively evaluate disentanglement. Comparisons with existing disentanglement methods demonstrate that the proposed method achieves superior performance in both disentanglement and cross-domain image-to-image translation tasks.
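The abstract does not specify the paper's information-based metric, but the general idea behind such metrics can be illustrated: if an anatomy code and a contrast code are well disentangled, they should share little mutual information. The sketch below is a generic histogram-based mutual information estimate, not the paper's method; the function and variable names (`histogram_mi`, the synthetic codes `a`, `noise`) are illustrative assumptions.

```python
import numpy as np

def histogram_mi(x, y, bins=16):
    """Crude, dependency-free estimate of mutual information I(x; y)
    between two 1-D variables via a joint histogram (in nats)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                # joint probability table
    px = pxy.sum(axis=1, keepdims=True)      # marginal of x
    py = pxy.sum(axis=0, keepdims=True)      # marginal of y
    mask = pxy > 0                           # avoid log(0)
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

# Synthetic "latent codes": one strongly dependent pair, one independent pair.
rng = np.random.default_rng(0)
a = rng.normal(size=5000)
noise = rng.normal(size=5000)
mi_entangled = histogram_mi(a, a + 0.01 * noise)  # near-deterministic pair: high MI
mi_disentangled = histogram_mi(a, noise)          # independent pair: MI near zero
```

Under this reading, a lower estimated MI between the two codes indicates better disentanglement; histogram estimators carry a positive bias that shrinks with more samples, which is why the independent pair reads near zero rather than exactly zero.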
