Paper Title

ICAM: Interpretable Classification via Disentangled Representations and Feature Attribution Mapping

Paper Authors

Bass, Cher, da Silva, Mariana, Sudre, Carole, Tudosiu, Petru-Daniel, Smith, Stephen M., Robinson, Emma C.

Abstract

Feature attribution (FA), or the assignment of class-relevance to different locations in an image, is important for many classification problems but is particularly crucial within the neuroscience domain, where accurate mechanistic models of behaviours or disease require knowledge of all features discriminative of a trait. At the same time, predicting class relevance from brain images is challenging, as phenotypes are typically heterogeneous and changes occur against a background of significant natural variation. Here, we present a novel framework for creating class-specific FA maps through image-to-image translation. We propose the use of a VAE-GAN to explicitly disentangle class relevance from background features for improved interpretability, which results in meaningful FA maps. We validate our method on 2D and 3D brain image datasets of dementia (ADNI dataset), ageing (UK Biobank), and (simulated) lesion detection. We show that FA maps generated by our method outperform baseline FA methods when validated against ground truth. More significantly, our approach is the first to use latent space sampling to support exploration of phenotype variation. Our code will be available online at https://github.com/CherBass/ICAM.
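The core idea above can be illustrated with a minimal sketch: an FA map is obtained as the voxel-wise difference between an input image and its translation to the opposite class. The sketch below is hypothetical and uses a toy 2D "lesion" image with a hand-written oracle translator (`translate_to_healthy`) standing in for the trained VAE-GAN; only the difference-map construction reflects the approach described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(has_lesion):
    """Toy brain image: background variation plus an optional class-relevant blob."""
    img = rng.normal(0.0, 0.05, size=(32, 32))  # background anatomy / noise
    if has_lesion:
        img[10:14, 10:14] += 1.0                # class-relevant feature
    return img

def translate_to_healthy(img):
    """Hypothetical oracle translator standing in for the trained VAE-GAN:
    removes the class-relevant feature while preserving the background."""
    out = img.copy()
    out[10:14, 10:14] -= 1.0
    return out

x = make_image(has_lesion=True)
x_translated = translate_to_healthy(x)

# Feature attribution map: voxel-wise difference between the input
# and its translation to the opposite class. Because the translator
# only changes class-relevant voxels, the FA map highlights exactly
# the lesion region and is zero over the background.
fa_map = x - x_translated
```

With a real model, the translator is learned, so background voxels change slightly and the FA map is thresholded or inspected visually rather than being exactly zero outside the class-relevant region.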
