Paper Title

De-Biasing Generative Models using Counterfactual Methods

Paper Authors

Sunay Bhat, Jeffrey Jiang, Omead Pooladzandi, Gregory Pottie

Paper Abstract

Variational autoencoders (VAEs) and other generative methods have garnered growing interest not just for their generative properties but also for the ability to disentangle a low-dimensional latent variable space. However, few existing generative models take causality into account. We propose a new decoder-based framework named the Causal Counterfactual Generative Model (CCGM), which includes a partially trainable causal layer in which part of a causal model can be learned without significantly impacting reconstruction fidelity. By learning the causal relationships between image semantic labels or tabular variables, we can analyze biases, intervene on the generative model, and simulate new scenarios. Furthermore, by modifying the causal structure, we can generate samples outside the domain of the original training data and use such counterfactual models to de-bias datasets. Thus, datasets with known biases can still be used to train the causal generative model and learn the causal relationships, but we can produce de-biased datasets on the generative side. Our proposed method combines a causal latent-space VAE model with specific modifications to emphasize causal fidelity, enabling finer control over the causal layer and the ability to learn a robust intervention framework. We explore how better disentanglement of causal learning and encoding/decoding yields higher causal intervention quality. We also compare our model against similar research to demonstrate the need for explicit generative de-biasing beyond interventions. Our initial experiments show that our model can generate images and tabular data with high fidelity to the causal framework, and that, compared to baselines, it accommodates explicit de-biasing that ignores undesired relationships in the causal data.
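
The abstract does not spell out how the trainable causal layer is implemented. As a rough illustration only, a linear structural-causal layer of the kind used in causal VAEs, where latent concepts satisfy z = Aᵀz + ε for a trainable adjacency matrix A (so z = (I − Aᵀ)⁻¹ε), might look like the following PyTorch sketch. All names here (CausalLayer, n_concepts, the edge mask) are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class CausalLayer(nn.Module):
    """Hypothetical linear structural-causal layer over n latent concepts.

    Models z = A^T z + eps, i.e. z = (I - A^T)^{-1} eps, where A is a
    trainable weighted adjacency matrix (A[i, j] encodes the edge i -> j).
    """
    def __init__(self, n_concepts: int):
        super().__init__()
        # Trainable weighted adjacency over the latent concepts.
        self.A = nn.Parameter(torch.zeros(n_concepts, n_concepts))
        # Binary edge mask; zeroing an entry severs that causal edge.
        self.register_buffer("mask", torch.ones(n_concepts, n_concepts))

    def forward(self, eps: torch.Tensor) -> torch.Tensor:
        A = self.A * self.mask
        I = torch.eye(A.size(0), device=A.device)
        # Propagate exogenous noise eps through the causal graph by
        # solving (I - A^T) z = eps for z.
        return torch.linalg.solve(I - A.T, eps.unsqueeze(-1)).squeeze(-1)

# Toy usage: 4 semantic labels, a batch of 8 exogenous latent samples.
layer = CausalLayer(n_concepts=4)
eps = torch.randn(8, 4)
z_causal = layer(eps)       # latents consistent with the learned graph
layer.mask[2, 3] = 0.0      # intervention: remove the edge 2 -> 3
z_debiased = layer(eps)     # samples generated without the severed edge
```

Under these assumptions, zeroing a mask entry severs the corresponding causal edge at generation time, which is one concrete way to realize the de-biased generation the abstract describes: a biased relationship can still be learned from the data but simply not used when sampling.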
