Paper Title

Learning Optimal Distributionally Robust Individualized Treatment Rules

Paper Authors

Weibin Mo, Zhengling Qi, Yufeng Liu

Paper Abstract

Recent developments in data-driven decision science have seen great advances in individualized decision making. Given data with individual covariates, treatment assignments, and outcomes, policy makers seek the best individualized treatment rule (ITR) that maximizes the expected outcome, known as the value function. Many existing methods assume that the training and testing distributions are the same. However, the estimated optimal ITR may generalize poorly when the training and testing distributions are not identical. In this paper, we consider the problem of finding an optimal ITR from a restricted ITR class when there are unknown covariate changes between the training and testing distributions. We propose a novel distributionally robust ITR (DR-ITR) framework that maximizes the worst-case value function over a set of underlying distributions that are "close" to the training distribution. The resulting DR-ITR can guarantee reasonably good performance across all such distributions. We further propose a calibrating procedure that adaptively tunes the DR-ITR using a small amount of calibration data from a target population. Our numerical studies show that the calibrated DR-ITR enjoys better generalizability than the standard ITR.
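
As a rough illustration (not part of the original abstract), the worst-case objective described above can be sketched as follows; the uncertainty set $\mathcal{P}_{\delta}$, the distance measure $D(\cdot,\cdot)$, and the ITR class $\mathcal{D}$ are assumed notation for exposition rather than the paper's exact formulation:

$$
d^{*}_{\mathrm{DR}} \in \operatorname*{arg\,max}_{d \in \mathcal{D}} \; \inf_{P \in \mathcal{P}_{\delta}} \mathbb{E}_{P}\big[\, Y\big(d(X)\big) \,\big],
\qquad
\mathcal{P}_{\delta} := \big\{\, P : D\big(P, P_{\mathrm{train}}\big) \le \delta \,\big\},
$$

where $Y(a)$ denotes the potential outcome under treatment $a$, $X$ the covariates, and $\delta$ controls how far the testing distribution may deviate from the training distribution. Under this reading, the calibration step tunes $\delta$ so that the resulting DR-ITR performs well on a small calibration sample from the target population.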
