Paper Title
Test-time image-to-image translation ensembling improves out-of-distribution generalization in histopathology
Paper Authors
Paper Abstract
Histopathology whole slide images (WSIs) can exhibit significant inter-hospital variability, such as differences in illumination, color, or optical artifacts. These variations, caused by the use of different scanning protocols across medical centers (staining, scanner), can strongly harm algorithm generalization to unseen protocols. This motivates the development of new methods to limit such performance drops. In this paper, to enhance robustness to unseen target protocols, we propose a new test-time data augmentation based on multi-domain image-to-image translation. It projects images from an unseen protocol into each source domain before classifying them and ensembling the predictions. This test-time augmentation method yields a significant performance boost for domain generalization. To demonstrate its effectiveness, our method has been evaluated on two different histopathology tasks, where it outperforms conventional domain generalization, standard H&E-specific color augmentation/normalization, and standard test-time augmentation techniques. Our code is publicly available at https://gitlab.com/vitadx/articles/test-time-i2i-translation-ensembling.
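To illustrate the ensembling step described in the abstract, below is a minimal sketch (not the authors' implementation) of test-time image-to-image translation ensembling. It assumes a pretrained multi-domain translator `translate(x, domain)` that maps an image from an unseen protocol into a given source domain, and a classifier `classifier(x)` trained on the source domains; both names and the PyTorch framing are hypothetical.

```python
# Minimal sketch of test-time i2i translation ensembling (assumptions: a pretrained
# multi-domain translator `translate` and a source-domain classifier `classifier`;
# both are hypothetical stand-ins, not the paper's released code).
import torch
import torch.nn.functional as F


def predict_with_i2i_ensembling(x, translate, classifier, source_domains):
    """Project `x` into each source domain, classify each projection,
    and average the per-domain class probabilities."""
    probs = []
    with torch.no_grad():
        for d in source_domains:
            x_d = translate(x, d)                        # project onto source domain d
            probs.append(F.softmax(classifier(x_d), dim=1))
    return torch.stack(probs).mean(dim=0)                # ensembled prediction


if __name__ == "__main__":
    # Dummy stand-ins so the sketch runs end-to-end.
    dummy_translate = lambda x, d: x                     # identity "translation"
    dummy_classifier = torch.nn.Sequential(
        torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 2)
    )
    x = torch.randn(1, 3, 64, 64)                        # one RGB patch from an unseen protocol
    print(predict_with_i2i_ensembling(x, dummy_translate, dummy_classifier, source_domains=[0, 1, 2]))
```

Averaging softmax probabilities over the per-domain projections is one simple ensembling choice; other combination rules (e.g. majority vote) would fit the same structure.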