Paper Title
Multimodal data matters: language model pre-training over structured and unstructured electronic health records
Paper Authors
Abstract
As two important textual modalities in electronic health records (EHRs), structured data (clinical codes) and unstructured data (clinical narratives) have both been increasingly applied in the healthcare domain. Most existing EHR-oriented studies, however, either focus on a single modality or integrate data from different modalities in a straightforward manner, typically treating structured and unstructured data as two independent sources of information about a patient admission and ignoring the intrinsic interactions between them. In fact, the two modalities are documented during the same encounter, where structured data inform the documentation of unstructured data and vice versa. In this paper, we propose a Medical Multimodal Pre-trained Language Model, named MedM-PLM, to learn enhanced EHR representations over structured and unstructured data and to explore the interactions between the two modalities. In MedM-PLM, two Transformer-based neural network components are first adopted to learn representative characteristics from each modality. A cross-modal module is then introduced to model their interactions. We pre-train MedM-PLM on the MIMIC-III dataset and verify the effectiveness of the model on three downstream clinical tasks: medication recommendation, 30-day readmission prediction, and ICD coding. Extensive experiments demonstrate the advantages of MedM-PLM over state-of-the-art methods. Further analyses and visualizations show the robustness of our model, which could potentially provide more comprehensive interpretations for clinical decision-making.
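The abstract describes an architecture in which per-modality Transformer encoders are fused by a cross-modal interaction module. The sketch below is a minimal, hypothetical illustration of that idea (it is not the authors' implementation): stand-in encoder outputs for the two modalities are combined with scaled dot-product cross-attention, where clinical-code representations query the narrative-token representations. All array shapes and names here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    # scaled dot-product attention: one modality attends to the other
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    weights = softmax(scores, axis=-1)  # each query row sums to 1
    return weights @ values

rng = np.random.default_rng(0)
d_model = 16
# hypothetical stand-ins for the outputs of the two modality encoders
struct_repr = rng.standard_normal((5, d_model))   # 5 clinical-code tokens
text_repr = rng.standard_normal((12, d_model))    # 12 narrative tokens

# structured-data tokens query the narrative tokens; output keeps the
# structured sequence length but mixes in narrative information
fused = cross_attention(struct_repr, text_repr, text_repr)
print(fused.shape)  # (5, 16)
```

In a full model this fusion would run in both directions (codes attending to text and text attending to codes), matching the bidirectional interaction the abstract emphasizes.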