Paper Title
Explainable Multivariate Time Series Classification: A Deep Neural Network Which Learns To Attend To Important Variables As Well As Informative Time Intervals
Paper Authors
Paper Abstract
Time series data is prevalent in a wide variety of real-world applications, and it calls for trustworthy and explainable models so that people can understand and fully trust the decisions made by AI solutions. We consider the problem of building explainable classifiers from multivariate time series data. A key criterion for understanding such predictive models is elucidating and quantifying the contribution of the time-varying input variables to the classification. Hence, we introduce a novel, modular, convolution-based feature extraction and attention mechanism that simultaneously identifies the variables as well as the time intervals that determine the classifier output. We present results of extensive experiments on several benchmark data sets showing that the proposed method outperforms state-of-the-art baseline methods on the multivariate time series classification task. The results of our case studies demonstrate that the variables and time intervals identified by the proposed method are meaningful relative to the available domain knowledge.
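The abstract does not spell out the architecture, so the following is only a minimal sketch (not the authors' implementation) of how convolution-based feature extraction can be combined with attention over both variables and time steps for multivariate time series classification. All names (e.g., ConvDualAttentionClassifier) and design details here are hypothetical assumptions for illustration.

```python
# Hypothetical sketch: convolutional features + attention over time steps
# and over input variables for multivariate time series classification.
import torch
import torch.nn as nn


class ConvDualAttentionClassifier(nn.Module):
    def __init__(self, n_vars, n_steps, n_classes, hidden=64, kernel_size=5):
        super().__init__()
        # Temporal convolution over the multivariate series
        # (variables are treated as input channels).
        self.conv = nn.Conv1d(n_vars, hidden, kernel_size, padding=kernel_size // 2)
        # Attention over time steps: scores each convolved time step.
        self.time_attn = nn.Linear(hidden, 1)
        # Attention over variables: scores each variable from its full series.
        self.var_attn = nn.Linear(n_steps, 1)
        self.classifier = nn.Linear(hidden + n_vars, n_classes)

    def forward(self, x):
        # x: (batch, n_vars, n_steps)
        h = torch.relu(self.conv(x))                      # (batch, hidden, n_steps)

        # Temporal attention: weights indicate informative time intervals.
        t_scores = self.time_attn(h.transpose(1, 2))      # (batch, n_steps, 1)
        t_weights = torch.softmax(t_scores, dim=1)
        temporal_summary = (h.transpose(1, 2) * t_weights).sum(dim=1)  # (batch, hidden)

        # Variable attention: weights indicate important variables.
        v_scores = self.var_attn(x)                       # (batch, n_vars, 1)
        v_weights = torch.softmax(v_scores, dim=1)
        variable_summary = v_weights.squeeze(-1) * x.mean(dim=2)       # (batch, n_vars)

        logits = self.classifier(torch.cat([temporal_summary, variable_summary], dim=1))
        # The attention weights themselves serve as the explanation signal.
        return logits, t_weights.squeeze(-1), v_weights.squeeze(-1)


# Usage example with random data: 8 samples, 6 variables, 100 time steps, 3 classes.
model = ConvDualAttentionClassifier(n_vars=6, n_steps=100, n_classes=3)
logits, time_weights, var_weights = model(torch.randn(8, 6, 100))
```

In this sketch, the per-sample `time_weights` and `var_weights` can be inspected to see which time intervals and variables most influenced the prediction, which mirrors the kind of explanation the abstract describes; the paper's actual mechanism may differ substantially.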