Paper Title
Deep Contrastive One-Class Time Series Anomaly Detection
Paper Authors
Paper Abstract
The accumulation of time-series data and the absence of labels make time-series Anomaly Detection (AD) a self-supervised deep learning task. Single-normality-assumption-based methods, which reveal only a certain aspect of the whole normality, cannot handle tasks involving a large number of anomalies. Specifically, Contrastive Learning (CL) methods push apart negative pairs, many of which consist of two normal samples, thus reducing AD performance. Existing multi-normality-assumption-based methods are usually two-staged, first pre-training on tasks whose objectives may differ from AD, which limits their performance. To overcome these shortcomings, the authors propose a deep Contrastive One-Class Anomaly detection method for time series (COCA), which follows the normality assumptions of both CL and one-class classification. It treats the original and reconstructed representations as the positive pair of negative-sample-free CL, namely "sequence contrast". Invariance and variance terms then compose a contrastive one-class loss function, in which the invariance terms jointly optimize the losses of both assumptions and the variance terms prevent "hypersphere collapse". In addition, extensive experiments on two real-world time-series datasets show that the proposed method achieves state-of-the-art performance.
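To make the loss construction described above concrete, the following is a minimal sketch, not the authors' implementation: it assumes PyTorch, and the function name, tensor shapes, center vector, and margin are all hypothetical illustrations of an invariance term (pulling the original representation, its reconstruction, and a one-class center together) plus a variance term that keeps per-dimension spread above a margin to avoid hypersphere collapse.

```python
# Minimal sketch of a contrastive one-class loss (assumed, not from the paper).
import torch
import torch.nn.functional as F

def contrastive_one_class_loss(z, z_rec, center, var_weight=1.0, eps=1e-4):
    """z: original representations (batch, dim); z_rec: reconstructed
    representations (batch, dim); center: one-class center (dim,).
    All names and shapes are hypothetical."""
    # Invariance term: maximize cosine similarity of the positive pair
    # (z, z_rec) and of z with the one-class center (negative-sample-free).
    sim_pair = F.cosine_similarity(z, z_rec, dim=-1)              # "sequence contrast"
    sim_center = F.cosine_similarity(z, center.unsqueeze(0), dim=-1)
    invariance = (2.0 - sim_pair - sim_center).mean()

    # Variance term: keep the per-dimension standard deviation of the batch
    # above a margin so representations do not all collapse onto the center
    # ("hypersphere collapse").
    std = torch.sqrt(z.var(dim=0) + eps)
    variance = F.relu(1.0 - std).mean()

    return invariance + var_weight * variance

# Hypothetical usage:
#   z = encoder(x); z_rec = encoder(decoder(z))
#   loss = contrastive_one_class_loss(z, z_rec, center)
```

The variance term here follows the common margin-on-standard-deviation pattern used by negative-sample-free CL methods; the exact weighting and center update rule in COCA may differ.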