Paper Title
DyAnNet: A Scene Dynamicity Guided Self-Trained Video Anomaly Detection Network
Paper Authors
Abstract
Unsupervised approaches for video anomaly detection may not perform as well as supervised approaches. However, learning unknown types of anomalies with an unsupervised approach is more practical than a supervised one, since annotation is an extra burden. In this paper, we use isolation tree-based unsupervised clustering to partition the deep feature space of the video segments. The RGB stream generates a pseudo anomaly score and the flow stream generates a pseudo dynamicity score for each video segment. These scores are then fused using a majority voting scheme to generate preliminary bags of positive and negative segments. However, these bags may not be accurate, as the scores are generated using only the current segment, which does not represent the global behavior of a typical anomalous event. We therefore apply a refinement strategy based on a cross-branch feed-forward network designed around the popular I3D network to refine both scores. The bags are then refined through a segment re-mapping strategy. The intuition behind adding the dynamicity score of a segment to its anomaly score is to enhance the quality of the evidence. The method has been evaluated on three popular video anomaly datasets, i.e., UCF-Crime, CCTV-Fights, and UBI-Fights. Experimental results reveal that the proposed framework achieves accuracy competitive with state-of-the-art video anomaly detection methods.
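The fusion step described above can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the random scores, the 0.5 threshold, and the handling of disagreeing segments are all assumptions made here to show the majority-voting idea of splitting segments into preliminary positive and negative bags.

```python
import numpy as np

# Toy sketch (assumed setup, not the paper's code): each video segment
# has a pseudo anomaly score (from the RGB stream) and a pseudo
# dynamicity score (from the flow stream), both taken here to lie in
# [0, 1]. The 0.5 threshold is an illustrative assumption.
rng = np.random.default_rng(0)
n_segments = 8
anomaly_score = rng.random(n_segments)     # pseudo anomaly score per segment
dynamicity_score = rng.random(n_segments)  # pseudo dynamicity score per segment

# Each stream casts one vote per segment; segments where both streams
# agree form the preliminary positive (anomalous) and negative (normal)
# bags, and disagreements are left for the later refinement stage.
votes = (anomaly_score > 0.5).astype(int) + (dynamicity_score > 0.5).astype(int)
positive_bag = np.where(votes == 2)[0]  # both streams flag the segment
negative_bag = np.where(votes == 0)[0]  # both streams call it normal
ambiguous = np.where(votes == 1)[0]     # streams disagree

print(positive_bag, negative_bag, ambiguous)
```

The three index sets partition the segments, which mirrors why the paper needs a refinement stage: per-segment voting alone leaves ambiguous segments and can mislabel others.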