Paper Title

SAHDL: Sparse Attention Hypergraph Regularized Dictionary Learning

Paper Authors

Shuai Shao, Rui Xu, Yan-Jiang Wang, Weifeng Liu, Bao-Di Liu

Paper Abstract

In recent years, the attention mechanism has contributed significantly to hypergraph-based neural networks. However, these methods update the attention weights as the network propagates. That is to say, this type of attention mechanism is only suitable for deep-learning-based methods and is not applicable to traditional machine learning approaches. In this paper, we propose a hypergraph-based sparse attention mechanism to tackle this issue and embed it into dictionary learning. More specifically, we first construct a sparse attention hypergraph, assigning attention weights to samples by employing $\ell_1$-norm sparse regularization to mine the high-order relationships among sample features. Then, we introduce the hypergraph Laplacian operator to preserve the local structure for subspace transformation in dictionary learning. In addition, we incorporate discriminative information into the hypergraph as guidance for aggregating samples. Unlike previous works, our method updates the attention weights independently and does not rely on a deep network. We demonstrate the efficacy of our approach on four benchmark datasets.
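The abstract describes two computational steps: assigning attention weights via $\ell_1$-norm sparse coding to define hyperedges, and building a hypergraph Laplacian to preserve local structure for dictionary learning. The following is a minimal sketch of those two steps, not the authors' released code; the use of scikit-learn's Lasso as the $\ell_1$ solver, the one-hyperedge-per-sample construction, and all variable names are illustrative assumptions.

```python
# A minimal sketch (not the authors' implementation) of:
# (1) l1-norm sparse coding to assign attention weights / build hyperedges,
# (2) the normalized hypergraph Laplacian used to preserve local structure.
import numpy as np
from sklearn.linear_model import Lasso

def sparse_attention_hypergraph(X, alpha=0.1):
    """X: (n_samples, n_features). Returns an incidence matrix H whose column j
    is the hyperedge centered at sample j; the nonzero entries are attention
    weights given by the l1-sparse reconstruction coefficients."""
    n = X.shape[0]
    H = np.zeros((n, n))
    for j in range(n):
        idx = [i for i in range(n) if i != j]          # all other samples
        lasso = Lasso(alpha=alpha, fit_intercept=False)
        lasso.fit(X[idx].T, X[j])                      # x_j ~= sum_i c_i x_i, with ||c||_1 penalty
        H[idx, j] = np.abs(lasso.coef_)                # sparse attention weights
        H[j, j] = 1.0                                  # the centroid sample itself
    return H

def hypergraph_laplacian(H, w=None):
    """Normalized hypergraph Laplacian
    L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}."""
    n_v, n_e = H.shape
    w = np.ones(n_e) if w is None else w               # hyperedge weights
    Dv = H @ w                                         # vertex degrees
    De = H.sum(axis=0)                                 # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(Dv + 1e-12))
    Theta = (Dv_inv_sqrt @ H @ np.diag(w)
             @ np.diag(1.0 / (De + 1e-12)) @ H.T @ Dv_inv_sqrt)
    return np.eye(n_v) - Theta
```

The Laplacian here follows the standard normalized form $L = I - D_v^{-1/2} H W D_e^{-1} H^\top D_v^{-1/2}$; the paper's discriminative guidance for aggregating samples is not reproduced in this sketch.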
