Paper Title
Predicting Brain Multigraph Population From a Single Graph Template for Boosting One-Shot Classification
Paper Authors
Paper Abstract
A central challenge in training one-shot learning models is the limited representativeness of the available shots of the data space. Particularly in the field of network neuroscience, where the brain is represented as a graph, such models may lead to low performance when classifying brain states (e.g., typical vs. autistic). To cope with this, most existing works involve a data augmentation step to increase the size, diversity, and representativeness of the training set. Though effective, such augmentation methods are limited to generating samples of the same size as the input shots (e.g., generating brain connectivity matrices from a single-shot matrix). To the best of our knowledge, the problem of generating brain multigraphs capturing multiple types of connectivity between pairs of nodes (i.e., anatomical regions) from a single brain graph remains unsolved. In this paper, we propose an unprecedented hybrid graph neural network (GNN) architecture, namely the Multigraph Generator Network, or briefly MultigraphGNet, comprising two subnetworks: (1) a many-to-one GNN which integrates an input population of brain multigraphs into a single template graph, namely a connectional brain template (CBT), and (2) a reverse one-to-many U-Net network which takes the learned CBT in each training step and outputs the reconstructed input multigraph population. Both networks are trained in an end-to-end manner using a cyclic loss. Experimental results demonstrate that our MultigraphGNet boosts the performance of an independent classifier when trained on the augmented brain multigraphs in comparison with training on a single CBT from each class. We hope that our framework can shed some light on future research into multigraph augmentation from a single graph. Our MultigraphGNet source code is available at https://github.com/basiralab/MultigraphGNet.
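To make the two-subnetwork idea in the abstract concrete, below is a minimal PyTorch sketch of the many-to-one integration into a CBT followed by a one-to-many reconstruction, trained end-to-end with a cyclic (reconstruction) loss. All module names, tensor shapes, and hyper-parameters here are illustrative assumptions, and the actual GNN and U-Net subnetworks are simplified into plain MLPs; refer to the authors' repository for the real MultigraphGNet implementation.

```python
# Illustrative sketch only: population -> CBT -> reconstructed population, cyclic loss.
import torch
import torch.nn as nn

N_ROIS, N_VIEWS = 35, 4  # assumed number of brain regions and connectivity views


class ManyToOneNet(nn.Module):
    """Integrates a population of brain multigraphs into one template graph (CBT)."""

    def __init__(self, n_views=N_VIEWS):
        super().__init__()
        # Learnable fusion of the connectivity views of each edge.
        self.fuse = nn.Sequential(nn.Linear(n_views, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, population):
        # population: (n_subjects, n_rois, n_rois, n_views)
        fused = self.fuse(population).squeeze(-1)  # (n_subjects, n_rois, n_rois)
        cbt = fused.mean(dim=0)                    # average subjects -> (n_rois, n_rois)
        return 0.5 * (cbt + cbt.T)                 # keep the template symmetric


class OneToManyNet(nn.Module):
    """Maps the learned CBT back to a reconstructed multigraph population."""

    def __init__(self, n_rois=N_ROIS, n_views=N_VIEWS, n_subjects=10):
        super().__init__()
        self.shape = (n_subjects, n_rois, n_rois, n_views)
        self.decode = nn.Sequential(
            nn.Linear(n_rois * n_rois, 256),
            nn.ReLU(),
            nn.Linear(256, n_subjects * n_rois * n_rois * n_views),
        )

    def forward(self, cbt):
        return self.decode(cbt.flatten()).view(self.shape)


# End-to-end training on toy random data (stand-in for a brain multigraph population).
population = torch.rand(10, N_ROIS, N_ROIS, N_VIEWS)
encoder, decoder = ManyToOneNet(), OneToManyNet(n_subjects=population.shape[0])
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

for step in range(100):
    cbt = encoder(population)        # many-to-one: population -> single template graph
    reconstructed = decoder(cbt)     # one-to-many: template -> reconstructed population
    loss = nn.functional.l1_loss(reconstructed, population)  # cyclic reconstruction loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this toy setup, the reconstructed population produced by the decoder plays the role of the augmented multigraph samples that would then be fed to an independent classifier, as described in the abstract.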