Paper Title

LP-SparseMAP: Differentiable Relaxed Optimization for Sparse Structured Prediction

Paper Authors

Vlad Niculae, André F. T. Martins

Paper Abstract

Structured prediction requires manipulating a large number of combinatorial structures, e.g., dependency trees or alignments, either as latent or output variables. Recently, the SparseMAP method has been proposed as a differentiable, sparse alternative to maximum a posteriori (MAP) and marginal inference. SparseMAP returns a combination of a small number of structures, a desirable property in some downstream applications. However, SparseMAP requires a tractable MAP inference oracle. This excludes, e.g., loopy graphical models or factor graphs with logic constraints, which generally require approximate inference. In this paper, we introduce LP-SparseMAP, an extension of SparseMAP that addresses this limitation via a local polytope relaxation. LP-SparseMAP uses the flexible and powerful domain specific language of factor graphs for defining and backpropagating through arbitrary hidden structure, supporting coarse decompositions, hard logic constraints, and higher-order correlations. We derive the forward and backward algorithms needed for using LP-SparseMAP as a hidden or output layer. Experiments in three structured prediction tasks show benefits compared to SparseMAP and Structured SVM.
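The sparsity the abstract highlights is easiest to see in the unstructured special case: when each candidate "structure" is a single label, SparseMAP reduces to the sparsemax projection of the score vector onto the probability simplex, which puts exactly zero probability on low-scoring candidates. A minimal NumPy sketch of that projection (a standalone illustration, not the authors' LP-SparseMAP implementation):

```python
import numpy as np

def sparsemax(z):
    # Euclidean projection of scores z onto the probability simplex:
    # argmin_p ||p - z||^2  s.t.  p >= 0, sum(p) = 1.
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]               # scores in decreasing order
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, z.size + 1)
    support = z_sorted * k > cumsum - 1       # candidates kept in the support
    k_star = k[support][-1]                   # size of the support
    tau = (cumsum[support][-1] - 1) / k_star  # threshold
    return np.maximum(z - tau, 0.0)

p = sparsemax([1.0, 0.9, -1.0])
# p sums to 1, and the low-scoring third candidate gets exactly zero mass.
```

LP-SparseMAP generalizes this idea to combinatorial structures via a local polytope relaxation, so the projection is over (relaxed) structure marginals rather than an explicitly enumerated candidate list.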
