Paper Title
Where's the Learning in Representation Learning for Compositional Semantics and the Case of Thematic Fit
Paper Authors
Paper Abstract
Observing that for certain NLP tasks, such as semantic role prediction or thematic fit estimation, random embeddings perform as well as pretrained embeddings, we explore what settings allow for this and examine where most of the learning is encoded: in the word embeddings, the semantic role embeddings, or ``the network''. We find nuanced answers, depending on the task and its relation to the training objective. We examine these representation-learning aspects in multi-task learning, where role prediction and role-filling are supervised tasks, while several thematic fit tasks lie outside the models' direct supervision. We observe a non-monotonic relation between some tasks' quality scores and the training data size. To better understand this observation, we analyze these results using easier, per-verb versions of these tasks.
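The abstract's core comparison, random versus pretrained word embeddings feeding the same downstream network, can be illustrated with a minimal PyTorch sketch. This is not the paper's implementation: all module names, dimensions, and the stand-in pretrained weight matrix below are assumptions for illustration only.

```python
import torch
import torch.nn as nn

VOCAB_SIZE, EMB_DIM, NUM_ROLES = 10_000, 300, 25  # illustrative sizes, not from the paper

class RolePredictor(nn.Module):
    """Toy classifier: mean-pooled word embeddings -> semantic role label."""
    def __init__(self, embedding: nn.Embedding):
        super().__init__()
        self.embedding = embedding          # is the learning encoded here...
        self.network = nn.Sequential(       # ...or in "the network"?
            nn.Linear(EMB_DIM, 128), nn.ReLU(), nn.Linear(128, NUM_ROLES)
        )

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        pooled = self.embedding(token_ids).mean(dim=1)  # (batch, EMB_DIM)
        return self.network(pooled)

# Condition 1: random embeddings, frozen so only "the network" can learn.
random_emb = nn.Embedding(VOCAB_SIZE, EMB_DIM)
random_emb.weight.requires_grad_(False)

# Condition 2: pretrained embeddings (e.g., word2vec/GloVe), also frozen.
# torch.randn here is a stand-in for real pretrained vectors.
pretrained_weights = torch.randn(VOCAB_SIZE, EMB_DIM)
pretrained_emb = nn.Embedding.from_pretrained(pretrained_weights, freeze=True)

models = {"random": RolePredictor(random_emb),
          "pretrained": RolePredictor(pretrained_emb)}
# Training both models on the same role-prediction data and comparing their
# scores isolates how much of the task is solved by the embeddings themselves.
```

Holding the downstream architecture and training data fixed while swapping only the (frozen) embedding table is what makes the "random vs. pretrained" comparison interpretable in this kind of study.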