Paper Title
Making Large Language Models Better Reasoners with Step-Aware Verifier
Paper Authors
Paper Abstract
Few-shot learning is a challenging task that requires language models to generalize from limited examples. Large language models like GPT-3 and PaLM have made impressive progress in this area, but they still face difficulties in reasoning tasks such as GSM8K, a benchmark for arithmetic problems. To improve their reasoning skills, previous work has proposed guiding the language model with prompts that elicit a series of reasoning steps before giving the final answer, achieving a significant improvement on GSM8K from 17.9% to 58.1% in problem-solving rate. In this paper, we present DIVERSE (Diverse Verifier on Reasoning Step), a novel approach that further enhances the reasoning capability of language models. DIVERSE has three main components: first, it generates diverse prompts to explore different reasoning paths for the same question; second, it uses a verifier to filter out incorrect answers based on a weighted voting scheme; and third, it verifies each reasoning step individually instead of the whole chain. We evaluate DIVERSE on the latest language model code-davinci-002 and show that it achieves new state-of-the-art results on six of eight reasoning benchmarks (e.g., improving GSM8K from 74.4% to 83.2%).
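The verifier-based weighted voting in the second component can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and the `(answer, probability)` input format are assumptions, and in the actual method the probabilities would come from a trained verifier scoring each sampled reasoning path.

```python
from collections import defaultdict

def verifier_weighted_vote(paths):
    """Pick the final answer by summing each path's verifier score.

    `paths` is a list of (answer, verifier_probability) pairs, one per
    sampled reasoning path. Plain majority voting (self-consistency) is
    the special case where every verifier probability equals 1.0.
    """
    scores = defaultdict(float)
    for answer, prob in paths:
        scores[answer] += prob
    # Return the answer with the highest total verifier-weighted score.
    return max(scores, key=scores.get)

# Illustrative example: three paths reach "18", but the verifier assigns
# higher confidence to the two paths reaching "20", so the weighted vote
# overrules the simple majority.
sampled = [("18", 0.2), ("18", 0.1), ("18", 0.1), ("20", 0.9), ("20", 0.8)]
print(verifier_weighted_vote(sampled))  # -> 20
```

With unit weights the same function reduces to majority voting, which shows how the verifier changes the outcome only when it disagrees with the raw vote count.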