Paper Title
Learning to Recognize Actionable Static Code Warnings (is Intrinsically Easy)
Paper Authors
Paper Abstract
Static code warning tools often generate warnings that programmers ignore. Such tools can be made more useful via data mining algorithms that select the "actionable" warnings, i.e., the warnings that are usually not ignored. In this paper, we look for actionable warnings within a sample of 31,058 static code warnings from FindBugs, 5,675 of which are actionable. We find that data mining algorithms can find actionable warnings with remarkable ease. Specifically, a range of data mining methods (deep learners, random forests, decision tree learners, and support vector machines) all achieved very good results (recalls and AUC(TRN, TPR) measures usually over 95% and false alarms usually under 5%). Given that all these learners succeeded so easily, it is appropriate to ask whether there is something about this task that is inherently easy. We report that while our data sets have up to 58 raw features, those features can be approximated by less than two underlying dimensions. For such intrinsically simple data, many different kinds of learners can generate useful models with similar performance. Based on the above, we conclude that learning to recognize actionable static code warnings is easy, using a wide range of learning algorithms, since the underlying data is intrinsically simple. If we had to pick one particular learner for this task, we would suggest linear SVMs (since, at least in our sample, that learner ran relatively quickly and achieved the best median performance), and we would not recommend deep learning (since this data is intrinsically very simple).
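To make the abstract's two claims concrete, the sketch below shows (1) a linear SVM separating actionable from ignored warnings, scored by recall and false-alarm rate, and (2) a crude check of how few dimensions the feature space really needs. This is not the authors' pipeline: the data here is synthetic (58 raw features generated from 2 latent dimensions, mimicking "intrinsically simple" data), and the PCA variance count is only a simple proxy for intrinsic dimensionality, not necessarily the estimator the paper uses.

```python
# Minimal sketch, NOT the paper's code: synthetic stand-in for FindBugs
# warning data, where each row is a warning and the label says whether
# programmers acted on it (1 = actionable, 0 = ignored).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# 58 raw features that are really noisy mixtures of ~2 latent dimensions.
n, latent_dims, raw_dims = 2000, 2, 58
latent = rng.normal(size=(n, latent_dims))
mixing = rng.normal(size=(latent_dims, raw_dims))
X = latent @ mixing + 0.05 * rng.normal(size=(n, raw_dims))
y = (latent[:, 0] + 0.5 * latent[:, 1] > 0).astype(int)

# (1) Linear SVM: report recall (TPR) and false-alarm rate (FPR).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LinearSVC().fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"recall = {tp / (tp + fn):.2f}, false alarm = {fp / (fp + tn):.2f}")

# (2) A rough intrinsic-dimension proxy via PCA: how many components
# are needed to explain 95% of the variance?
pca = PCA().fit(X)
k = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.95)) + 1
print(f"components for 95% variance: {k}")  # ~2 for this synthetic data
```

On this synthetic data the PCA step recovers roughly two components, which is the shape of result the abstract reports for the real warning data: many raw features, very few underlying dimensions, so many different learners do well.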