Paper Title

Testing the Robustness of Learned Index Structures

Paper Authors

Matthias Bachfischer, Renata Borovica-Gajic, Benjamin I. P. Rubinstein

Paper Abstract

While early empirical evidence has supported the case for learned index structures as having favourable average-case performance, little is known about their worst-case performance. By contrast, classical structures are known to achieve optimal worst-case behaviour. This work evaluates the robustness of learned index structures in the presence of adversarial workloads. To simulate adversarial workloads, we carry out a data poisoning attack on linear regression models that manipulates the cumulative distribution function (CDF) on which the learned index model is trained. The attack deteriorates the fit of the underlying ML model by injecting a set of poisoning keys into the training dataset, which leads to an increase in the prediction error of the model and thus deteriorates the overall performance of the learned index structure. We assess the performance of various regression methods and the learned index implementations ALEX and PGM-Index. We show that learned index structures can suffer from a significant performance deterioration of up to 20% when evaluated on poisoned vs. non-poisoned datasets.
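The mechanism behind the attack can be illustrated with a minimal sketch. The snippet below is not the paper's optimization-based poisoning attack; it is a simplified, hypothetical Python illustration of the idea: a single-segment learned index fits a linear model to the empirical CDF (key to position), and injecting a cluster of poisoning keys into a narrow key range degrades that fit, widening the worst-case search window. All dataset sizes, ranges, and function names here are illustrative assumptions.

```python
import numpy as np

def max_prediction_error(keys):
    """Fit a linear model position ~ key over the empirical CDF,
    as a single-segment learned index does, and return the maximum
    absolute prediction error (the worst-case search window)."""
    keys = np.sort(keys)
    positions = np.arange(len(keys))          # rank = scaled empirical CDF
    slope, intercept = np.polyfit(keys, positions, deg=1)
    predictions = slope * keys + intercept
    return np.max(np.abs(predictions - positions))

rng = np.random.default_rng(0)
clean = rng.uniform(0, 1_000_000, size=10_000)

# Hypothetical poisoning heuristic: concentrate extra keys in a narrow
# range so the empirical CDF develops a sharp step that a single line
# cannot fit well (a stand-in for the paper's optimized key selection).
poison = rng.uniform(499_000, 501_000, size=500)
poisoned = np.concatenate([clean, poison])

print(f"max error, clean:    {max_prediction_error(clean):.1f}")
print(f"max error, poisoned: {max_prediction_error(poisoned):.1f}")
```

On a run of this sketch, the maximum prediction error grows after poisoning, which mirrors the effect the abstract describes: a worse model fit translates into larger search windows and degraded lookup performance for the learned index.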
