Paper Title

Agnostic Learnability of Halfspaces via Logistic Loss

Paper Authors

Ziwei Ji, Kwangjun Ahn, Pranjal Awasthi, Satyen Kale, Stefani Karp

Paper Abstract

We investigate approximation guarantees provided by logistic regression for the fundamental problem of agnostic learning of homogeneous halfspaces. Previously, for a certain broad class of "well-behaved" distributions on the examples, Diakonikolas et al. (2020) proved an $\tilde{\Omega}(\textrm{OPT})$ lower bound, while Frei et al. (2021) proved an $\tilde{O}(\sqrt{\textrm{OPT}})$ upper bound, where $\textrm{OPT}$ denotes the best zero-one/misclassification risk of a homogeneous halfspace. In this paper, we close this gap by constructing a well-behaved distribution such that the global minimizer of the logistic risk over this distribution only achieves $\Omega(\sqrt{\textrm{OPT}})$ misclassification risk, matching the upper bound in (Frei et al., 2021). On the other hand, we also show that if we impose a radial-Lipschitzness condition in addition to well-behavedness on the distribution, logistic regression on a ball of bounded radius reaches $\tilde{O}(\textrm{OPT})$ misclassification risk. Our techniques also show that for any well-behaved distribution, regardless of radial Lipschitzness, we can overcome the $\Omega(\sqrt{\textrm{OPT}})$ lower bound for logistic loss simply at the cost of one additional convex optimization step involving the hinge loss, and attain $\tilde{O}(\textrm{OPT})$ misclassification risk. This two-step convex optimization algorithm is simpler than previous methods obtaining this guarantee, all of which require solving $O(\log(1/\textrm{OPT}))$ minimization problems.
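
The two-step procedure mentioned in the abstract can be illustrated concretely. Below is a minimal Python sketch, assuming the second convex step only rescales the direction obtained from logistic regression under the hinge loss; the paper's precise formulation, constraint set, and guarantees may differ, and names such as two_step_halfspace are illustrative rather than taken from the authors' code.

# Step 1: minimize the (convex) empirical logistic risk to obtain a direction.
# Step 2: solve one more convex problem, here a one-dimensional hinge-loss
# minimization over the scale of that direction.
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def logistic_risk(w, X, y):
    # Empirical logistic risk: mean of log(1 + exp(-y <w, x>)).
    margins = y * (X @ w)
    return np.mean(np.logaddexp(0.0, -margins))

def hinge_risk(w, X, y):
    # Empirical hinge risk: mean of max(0, 1 - y <w, x>).
    margins = y * (X @ w)
    return np.mean(np.maximum(0.0, 1.0 - margins))

def two_step_halfspace(X, y):
    d = X.shape[1]
    # Step 1: logistic regression gives a direction.
    res = minimize(logistic_risk, x0=np.zeros(d), args=(X, y), method="L-BFGS-B")
    direction = res.x / (np.linalg.norm(res.x) + 1e-12)
    # Step 2: hinge-loss minimization over the scale of that direction.
    scale = minimize_scalar(lambda b: hinge_risk(b * direction, X, y),
                            bounds=(0.0, 1e3), method="bounded").x
    return scale * direction

# Usage: labels in {-1, +1}; predict with the homogeneous halfspace sign(<w, x>).
# w = two_step_halfspace(X_train, y_train); y_pred = np.sign(X_test @ w)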
