Paper Title

Unsupervisedly Learned Representations: Should the Quest be Over?

Paper Authors

Nissani, Daniel N.

Paper Abstract

After four decades of research there still exists a Classification accuracy gap of about 20% between our best Unsupervisedly Learned Representations methods and the accuracy rates achieved by intelligent animals. It thus may well be that we are looking in the wrong direction. A possible solution to this puzzle is presented. We demonstrate that Reinforcement Learning can learn representations which achieve the same accuracy as that of animals. Our main modest contribution lies in the observations that: a. when applied to a real world environment Reinforcement Learning does not require labels, and thus may be legitimately considered Unsupervised Learning, and b. in contrast, when Reinforcement Learning is applied in a simulated environment it does inherently require labels and should thus generally be considered Supervised Learning. The corollary of these observations is that further search for Unsupervised Learning competitive paradigms which may be trained in simulated environments may be futile.
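The abstract's central distinction can be made concrete with a short sketch. The toy environment below is our own hypothetical illustration (the class `SimulatedClassificationEnv` and all names are assumptions, not from the paper): in a simulated setting the reward function itself must consult ground-truth labels, which is exactly why the authors argue that such training is effectively supervised.

```python
import random

class SimulatedClassificationEnv:
    """Toy RL environment built on top of a labeled dataset."""

    def __init__(self, dataset):
        self.dataset = dataset   # list of (observation, label) pairs
        self._label = None       # hidden ground truth for the current episode

    def reset(self):
        # Sampling an episode requires drawing from *labeled* data.
        observation, self._label = random.choice(self.dataset)
        return observation

    def step(self, action):
        # The reward signal is computed directly from the stored label:
        # without labels this simulator could not be constructed at all.
        reward = 1.0 if action == self._label else 0.0
        done = True              # one-step classification episode
        return reward, done

# Usage with a trivial two-class dataset:
env = SimulatedClassificationEnv([((0.1, 0.9), 1), ((0.8, 0.2), 0)])
obs = env.reset()
reward, done = env.step(action=1)  # agent guesses class 1
```

In a real-world environment, by contrast, the analogous reward (food, pain, social feedback) is produced by the world itself, so the learner never consumes a labeled dataset; this is what makes observation a. legitimately unsupervised in the paper's terminology.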
