Paper Title


Resource-Constrained On-Device Learning by Dynamic Averaging

Paper Authors

Lukas Heppe, Michael Kamp, Linara Adilova, Danny Heinrich, Nico Piatkowski, Katharina Morik

Paper Abstract


The communication between data-generating devices is responsible for a growing portion of the world's power consumption. Thus, reducing communication is vital from both an economic and an ecological perspective. For machine learning, on-device learning avoids sending raw data, which can reduce communication substantially. Furthermore, not centralizing the data protects privacy-sensitive data. However, most learning algorithms require hardware with high computation power and thus high energy consumption. In contrast, ultra-low-power processors, like FPGAs or micro-controllers, allow for energy-efficient learning of local models. Combined with communication-efficient distributed learning strategies, this reduces the overall energy consumption and enables applications that were previously impossible due to limited energy on local devices. The major challenge, then, is that low-power processors typically have only integer processing capabilities. This paper investigates an approach to communication-efficient on-device learning of integer exponential families that can be executed on low-power processors, is privacy-preserving, and effectively minimizes communication. The empirical evaluation shows that the approach can reach a model quality comparable to a centrally learned regular model with an order of magnitude less communication. Comparing the overall energy consumption, this reduces the required energy for solving the machine learning task by a significant amount.
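The abstract's "communication-efficient distributed learning strategy" refers to dynamic averaging: devices train locally and communicate only when their local models have drifted far from the last synchronized reference model. The sketch below is a minimal illustration of that idea, not the paper's actual algorithm; the function name, the squared-distance divergence measure, and the single shared threshold are all assumptions for illustration.

```python
import numpy as np

def dynamic_averaging_round(models, reference, threshold):
    """One round of a dynamic-averaging sketch (illustrative, not the
    paper's exact protocol).

    `models` is a list of local parameter vectors, `reference` is the
    model from the last synchronization. If the mean squared divergence
    of local models from the reference exceeds `threshold`, all devices
    communicate, average their models, and adopt the average as the new
    reference; otherwise no communication happens this round.
    Returns (models, reference, communicated).
    """
    divergence = np.mean([np.sum((m - reference) ** 2) for m in models])
    if divergence > threshold:
        avg = np.mean(models, axis=0)
        return [avg.copy() for _ in models], avg, True
    return models, reference, False

# Illustrative usage: two devices whose local updates have drifted apart.
models = [np.array([0.0, 0.0]), np.array([4.0, 4.0])]
reference = np.array([0.0, 0.0])
models, reference, communicated = dynamic_averaging_round(
    models, reference, threshold=1.0)
```

The communication savings come from the threshold check: as long as local models stay close to the reference, rounds cost nothing on the network, which is what allows an order-of-magnitude reduction in communication compared to averaging every round.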
