Paper Title

Flexible Parallel Learning in Edge Scenarios: Communication, Computational and Energy Cost

Paper Authors

Francesco Malandrino, Carla Fabiana Chiasserini

Paper Abstract

Traditionally, distributed machine learning takes the guise of (i) different nodes training the same model (as in federated learning), or (ii) one model being split among multiple nodes (as in distributed stochastic gradient descent). In this work, we highlight how fog- and IoT-based scenarios often require combining both approaches, and we present a framework for flexible parallel learning (FPL), achieving both data and model parallelism. Further, we investigate how different ways of distributing and parallelizing learning tasks across the participating nodes result in different computation, communication, and energy costs. Our experiments, carried out using state-of-the-art deep-network architectures and large-scale datasets, confirm that FPL allows for an excellent trade-off among computational (hence energy) cost, communication overhead, and learning performance.
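
The combination the abstract describes, data parallelism across replicas together with model parallelism within each replica, can be illustrated with a minimal numerical sketch. The toy example below is an assumption for illustration only, not the paper's FPL framework: it splits a linear model's parameters feature-wise across two hypothetical model-parallel nodes, and averages gradients across two data-parallel shards.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task standing in for a learning workload.
n_samples, n_features = 256, 8
X = rng.normal(size=(n_samples, n_features))
w_true = rng.normal(size=(n_features, 1))
y = X @ w_true + 0.01 * rng.normal(size=(n_samples, 1))

# Data parallelism: each replica trains on its own data shard,
# as in federated-style training.
X_shards = np.array_split(X, 2)
y_shards = np.array_split(y, 2)

# Model parallelism: within each replica, the parameter vector is
# split feature-wise across two hypothetical edge nodes.
split = n_features // 2
w = np.zeros((n_features, 1))
lr = 0.1

for step in range(200):
    replica_grads = []
    for Xs, ys in zip(X_shards, y_shards):
        # Each model-parallel node computes a partial forward pass on
        # its parameter slice; the partial outputs are summed (one
        # intra-replica communication step).
        out = Xs[:, :split] @ w[:split] + Xs[:, split:] @ w[split:]
        err = out - ys
        # Each node computes the gradient of the squared-error loss
        # for its own parameter slice only.
        g = np.vstack([Xs[:, :split].T @ err,
                       Xs[:, split:].T @ err]) / len(Xs)
        replica_grads.append(g)
    # Data-parallel aggregation: average gradients across replicas
    # (a second, inter-replica communication step).
    w -= lr * np.mean(replica_grads, axis=0)

print("final MSE:", float(np.mean((X @ w - y) ** 2)))
```

Even in this toy setting, the two aggregation points (summing partial activations within a replica, averaging gradients across replicas) are where the communication, and hence energy, costs studied in the paper arise.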
