Paper Title

Freely scalable and reconfigurable optical hardware for deep learning

Paper Authors

Liane Bernstein, Alexander Sludds, Ryan Hamerly, Vivienne Sze, Joel Emer, Dirk Englund

Paper Abstract

As deep neural network (DNN) models grow ever-larger, they can achieve higher accuracy and solve more complex problems. This trend has been enabled by an increase in available compute power; however, efforts to continue to scale electronic processors are impeded by the costs of communication, thermal management, power delivery and clocking. To improve scalability, we propose a digital optical neural network (DONN) with intralayer optical interconnects and reconfigurable input values. The near path-length-independence of optical energy consumption enables information locality between a transmitter and arbitrarily arranged receivers, which allows greater flexibility in architecture design to circumvent scaling limitations. In a proof-of-concept experiment, we demonstrate optical multicast in the classification of 500 MNIST images with a 3-layer, fully-connected network. We also analyze the energy consumption of the DONN and find that optical data transfer is beneficial over electronics when the spacing of computational units is on the order of >10 micrometers.
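For context, the classification workload in the proof-of-concept is a small, fully-connected network applied to 500 MNIST images. The sketch below is a plain NumPy stand-in for that workload, not the authors' implementation: the hidden-layer widths, the ReLU activations, and the random weights are illustrative assumptions, and the random batch merely has the shape of 500 flattened 28x28 images. In the DONN, each matrix-vector product would be carried out by optically fanning out activations to electronic multiply-accumulate units; here it is ordinary matrix math.

```python
import numpy as np

# Layer widths are illustrative assumptions; the paper specifies a 3-layer,
# fully-connected MNIST classifier but the exact hidden sizes are not given here.
LAYER_SIZES = [784, 100, 100, 10]  # input -> hidden -> hidden -> output

rng = np.random.default_rng(0)
weights = [rng.normal(0.0, 0.1, size=(m, n))
           for m, n in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:])]
biases = [np.zeros(n) for n in LAYER_SIZES[1:]]

def forward(x):
    """Forward pass of the 3-layer fully-connected network.

    Each `a @ w` below is the matrix-vector (here, batched matrix-matrix)
    product that the DONN distributes optically; ReLU on the hidden layers
    is an assumed activation, not taken from the paper.
    """
    a = x
    for i, (w, b) in enumerate(zip(weights, biases)):
        z = a @ w + b
        a = np.maximum(z, 0.0) if i < len(weights) - 1 else z
    return a  # logits over the 10 MNIST classes

# Example: classify a batch of 500 flattened images (random stand-ins here).
batch = rng.random((500, 784))
predictions = forward(batch).argmax(axis=1)
print(predictions.shape)  # (500,)
```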
