Paper Title


DarkneTZ: Towards Model Privacy at the Edge using Trusted Execution Environments

Authors

Fan Mo, Ali Shahin Shamsabadi, Kleomenis Katevas, Soteris Demetriou, Ilias Leontiadis, Andrea Cavallaro, Hamed Haddadi

Abstract


We present DarkneTZ, a framework that uses an edge device's Trusted Execution Environment (TEE) in conjunction with model partitioning to limit the attack surface against Deep Neural Networks (DNNs). Increasingly, edge devices (smartphones and consumer IoT devices) are equipped with pre-trained DNNs for a variety of applications. This trend comes with privacy risks as models can leak information about their training data through effective membership inference attacks (MIAs). We evaluate the performance of DarkneTZ, including CPU execution time, memory usage, and accurate power consumption, using two small and six large image classification models. Due to the limited memory of the edge device's TEE, we partition model layers into more sensitive layers (to be executed inside the device TEE), and a set of layers to be executed in the untrusted part of the operating system. Our results show that even if a single layer is hidden, we can provide reliable model privacy and defend against state of the art MIAs, with only 3% performance overhead. When fully utilizing the TEE, DarkneTZ provides model protections with up to 10% overhead.
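The partitioning scheme described above can be sketched in a few lines. The following is a minimal, self-contained Python illustration, not the actual DarkneTZ implementation (which executes Darknet layers inside an Arm TrustZone TEE via OP-TEE); the `SecureWorld` class, `partition_forward` function, and the toy scalar "layers" are hypothetical stand-ins used only to show how the sensitive tail of the network is isolated so that intermediate activations never leave the trusted side.

```python
# Minimal sketch of TEE-based model partitioning (illustration only; the
# real DarkneTZ runs Darknet layers inside Arm TrustZone via OP-TEE).
# SecureWorld and partition_forward are hypothetical names, not DarkneTZ APIs.

from typing import Callable, List

Layer = Callable[[float], float]  # toy "layer": a scalar function


class SecureWorld:
    """Stands in for the TEE: runs the sensitive layers and exposes only
    the final output, never the intermediate activations."""

    def __init__(self, layers: List[Layer]):
        self._layers = layers  # held privately by the "secure world"

    def invoke(self, x: float) -> float:
        for layer in self._layers:
            x = layer(x)
        return x  # only the final result crosses the world boundary


def partition_forward(layers: List[Layer], split: int, x: float) -> float:
    """Run layers [0, split) in the untrusted OS ("normal world") and
    layers [split, n) inside the simulated TEE."""
    tee = SecureWorld(layers[split:])
    for layer in layers[:split]:  # untrusted part of the network
        x = layer(x)
    return tee.invoke(x)          # sensitive part, hidden in the TEE


# toy three-layer model
model = [lambda x: x * 2, lambda x: x + 1, lambda x: x * x]

# hide only the last layer in the TEE (cf. the paper's single-layer result)
print(partition_forward(model, split=2, x=3.0))  # ((3*2)+1)^2 = 49.0
```

Because the TEE's memory is limited, the `split` index controls the privacy/overhead trade-off the paper measures: `split = len(model) - 1` protects only the last (most sensitive) layer, while `split = 0` places the whole model inside the TEE.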
