Exploiting Adversarial Examples to Drain Computational Resources on Mobile Deep Learning Systems

2020 IEEE/ACM Symposium on Edge Computing (SEC)

Abstract
To enable deep learning tasks everywhere, many optimizations have been proposed to address the resource limitations of mobile and IoT systems. A key approach is to dynamically adjust the computational resources of deep learning inference according to the characteristics of incoming inputs. For example, one popular optimization picks, for each input, a suitable combination of computations matched to its inference difficulty. However, we find that such "dynamic routing" of computations can be exploited to drain or waste precious resources on mobile deep learning systems. In this work, we introduce a new deep learning attack dimension, computational resource draining, and demonstrate its feasibility through one possible attack vector: adversarial examples of the input data. We describe how to construct special adversarial examples aimed at resource draining, and show on two experimental datasets that these poisoned inputs deliberately increase computation load. We hope that our findings shed light on the path toward improving the robustness of mobile deep learning optimizations.
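The abstract's core idea is that dynamic-routing (e.g., early-exit) networks decide per input whether to stop at a cheap classifier or fall through to deeper, costlier layers, typically via a confidence threshold. Below is a minimal sketch of how such a drain attack could work, assuming a toy PyTorch early-exit model and an entropy-maximizing, PGD-style perturbation; the names (EarlyExitNet, drain_attack) and hyperparameters are hypothetical illustrations, not the paper's actual construction:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EarlyExitNet(nn.Module):
    """Toy dynamic-routing model: serve the cheap early exit when it is
    confident, otherwise fall through to the deeper, costlier branch."""

    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.exit1 = nn.Linear(16 * 32 * 32, num_classes)  # cheap early exit
        self.deep = nn.Sequential(  # expensive tail the attacker wants to force
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.exit2 = nn.Linear(32 * 32 * 32, num_classes)
        self.threshold = threshold

    def early_logits(self, x):
        return self.exit1(torch.flatten(self.stem(x), 1))

    def forward(self, x):  # routing sketched for batch size 1
        logits1 = self.early_logits(x)
        if F.softmax(logits1, dim=1).max() >= self.threshold:
            return logits1  # cheap path: inference stops here
        h = self.deep(self.stem(x))  # costly path: extra conv layers
        return self.exit2(torch.flatten(h, 1))


def drain_attack(model, x, eps=8 / 255, alpha=1 / 255, steps=20):
    """PGD-style perturbation that maximizes the entropy of the early-exit
    prediction, so the input misses the confidence threshold and is routed
    through the full network, wasting computation on every query."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        p = F.softmax(model.early_logits(x_adv), dim=1)
        entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=1).mean()
        entropy.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()  # ascend entropy
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # stay in L_inf ball
            x_adv = x_adv.clamp(0, 1)                  # keep valid pixel range
        x_adv = x_adv.detach()
    return x_adv


# Hypothetical usage: x_adv = drain_attack(EarlyExitNet(), torch.rand(1, 3, 32, 32))
```

Note that, unlike classic adversarial examples, the objective here is not misclassification: the perturbation only needs to keep the early classifier uncertain, so every poisoned input pays for the full network and drains the device's compute budget.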
Keywords
deep learning,adversarial example,dynamic routing,computational resource attack,result caching