Enabling Flexible Resource Allocation in Mobile Deep Learning Systems

IEEE Transactions on Parallel and Distributed Systems (2019)

Cited by 9
Abstract
Deep learning provides new opportunities for mobile applications to achieve higher performance than before. However, deep learning implementations on today's mobile devices demand expensive resource overheads, imposing a significant burden on battery life and the limited memory space. Existing methods either rely on cloud or edge infrastructure, which requires uploading user data and thus risks privacy leakage and large data transfers, or adopt compressed deep models, which degrades algorithm accuracy. This paper presents DeepShark, a platform that gives mobile devices the ability to flexibly allocate resources when using commercial off-the-shelf (COTS) deep learning systems. Compared to existing approaches, DeepShark seeks a balance between time and memory efficiency according to user requirements: it breaks a sophisticated deep model down into a stream of code blocks and incrementally executes those blocks on the system-on-chip (SoC). As a result, DeepShark requires significantly less memory space on the mobile device while achieving the model's default accuracy. In addition, all user data involved in model processing is handled locally, avoiding unnecessary data transfer and network latency. DeepShark has been implemented on two COTS deep learning systems, Caffe and TensorFlow. Experimental evaluations demonstrate its effectiveness in terms of memory space and energy cost.
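The core idea described in the abstract, executing a model as a stream of blocks so that only one block's parameters reside in memory at a time, can be illustrated with a minimal sketch. The paper does not publish its implementation, so the class names, the .npy weight-file layout, and the single dense-layer compute below are purely illustrative assumptions, not DeepShark's actual code.

```python
# Hypothetical sketch of block-stream incremental execution
# (illustrative only; not DeepShark's published implementation).
import numpy as np

class Block:
    """One executable slice of a deep model, loaded on demand."""

    def __init__(self, weight_path):
        self.weight_path = weight_path
        self.weights = None

    def load(self):
        # Bring only this block's parameters into memory.
        self.weights = np.load(self.weight_path)

    def run(self, x):
        # Placeholder compute: one dense layer with ReLU.
        return np.maximum(x @ self.weights, 0.0)

    def unload(self):
        # Release this block's parameters before the next block loads,
        # so peak memory stays near one block rather than the full model.
        self.weights = None

def run_incrementally(block_paths, x):
    """Feed the activation through each block in sequence."""
    for path in block_paths:
        block = Block(path)
        block.load()
        x = block.run(x)
        block.unload()
    return x
```

The trade-off this sketch exposes is the one the abstract names: loading blocks on demand lowers peak memory at the cost of extra load time, and the granularity of the block partition is where a user-specified time/memory balance would be applied.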
Keywords
Machine learning, Mobile handsets, Computational modeling, Tools, Memory management, Resource management, Performance evaluation