MJOA-MU: End-to-edge collaborative computation for DNN inference based on model uploading.

Huan Yang, Sheng Sun, Min Liu, Qiuping Zhang, Yuwei Wang

Comput. Networks (2023)

Abstract
As an emerging computing paradigm, edge computing can assist user equipments (UEs) in executing computation-intensive deep neural network (DNN) inference tasks, thereby satisfying stringent QoS requirements and relieving the computation burden on UEs. Due to the customizability of DNN models and the limited capacity of the edge server, it is more realistic to upload DNN models on demand during end-to-edge co-inference than to deploy all DNN models at the edge server in advance. Existing works adopt a serial model uploading manner, in which subsequent DNN layers are uploaded only after antecedent DNN layers finish execution, inevitably prolonging the DNN execution latency. To this end, we design a parallel-efficient model uploading mechanism that allows subsequent DNN layers to be uploaded while antecedent DNN layers are executing, thereby mitigating the performance drop caused by model uploading. On this basis, we propose a Multi-UE Joint Optimization Algorithm based on Model Uploading (MJOA-MU) to jointly optimize DNN partitioning and resource allocation for heterogeneous UEs. Specifically, MJOA-MU includes a Pruned Binary Tree based DNN Partitioning (PBT-DP) sub-algorithm, which efficiently makes near-optimal partitioning decisions for chain and non-chain models based on the long-term influence between DNN layers, and an Asynchronous Resource Allocation (ARA) sub-algorithm, which allocates computation and communication resources to UEs by quantifying the inner- and inter-association, so as to match individual demands and the resource budget. Extensive simulation results demonstrate that MJOA-MU outperforms state-of-the-art methods in terms of DNN execution latency, achieving a reduction of up to 64.5%.
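To make the core idea concrete, the sketch below contrasts the serial uploading manner with the paper's pipelined alternative for the edge-side layers of a chain DNN. This is a minimal illustrative latency model, not the authors' implementation: the per-layer upload and execution times, and the assumption of a fixed partition point, are hypothetical.

```python
"""
Minimal sketch (not the authors' implementation) comparing serial model
uploading with pipelined (parallel-efficient) model uploading for the
edge-side portion of a chain DNN. All per-layer timings are hypothetical
illustrative values in milliseconds.
"""

# (upload_time_ms, exec_time_ms) for each edge-side layer after the
# (assumed) partition point.
edge_layers = [(8.0, 3.0), (6.0, 2.0), (10.0, 4.0), (4.0, 1.5)]

def serial_latency(layers):
    """Serial manner: a layer is uploaded only after the previous layer
    has finished executing, so upload and execution never overlap."""
    t = 0.0
    for upload, execute in layers:
        t += upload + execute
    return t

def pipelined_latency(layers):
    """Pipelined manner: uploads proceed back-to-back while antecedent
    layers execute. A layer starts executing once it is fully uploaded
    AND its predecessor has finished executing."""
    upload_done = 0.0  # completion time of the current layer's upload
    exec_done = 0.0    # completion time of the previous layer's execution
    for upload, execute in layers:
        upload_done += upload
        start = max(upload_done, exec_done)
        exec_done = start + execute
    return exec_done

print(f"serial:    {serial_latency(edge_layers):.1f} ms")   # 38.5 ms
print(f"pipelined: {pipelined_latency(edge_layers):.1f} ms")  # 29.5 ms
```

Under these hypothetical timings the pipelined schedule hides most of the execution time inside the upload time, which is the performance gap the paper's uploading mechanism (and, on top of it, the PBT-DP and ARA sub-algorithms) is designed to exploit.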
Keywords
DNN inference, Model uploading, DNN partitioning, Resource allocation