Neural Networks Meet Physical Networks: Distributed Inference Between Edge Devices and the Cloud

HotNets 2018

Cited by 44 | Views 127
Abstract
We believe that most future video uploaded over the network will be consumed by machines for sensing tasks, such as automated surveillance and mapping, rather than by humans. Today's systems typically collect raw data from distributed sensors, such as drones, with the computer vision logic implemented in the cloud using deep neural networks (DNNs). They compress the video with standard encoders, send it over the network, and decompress it at the cloud before running the vision DNN. In other words, data encoding and distribution are decoupled from the sensing goal. This is bandwidth inefficient because video encoding schemes, such as MPEG4, may send data tailored for human perception but irrelevant to the overall sensing goal. We argue that data collection and distribution mechanisms should be co-designed with the eventual sensing objective. Specifically, we propose a distributed DNN architecture that learns end-to-end how to represent the raw sensor data and send it over the network such that it meets the eventual sensing task's needs. Such a design naturally adapts to varying network bandwidths between the sensors and the cloud, and automatically sends task-appropriate data features.
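The split-DNN idea in the abstract can be illustrated with a minimal sketch: an edge-side encoder produces a compact feature vector whose length is truncated to fit the available bandwidth, and a cloud-side task head consumes whatever features arrive. All names (`edge_encode`, `cloud_task_head`), the feature dimension, and the random-projection "weights" are hypothetical stand-ins for the learned components described in the paper, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
FEATURE_DIM = 32  # assumed size of the learned edge representation

def edge_encode(frame, bandwidth_budget):
    """Hypothetical edge encoder: project the raw frame to a feature
    vector, then keep only the first `bandwidth_budget` entries.
    Features are assumed to be ordered by task importance, so
    truncation trades task accuracy for transmitted bytes."""
    W = rng.standard_normal((frame.size, FEATURE_DIM))  # stand-in for learned weights
    features = frame.reshape(-1) @ W
    k = min(bandwidth_budget, features.size)
    return features[:k]

def cloud_task_head(features, num_classes=4):
    """Hypothetical cloud-side head: zero-pad any features dropped for
    bandwidth, then run a linear classifier for the sensing task."""
    padded = np.zeros(FEATURE_DIM)
    padded[:features.size] = features
    W = rng.standard_normal((FEATURE_DIM, num_classes))  # stand-in for learned weights
    return int(np.argmax(padded @ W))

# Usage: a low-bandwidth link carries only 8 of 32 features.
frame = rng.standard_normal((8, 8))
label = cloud_task_head(edge_encode(frame, bandwidth_budget=8))
print(label)
```

In the actual system both sides would be trained jointly end-to-end, so the encoder learns which features the task head needs most; the fixed random projections here only mimic the interface, not the learning.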