Flow-Based Visual Stream Compression for Event Cameras
arXiv (2024)
Abstract
As the use of neuromorphic, event-based vision sensors expands, the need for
compression of their output streams has increased. While their operational
principle ensures event streams are spatially sparse, the high temporal
resolution of the sensors can result in high data rates from the sensor
depending on scene dynamics. For systems operating in
communication-bandwidth-constrained and power-constrained environments, it is
essential to compress these streams before transmitting them to a remote
receiver. Therefore, we introduce a flow-based method for the real-time
asynchronous compression of event streams as they are generated. This method
leverages real-time optical flow estimates to predict future events without
needing to transmit them, thereby drastically reducing the amount of data
transmitted. The flow-based compression introduced is evaluated using a variety
of methods including spatiotemporal distance between event streams. The
introduced method itself is shown to achieve an average compression ratio of
2.81 on a variety of event-camera datasets with the evaluation configuration
used. That compression is achieved with a median temporal error of 0.48 ms and
an average spatiotemporal event-stream distance of 3.07. When combined with
LZMA compression for non-real-time applications, our method can achieve
state-of-the-art average compression ratios ranging from 10.45 to 17.24.
Additionally, we demonstrate that the proposed prediction algorithm is capable
of performing real-time, low-latency event prediction.
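The core idea the abstract describes, predicting future events from optical flow so that correctly predicted events need not be transmitted, can be sketched as follows. This is an illustrative sketch only: the array layout, function names, and tolerance parameters are assumptions, not the paper's actual algorithm or API.

```python
import numpy as np

def predict_events(events, flow, dt):
    """Advect events along a per-pixel optical flow field to predict
    where they will reappear after dt seconds.

    events: (N, 4) array of (x, y, t, polarity)  -- assumed layout
    flow:   (H, W, 2) array of (vx, vy) in px/s  -- assumed layout
    """
    pred = events.copy()
    xs = events[:, 0].astype(int)
    ys = events[:, 1].astype(int)
    pred[:, 0] = events[:, 0] + flow[ys, xs, 0] * dt  # x + vx * dt
    pred[:, 1] = events[:, 1] + flow[ys, xs, 1] * dt  # y + vy * dt
    pred[:, 2] = events[:, 2] + dt                    # timestamps advance by dt
    return pred

def filter_redundant(new_events, predicted, tol_px=1.0, tol_t=0.5e-3):
    """Keep only events the flow-based prediction failed to anticipate;
    these are the events that must actually be transmitted."""
    keep = []
    for e in new_events:
        d_sp = np.hypot(predicted[:, 0] - e[0], predicted[:, 1] - e[1])
        d_t = np.abs(predicted[:, 2] - e[2])
        # An event matched by a prediction within the spatial and
        # temporal tolerances carries no new information.
        if not np.any((d_sp <= tol_px) & (d_t <= tol_t)):
            keep.append(e)
    return np.array(keep) if keep else np.empty((0, new_events.shape[1]))
```

For example, with a constant flow of 1000 px/s in x and dt = 1 ms, an event at (10, 10) is predicted to recur at (11, 10); if the sensor then emits exactly that event, it is filtered out, while an unpredicted event elsewhere is kept for transmission.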