Pixel-to-Model Distance for Robust Background Reconstruction

IEEE Trans. Circuits Syst. Video Techn. (2016)

Cited by 27
Abstract
Background information is crucial for many video surveillance applications such as object detection and scene understanding. In this paper, we present a novel Pixel-to-Model (P2M) paradigm for background modeling and restoration in surveillance scenes. In particular, the proposed approach models the background with a set of context features for each pixel, which are compressively sensed from local patches. We determine whether a pixel belongs to the background according to the minimum P2M distance, which measures the similarity between the pixel and its background model in the space of compressive local descriptors. The pixel feature descriptors of the background model are updated with respect to the minimum P2M distance. Meanwhile, the neighboring background models are renewed according to the maximum P2M distance in order to handle ghost holes. The P2M distance thus serves as a measure of background reliability in the 3D spatio-temporal domain of surveillance videos, leading to a robust background model and recovered background videos. We applied the proposed P2M distance to foreground detection and background restoration on synthetic and real-world surveillance videos. Experimental results show that the proposed P2M approach outperforms state-of-the-art approaches in both indoor and outdoor surveillance scenes.
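The classification rule described in the abstract can be illustrated with a minimal sketch: each pixel keeps a set of stored descriptor samples as its background model, and the pixel is labeled background when its minimum Pixel-to-Model distance falls below a threshold. All names, the L2 metric, the random projection, and the threshold value below are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def compressive_descriptor(patch, proj):
    """Project a local image patch onto a short feature vector via a random
    projection matrix (a stand-in for the paper's compressive local descriptor).
    `proj` has shape (m, patch_size) with m << patch_size."""
    return proj @ patch.ravel().astype(np.float64)

def min_p2m_distance(descriptor, model_samples):
    """Minimum P2M distance: the smallest L2 distance between the pixel's
    descriptor and any sample stored in its background model.
    `model_samples` has shape (K, m), one row per stored sample."""
    return float(np.min(np.linalg.norm(model_samples - descriptor, axis=1)))

def classify_pixel(descriptor, model_samples, tau=10.0):
    """Label a pixel background if its minimum P2M distance is below tau.
    tau is an assumed threshold, not a value from the paper."""
    if min_p2m_distance(descriptor, model_samples) < tau:
        return "background"
    return "foreground"
```

Under this sketch, a pixel whose descriptor matches one of its stored model samples has a minimum P2M distance of zero and is labeled background, while a descriptor far from all stored samples is labeled foreground; the paper's update and neighbor-renewal steps would then modify `model_samples` based on the minimum and maximum P2M distances.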
Keywords
Pixel-to-model distance, background modeling, background restoration, local context descriptor, video surveillance