Spatiotemporal Edges for Arbitrarily Moving Video Classification in Protected and Sensitive Scenes

Artificial Intelligence and Applications (2023)

Abstract
Classification of arbitrarily moving objects, including vehicles and human beings, in real environments such as protected and sensitive areas is challenging due to the arbitrary deformations and directions caused by shaky cameras and wind. This work adopts a spatio-temporal approach for classifying arbitrarily moving objects. The intuition behind the approach is that the behavior of objects moved arbitrarily by wind or a shaky camera is inconsistent and unstable, whereas the behavior of static objects is consistent and stable. The proposed method segments foreground objects from the background using the frame difference between the median frame and each individual frame, which yields foreground information for every frame. The method then finds static and dynamic edges by subtracting the Canny edges of the foreground information from the Canny edges of the respective input frames. The ratio of the number of static edges to the number of dynamic edges in each frame is used as a feature. The features are normalized to avoid the problems of imbalanced feature scales and irrelevant features. For classification, the work uses 10-fold cross-validation to split the data into training and testing samples, and a random forest classifier performs the final classification of frames into those containing static objects and those containing arbitrarily moving objects. To evaluate the proposed method, we construct our own dataset, which contains videos of static objects and of objects moving arbitrarily due to shaky cameras and wind. The results on this video dataset show that the proposed method achieves state-of-the-art performance (a 76% classification rate), which is 14% better than the best existing method.
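As an illustration of the pipeline summarized above, the following Python sketch computes a per-frame static-to-dynamic edge ratio and feeds it to a random forest with 10-fold cross-validation, using OpenCV and scikit-learn. The thresholds, the foreground-masking rule, and the exact way static and dynamic edges are separated are assumptions made for this sketch, not the authors' precise procedure.

```python
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler


def edge_ratio_features(frames, canny_lo=50, canny_hi=150, diff_thresh=25):
    """Compute one static/dynamic edge-ratio feature per frame.

    frames: list of grayscale frames (H x W, uint8) from one video.
    Threshold values are illustrative, not taken from the paper.
    """
    median_frame = np.median(np.stack(frames), axis=0).astype(np.uint8)
    features = []
    for frame in frames:
        # Foreground via frame differencing against the median frame.
        fg_mask = cv2.absdiff(frame, median_frame) > diff_thresh
        foreground = np.where(fg_mask, frame, 0).astype(np.uint8)

        # Canny edges of the full frame and of the foreground region.
        frame_edges = cv2.Canny(frame, canny_lo, canny_hi) > 0
        fg_edges = cv2.Canny(foreground, canny_lo, canny_hi) > 0

        # Static edges: frame edges with the foreground edges subtracted;
        # dynamic edges: edges contributed by the moving foreground.
        static_edges = np.logical_and(frame_edges, np.logical_not(fg_edges))
        n_static = static_edges.sum()
        n_dynamic = max(fg_edges.sum(), 1)  # avoid division by zero
        features.append(n_static / n_dynamic)
    return np.asarray(features, dtype=np.float64).reshape(-1, 1)


# Hypothetical usage: X holds per-frame edge-ratio features, y holds labels
# (0 = static object, 1 = arbitrarily moving object).
# X = MinMaxScaler().fit_transform(X)          # feature normalization
# clf = RandomForestClassifier(n_estimators=100, random_state=0)
# scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
# print("mean accuracy:", scores.mean())
```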
Keywords
moving video classification, sensitive scenes, protected