Long-Short Graph Memory Network for Skeleton-based Action Recognition

2020 IEEE Winter Conference on Applications of Computer Vision (WACV)

Abstract
Current studies have shown the effectiveness of the long short-term memory network (LSTM) at capturing the temporal and spatial features of skeleton sequences for skeleton-based human action recognition. Nevertheless, it remains challenging for LSTM to extract the latent structural dependencies among nodes. In this paper, we introduce a new long-short graph memory network (LSGM) to improve the capability of LSTM to model the skeleton sequence, a type of graph data. Our proposed LSGM can learn high-level temporal-spatial features end-to-end, enabling LSTM to extract the spatial information that is neglected by conventional models but intrinsic to skeleton graph data. To improve the discriminative ability of the temporal and spatial module, we use a calibration module termed graph temporal-spatial calibration (GTSC) to calibrate the learned temporal-spatial features. By integrating the two modules into the same framework, we obtain stronger generalization in processing dynamic graph data and achieve a significant performance improvement on the NTU and SYSU datasets. Experimental results have validated the effectiveness of our proposed LSGM+GTSC model in extracting temporal and spatial information from dynamic graph data.
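To make the idea concrete, the following is a minimal, hypothetical sketch of a graph-aware LSTM cell: joint features are first aggregated over the skeleton graph via a normalized adjacency matrix, then passed through standard LSTM gating. This illustrates the general principle of injecting spatial structure into LSTM; it is not the paper's actual LSGM architecture or calibration module, and all names and shapes here are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GraphLSTMCell:
    """Illustrative graph-aware LSTM cell (NOT the paper's exact LSGM).

    At each timestep, joint features are mixed with their skeleton
    neighbors via a normalized adjacency matrix, then fed through
    standard LSTM gating. Input x has shape (num_joints, in_dim).
    """

    def __init__(self, adj, in_dim, hid_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Symmetrically normalized adjacency with self-loops:
        # D^{-1/2} (A + I) D^{-1/2}, as in common GCN formulations.
        a = adj + np.eye(adj.shape[0])
        d = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
        self.a_norm = d @ a @ d
        s = 1.0 / np.sqrt(hid_dim)
        # One weight matrix per gate: input, forget, output, candidate.
        self.w = rng.uniform(-s, s, (4, in_dim, hid_dim))
        self.u = rng.uniform(-s, s, (4, hid_dim, hid_dim))
        self.b = np.zeros((4, hid_dim))
        self.hid_dim = hid_dim

    def step(self, x, h, c):
        xg = self.a_norm @ x  # spatial aggregation over the skeleton graph
        i = sigmoid(xg @ self.w[0] + h @ self.u[0] + self.b[0])
        f = sigmoid(xg @ self.w[1] + h @ self.u[1] + self.b[1])
        o = sigmoid(xg @ self.w[2] + h @ self.u[2] + self.b[2])
        g = np.tanh(xg @ self.w[3] + h @ self.u[3] + self.b[3])
        c_new = f * c + i * g
        h_new = o * np.tanh(c_new)
        return h_new, c_new

    def run(self, seq):
        """seq: (T, num_joints, in_dim) skeleton sequence -> final hidden state."""
        n = self.a_norm.shape[0]
        h = np.zeros((n, self.hid_dim))
        c = np.zeros((n, self.hid_dim))
        for x in seq:
            h, c = self.step(x, h, c)
        return h
```

The spatial aggregation step (`a_norm @ x`) is the key difference from a vanilla LSTM: each joint's gate inputs depend on its graph neighbors, so the recurrence can capture structural dependencies that a per-joint or flattened-vector LSTM would miss.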
Keywords
long-short graph memory network,long short-term memory network,LSTM,skeleton-based human action recognition,skeleton sequence,spatial information,skeleton graph data,graph temporal-spatial calibration,dynamic graph data,high-level temporal-spatial features,LSGM+GTSC model