YogaTube: A Video Benchmark for Yoga Action Recognition

IEEE International Joint Conference on Neural Networks (IJCNN), 2022

Abstract
Yoga can be seen as a set of fitness exercises involving various body postures. Most available pose and action recognition datasets comprise easy-to-moderate body pose orientations and offer little challenge to learning algorithms in terms of pose complexity. To examine action recognition from a different perspective, we introduce YogaTube, a new large-scale video benchmark dataset for yoga action recognition. YogaTube aims to cover a wide range of complex yoga postures, consisting of 5484 videos belonging to a taxonomy of 82 classes of yoga asanas. In addition, a three-stream architecture has been designed for yoga asana pose recognition, comprising two modules: feature extraction and classification. Feature extraction consists of three parallel components. First, pose is estimated using the part affinity fields model to extract meaningful cues from the practitioner. Second, optical flow is used to extract temporal features. Third, raw RGB videos are used to extract spatiotemporal features. Finally, in the classification module, the pose, optical flow, and RGB streams are fused to obtain the final yoga asana predictions. To the best of our knowledge, this is the first attempt to establish a video benchmark dataset for yoga recognition. The code and dataset will be released soon.
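The fusion step described above can be sketched as a simple late-fusion scheme: each stream produces class scores over the 82 asana classes, and the per-stream probabilities are combined by a weighted average. The abstract does not specify the fusion rule or weights, so the equal weights and score-averaging below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fuse_streams(pose_logits, flow_logits, rgb_logits, weights=(1.0, 1.0, 1.0)):
    """Late fusion: weighted average of per-stream class probabilities.

    The equal default weights are an assumption for illustration only.
    """
    streams = (pose_logits, flow_logits, rgb_logits)
    probs = [w * softmax(l) for w, l in zip(weights, streams)]
    fused = sum(probs) / sum(weights)
    return fused.argmax(axis=-1), fused

# Hypothetical per-stream logits for the 82 yoga asana classes (one clip).
rng = np.random.default_rng(0)
pose, flow, rgb = (rng.normal(size=82) for _ in range(3))
pred, fused = fuse_streams(pose, flow, rgb)
```

Score-level averaging keeps the streams independent until the last step; a learned fusion layer over concatenated features is an equally plausible reading of the abstract.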
Keywords
Action recognition, Yoga, Multi-stream fusion, Deep learning