Long range gait matching using 3D body fitting with gait-specific motion constraints

2023 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW), 2023

Abstract
Drawing upon the many works on estimating 3D human body shape and motion in images and video, some have recently proposed using 3D human models for gait recognition to overcome viewpoint variation. However, the fit quality, particularly in the motion aspects, remains a problem: while the overall 3D shape looks good, the limb configurations over the video capture actual walking in only some cases. To address this problem, we build on the recent trend of fitting a 3D deformable body model - the SMPL model - to gait videos using deep neural networks to obtain disentangled shape and pose representations for each frame. This work is the first to use adversarial training for gait recognition, which helps us enforce motion consistency in the network output. To this end, a subset of walking activity instances from the AMASS mocap dataset serves as the natural motion distribution. We benchmark our solution against the state of the art on the well-known USF HumanID and CASIA-B datasets in terms of variations in viewpoint, clothing, carrying condition, walking surface, and time. We are either the best or close to the best reported performance on these datasets. We demonstrate the quality of the 3D fitted models for gait recognition on the newly constructed IARPA BRIAR dataset of 375 IRB-consented subjects, with videos taken at 100m, 200m, 400m, and 500m. We are among the first to report gait recognition estimates at long range.
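To make the adversarial motion constraint concrete, the sketch below shows one way a sequence discriminator over SMPL pose parameters could be trained against AMASS walking clips, as the abstract describes at a high level. This is a minimal illustration, not the authors' implementation: the names (MotionDiscriminator, discriminator_loss, generator_motion_loss), the GRU architecture, and the LSGAN-style objective are assumptions made only for illustration.

```python
# Minimal sketch of an adversarial motion prior over SMPL pose sequences.
# Assumes poses are (batch, frames, 72) axis-angle vectors (24 joints x 3).
import torch
import torch.nn as nn


class MotionDiscriminator(nn.Module):
    """Scores a SMPL pose sequence as natural walking (AMASS) vs. regressed."""

    def __init__(self, pose_dim: int = 72, hidden: int = 256):
        super().__init__()
        self.gru = nn.GRU(pose_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # one real/fake logit per sequence

    def forward(self, pose_seq: torch.Tensor) -> torch.Tensor:
        # pose_seq: (batch, frames, pose_dim) -> final hidden state per sequence
        _, h = self.gru(pose_seq)        # h: (1, batch, hidden)
        return self.head(h.squeeze(0))   # (batch, 1)


def discriminator_loss(d, real_poses, fake_poses):
    """LSGAN-style loss: AMASS walking clips toward 1, regressed poses toward 0."""
    real_logits = d(real_poses)
    fake_logits = d(fake_poses.detach())
    return ((real_logits - 1) ** 2).mean() + (fake_logits ** 2).mean()


def generator_motion_loss(d, fake_poses):
    """Adversarial term added to the pose regressor's loss to favor natural walking."""
    return ((d(fake_poses) - 1) ** 2).mean()


if __name__ == "__main__":
    d = MotionDiscriminator()
    amass_walk = torch.randn(4, 30, 72)  # placeholder for AMASS walking clips
    regressed = torch.randn(4, 30, 72)   # placeholder for network-predicted poses
    print(discriminator_loss(d, amass_walk, regressed).item())
    print(generator_motion_loss(d, regressed).item())
```

In this reading, the pose regressor plays the role of the generator: its adversarial term penalizes per-frame pose sequences that the discriminator can tell apart from real AMASS walking motion, which is one way to enforce the motion consistency the abstract refers to.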