Estimating Egocentric 3D Human Pose in the Wild with Weak External Supervision **Supplementary Material**

Semantic Scholar (2022)

Abstract
In Table 2 of our main paper, we show that our method outperforms the state-of-the-art methods Mo2Cap2 and xR-egopose. To further compare performance across different types of motion, we report quantitative comparisons on the test dataset of Wang et al. [7] in Table 1 and on the Mo2Cap2 dataset [9] in Table 2. Our method outperforms all of the baselines on most motion types in these results. Note that our method is trained on the EgoPW dataset, whose fisheye camera has a different focal length and distortion from the fisheye camera used in Mo2Cap2, which affects the performance of our method on the Mo2Cap2 test dataset.
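
To illustrate why differing fisheye intrinsics matter for cross-dataset evaluation, the sketch below projects the same 3D joint under two hypothetical calibrations using a generic equidistant (Kannala-Brandt style) fisheye model. The focal lengths, principal points, and distortion coefficients are placeholders, not the actual EgoPW or Mo2Cap2 camera parameters; the point is only that the same pose maps to different pixel locations when the camera model changes, so a network trained under one calibration sees a shifted 2D distribution at test time.

```python
import numpy as np

def project_fisheye(points_3d, fx, fy, cx, cy, dist_coeffs):
    """Project camera-space 3D points with a generic equidistant fisheye model.

    All intrinsics here are hypothetical placeholders, not the calibrations
    of the EgoPW or Mo2Cap2 cameras.
    """
    X, Y, Z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    r = np.sqrt(X**2 + Y**2)
    theta = np.arctan2(r, Z)  # angle of the ray from the optical axis
    k1, k2, k3, k4 = dist_coeffs
    theta_d = theta * (1 + k1*theta**2 + k2*theta**4 + k3*theta**6 + k4*theta**8)
    # Scale factor mapping the normalized ray to distorted image coordinates;
    # guard against division by zero for points on the optical axis.
    scale = np.where(r > 1e-8, theta_d / np.maximum(r, 1e-8), 0.0)
    u = fx * scale * X + cx
    v = fy * scale * Y + cy
    return np.stack([u, v], axis=-1)

# The same 3D joint lands at different pixels under two different calibrations,
# illustrating the train/test camera mismatch discussed above.
joint = np.array([[0.3, -0.2, 0.8]])
uv_a = project_fisheye(joint, fx=300, fy=300, cx=512, cy=512,
                       dist_coeffs=(0.05, -0.01, 0.0, 0.0))
uv_b = project_fisheye(joint, fx=260, fy=260, cx=512, cy=512,
                       dist_coeffs=(0.10, -0.02, 0.0, 0.0))
print(uv_a, uv_b)
```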