VUNet: Dynamic Scene View Synthesis for Traversability Estimation Using an RGB Camera

IEEE Robotics and Automation Letters (2019)

Abstract
We present VUNet, a novel view (VU) synthesis method for mobile robots in dynamic environments, and its application to the estimation of future traversability. Our method predicts future images for given virtual robot velocity commands using only RGB images from previous and current time steps. The future images result from applying two types of image change to the previous and current images: first, changes caused by a different camera pose; second, changes due to the motion of dynamic obstacles. We learn to predict these two types of change disjointly using two novel network architectures, SNet and DNet. We combine SNet and DNet to synthesize future images, which we pass to our previously presented method GONet [N. Hirose, A. Sadeghian, M. Vazquez, P. Goebel, and S. Savarese, "GONet: A semi-supervised deep learning approach for traversability estimation," in Proc. IEEE International Conference on Intelligent Robots and Systems, 2018, pp. 3044-3051] to estimate the traversable areas around the robot. Our quantitative and qualitative evaluations indicate that our view synthesis approach predicts accurate future images in both static and dynamic environments. We also show that these virtual images can be used to correctly estimate future traversability. We apply our view-synthesis-based traversability estimation method to two applications for assisted teleoperation.
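The abstract describes combining a pose-change prediction (SNet) with a dynamic-obstacle prediction (DNet) to synthesize the future image. A minimal NumPy sketch of one plausible fusion scheme is shown below: per-pixel blending of the two predicted images with a soft mask that marks dynamic regions. The function name, the blend mask, and the toy images are illustrative assumptions; the paper's actual SNet/DNet combination is learned, not hand-coded.

```python
import numpy as np

def fuse_predictions(static_pred, dynamic_pred, blend_mask):
    """Blend two predicted future images per pixel.

    static_pred  : image predicted from camera-pose change alone (SNet-style)
    dynamic_pred : image predicted from dynamic-obstacle motion (DNet-style)
    blend_mask   : values in [0, 1]; ~1 where dynamic obstacles dominate

    This hand-written blend is a hypothetical stand-in for the
    learned fusion described in the paper.
    """
    return blend_mask * dynamic_pred + (1.0 - blend_mask) * static_pred

# Toy 4x4 single-channel "images" for illustration only.
static_pred = np.zeros((4, 4))          # static scene prediction
dynamic_pred = np.ones((4, 4))          # dynamic-object prediction
blend_mask = np.zeros((4, 4))
blend_mask[1:3, 1:3] = 1.0              # assume an obstacle in the center

future = fuse_predictions(static_pred, dynamic_pred, blend_mask)
# Center pixels come from the dynamic prediction, the rest from the static one.
```

In the actual system, such a mask would itself be an output of the networks, so the blend remains differentiable and trainable end to end.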
Keywords
Robot safety, computer vision for other robotic applications, collision avoidance