3D Grasp Pose Generation from 2D Anchors and Local Surface

VRCAI (2023)

Abstract
This work proposes a three-dimensional (3D) grasp pose generation method for robot manipulators that uses predicted two-dimensional (2D) anchors together with the depth information of the local grasp surface. Compared with traditional image-based grasp detection methods, in which the grasp pose is represented only by two contact points, the proposed method generates a more accurate 3D grasp pose. Unlike 6-DoF object pose regression methods, which consider the point cloud of the whole object, the proposed method is lightweight, because 3D computation is performed only on the depth information of the local grasp surface. The method consists of three steps: (1) detecting the 2D grasp anchor and extracting the local grasp surface from the image; (2) computing the average vector of the object's local grasp surface from its local point cloud; (3) generating the 3D grasp pose from the 2D anchor based on that average vector. Experiments are carried out on the Cornell and Jacquard grasp datasets, where the proposed method improves grasp accuracy over state-of-the-art 2D anchor methods. The method is also validated on practical grasp tasks deployed on a UR5 arm with a Robotiq F85 gripper, where it outperforms state-of-the-art 2D anchor methods in grasp success rate across dozens of trials.
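As a rough illustration of the three steps, the sketch below backprojects a local depth patch around a 2D anchor, estimates the average surface orientation of the resulting local point cloud, and composes a 3D grasp pose from the anchor and that orientation. It assumes the "average vector of the local grasp surface" can be read as a mean surface normal estimated via PCA, that the anchor is parameterized as (u, v, theta) in image coordinates, and that pinhole intrinsics are known; all function names and parameters are illustrative, not the authors' implementation.

```python
# Minimal sketch of the abstract's pipeline (assumptions noted above);
# not the authors' code or API.
import numpy as np


def backproject_patch(depth, u, v, half_size, fx, fy, cx, cy):
    """Lift a square depth patch around the 2D anchor (u, v) into 3D points."""
    us, vs = np.meshgrid(
        np.arange(u - half_size, u + half_size + 1),
        np.arange(v - half_size, v + half_size + 1),
    )
    z = depth[vs, us]
    valid = z > 0  # discard missing depth readings
    x = (us[valid] - cx) * z[valid] / fx
    y = (vs[valid] - cy) * z[valid] / fy
    return np.stack([x, y, z[valid]], axis=1)  # (N, 3) local point cloud


def local_surface_normal(points):
    """Average surface orientation via PCA: the right singular vector with the
    smallest singular value of the centered points approximates the normal."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    # Orient the normal toward the camera (negative z in the camera frame).
    return normal if normal[2] < 0 else -normal


def grasp_pose_from_anchor(depth, u, v, theta, fx, fy, cx, cy, half_size=15):
    """Compose a 3D grasp pose (rotation, translation) from the 2D anchor and
    the local surface normal; the gripper approaches along the negated normal."""
    points = backproject_patch(depth, u, v, half_size, fx, fy, cx, cy)
    normal = local_surface_normal(points)

    approach = -normal                               # gripper z-axis
    in_plane = np.array([np.cos(theta), np.sin(theta), 0.0])
    closing = in_plane - in_plane.dot(approach) * approach
    closing /= np.linalg.norm(closing)               # gripper x-axis: finger closing
    side = np.cross(approach, closing)               # gripper y-axis completes the frame

    rotation = np.stack([closing, side, approach], axis=1)  # 3x3, columns = axes
    translation = points.mean(axis=0)                # grasp center on the local surface
    return rotation, translation
```

Projecting the 2D grasp angle onto the plane orthogonal to the approach direction keeps the finger-closing axis consistent with the image-space anchor while respecting the local surface orientation.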