LieGrasPFormer: Point Transformer-Based 6-DOF Grasp Detection with Lie Algebra Grasp Representation

CASE 2023

Abstract
With the significant advances in 6-DOF grasp learning networks, grasp selection for unseen objects has garnered much attention. However, most existing approaches rely on complex sequential pipelines to generate candidate grasps, which can be challenging to implement. In this work, we propose an end-to-end grasp detection network that generates diverse and accurate 6-DOF grasp poses from raw point clouds alone. We build on hierarchical PointNet++ with a skip-connection point transformer encoder block to extract contextual local-region point features; we refer to this network as LieGrasPFormer. It efficiently generates a distribution of 6-DoF parallel-jaw grasps directly from a raw point cloud. Moreover, we introduce two grasp detection loss functions that give the network, acting as a grasp generator, the ability to generalize to unseen objects. These loss functions also keep the network continuously differentiable. We trained LieGrasPFormer on the synthesized grasp dataset ACRONYM, which contains 17 million parallel-jaw grasps, and found that it generalizes well to a real scanned YCB dataset of 77 objects. Finally, experiments in the PyBullet simulator show that the proposed grasp detection network outperforms most state-of-the-art approaches in grasp success rate.
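To make the Lie algebra grasp representation named in the title concrete, below is a minimal sketch of how a regressed se(3) 6-vector can be decoded into a 6-DOF parallel-jaw grasp pose via the exponential map. This is an illustrative assumption, not the paper's actual code: the split of the 6-vector into translational and rotational parts (`rho`, `phi`) and the function names are hypothetical, and LieGrasPFormer's exact parameterization may differ.

```python
# Sketch: decoding an se(3) twist xi = (rho, phi) into an SE(3) grasp pose.
# Assumes the network outputs one 6-vector per grasp candidate (an assumption,
# not confirmed by the abstract).
import numpy as np

def skew(v):
    """3x3 skew-symmetric matrix [v]_x such that [v]_x @ u = v x u."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def se3_exp(xi):
    """Exponential map from a 6-DoF twist xi = (rho, phi) in se(3)
    to a 4x4 homogeneous grasp pose in SE(3)."""
    rho, phi = xi[:3], xi[3:]
    theta = np.linalg.norm(phi)
    Phi = skew(phi)
    if theta < 1e-8:                                  # small-angle fallback
        R, V = np.eye(3) + Phi, np.eye(3)
    else:
        a = np.sin(theta) / theta
        b = (1.0 - np.cos(theta)) / theta**2
        c = (theta - np.sin(theta)) / theta**3
        R = np.eye(3) + a * Phi + b * Phi @ Phi       # Rodrigues' formula
        V = np.eye(3) + b * Phi + c * Phi @ Phi       # left Jacobian of SO(3)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ rho
    return T

# Example: decode one (made-up) predicted 6-vector into a gripper pose.
xi_pred = np.array([0.02, -0.01, 0.10, 0.0, 0.3, 1.2])
print(se3_exp(xi_pred))
```

Because the 6-vector lives in a flat vector space, losses defined on it (or on the decoded pose) stay smooth, which is one common motivation for Lie algebra grasp parameterizations and is consistent with the continuously differentiable property the abstract claims.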
Keywords
6-DOF grasp learning network,6-DoF parallel-jaw grasps,ACRONYM,complex sequence pipelines,contextual local region point features,grasp detection loss functions,lie algebra grasp representation,LieGrasPFormer,neural network,point transformer-based 6-DOF grasp detection,pure point cloud,PyBullet simulator,skip-connection point transformer encoder