Flexible Gear Assembly with Visual Servoing and Force Feedback

2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Abstract
This paper presents a vision-guided two-stage approach with force feedback for high-precision, flexible gear assembly. The approach integrates YOLO to coarsely localize the target workpiece in a searching phase and deep reinforcement learning (DRL) to complete the insertion. In particular, DRL addresses the partial visibility that arises when the on-wrist camera is too close to a small workpiece. Force feedback further improves the robustness of the vision-guided assembly process. To reduce the effort of collecting training data on real robots, YOLO is trained on synthetic RGB images, and the DRL agents are trained in an offline interaction environment built from sampled real-world data. The approach was evaluated on an industrial gear assembly task requiring an assembly clearance of 0.3 mm, demonstrating high robustness and efficiency in gear searching and insertion from arbitrary positions.
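The two-stage pipeline described in the abstract (coarse YOLO-based search, then closed-loop DRL insertion guarded by force feedback) can be sketched as below. This is a minimal toy sketch, not the authors' implementation: `SimRobot`, `yolo_detect`, `drl_policy`, and `assemble` are all hypothetical names, and the detector and policy are replaced with simple stand-in functions so the control-loop structure is runnable.

```python
import random

CLEARANCE_MM = 0.3  # assembly clearance reported in the abstract


class SimRobot:
    """Toy stand-in for the real arm; tracks a 2-D offset to the gear hole."""

    def __init__(self, start_offset=(8.0, -5.0)):
        self.offset = list(start_offset)  # mm error w.r.t. the hole centre
        self.contact_threshold = 10.0     # N, hypothetical back-off limit

    def camera_image(self):
        # Global search view: position known only to roughly +/- 1 mm.
        return {"gear_x": self.offset[0] + random.uniform(-1, 1),
                "gear_y": self.offset[1] + random.uniform(-1, 1)}

    def move_above(self, x, y):
        # Stage-1 move: cancel the coarsely estimated offset.
        self.offset[0] -= x
        self.offset[1] -= y

    def observe(self):
        # Partial on-wrist view plus a force/torque reading; free space here.
        return {"offset_x": self.offset[0], "offset_y": self.offset[1],
                "force_z": 0.0}

    def back_off(self):
        pass  # a real controller would retreat along the insertion axis

    def step(self, dx, dy):
        self.offset[0] += dx
        self.offset[1] += dy

    def inserted(self, tolerance_mm):
        return (abs(self.offset[0]) < tolerance_mm
                and abs(self.offset[1]) < tolerance_mm)


def yolo_detect(image):
    """Stand-in for the YOLO detector (trained on synthetic RGB images):
    returns a coarse (x, y) estimate of the gear position."""
    return image["gear_x"], image["gear_y"]


def drl_policy(observation):
    """Stand-in for the trained DRL insertion policy: maps the current
    observation to a small corrective motion."""
    return (-0.5 * observation["offset_x"],
            -0.5 * observation["offset_y"])


def assemble(robot, max_steps=100):
    # Stage 1: searching -- coarse localization with YOLO.
    x, y = yolo_detect(robot.camera_image())
    robot.move_above(x, y)

    # Stage 2: insertion -- DRL closes the loop; force feedback guards
    # against jamming when the workpiece is only partially visible.
    for _ in range(max_steps):
        obs = robot.observe()
        if obs["force_z"] > robot.contact_threshold:
            robot.back_off()  # excessive contact force: retreat and retry
            continue
        dx, dy = drl_policy(obs)
        robot.step(dx, dy)
        if robot.inserted(tolerance_mm=CLEARANCE_MM):
            return True
    return False
```

In this toy model the stage-1 move leaves at most ~1 mm of residual error per axis, which the proportional stand-in policy then drives below the 0.3 mm clearance; the real system replaces both stand-ins with learned models.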