Learning Bimanual End-Effector Poses From Demonstrations Using Task-Parameterized Dynamical Systems

2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Cited by 74 | Views 73
Abstract
Very often, when addressing the problem of human-robot skill transfer in task space, only the Cartesian position of the end-effector is encoded by the learning algorithms, instead of the full pose. However, orientation is just as important as position, if not more, when it comes to successfully performing a manipulation task. In this paper, we present a framework that allows robots to learn the full poses of their end-effectors in a task-parameterized manner. Our approach permits the encoding of complex skills, such as those found in bimanual manipulation scenarios, where the generalized coordination patterns between end-effectors (i.e. position and orientation patterns) need to be considered. The proposed framework combines a dynamical systems formulation of the demonstrated trajectories, both in R^3 and SO(3), and task-parameterized probabilistic models that build local task representations in both spaces, based on which it is possible to extract the relevant features of the demonstrated skill. We validate our approach with an experiment in which two 7-DoF WAM robots learn to perform a bimanual sweeping task.
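To make the two ingredients named in the abstract more concrete, below is a minimal sketch, not the authors' implementation: (1) expressing demonstrated positions in local task frames, as task-parameterized probabilistic models do, and (2) mapping orientations from SO(3) (represented here as unit quaternions) into a tangent space where a dynamical system can be learned. The function names (to_task_frame, quat_log), the frame definitions, and all array shapes are illustrative assumptions; the fitting of the local probabilistic models and the dynamical system itself are omitted.

import numpy as np

def to_task_frame(points, frame_origin, frame_rotation):
    """Express world-frame points (N, 3) in a local task frame.

    frame_origin: (3,) translation of the frame in world coordinates.
    frame_rotation: (3, 3) rotation matrix of the frame in world coordinates.
    """
    # Row-vector form of R^T (p - o) for every point p.
    return (points - frame_origin) @ frame_rotation

def quat_log(q):
    """Logarithmic map of a unit quaternion q = (w, x, y, z) to R^3."""
    w, v = q[0], q[1:]
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return np.zeros(3)  # identity rotation maps to the origin
    return 2.0 * np.arctan2(norm_v, w) * (v / norm_v)

# Example: one demonstrated end-effector trajectory seen from two task frames
# (e.g. frames attached to the two arms in a bimanual task; values are made up).
rng = np.random.default_rng(0)
traj = np.cumsum(0.01 * rng.standard_normal((100, 3)), axis=0)  # (T, 3) positions

frames = {
    "left_arm": (np.array([0.3, 0.2, 0.0]), np.eye(3)),
    "right_arm": (np.array([0.3, -0.2, 0.0]), np.eye(3)),
}
local_views = {name: to_task_frame(traj, o, R) for name, (o, R) in frames.items()}
# A task-parameterized model would fit one local probabilistic model per view
# and combine them at reproduction time when new frame locations are given.

q = np.array([np.cos(0.3), np.sin(0.3), 0.0, 0.0])  # 0.6 rad rotation about x
print(quat_log(q))  # tangent-space coordinates usable by an SO(3) dynamical system

The design point this illustrates is why the paper treats R^3 and SO(3) separately: positions can be compared and averaged directly, whereas orientations must first be mapped to a local tangent space (here via the quaternion logarithm) before trajectory models or dynamical systems can operate on them.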
Keywords
skill demonstration,bimanual sweeping task,7-DoF WAM robots,local task representations,task-parameterized probabilistic models,trajectories,orientation patterns,position patterns,generalized coordination patterns,bimanual manipulation,complex skills encoding,learning algorithms,Cartesian position,task space,human-robot skill,task-parameterized dynamical systems,bimanual end-effector poses