NAO Robot Learns to Interact with Humans through Imitation Learning from Video Observation

Seyed Adel Alizadeh Kolagar, Alireza Taheri, Ali F. Meghdari

J. Intell. Robotic Syst. (2023)

Abstract
One option for teaching a robot new skills is to use learning-from-demonstration techniques. While traditional techniques often involve expensive sensors and equipment, advances in computer vision have made it possible to achieve similar outcomes at a lower cost. To the best of our knowledge, there is no previous research on a robot learning to produce 3D motions from 2D data and then using this knowledge to interact with people. To this end, we designed a study in which a NAO robot imitates human behavior by reproducing motions in 3D space after viewing a small number of 2D RGB videos of each motion. The goal is for the robot to learn certain social interactive skills from video observation and then apply them during human-robot interaction. Five steps were taken to achieve this objective: 1) collecting a dataset, 2) human pose estimation, 3) transferring data from the human space to the robot space, 4) robot control, and 5) human-robot interaction. These steps were grouped into two phases: robot imitation learning and human-robot social interaction. Most of the algorithms employed are deep-learning-based, achieving ~96% accuracy in action recognition on our dataset. The results were also promising when implemented on the robot. Overall, this preliminary exploratory study demonstrated a proof of concept for producing 3D motions from 2D data. The approach is noteworthy because, given the abundance of online training data, the robot can be trained quickly and without requiring an expert.
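Step 3 of the pipeline, transferring data from the human space to the robot space, typically involves converting estimated skeletal keypoints into joint angles the robot's actuators can execute. As a minimal illustrative sketch (not the authors' method), the angle at a joint such as the elbow can be recovered from three estimated 3D keypoints; the keypoint coordinates below are hypothetical:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (radians) at point b formed by the segments b->a and b->c,
    e.g. the elbow angle given shoulder (a), elbow (b), and wrist (c)."""
    u = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip to guard against floating-point values slightly outside [-1, 1]
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Example: an arm bent at a right angle at the elbow
shoulder, elbow, wrist = [0.0, 0.0, 0.0], [0.3, 0.0, 0.0], [0.3, 0.3, 0.0]
angle = joint_angle(shoulder, elbow, wrist)  # ~pi/2
```

In practice, an angle computed this way would still need to be clamped to the NAO joint's mechanical limits before being sent to the robot's motion controller.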
Keywords
Imitation Learning, Learning from Observation, Pose Estimation, Human-Robot Interaction, Action Recognition, Robot Control, NAO Robot