Comparison of Multimodal Heading and Pointing Gestures for Co-Located Mixed Reality Human-Robot Interaction

2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) (2018)

Cited by 65 | Views 29
Abstract
Mixed reality (MR) opens up new vistas for human-robot interaction (HRI) scenarios in which a human operator can control and collaborate with co-located robots. For instance, when using a see-through head-mounted display (HMD) such as the Microsoft HoloLens, the operator can see the real robots, and additional virtual information can be superimposed over the real-world view to improve security, acceptability and predictability in HRI situations. In particular, previewing potential robot actions in-situ before they are executed has enormous potential to reduce the risks of damaging the system or injuring the human operator. In this paper, we introduce the concept and implementation of such an MR human-robot collaboration system in which a human can intuitively and naturally control a co-located industrial robot arm for pick-and-place tasks. In addition, we compared two different, multimodal HRI techniques to select the pick location on a target object using (i) head orientation (aka heading) or (ii) pointing, both in combination with speech. The results show that heading-based interaction techniques are more precise, require less time and are perceived as less physically, temporally and mentally demanding for MR-based pick-and-place scenarios. We confirmed these results in an additional usability study in a delivery-service task with a multi-robot system. The developed MR interface shows a preview of the current robot programming to the operator, e.g., pick selection or trajectory. The findings provide important implications for the design of future MR setups.
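To make the two compared selection techniques concrete, the sketch below shows one plausible way a pick point could be chosen by casting a ray either from the HMD along the head's forward direction ("heading") or along a tracked pointing gesture, with a speech command confirming the selection. This is an illustrative assumption, not the authors' implementation: the sphere-shaped target, the function names and all coordinates are made up for the example.

```python
# Illustrative sketch only (not the paper's code): pick-point selection by
# raycasting from either the head pose ("heading") or a pointing gesture,
# with speech assumed to trigger the final confirmation.
import numpy as np

def ray_sphere_intersection(origin, direction, center, radius):
    """Return the nearest intersection point of a ray with a sphere, or None."""
    d = direction / np.linalg.norm(direction)
    oc = origin - center
    b = 2.0 * np.dot(d, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                      # ray misses the target
    t = (-b - np.sqrt(disc)) / 2.0
    if t < 0.0:
        return None                      # target lies behind the ray origin
    return origin + t * d

def select_pick_point(mode, head_pos, head_forward, hand_pos, point_dir,
                      target_center, target_radius):
    """Choose the ray source according to the interaction technique."""
    if mode == "heading":                # ray from the HMD along the view axis
        origin, direction = head_pos, head_forward
    elif mode == "pointing":             # ray along the tracked pointing gesture
        origin, direction = hand_pos, point_dir
    else:
        raise ValueError(f"unknown mode: {mode}")
    return ray_sphere_intersection(origin, direction, target_center, target_radius)

# Example: the operator looks at the object; a speech command then confirms the pick.
pick = select_pick_point("heading",
                         head_pos=np.array([0.0, 1.6, 0.0]),
                         head_forward=np.array([0.0, -0.3, 1.0]),
                         hand_pos=np.array([0.2, 1.2, 0.2]),
                         point_dir=np.array([0.0, -0.2, 1.0]),
                         target_center=np.array([0.0, 0.9, 2.0]),
                         target_radius=0.1)
if pick is not None:
    print("pick point previewed in MR at", pick)  # shown before the robot executes
```

In the paper's terms, the two branches differ only in where the ray originates; the MR preview of the resulting pick selection is what the operator inspects before the robot acts.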
Keywords
multimodal heading, pointing gestures, human operator, co-located robots, head-mounted display, HRI situations, enormous potential, MR human-robot collaboration system, industrial robot arm, multimodal HRI techniques, heading-based interaction techniques, multi-robot system, current robot programming, human-robot interaction scenarios, virtual information, pick-and-place scenarios, potential robot actions, co-located mixed reality human-robot interaction, Microsoft HoloLens