Co-speech gestures for human-robot collaboration

A. Ekrekli, A. Angleraud, G. Sharma, R. Pieters

2023 Seventh IEEE International Conference on Robotic Computing (IRC)

Abstract
Collaboration between humans and robots requires effective modes of communication to assign robot tasks and coordinate activities. As communication can utilize different modalities, a multi-modal approach can be more expressive than single-modality models alone. In this work we propose a co-speech gesture model that can assign robot tasks for human-robot collaboration. Human gestures and speech, detected by computer vision and speech recognition, can thus refer to objects in the scene and apply robot actions to them. We present an experimental evaluation of the multi-modal co-speech model on a real-world industrial use case. Results demonstrate that multi-modal communication is easy to achieve and can provide benefits for collaboration compared to single-modality tools.
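To make the co-speech idea concrete, the sketch below shows one plausible fusion rule: a spoken verb supplies the action, and a temporally aligned pointing gesture resolves a deictic referent ("this") to a detected object. The SpeechEvent/GestureEvent structures, the fuse() function, and the 1.0 s alignment window are illustrative assumptions, not the authors' implementation.

# Minimal sketch of speech-gesture fusion for robot task assignment.
# Assumptions: event structures, fuse() rule, and the 1.0 s window
# are hypothetical, not taken from the paper.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeechEvent:
    timestamp: float              # seconds since start of interaction
    action: str                   # recognized verb, e.g. "pick"
    object_label: Optional[str]   # noun if spoken, None for deictic "this"

@dataclass
class GestureEvent:
    timestamp: float
    pointed_object: str           # label of the object the pointing ray hits

@dataclass
class RobotCommand:
    action: str
    target: str

def fuse(speech: SpeechEvent, gesture: Optional[GestureEvent],
         window: float = 1.0) -> Optional[RobotCommand]:
    """Resolve a spoken command against a co-occurring pointing gesture.

    If the utterance names an object explicitly, speech alone suffices;
    otherwise a gesture within the temporal window supplies the referent.
    """
    if speech.object_label is not None:
        return RobotCommand(speech.action, speech.object_label)
    if gesture and abs(gesture.timestamp - speech.timestamp) <= window:
        return RobotCommand(speech.action, gesture.pointed_object)
    return None  # deictic speech with no aligned gesture: ask the user again

# Example: saying "pick up this" while pointing at a bolt
cmd = fuse(SpeechEvent(4.2, "pick", None), GestureEvent(4.5, "bolt"))
print(cmd)  # RobotCommand(action='pick', target='bolt')

A time-window alignment like this is one simple way to bind an utterance to a gesture; richer designs could weight gesture confidence or dialogue context.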
Keywords
Human-robot collaboration, multi-modal perception, speech recognition, gesture detection, object detection