Learning Multimodal Latent Dynamics for Human-Robot Interaction
CoRR (2023)
Abstract
This article presents a method for learning well-coordinated Human-Robot
Interaction (HRI) from Human-Human Interactions (HHI). We devise a hybrid
approach using Hidden Markov Models (HMMs) as the latent space priors for a
Variational Autoencoder to model a joint distribution over the interacting
agents. We leverage the interaction dynamics learned from HHI to learn HRI and
incorporate the conditional generation of robot motions from human observations
into the training, thereby predicting more accurate robot trajectories. The
generated robot motions are further adapted with Inverse Kinematics to ensure
the desired physical proximity with a human, combining the ease of joint space
learning and accurate task space reachability. For contact-rich interactions,
we modulate the robot's stiffness using HMM segmentation for a compliant
interaction. We verify the effectiveness of our approach deployed on a Humanoid
robot via a user study. Our method generalizes well to various humans despite
being trained on data from just two humans. We find that users perceive our
method as more human-like, timely, and accurate, and rank it with a higher
degree of preference than other baselines.
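The conditional-generation idea above (infer a belief over latent HMM states from human observations, then decode a robot motion from that belief) can be illustrated with a minimal sketch. All dimensions, parameter values, and the linear per-state decoder below are illustrative stand-ins, not the paper's learned VAE/HMM; in the actual method these quantities are learned from HHI data.

```python
import numpy as np

# Hypothetical toy sizes (not from the paper): 3 latent HMM states,
# 2-D human observation features, 2-D robot joint targets.
rng = np.random.default_rng(0)
n_states, obs_dim, robot_dim = 3, 2, 2

# Illustrative HMM parameters (normally learned from HHI demonstrations):
pi = np.array([0.8, 0.1, 0.1])                 # initial state distribution
A = np.array([[0.9, 0.1, 0.0],                 # state transition matrix
              [0.0, 0.9, 0.1],
              [0.1, 0.0, 0.9]])
means = rng.normal(size=(n_states, obs_dim))   # per-state observation means


def gauss_lik(x, mu):
    """Unnormalized isotropic Gaussian likelihood of observation x."""
    return np.exp(-0.5 * np.sum((x - mu) ** 2))


def forward_filter(human_obs):
    """HMM forward pass: belief over latent states given human observations."""
    alpha = pi * np.array([gauss_lik(human_obs[0], m) for m in means])
    alpha /= alpha.sum()
    for x in human_obs[1:]:
        alpha = (alpha @ A) * np.array([gauss_lik(x, m) for m in means])
        alpha /= alpha.sum()
    return alpha


# Per-state robot output: a simple linear readout standing in for the
# VAE decoder over the interacting agents' joint latent space.
W = rng.normal(size=(n_states, robot_dim))


def predict_robot(human_obs):
    """Conditionally generate a robot target as a belief-weighted mixture."""
    belief = forward_filter(human_obs)
    return belief @ W   # expected robot motion under the latent-state belief


traj = rng.normal(size=(5, obs_dim))   # a short sequence of human observations
robot_target = predict_robot(traj)     # 2-D robot joint target
```

The HMM segmentation produced by `forward_filter` is also the quantity the paper uses downstream, e.g. to modulate the robot's stiffness per latent phase of the interaction.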