Adaptive Human-Robot Collaboration: Evolutionary Learning of Action Costs Using an Action Outcome Simulator

2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2023

Abstract
One of the main challenges for successful human-robot collaborative applications lies in adapting the plan to the human agent's changing state and preferences. A promising solution is to bridge the gap between agent modelling and AI task planning, which can be done by integrating the agent state as action costs in the task planning domain. This allows the plan to be adapted to different partners by influencing the action allocation. The difficulty then lies in setting appropriate action costs. This paper presents a novel framework to learn a set of planning action costs that reflect an agent's preferred actions given their state. An evolutionary optimisation algorithm is used for this purpose, and an action outcome simulator, built on both an agent model and an action type model, serves as the black-box objective function. This addresses the challenge of collecting data in real-world HRC scenarios, accelerating learning before subsequent fine-tuning in real applications. The coherence of the models and the simulator is validated through a survey, and the learning algorithm is shown to learn appropriate action costs, producing plans that satisfy both the agents' preferences and the prioritised plan requirements. The resulting system is a generic learning framework whose components can be easily extended to a wide range of applications, models and planning formalisms.
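As a rough illustration of the approach described above (not the paper's implementation), the sketch below evolves a vector of per-action costs so that the action allocation induced by those costs matches an agent's simulated preferences. The number of actions, the preference set, the toy allocation rule standing in for the action outcome simulator, and the simple (1+λ) evolution strategy are all illustrative assumptions.

```python
"""Minimal sketch: evolutionary learning of action costs against a
simulated black-box outcome evaluator. All models here are toy assumptions."""
import random

N_ACTIONS = 6                   # hypothetical number of task actions
PREFERRED_BY_HUMAN = {0, 2, 5}  # assumed agent-model output: actions the human prefers


def simulate_allocation(costs):
    """Toy stand-in for the action outcome simulator: an action is allocated
    to the human whenever its human-cost falls below a fixed robot cost."""
    ROBOT_COST = 0.5
    return {a for a, c in enumerate(costs) if c < ROBOT_COST}


def fitness(costs):
    """Black-box score: negative mismatch between the induced allocation
    and the agent's preferred actions (0 is a perfect match)."""
    allocated = simulate_allocation(costs)
    return -len(allocated.symmetric_difference(PREFERRED_BY_HUMAN))


def evolve(generations=200, offspring=10, sigma=0.1, seed=0):
    """Simple (1+lambda) evolution strategy over the cost vector."""
    rng = random.Random(seed)
    parent = [rng.random() for _ in range(N_ACTIONS)]
    best_score, best_costs = fitness(parent), parent
    for _ in range(generations):
        for _ in range(offspring):
            child = [min(1.0, max(0.0, c + rng.gauss(0, sigma))) for c in best_costs]
            score = fitness(child)
            if score > best_score:
                best_score, best_costs = score, child
    return best_score, best_costs


if __name__ == "__main__":
    score, costs = evolve()
    print("fitness:", score)
    print("learned action costs:", [round(c, 2) for c in costs])
```

In the paper's framework the fitness evaluation would instead run the learned costs through a task planner and the agent/action-type models; the loop above only shows where such a simulator would plug in as the black-box function.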