Asking Human Help in Contingent Planning.

AAMAS (2017)

Abstract
Contingent planning models a robot acting in a partially observable environment with (non)deterministic actions. In a contingent planning problem, a solution can be found by searching a space of belief states, where a belief state is represented by a set of possible states. However, in the presence of dead-end belief states, situations in which the robot cannot complete its task on its own, the only way to accomplish the task is to ask for human help. In this work, rather than limiting a contingent planning task to actions and observations that an agent can perform autonomously, agents can instead reason about asking humans in the environment for help in order to complete tasks that would otherwise be unsolvable. We are interested in developing agents with symbiotic autonomy: agents that proactively and autonomously ask for human help. We assume that humans may only be interrupted when it is strictly necessary, i.e., when the planner cannot solve a task or solving it incurs too high a cost. We also consider the competence and/or availability of the humans in the environment. To solve this problem, we propose an extension of an existing translation technique [2] that translates a contingent planning problem into a fully observable non-deterministic planning problem. The proposed translation can deal with different types of dead-end belief states, including a pure dead-end (e.g. a broken robot) and a dead-end that is due to uncertainty about the environment.
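To make the abstract's core idea concrete, the following is a minimal sketch (not the paper's translation technique [2]) of search in belief-state space with a human-help fallback: the agent plans autonomously over sets of possible states and resorts to a hypothetical `ask_human` action only when every branch is a dead end. All names here (`lift`, `plan_with_help`, `ask_human`) are illustrative assumptions, not from the paper.

```python
from collections import deque

def lift(trans):
    """Lift a per-state nondeterministic transition (state -> set of
    successor states) to an action over belief states. The lifted action
    is applicable only if it is applicable in every possible state."""
    def act(belief):
        images = [trans.get(s) for s in belief]
        if any(img is None for img in images):
            return None  # inapplicable in at least one possible state
        return frozenset().union(*images)
    return act

def plan(initial_belief, goals, actions):
    """Breadth-first search in belief-state space. Returns a list of
    action names, or None when the task is a dead end."""
    start = frozenset(initial_belief)
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        belief, steps = frontier.popleft()
        if belief <= goals:  # the goal holds in every possible state
            return steps
        for name, act in actions.items():
            nxt = act(belief)
            if nxt is None:
                continue
            nxt = frozenset(nxt)
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, steps + [name]))
    return None  # no autonomous plan exists

def plan_with_help(initial_belief, goals, actions):
    """Interrupt the human only when strictly necessary: fall back to
    a hypothetical 'ask_human' action only if autonomous planning fails."""
    autonomous = plan(initial_belief, goals, actions)
    return autonomous if autonomous is not None else ["ask_human"]

# A solvable task: two steps reach the goal from the initial belief.
solvable = {"step": lift({"s0": {"s1"}, "s1": {"goal"}, "goal": {"goal"}})}
# A dead-end task: the only action loops forever.
stuck = {"step": lift({"s0": {"s0"}})}

print(plan_with_help({"s0"}, frozenset({"goal"}), solvable))  # ['step', 'step']
print(plan_with_help({"s0"}, frozenset({"goal"}), stuck))     # ['ask_human']
```

The paper's approach differs in that it compiles the contingent problem into a fully observable non-deterministic problem rather than searching belief space directly; this sketch only illustrates the ask-only-at-dead-ends policy.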
Keywords
Artificial Intelligence,Automated Planning,Robot Planning,Human-Robot Collaboration