Teaching Social Behavior through Human Reinforcement for Ad hoc Teamwork - The STAR Framework

AAMAS '19: Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems (2019)

Abstract
As AI technology continues to develop, more and more agents will become capable of long-term autonomy alongside people. Accordingly, a recent line of research has studied the problem of teaching autonomous agents the concepts of ethics and human social norms. Most existing work considers the case of an individual agent attempting to learn a predefined set of rules. In reality, however, social norms are not always predefined and are very difficult to represent algorithmically. Moreover, the basic idea behind social norms is ensuring that one's actions do not negatively affect others' utilities, which is inherently a multiagent concept. We therefore investigate a way to teach agents, as a team, how to act according to human social norms. In this research, we introduce the STAR framework, which teaches an ad hoc team of agents to act in accordance with human social norms. In a hybrid team of agents and people, whenever an agent takes an action considered socially unacceptable, it receives negative feedback from the human teammate(s) who are aware of the team's norms. We view STAR as an important step towards teaching agents to act more consistently with respect to human morality.
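
The abstract describes a general mechanism: agents learn from negative human feedback issued when an action violates the team's social norms. The following is a minimal sketch of that idea, not the STAR algorithm itself; it assumes a simple tabular Q-learning agent (the class name NormAwareQAgent, the norm_weight parameter, and the feedback signal convention are all illustrative choices, not taken from the paper).

import random
from collections import defaultdict

class NormAwareQAgent:
    """Tabular Q-learning agent whose reward is shaped by human norm feedback.

    Illustrative only: the human teammate is assumed to supply a non-positive
    feedback signal (e.g., -1) whenever the chosen action is socially
    unacceptable, and 0 otherwise.
    """

    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1, norm_weight=1.0):
        self.q = defaultdict(float)       # Q-values indexed by (state, action)
        self.actions = actions            # discrete action set
        self.alpha = alpha                # learning rate
        self.gamma = gamma                # discount factor
        self.epsilon = epsilon            # exploration rate
        self.norm_weight = norm_weight    # weight given to human feedback

    def act(self, state):
        # Epsilon-greedy action selection over the tabular Q-values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, env_reward, human_feedback, next_state):
        # Combine the task reward with the (non-positive) human norm feedback,
        # then apply a standard Q-learning temporal-difference update.
        shaped_reward = env_reward + self.norm_weight * human_feedback
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = shaped_reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

In use, each agent in the team would call act() to choose an action, and update() after every step, passing human_feedback = -1 whenever a human teammate flags the action as a norm violation; how STAR actually aggregates or weights such feedback across the team is specified in the paper, not here.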
Keywords
Ad hoc, Reinforcement Learning, Social Norms