Fleet-DAgger: Interactive Robot Fleet Learning with Scalable Human Supervision

CoRL 2022

University of California, Berkeley

Abstract
Commercial and industrial deployments of robot fleets at Amazon, Nimble, Plus One, Waymo, and Zoox query remote human teleoperators when robots are at risk or unable to make task progress. With continual learning, interventions from the remote pool of humans can also be used to improve the robot fleet control policy over time. A central question is how to effectively allocate limited human attention. Prior work addresses this in the single-robot, single-human setting; we formalize the Interactive Fleet Learning (IFL) setting, in which multiple robots interactively query and learn from multiple human supervisors. We propose Return on Human Effort (ROHE) as a new metric and Fleet-DAgger, a family of IFL algorithms. We present an open-source IFL benchmark suite of GPU-accelerated Isaac Gym environments for standardized evaluation and development of IFL algorithms. We compare a novel Fleet-DAgger algorithm to 4 baselines with 100 robots in simulation. We also perform a physical block-pushing experiment with 4 ABB YuMi robot arms and 2 remote humans. Experiments suggest that the allocation of humans to robots significantly affects the performance of the fleet, and that the novel Fleet-DAgger algorithm can achieve up to 8.8x higher ROHE than baselines. See https://tinyurl.com/fleet-dagger for supplemental material.
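The abstract introduces ROHE but does not state its formula. Below is a minimal Python sketch under the assumption that ROHE is total fleet reward normalized by total human intervention effort; the paper's exact normalization (e.g., over a human-robot allocation matrix) may differ, and the `+1` smoothing term, array shapes, and example numbers are all illustrative assumptions.

```python
import numpy as np

def rohe(fleet_rewards: np.ndarray, human_action_counts: np.ndarray) -> float:
    """Illustrative Return on Human Effort (ROHE) -- an assumption, not
    the paper's exact definition.

    fleet_rewards: shape (T, N), reward of each of N robots over T steps.
    human_action_counts: shape (T,), human teleoperation actions per step.
    """
    total_reward = float(fleet_rewards.sum())
    total_effort = float(human_action_counts.sum())
    return total_reward / (1.0 + total_effort)  # +1 guards against zero effort

# Example: identical fleet reward, half the human effort -> roughly 2x ROHE.
T, N = 100, 100
rewards = np.full((T, N), 0.1)
print(rohe(rewards, np.full(T, 10)))  # heavier supervision
print(rohe(rewards, np.full(T, 5)))   # lighter supervision, higher ROHE
```

The point of the metric, as the abstract frames it, is that two supervision-allocation policies with the same task performance are not equal: the one that consumed less scarce human attention has the higher return.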
Key words
Fleet Learning, Interactive Learning, Human-Robot Interaction
Summary

Key Points: This paper proposes Fleet-DAgger, an interactive robot fleet learning framework that improves the fleet control policy through scalable human supervision, and introduces Return on Human Effort (ROHE) as a new performance metric.

Method: The authors formalize interactive learning between multiple robots and multiple human supervisors as the Interactive Fleet Learning (IFL) setting, and propose the Fleet-DAgger family of algorithms, evaluated by ROHE.
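As a rough illustration of the method described above, the sketch below shows one timestep of an IFL-style loop: score every robot with a priority function, assign the limited pool of humans to the highest-priority robots, and aggregate their interventions DAgger-style. The `priority_fn` scoring, the `query_human` teleoperation call, and the retraining cadence are hypothetical placeholders; the paper defines its own family of Fleet-DAgger priority functions.

```python
import numpy as np

def fleet_dagger_step(policy, priority_fn, query_human,
                      states, num_humans, dataset):
    """One illustrative IFL timestep (a sketch, not the paper's algorithm).

    policy: maps (N, ...) states -> (N, ...) autonomous actions.
    priority_fn: maps states -> (N,) scores (e.g., estimated risk or
        lack of task progress) -- hypothetical here.
    query_human: hypothetical remote-teleoperation call for one state.
    """
    scores = priority_fn(states)                       # score all N robots
    human_idx = np.argsort(scores)[::-1][:num_humans]  # top-k get supervision
    actions = policy(states)                           # autonomous by default
    for i in human_idx:
        human_action = query_human(states[i])          # remote teleoperation
        dataset.append((states[i], human_action))      # DAgger-style aggregation
        actions[i] = human_action                      # human overrides robot
    return actions  # the policy is periodically retrained on `dataset`
```

The design question the paper studies is precisely the `priority_fn` and top-k assignment step: with far fewer humans than robots, which robots should receive attention at each moment to maximize ROHE.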

Experiments: In simulation, the authors compare a Fleet-DAgger algorithm against 4 baselines with 100 robots. They also run a physical block-pushing experiment with 4 ABB YuMi robot arms and 2 remote human operators. Results show that Fleet-DAgger achieves up to 8.8x higher ROHE than the baselines. The experiments use Isaac Gym, a GPU-accelerated open-source simulation environment.