Learning Where to Trust Unreliable Models in an Unstructured World for Deformable Object Manipulation

Science Robotics (2021)

Abstract
The world outside our laboratories seldom conforms to the assumptions of our models. This is especially true for dynamics models used in control and motion planning for complex, high-degree-of-freedom systems like deformable objects. We must develop better models, but we must also accept that, no matter how powerful our simulators or how big our datasets, our models will sometimes be wrong. What is more, estimating how wrong they are can be difficult, because methods that predict uncertainty distributions based on training data do not account for unseen scenarios. To deploy robots in unstructured environments, we must address two key questions: When should we trust a model, and what should we do if the robot reaches a state where the model is unreliable? We tackle these questions in the context of planning for manipulating rope-like objects in clutter. Here, we report an approach that learns a model in an unconstrained setting and then, given a limited dataset of rope-constraint interactions, learns a classifier to predict where that model is valid. We also propose a way to recover from states where the model's predictions are unreliable. Our method outperforms, with statistical significance, the baseline of learning a dynamics function and trusting it everywhere. We further demonstrate the practicality of our method on real-world mock-ups of several domestic and automotive tasks.
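To make the recipe concrete, the sketch below illustrates the abstract's three ingredients on toy linear dynamics rather than a rope simulator: a dynamics model fit on unconstrained (free-space) data, a classifier that labels (state, action) pairs by whether the frozen model's prediction error stays under a tolerance, and a rollout that only applies the model where the classifier deems it valid. All names, dimensions, the TOLERANCE threshold, and the "near obstacle" condition are illustrative assumptions, not the paper's implementation.

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 6, 2   # toy dimensions; the paper's rope state is much larger
TOLERANCE = 0.1                # assumed prediction-error threshold for "reliable"

def true_dynamics(S, A, blocked=None):
    # Toy ground truth: free-space motion, perturbed where a constraint is active.
    S_next = S + np.pad(A, ((0, 0), (0, STATE_DIM - ACTION_DIM)))
    if blocked is not None:
        S_next[blocked] += 0.5  # constraint deflects the motion
    return S_next

# 1) Dynamics model learned from unconstrained (free-space) data.
S = rng.normal(size=(2000, STATE_DIM))
A = rng.normal(size=(2000, ACTION_DIM))
model = LinearRegression().fit(np.hstack([S, A]), true_dynamics(S, A))

# 2) Reliability classifier from a limited dataset of constrained interactions:
#    label each (s, a) by whether the frozen model's error is below TOLERANCE.
S_c = rng.normal(size=(300, STATE_DIM))
A_c = rng.normal(size=(300, ACTION_DIM))
blocked = S_c[:, 0] > 0.5   # hypothetical "near obstacle" condition
err = np.linalg.norm(
    model.predict(np.hstack([S_c, A_c])) - true_dynamics(S_c, A_c, blocked), axis=1)
labels = (err < TOLERANCE).astype(int)   # 1 = model is reliable here
clf = LogisticRegression().fit(np.hstack([S_c, A_c]), labels)

# 3) Trust-aware rollout: only apply the model where it is classified valid.
s = rng.normal(size=STATE_DIM)
for a in rng.normal(size=(10, ACTION_DIM)):
    if clf.predict(np.hstack([s, a])[None])[0] == 0:
        continue   # the paper would trigger a recovery behavior at this point
    s = model.predict(np.hstack([s, a])[None])[0]
print("final state:", s)

The design choice this sketch mirrors is that the classifier, not the model itself, carries the notion of trust: the model is trained once in an easy regime, and the small constrained dataset is spent entirely on learning where that model breaks down.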