The purpose of our lab's research program is to advance the visual navigation of mobile robots. Our work finds application in transportation, planetary exploration, mining, warehouses, and military scenarios.

Much of our work focuses on a navigation stack we pioneered called visual teach and repeat (VT&R). VT&R is particularly interesting in that it allows a robot to repeat a long (several-kilometre) route that was taught manually, using only a single vision sensor (stereo camera, lidar, or Kinect) for feedback, with no GPS needed. We have also layered a planning framework on top of VT&R that lets a robot autonomously build a network of reusable paths (NRP) while exploring a space. Imagine a robot finding its way down a long canyon and then realizing it is a dead end; because it has saved the outbound route, it can backtrack along that route using VT&R and then try something else. VT&R has been successful because it avoids the need to construct a visual map of the world in a single privileged coordinate frame and instead uses a topometric map.
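The core idea of a topometric map can be illustrated with a minimal sketch: vertices are taught keyframes, edges store only relative transforms between neighbours (reduced here to 1-D offsets for brevity), and there is no single privileged world frame. The class and route names below are hypothetical, not part of the actual VT&R codebase.

```python
class TopometricMap:
    """Toy topometric map: only relative transforms between keyframes are stored."""

    def __init__(self):
        self.edges = {}  # (u, v) -> relative transform from keyframe u to v

    def add_edge(self, u, v, rel):
        self.edges[(u, v)] = rel
        self.edges[(v, u)] = -rel  # inverse edge allows repeating the route in reverse

    def relative_pose(self, path):
        """Compose relative transforms along a chain of taught keyframes."""
        return sum(self.edges[(u, v)] for u, v in zip(path, path[1:]))


# Teach an outbound route into a canyon as a chain of keyframes.
m = TopometricMap()
route = ["A", "B", "C", "D"]
for u, v, d in [("A", "B", 5.0), ("B", "C", 3.0), ("C", "D", 2.0)]:
    m.add_edge(u, v, d)

# Dead end at D: repeat the saved route in reverse to backtrack,
# composing only local transforms, never a global coordinate.
backtrack = list(reversed(route))
print(m.relative_pose(route))      # displacement taught outbound
print(m.relative_pose(backtrack))  # exact inverse, recovered from the map
```

The point of the sketch is that backtracking needs only the stored relative edges, which is why no globally consistent map in one coordinate frame is ever required.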

Today we are interested in extending our ability to navigate visually to truly long durations (months or years) in order to enable real applications. We need to deal with changes in appearance (lighting, weather), in geometry (obstructions, dynamic objects), in our robots (hardware degradation, replacement, upgrades), and even in our algorithms. As a challenge: how could we build a map that a robot could use to navigate safely for 10 years? We plan to spend the next several years finding out.