Learning to Prevent Monocular SLAM Failure using Reinforcement Learning
Monocular SLAM refers to using a single camera to estimate robot ego-motion while building a map of the environment. Although Monocular SLAM is a well-studied problem, automating it by integrating it with trajectory-planning frameworks is particularly challenging. This paper presents a novel formulation based on Reinforcement Learning (RL) that generates fail-safe trajectories in which the SLAM-generated outputs do not deviate significantly from their true values. In essence, the RL framework successfully learns the otherwise complex relation between perceptual inputs and motor actions and uses this knowledge to generate trajectories that do not cause SLAM failure. We show systematically in simulations how the quality of SLAM improves dramatically when trajectories are computed using RL. Our method scales effectively across Monocular SLAM frameworks, both in simulation and in real-world experiments with a mobile robot.
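The idea of learning a mapping from perceptual state to motor action that keeps SLAM error low can be illustrated with a minimal tabular Q-learning sketch. This is not the paper's implementation: the discrete states, the transition dynamics, and the `slam_error` proxy (drift growing as visual features become scarce) are all illustrative assumptions.

```python
import random

# Illustrative sketch: a tabular Q-learning agent that learns to pick
# motions keeping a simulated SLAM tracking error low. States model
# discrete levels of "feature richness" of the viewed scene; actions
# move the robot toward richer (+1) or poorer (-1) regions.
random.seed(0)

N_STATES = 5          # 0 = texture-poor region, 4 = feature-rich region
ACTIONS = [-1, +1]

def slam_error(state):
    # Assumed proxy: tracking drift grows as features become scarce.
    return (N_STATES - 1 - state) * 0.25

def step(state, action):
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = -slam_error(next_state)   # penalize SLAM drift
    return next_state, reward

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1     # learning rate, discount, exploration

for episode in range(200):
    s = random.randrange(N_STATES)
    for _ in range(20):
        if random.random() < eps:
            a = random.choice(ACTIONS)            # explore
        else:
            a = max(ACTIONS, key=lambda a_: Q[(s, a_)])  # exploit
        s2, r = step(s, a)
        # Standard Q-learning update toward the bootstrapped target.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, a2)] for a2 in ACTIONS) - Q[(s, a)])
        s = s2

# The learned policy should steer toward the feature-rich state,
# i.e. choose +1 everywhere, avoiding regions where SLAM would fail.
policy = {s: max(ACTIONS, key=lambda a_: Q[(s, a_)]) for s in range(N_STATES)}
print(policy)
```

In the real problem the state would come from perceptual inputs (e.g. feature counts or pose-estimate covariance) and the reward from observed SLAM quality, but the structure of the learning loop is the same.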

