Future rover missions will be enhanced with the opportunistic search of salient targets during the planetary traverse phase. An essential component of this search is locating and tracking targets at the camera control level. The rover's visual system must follow quantified information gradients to track smoothly in the visual field despite limited image information and the delayed positional feedback caused by the long communication delays inherent in planetary exploration. We propose a control algorithm based on the vestibulo-ocular reflexes employed by the human cerebellum. The controller uses a feedback error learning model, which tracks targets by compensating for rover motion at the pan–tilt unit using a network-trained prediction of the pan–tilt dynamics. The feedforward controller proved capable of tracking objects in the visual field, as demonstrated in both simulation and on the Barrett WAM.
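The sketch below illustrates the general feedback error learning scheme the abstract refers to: a feedback controller corrects the visual error, while a feedforward term learns to cancel the measured rover motion (a VOR-like compensation), using the feedback command as its training signal. The toy pan–tilt model, gains, and function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal feedback error learning (FEL) sketch for pan-tilt target tracking.
# All dynamics, gains, and names are illustrative assumptions.
import numpy as np

# --- assumed toy plant: pan-tilt angles driven by commands plus rover motion ---
def pan_tilt_step(angles, command, rover_motion, dt=0.02):
    """Integrate a simplified pan-tilt unit; rover motion acts as a disturbance."""
    return angles + dt * (command - rover_motion)

# --- feedback path: simple proportional controller on the visual error ---
KP = np.diag([4.0, 4.0])

def feedback_command(visual_error):
    return KP @ visual_error

# --- feedforward path: linear inverse model, trained online ---
# Predicts the command needed to cancel the measured rover motion (VOR-like).
W = np.zeros((2, 2))   # weights mapping rover motion -> compensating command
ETA = 0.05             # learning rate

def feedforward_command(rover_motion):
    return W @ rover_motion

def fel_update(rover_motion, fb_cmd):
    """Feedback error learning: the feedback command serves as the training signal."""
    global W
    W += ETA * np.outer(fb_cmd, rover_motion)

# --- closed loop ---
target = np.array([0.3, -0.1])    # fixed target direction (pan, tilt)
angles = np.zeros(2)              # current pan-tilt angles
for t in range(2000):
    rover_motion = np.array([np.sin(0.01 * t), 0.5 * np.cos(0.013 * t)])
    visual_error = target - angles          # where the target sits relative to the optical axis
    u_fb = feedback_command(visual_error)
    u_ff = feedforward_command(rover_motion)
    fel_update(rover_motion, u_fb)          # feedforward learns to absorb the disturbance
    angles = pan_tilt_step(angles, u_fb + u_ff, rover_motion)

print("residual visual error:", target - angles)
```

As the feedforward term learns, the feedback command shrinks toward zero and the pan–tilt compensates for rover motion before a visual error appears, which is the property that makes the approach tolerant of delayed feedback.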