This letter presents an algorithm to simultaneously reconstruct the geometry of an unknown object and its environment via physical interactions. Applications involving highly cluttered or occluded workspaces prevent the effective use of vision. To address the challenges that arise, we propose an approach that instead uses force and torque measurements at the robot end-effector to solve for possible contact locations and probabilistically update occupancy likelihoods on a 3D map. Our procedure constructs two occupancy maps: a fixed map that represents the environment, and a second map that moves with the robot end-effector and reconstructs the shape of the grasped object, where each map informs the probability updates on the other. The algorithm is applied and tested in two scenarios: retrieving a tangled object from a scene and reconstructing the geometry of an object. We compare the results against a configuration-space planner and a reinforcement learning algorithm, with our method requiring fewer collisions with the environment to extract the object.
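The core idea, inferred from the abstract, can be sketched as a log-odds occupancy update over a coarse 3D grid: a sensed contact raises the occupancy of candidate contact cells, while cells the end-effector sweeps through without resistance are lowered. The grid size, update constants, and function names below are illustrative assumptions, not values or APIs from the letter.

```python
import numpy as np

# Assumed log-odds increments; the letter does not specify these values.
LOGODDS_HIT = 0.85    # applied to cells consistent with a contact measurement
LOGODDS_MISS = -0.4   # applied to cells swept through without contact

def update_map(log_odds, contact_cells, free_cells):
    """Raise occupancy of candidate contact cells and lower occupancy
    of cells traversed contact-free, in log-odds space."""
    lo = log_odds.copy()
    for c in contact_cells:
        lo[c] += LOGODDS_HIT
    for c in free_cells:
        lo[c] += LOGODDS_MISS
    return lo

def occupancy_prob(log_odds):
    """Convert log-odds back to an occupancy probability in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-log_odds))

# Tiny example: a 4x4x4 environment map with a uniform 0.5 prior
# (log-odds of zero), updated with one hypothetical contact observation.
env_map = np.zeros((4, 4, 4))
env_map = update_map(env_map,
                     contact_cells=[(1, 2, 3)],
                     free_cells=[(0, 2, 3), (1, 1, 3)])
print(occupancy_prob(env_map)[1, 2, 3])   # contact cell: now above 0.5
print(occupancy_prob(env_map)[0, 2, 3])   # swept cell: now below 0.5
```

In the letter's setting there would be two such grids, one fixed to the world frame for the environment and one attached to the end-effector frame for the grasped object, with each map's current estimate shaping which candidate contact cells are updated in the other.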