We present a novel approach for manipulating high-DOF deformable objects such as cloth. Our approach uses a random-forest-based controller that maps the observed visual features of the cloth to an optimal control action of the manipulator. The topological structure of this random forest is determined automatically based on the training data, which consists of visual features and control signals. The training data is constructed online using an imitation learning algorithm. We have evaluated our approach on different cloth manipulation benchmarks such as flattening, folding, and twisting. In all these tasks, we have observed convergent behavior of the random forest. Upon convergence, the random-forest-based controller exhibits superior robustness to observation noise compared with other techniques such as convolutional neural networks and nearest-neighbor search. Videos and supplemental material are available at http://gamma.cs.unc.edu/ClothM/.
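
The sketch below illustrates the general pattern the abstract describes: a random-forest regressor that maps a visual feature vector of the cloth to a control action, trained with DAgger-style online imitation learning in which the expert labels the states visited by the current learner. This is a minimal illustration only; the feature extractor, expert policy, simulator, and all dimensions (FEATURE_DIM, ACTION_DIM) are hypothetical placeholders and do not reproduce the paper's actual implementation.

# Hypothetical sketch of a random-forest controller for cloth manipulation,
# trained with DAgger-style online imitation learning. Placeholders throughout.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

FEATURE_DIM = 64   # assumed size of the cloth's visual feature vector
ACTION_DIM = 6     # assumed manipulator control signal (e.g., 6-DOF end-effector velocity)

def expert_policy(features):
    # Placeholder expert (e.g., a demonstrator or optimization-based controller)
    # that supplies the "optimal" control action for the observed state.
    return np.tanh(features[:ACTION_DIM])

def simulate_step(features, action):
    # Placeholder cloth simulator: returns the next observed feature vector.
    next_features = features.copy()
    next_features[:ACTION_DIM] += 0.1 * action
    return next_features + 0.01 * np.random.randn(FEATURE_DIM)

rng = np.random.default_rng(0)
dataset_X, dataset_y = [], []
forest = RandomForestRegressor(n_estimators=100, random_state=0)

# DAgger-style loop: roll out the current policy, query the expert on the
# visited states, aggregate the labeled data, and refit the forest.
for iteration in range(5):
    features = rng.standard_normal(FEATURE_DIM)
    for t in range(50):
        expert_action = expert_policy(features)
        dataset_X.append(features)
        dataset_y.append(expert_action)

        # Execute the learner's action once it has been trained; otherwise
        # follow the expert to bootstrap the first iteration.
        if iteration == 0:
            action = expert_action
        else:
            action = forest.predict(features.reshape(1, -1))[0]
        features = simulate_step(features, action)

    # The forest's structure (splits, depths) is determined automatically
    # from the aggregated feature/control-signal pairs.
    forest.fit(np.asarray(dataset_X), np.asarray(dataset_y))

# At run time, the trained forest maps an observed feature vector to an action.
test_features = rng.standard_normal(FEATURE_DIM)
predicted_action = forest.predict(test_features.reshape(1, -1))[0]
print("predicted control action:", predicted_action)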