With the development of 3D sensors, estimating the motion of points between two consecutive point clouds is becoming increasingly attractive. Using correlation-based deep neural networks, existing solutions achieve promising performance. However, these methods couple non-occluded and occluded points in the same cost volume and attempt to regress the scene flows of the occluded points directly from those invalid matching costs, which severely suppresses the performance of the flow predictor. As an alternative to previous scene flow estimation methods, our method adopts a subnet to predict an occlusion mask and explicitly masks the occluded points, which allows our flow predictor to focus on estimating the motion of non-occluded points from valid matching costs. Moreover, we further improve flow estimation by employing a local-adaptive cost volume, which can deal with the local structure dissimilarity induced by the sparse sampling of depth sensors (LiDAR). For the occluded points, we design an uncertainty-truncated propagation network that propagates the flows of non-occluded points to the occluded ones. We demonstrate the effectiveness of the proposed method through quantitative and qualitative comparisons with recent baseline works on the FlyingThings3D and KITTI 2015 datasets, where our results surpass all competing methods by a large margin.
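To make the two core ideas of the abstract concrete, below is a minimal PyTorch sketch of (a) gating a correlation cost volume with a predicted occlusion mask and (b) uncertainty-truncated flow propagation. Everything here is an illustrative assumption rather than the paper's actual implementation: the class and function names, the k-NN dot-product correlation, the sigmoid occlusion convention (1 = non-occluded), and the confidence threshold are all hypothetical, and the local-adaptive cost volume is not reproduced.

```python
import torch
import torch.nn as nn


class MaskedCostVolume(nn.Module):
    """Sketch of a cost volume gated by a predicted occlusion mask.

    Hypothetical layer names and shapes; the paper's network is more
    elaborate (its local-adaptive cost volume is not reproduced here).
    """

    def __init__(self, feat_dim: int, k: int = 16):
        super().__init__()
        self.k = k  # neighbours considered in the second cloud
        # Subnet predicting per-point non-occlusion confidence in [0, 1]
        # (1 = non-occluded, 0 = occluded; an assumed convention).
        self.occlusion_head = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 2),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim // 2, 1),
            nn.Sigmoid(),
        )

    def forward(self, feats1, feats2, xyz1, xyz2):
        # feats*: (B, N, C) point features; xyz*: (B, N, 3) coordinates.
        dists = torch.cdist(xyz1, xyz2)                    # (B, N, N)
        _, knn_idx = dists.topk(self.k, dim=-1, largest=False)
        b = torch.arange(feats2.size(0)).view(-1, 1, 1)
        nn_feats = feats2[b, knn_idx]                      # (B, N, k, C)
        cost = (feats1.unsqueeze(2) * nn_feats).sum(-1)    # (B, N, k) correlation
        # Gate the volume: occluded points contribute no (invalid) cost,
        # so the flow predictor regresses only from valid matches.
        occ = self.occlusion_head(feats1)                  # (B, N, 1)
        return cost * occ, occ


def propagate_flow(flow, occ, xyz, k: int = 8, conf_thresh: float = 0.5):
    """Uncertainty-truncated propagation, sketched: each low-confidence
    (occluded) point takes a distance-weighted average of the flows of
    its k nearest confident neighbours; sources below conf_thresh are
    truncated (excluded). Assumes some confident points exist."""
    dists = torch.cdist(xyz, xyz)                              # (B, N, N)
    uncertain = (occ.squeeze(-1) < conf_thresh).unsqueeze(1)   # (B, 1, N)
    dists = dists.masked_fill(uncertain, 1e9)                  # truncate sources
    knn_dist, knn_idx = dists.topk(k, dim=-1, largest=False)
    b = torch.arange(flow.size(0)).view(-1, 1, 1)
    nn_flow = flow[b, knn_idx]                                 # (B, N, k, 3)
    w = torch.softmax(-knn_dist, dim=-1).unsqueeze(-1)         # closer = heavier
    prop = (w * nn_flow).sum(dim=2)                            # (B, N, 3)
    keep = (occ >= conf_thresh).float()                        # (B, N, 1)
    # Confident points keep their regressed flow; occluded ones inherit.
    return keep * flow + (1.0 - keep) * prop
```

In this reading, `MaskedCostVolume` would feed a flow regressor with only valid matching costs, and the regressed flow together with the predicted mask would then pass through `propagate_flow` to fill in the occluded points, mirroring the two-stage treatment the abstract describes.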