ABSTRACT Light detection and ranging (LiDAR) scanning has become a prevalent technique for digitalizing outdoor scenes as three-dimensional (3D) point clouds. Automatic segmentation of LiDAR data is important for understanding and reconstructing outdoor scenes. However, it remains a challenging task due to the complex and varied objects in outdoor scenes and the drawbacks of LiDAR scanning. Observing that most objects, such as the ground, roads, roofs, and facades, can be locally described by a group of geometric shapes, e.g. planes, spheres, and cylinders, in this paper we propose an automatic method to segment raw LiDAR data by robustly extracting these shapes. Firstly, our method divides the raw LiDAR data into a number of supervoxels, considering the geometric and spectral consistency of the LiDAR data. Secondly, we robustly extract shapes from each supervoxel using a random sample consensus (RANSAC)-based method, and evaluate and optimise the extracted shapes with a density loss estimation technique. Finally, our method outputs the segmentation result by merging the extracted shapes into a group of complete shapes, each of which represents a meaningful object in the outdoor scene. Experiments show that the method is efficient and robust in extracting most of the shapes in outdoor scenes from a given raw LiDAR point cloud, and that no preprocessing is required.
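To make the RANSAC step concrete, the following is a minimal sketch of RANSAC plane fitting on a 3D point set, the kind of primitive extraction the abstract describes applying within each supervoxel. It is not the paper's full pipeline (no supervoxel partitioning, density loss estimation, or shape merging); the function name and parameters are illustrative assumptions.

```python
import numpy as np

def ransac_plane(points, n_iters=200, dist_thresh=0.05, rng=None):
    """Fit a plane n.x + d = 0 to 3D points with RANSAC.

    Returns (normal, d, inlier_mask) for the largest inlier set found.
    Illustrative sketch only, not the method from the paper.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # 1. Sample a minimal set: three points define a candidate plane.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample, skip
            continue
        normal /= norm
        d = -normal.dot(p0)
        # 2. Score the candidate: points within dist_thresh are inliers.
        dist = np.abs(points @ normal + d)
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers

# Synthetic test: noisy points on the plane z = 0 plus random outliers.
rng = np.random.default_rng(0)
plane_pts = np.column_stack([rng.uniform(-1, 1, (200, 2)),
                             rng.normal(0, 0.01, 200)])
outliers = rng.uniform(-1, 1, (20, 3))
pts = np.vstack([plane_pts, outliers])
n, d, mask = ransac_plane(pts, dist_thresh=0.03, rng=1)
```

The minimal-sample-then-score loop is what makes RANSAC robust to the outliers and scanning noise the abstract mentions: a candidate plane fitted to three clean points is unaffected by outliers, which are simply excluded by the distance threshold.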