LiDAR semantic segmentation is essential to autonomous vehicle safety. A rotating 3D LiDAR casts more laser points onto nearby objects and fewer points onto farther objects. Therefore, when the points are projected onto a 2D image, for example via a spherical projection, nearer objects appear larger than more distant ones. Recognizing a nearer object therefore requires a larger receptive field, whereas recognizing a farther object requires a smaller one. However, existing CNNs use a fixed receptive field, making it difficult to represent objects of various apparent sizes with a single receptive-field size and limiting their recognition of larger (i.e., nearer) objects that require a larger receptive field. In response to these limitations, we propose a transformable dilated convolution (TD Conv) that adjusts the convolution filter’s effective size according to the input distance. Leveraging the distance information of LiDAR and dilated convolution, a convolution with a large dilation rate is applied to nearby objects and one with a small dilation rate to farther objects. The proposed method yielded good performance when recognizing nearer or larger objects such as roads and buildings, and performed on par with the conventional method for farther or smaller objects. We evaluated the proposed method on the SemanticKITTI dataset.
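Below is a minimal sketch of the idea of a distance-conditioned dilated convolution, written in PyTorch. It is not the authors' implementation: the class name, the dilation rates, and the range thresholds (10 m and 30 m) are illustrative assumptions. The sketch shares one 3x3 kernel across several dilation rates and, per pixel of the range image, keeps the output whose dilation matches the measured distance, so that nearby points receive a wide receptive field and distant points a narrow one.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DistanceConditionedDilatedConv(nn.Module):
    """Illustrative sketch of a distance-conditioned dilated convolution.

    The same 3x3 kernel is evaluated at several dilation rates; per pixel,
    the output whose rate corresponds to the point's range is selected
    (large dilation for nearby points, small dilation for far points).
    Dilation rates and range thresholds below are assumed, not the paper's.
    """

    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4),
                 range_thresholds=(30.0, 10.0)):
        super().__init__()
        assert len(dilations) == len(range_thresholds) + 1
        self.dilations = dilations
        self.range_thresholds = range_thresholds
        # Shared weights: only the receptive field changes with distance.
        self.weight = nn.Parameter(torch.empty(out_ch, in_ch, 3, 3))
        nn.init.kaiming_normal_(self.weight)

    def forward(self, x, range_img):
        # x:         (B, C_in, H, W) range-image features
        # range_img: (B, 1, H, W) per-pixel distance in meters
        outs = [F.conv2d(x, self.weight, padding=d, dilation=d)
                for d in self.dilations]

        # Per-pixel index into `outs`: index 0 (smallest dilation) for far
        # points, higher indices (larger dilation) as the range decreases.
        idx = torch.zeros_like(range_img, dtype=torch.long)
        for i, thr in enumerate(self.range_thresholds):
            idx = torch.where(range_img < thr,
                              torch.full_like(idx, i + 1), idx)

        # Combine the branch outputs with per-pixel selection masks.
        out = torch.zeros_like(outs[0])
        for i, branch in enumerate(outs):
            out = out + branch * (idx == i).to(branch.dtype)
        return out
```

In this sketch the selection is a hard, non-differentiable switch; the shared weights still receive gradients through whichever branch is selected at each pixel. A soft, range-weighted blend of the branches would be an equally plausible variant under the same assumptions.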