Due to the large amount of data involved in three-dimensional (3D) LiDAR point clouds, point cloud compression (PCC) is indispensable to many real-time applications. In the autonomous driving of connected vehicles, for example, point clouds are continuously acquired over time and must be compressed. Among the existing PCC methods, very few effectively remove the temporal redundancy inherent in the point clouds. To address this issue, a novel lossy LiDAR PCC system is proposed in this paper, consisting of inter-frame coding and intra-frame coding. For the former, a deep-learning approach is proposed to conduct bi-directional frame prediction using an asymmetric residual module and 3D space-time convolutions; the proposed network is called the bi-directional prediction network (BPNet). For the latter, a novel range-adaptive floating-point coding (RAFC) algorithm is proposed for encoding the reference frames and the B-frame prediction residuals in 32-bit floating-point precision. Since the pixel-value distributions of these two types of data are quite different, various encoding modes are designed for adaptive selection. Extensive simulation experiments have been conducted on multiple point cloud datasets, and the results clearly show that our proposed PCC system consistently outperforms the state-of-the-art MPEG G-PCC in terms of data fidelity and localization, while delivering real-time performance.
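To make the range-adaptive idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual RAFC algorithm) of how an encoder might inspect the IEEE 754 bit fields of 32-bit floats in a block and pick an encoding mode based on the block's dynamic range; the field decomposition is standard, while the mode names and thresholds are illustrative assumptions.

```python
import struct

def float32_bits(x: float) -> tuple[int, int, int]:
    """Split a value, stored as an IEEE 754 32-bit float, into its
    sign (1 bit), exponent (8 bits), and mantissa (23 bits) fields."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF
    mantissa = bits & 0x7FFFFF
    return sign, exponent, mantissa

def select_mode(block: list[float]) -> str:
    """Toy range-adaptive mode selection. Narrow-range data (e.g. small
    prediction residuals clustered near zero) can share one exponent and
    code only mantissas; wide-range data (e.g. raw reference-frame depths)
    keeps full 32-bit precision. The threshold of 2 is an illustrative
    assumption, not a value from the paper."""
    exponents = [float32_bits(v)[1] for v in block]
    spread = max(exponents) - min(exponents)
    return "shared-exponent" if spread <= 2 else "full-precision"
```

For instance, a block of similar residuals such as `[0.010, 0.011, 0.012]` would be assigned the compact shared-exponent mode, whereas a block mixing `0.5` and `100.0` would fall back to full precision; the actual RAFC design in the paper selects among its modes with its own criteria.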