Lidar sensors are commonly mounted on a mobile mapping system (MMS) to acquire point clouds for HD map creation. However, the point clouds themselves do not contain object attributes. Therefore, human operators have to manually locate objects and assign attributes for HD map conversion, an inevitably time-consuming process with significant labor costs. To address these problems, in this paper we present an MMS equipped with a non-survey-grade Lidar, a commercial-grade camera, and an entry-level GNSS/INS, which incorporates ground control points (GCPs) with a Normal Distributions Transform Simultaneous Localization and Mapping (NDT SLAM) refinement and fluctuation adjustment to ensure both the absolute and relative positional accuracy of the reconstructed point cloud. Meanwhile, a deep neural network for image detection is employed to obtain bounding boxes of traffic signs in each image frame. By applying the rotation and translation transformation between the Lidar and camera frames, the Lidar scan points that intersect the detected object in the image can be identified. Accumulating the extracted Lidar points of a traffic sign over several detection frames then yields an accurate 3D geodetic coordinate for the sign. Experimental results show that point clouds can be reconstructed with an average 3D RMSE of only 8.6 cm, and that the geodetic coordinates of traffic-sign centers can be extracted with sub-meter accuracy, significantly reducing manual labor in HD map creation.
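The abstract itself contains no code; the following is a minimal sketch, under stated assumptions, of the absolute-positioning idea behind the GCP-based refinement: estimate the rigid transform that best maps the reconstructed cloud's GCP estimates onto their surveyed geodetic coordinates, then apply it to the whole cloud. This uses a standard Kabsch alignment rather than the paper's NDT SLAM refinement and fluctuation adjustment, and all function names are hypothetical.

```python
# Sketch only: rigid alignment of a SLAM-reconstructed cloud to surveyed GCPs.
# This is NOT the paper's NDT SLAM refinement; it only illustrates how GCPs
# can anchor a cloud's absolute position.
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src GCP estimates onto
    dst surveyed GCP coordinates, both Nx3 arrays (Kabsch algorithm)."""
    src_c = src - src.mean(axis=0)           # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def apply_transform(cloud, R, t):
    """Transform an Nx3 point cloud into the geodetic frame."""
    return cloud @ R.T + t
```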
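The Lidar-camera association step can likewise be illustrated with a short sketch: project Lidar points into the image using assumed Lidar-to-camera extrinsics (R, t) and camera intrinsics K, keep the points whose projections fall inside a detected traffic-sign bounding box, and average the points accumulated over several frames to estimate the sign's center. All names and signatures below are hypothetical; the abstract only summarizes the actual pipeline.

```python
# Sketch only: project Lidar points into the image, keep those inside a
# detected bounding box, and accumulate them across frames for a 3D center.
import numpy as np

def project_to_image(points_lidar, R, t, K):
    """Project Nx3 Lidar points to pixel coordinates. R (3x3) and t (3,)
    are assumed Lidar-to-camera extrinsics; K (3x3) is the intrinsic matrix."""
    pts_cam = points_lidar @ R.T + t          # Lidar frame -> camera frame
    in_front = pts_cam[:, 2] > 0              # keep points in front of camera
    pix = pts_cam[in_front] @ K.T             # pinhole projection
    pix = pix[:, :2] / pix[:, 2:3]            # normalize by depth
    return pix, in_front

def points_in_bbox(points_lidar, bbox, R, t, K):
    """Return Lidar points whose projections fall inside a 2D bounding box
    (u_min, v_min, u_max, v_max) from the image detector."""
    pix, in_front = project_to_image(points_lidar, R, t, K)
    u_min, v_min, u_max, v_max = bbox
    inside = ((pix[:, 0] >= u_min) & (pix[:, 0] <= u_max) &
              (pix[:, 1] >= v_min) & (pix[:, 1] <= v_max))
    return points_lidar[in_front][inside]

def sign_center(frames):
    """Centroid of the sign points accumulated over several detection frames
    (each already transformed into a common geodetic frame)."""
    return np.vstack(frames).mean(axis=0)
```

Accumulating the extracted points over multiple detection frames, as the abstract describes, helps average out single-scan sparsity and projection noise before the center coordinate is taken.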