Over the last decades, deep neural networks (DNNs) have penetrated nearly every field of science and real-world application. Because data and model uncertainty are often not quantified, deep learning is frequently brittle and unpredictable, making it difficult to provide trustworthy assurance for the perception systems of autonomous vehicles (AVs). Uncertainty quantification offers a way to fill this gap. Nevertheless, most previous studies have focused on methodology, and research on its application to AVs remains scarce. To the best of our knowledge, this survey is the first to review the application of uncertainty quantification to AV perception and localization. First, we analyze the sources of uncertainty in autonomous perception, including uncertainty arising from factors internal and external to the sensors as well as sensor distortion caused by complex scenes. Second, we propose an evaluation criterion and use it to quantitatively analyze applications of uncertainty quantification in AV perception, and we discuss the mainstream datasets. Third, we identify a number of open issues and outline future research directions to guide readers entering this field. We argue that epistemic uncertainty is currently the dominant research direction and that the study of aleatoric uncertainty still has a long way to go. We hope this survey promotes the development of uncertainty research in AV perception.
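To make the epistemic/aleatoric distinction concrete, the sketch below illustrates one widely used estimation scheme from the literature (Monte Carlo dropout combined with a heteroscedastic output head); it is not a method proposed by the survey, and the names DepthHead and mc_dropout_uncertainty are purely illustrative. Epistemic (model) uncertainty is taken as the spread of predictions across stochastic forward passes, while aleatoric (data) uncertainty is the noise variance predicted by the network itself.

```python
import torch
import torch.nn as nn

class DepthHead(nn.Module):
    """Toy regression head predicting a depth mean and a log-variance per sample."""
    def __init__(self, in_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Dropout(p=0.2)
        )
        self.mean = nn.Linear(128, 1)      # predicted depth
        self.log_var = nn.Linear(128, 1)   # predicted aleatoric (data) variance

    def forward(self, x):
        h = self.backbone(x)
        return self.mean(h), self.log_var(h)

@torch.no_grad()
def mc_dropout_uncertainty(model, x, n_samples=20):
    """Monte Carlo dropout: keep dropout active at test time and run T passes.
    Epistemic uncertainty = variance of the predicted means across passes.
    Aleatoric uncertainty = mean of the predicted variances across passes."""
    model.train()  # keeps dropout layers stochastic during inference
    means, variances = [], []
    for _ in range(n_samples):
        mu, log_var = model(x)
        means.append(mu)
        variances.append(log_var.exp())
    means = torch.stack(means)          # shape (T, N, 1)
    variances = torch.stack(variances)  # shape (T, N, 1)
    epistemic = means.var(dim=0)        # model uncertainty
    aleatoric = variances.mean(dim=0)   # data/noise uncertainty
    return means.mean(dim=0), epistemic, aleatoric

# Usage on dummy per-object feature vectors (hypothetical example)
model = DepthHead()
features = torch.randn(8, 64)
prediction, epistemic, aleatoric = mc_dropout_uncertainty(model, features)
```

In this sketch, a large epistemic term flags inputs far from the training distribution (a model-knowledge problem), whereas a large aleatoric term flags inherently noisy observations such as sensor distortion in complex scenes (a data problem); the two call for different mitigations.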