Autonomous, or self-driving, cars are emerging as a solution to several problems primarily caused by humans on roads, such as accidents and traffic congestion. However, these benefits come with great challenges in verification and validation (V&V) for safety assessment. Because of the potentially unpredictable nature of Artificial Intelligence (AI), its use in autonomous cars raises concerns that must be addressed through appropriate V&V processes supporting trustworthy AI and safe autonomy. In this study, the relevant research literature of recent years has been systematically reviewed and classified to investigate the state of the art in the software V&V of autonomous cars. Using appropriate criteria, a subset of primary studies was selected for more in-depth analysis. The first part of the review addresses certification issues against reference standards, challenges in assessing machine learning, and general V&V methodologies. The second part investigates more specific approaches, including simulation environments and mutation testing, corner cases and adversarial examples, fault injection, software safety cages, techniques for cyber-physical systems, and formal methods. Relevant approaches and related tools are discussed and compared to highlight open issues and opportunities.
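To give a flavour of the simulation-based corner-case search mentioned above, the following minimal sketch randomly samples scenario parameters for a toy automatic emergency braking (AEB) rule and collects the combinations in which it fails to stop in time. It is purely illustrative and not taken from the reviewed study; the trigger rule, parameter ranges, and constants (REACTION_TIME, MAX_DECEL, TRIGGER_DISTANCE) are all assumptions.

```python
# Illustrative sketch only: random search for failing "corner cases" of a toy
# AEB rule in a highly simplified braking model. All constants are assumptions.

import random

REACTION_TIME = 0.5       # s, assumed latency before braking starts
MAX_DECEL = 8.0           # m/s^2, assumed peak deceleration on dry asphalt
TRIGGER_DISTANCE = 40.0   # m, naive fixed AEB trigger threshold


def collision(speed: float, obstacle_distance: float, friction: float) -> bool:
    """Return True if the toy AEB rule fails to stop before the obstacle."""
    # Braking starts only once the obstacle is within the trigger distance.
    available = min(obstacle_distance, TRIGGER_DISTANCE)
    travelled_during_reaction = speed * REACTION_TIME
    braking_distance = speed ** 2 / (2.0 * MAX_DECEL * friction)
    return travelled_during_reaction + braking_distance > available


def search_corner_cases(trials: int = 10_000, seed: int = 0):
    """Randomly sample scenarios and collect those where the AEB rule fails."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        speed = rng.uniform(5.0, 40.0)        # m/s
        distance = rng.uniform(10.0, 120.0)   # m to obstacle when detected
        friction = rng.uniform(0.2, 1.0)      # wet ice ... dry asphalt
        if collision(speed, distance, friction):
            failures.append((speed, distance, friction))
    return failures


if __name__ == "__main__":
    found = search_corner_cases()
    print(f"{len(found)} failing scenarios out of 10000, e.g.:")
    for speed, distance, friction in found[:3]:
        print(f"  speed={speed:.1f} m/s, distance={distance:.0f} m, friction={friction:.2f}")
```

In practice, the surveyed approaches replace the closed-form braking model with a driving simulator and the random sampling with guided search (e.g., optimization- or coverage-driven falsification), but the overall loop of sampling scenarios and checking a safety condition is the same.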
               