Learn to See Fast: Lessons Learned From Autonomous Racing on How to Develop Perception Systems


The objective of this work is to provide a comprehensive understanding of the development of autonomous vehicle perception systems. So far, most autonomy perception research has concentrated on improving the algorithmic quality of perception systems or on combining different sensor setups. In our work, we draw conclusions from participating in the Indy Autonomous Challenge 2021 and its follow-up event in Las Vegas in 2022. These were the first head-to-head autonomous racing competitions that required an entire perception pipeline to perceive the environment and the opposing surrounding vehicles. Our research includes quantitative results from collected vehicle data and qualitative results from simulation, video, and analyses of multiple races. The Indy Autonomous Challenge was one of the few research projects that considered the entire autonomous vehicle. Our findings therefore offer insights at the system level, covering both the hardware setup and the full-stack software. We demonstrate that the different sensor modalities in the vehicle have distinct strengths and weaknesses when deployed. Our results further show the difficulties and challenges that emerge when multi-modal perception systems must run in real time on real-world autonomous vehicles. The most concise finding of our investigation is a summary of critical learnings for developing and deploying perception systems for autonomous systems. Given the background of the study, our conclusions were inevitably influenced by driving on a racetrack and by having only one hardware setup available. In the discussion, we therefore draw parallels to driving on public roads in dense traffic. More studies are needed to investigate the development and deployment of multi-modal perception systems for autonomous road vehicles with different hardware setups and various object detection, localization, and prediction algorithms. The novel contributions of this work are 12 lessons learned, summarized in 5 categories, which were derived and validated through a realized real-world application project.

The videos of the final events in Indianapolis and Las Vegas can be watched here:
IAC: https://www.youtube.com/watch?v=ERTffn3IpIs&ab_channel=CNETHighlights
AC@CES: https://www.youtube.com/watch?v=df9f4Qfa0uU&ab_channel=CNETHighlights

Multiple modules of the software stack are open source: https://github.com/TUMFTM.
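To make the real-time fusion challenge described above concrete, the sketch below shows a minimal late-fusion step in Python: detections from several sensor modalities are discarded once they exceed a latency budget, and the survivors are greedily merged by confidence. This is an illustrative sketch, not the authors' method; the types, function names, and numeric thresholds (Detection, fuse_detections, the 0.1 s budget, the 2 m gating radius) are all hypothetical assumptions and are not taken from the open-source stack at https://github.com/TUMFTM.

```python
import time
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Detection:
    """One object detection produced by a single sensor modality."""
    stamp: float                    # capture time [s], monotonic clock
    position: Tuple[float, float]   # (x, y) in the vehicle frame [m]
    confidence: float               # detector score in [0, 1]
    modality: str                   # e.g. "lidar", "camera", "radar"


def fuse_detections(per_modality: Dict[str, List[Detection]],
                    now: float,
                    max_age_s: float = 0.1,
                    gate_m: float = 2.0) -> List[Detection]:
    """Naive late fusion: enforce a latency budget, then keep the
    highest-confidence detection among modalities that roughly agree
    on position (greedy non-maximum suppression)."""
    # Real-time budget: detections older than max_age_s are stale.
    fresh = [d for dets in per_modality.values() for d in dets
             if now - d.stamp <= max_age_s]
    fused: List[Detection] = []
    for det in sorted(fresh, key=lambda d: d.confidence, reverse=True):
        # Accept only if no already-accepted detection lies inside the gate.
        if all((det.position[0] - f.position[0]) ** 2 +
               (det.position[1] - f.position[1]) ** 2 > gate_m ** 2
               for f in fused):
            fused.append(det)
    return fused


if __name__ == "__main__":
    now = time.monotonic()
    inputs = {
        "lidar":  [Detection(now - 0.02, (42.0, -1.5), 0.9, "lidar")],
        "camera": [Detection(now - 0.03, (42.4, -1.3), 0.7, "camera")],
        "radar":  [Detection(now - 0.25, (80.0,  0.0), 0.8, "radar")],  # stale
    }
    for det in fuse_detections(inputs, now):
        print(f"{det.modality}: {det.position} (conf {det.confidence})")
```

A real stack would replace the fixed gating radius with a tracker that predicts object motion between sensor frames, but even this toy version shows why stale frames are dropped first: at racing speeds, fusing a 250 ms old detection would place an opposing vehicle over 20 m from its true position.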

Keywords: perception systems; autonomous racing; lessons learned; learn to see; vehicle; perception

Journal Title: IEEE Access
Year Published: 2023
