Recent trends in the development of autonomous vehicles focus on real-time processing of vast amounts of data from various sensors. The data can be acquired using multiple cameras, lidars, ultrasonic sensors, and radars to collect useful information about the state of the traffic and the surroundings. Significant computational power is required to process the data fast enough, and this is even more pronounced in vehicles that not only assist the driver but are capable of fully autonomous driving. This article proposes speed and accuracy improvements for traffic sign detection and recognition in high-definition images, based on focusing detection on different regions of interest within traffic images. These regions are determined by efficient, parallelized preprocessing of every traffic image, after which a convolutional neural network is applied for detection and recognition in parallel on graphics processing units. We employed different "You Only Look Once" (YOLO) architectures as baseline detectors, due to their speed, straightforward architecture, and high accuracy in general object detection tasks. Several preprocessing procedures are proposed to meet the real-time performance requirement. Our experiments using a large-scale traffic sign dataset show that we can achieve real-time detection in high-definition images with high recognition accuracy.
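The pipeline the abstract describes — determine regions of interest in each high-definition frame, then run a YOLO-style detector per region and merge the results — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the abstract does not specify how ROIs are chosen, so fixed-size tiling stands in for their preprocessing, and `detect_signs_in_roi` is a hypothetical placeholder for a real YOLO forward pass.

```python
import numpy as np

def extract_rois(image, tile_h, tile_w):
    """Split a high-definition frame into fixed-size tiles.

    Assumption: simple tiling stands in for the paper's image-dependent
    ROI preprocessing, whose criteria the abstract does not detail.
    Returns (x_offset, y_offset, tile) triples.
    """
    rois = []
    h, w = image.shape[:2]
    for y in range(0, h, tile_h):
        for x in range(0, w, tile_w):
            rois.append((x, y, image[y:y + tile_h, x:x + tile_w]))
    return rois

def detect_signs_in_roi(roi):
    """Hypothetical stand-in for a YOLO forward pass on one ROI.

    A real detector would return boxes in ROI-local coordinates as
    (x, y, w, h, class_id, score); here a single dummy box is returned
    so the coordinate mapping below can be demonstrated.
    """
    return [(10, 12, 32, 32, 0, 0.9)]

def detect_full_image(image, tile_h=608, tile_w=608):
    """Run per-ROI detection and map boxes back to full-image coordinates."""
    detections = []
    for off_x, off_y, roi in extract_rois(image, tile_h, tile_w):
        for (bx, by, bw, bh, cls, score) in detect_signs_in_roi(roi):
            detections.append((off_x + bx, off_y + by, bw, bh, cls, score))
    return detections

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # stand-in for an HD frame
detections = detect_full_image(frame)
```

In practice the per-ROI detector calls are independent, which is what allows the parallel GPU execution the article relies on; only the final coordinate remapping and any cross-ROI deduplication need the full-frame view.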