Traffic sign recognition (TSR) plays an important role in driver assistance systems and traffic safety. However, existing methods focus on extracting features of traffic signs while ignoring the constraints imposed by the spatial positional relationships between traffic signs and other objects in the scene. As a result, they incorrectly detect similar-looking objects as traffic signs and fail to detect very small traffic signs. This study proposes a TSR method based on semantic scene understanding and structural traffic sign location to solve these problems. A scene structure model based on the constraints of spatial positional relationships between traffic signs and other objects is proposed to establish trusted search regions. An improved Light-weight RefineNet is used to analyze and understand the scene semantically and to segment objects precisely in complicated environments. A new network, the multiscale densely connected object detector (MDCOD), based on dense connections, multiscale feature fusion, and an improved K-means++ algorithm, is proposed to recognize very small traffic signs. Trusted traffic signs are then obtained by filtering out false candidates that fall outside the scene structure model. The proposed method is tested on the Tsinghua-Tencent 100K and German Traffic Sign Detection Benchmark datasets and achieves accuracies of 92.8% and 99.90%, respectively, outperforming existing methods.
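The filtering step described in the abstract — rejecting detector candidates that fall outside the trusted search regions given by the scene structure model — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `filter_by_trusted_regions` and the encoding of the trusted regions as a boolean mask are assumptions, and the paper's actual scene structure model is built from segmented scene objects rather than a fixed mask.

```python
import numpy as np

def filter_by_trusted_regions(boxes, trusted_mask):
    """Keep candidate boxes whose centers fall inside a trusted region.

    boxes: (N, 4) array of [x1, y1, x2, y2] pixel coordinates.
    trusted_mask: (H, W) boolean array, True where the scene structure
        model permits traffic signs (a hypothetical encoding; the exact
        region construction in the paper is more involved).
    """
    h, w = trusted_mask.shape
    keep = []
    for i, (x1, y1, x2, y2) in enumerate(boxes):
        # Use the box center as the location test point.
        cx = int((x1 + x2) / 2)
        cy = int((y1 + y2) / 2)
        if 0 <= cy < h and 0 <= cx < w and trusted_mask[cy, cx]:
            keep.append(i)
    return boxes[keep]

# Example: trusted region covers the top half of a 10x10 scene.
mask = np.zeros((10, 10), dtype=bool)
mask[:5, :] = True
candidates = np.array([[1, 1, 3, 3],    # center (2, 2): inside trusted region
                       [1, 7, 3, 9]])   # center (2, 8): outside, filtered out
trusted = filter_by_trusted_regions(candidates, mask)
```

In practice the mask would be derived from the semantic segmentation output (e.g., regions adjacent to poles or above the road surface), so false positives on visually similar objects elsewhere in the scene are discarded before recognition.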