In the past few decades, license plate detection and recognition (LPDR) systems have made great strides by relying on convolutional neural networks (CNNs). However, these methods are evaluated on small, non-representative datasets and perform poorly in complex natural scenes. Moreover, most existing license plate datasets consist of single still images, whereas in real-world applications the input is frequently video. Mainstream algorithms also ignore the dynamic cues between consecutive frames, leaving LPDR systems with considerable room for improvement. To address these problems, this paper constructs a large-scale video-based license plate dataset named LSV-LP, which consists of 1,402 videos, 401,347 frames, and 364,607 annotated license plates. Compared with other datasets, LSV-LP offers greater diversity and draws on multiple sources owing to its varied collection methods. A single frame may contain multiple license plates, which better reflects complex natural scenes. Based on the proposed dataset, we further design a new framework, called MFLPR-Net, that exploits the information between adjacent frames. We also release annotation tools for license plates and vehicles in videos. Evaluations of MFLPR-Net against mainstream methods demonstrate that the proposed model outperforms other LPDR systems. For a more intuitive overview, sample data are available on Google Drive at https://drive.google.com/file/d/1udqRddpJZMpTdHHQdwZRll6vaYALUiql/view?usp=sharing. The whole dataset is available at https://github.com/Forest-art/LSV-LP.
               