CRNN-Based License Plate Recognition Method

DOI: 10.12677/CSA.2021.1111285, PP. 2804-2816

Keywords: License Plate Detection, License Plate Recognition, YOLOv4-Tiny, CRNN, STN, Residual Learning, Attention Mechanism


Abstract:

License plate recognition is an important component of road traffic and smart-city construction. Traditional license plate recognition first detects the position of the plate, then segments individual characters with methods such as pixel mapping, and finally recognizes them with template matching or similar techniques. The whole process is not only slow but also cumbersome, and the segmentation and recognition results are often unsatisfactory. This paper proposes an end-to-end method based on YOLOv4-tiny and a Convolutional Recurrent Neural Network (CRNN). The method fuses an attention mechanism with YOLOv4-tiny to detect the license plate position quickly and effectively, and then combines a Spatial Transformer Network (STN), residual learning, and an attention mechanism with the CRNN to recognize the plate characters efficiently. Average Precision (AP) and recognition accuracy are used as the main evaluation metrics for detection and recognition, respectively. Experimental results show that the detection model reaches an AP of 93.60% at an Intersection-over-Union (IoU) threshold of 0.5, and the recognition model achieves about 92.15% accuracy on a mixed set of blue and green plates. Compared with previous license plate recognition models, this method not only achieves higher recognition accuracy but also handles mixed plate types directly with a single model, which greatly reduces the complexity of license plate recognition in real-world settings.
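The abstract reports detection quality as Average Precision at an IoU threshold of 0.5. The snippet below is a minimal illustration of how that IoU criterion is evaluated for one predicted plate box; the (x1, y1, x2, y2) box format and the helper name are illustrative choices, not taken from the paper.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A predicted plate box counts as a true positive when iou(pred, gt) >= 0.5,
# the threshold at which the reported AP of 93.60% is computed.
```

For the recognition stage, the abstract describes fusing an STN, residual learning, and attention with a CRNN. The PyTorch sketch below shows only the skeleton of such a pipeline (STN rectification of the detector crop, a small CRNN, and greedy CTC-style decoding); the layer sizes, the charset size of 68, and the omission of the residual and attention blocks are simplifying assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class STN(nn.Module):
    """Spatial Transformer: predicts a 2x3 affine transform and resamples the crop."""
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(3, 8, 7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 10, 5), nn.MaxPool2d(2), nn.ReLU(),
        )
        self.fc = nn.Sequential(nn.Flatten(), nn.LazyLinear(32), nn.ReLU(), nn.Linear(32, 6))
        self.fc[-1].weight.data.zero_()                     # start from the identity transform
        self.fc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.fc(self.loc(x)).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

class CRNN(nn.Module):
    """CNN feature extractor followed by a bidirectional LSTM over the width axis."""
    def __init__(self, num_classes):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((1, None)),                # collapse height, keep width as time
        )
        self.rnn = nn.LSTM(128, 128, bidirectional=True, batch_first=True)
        self.head = nn.Linear(256, num_classes)             # classes include the CTC blank

    def forward(self, x):
        feats = self.cnn(x).squeeze(2).permute(0, 2, 1)     # (batch, width, channels)
        out, _ = self.rnn(feats)
        return self.head(out)                               # per-timestep class scores

def greedy_ctc_decode(scores, blank=0):
    """Collapse repeated symbols and drop blanks, as in standard CTC inference."""
    best = scores.argmax(-1).tolist()
    decoded, prev = [], blank
    for c in best:
        if c != blank and c != prev:
            decoded.append(c)
        prev = c
    return decoded

plate = torch.randn(1, 3, 32, 100)                          # stand-in for a detector crop
pipeline = nn.Sequential(STN(), CRNN(num_classes=68))       # 68: assumed charset size + blank
scores = pipeline(plate)                                    # (1, timesteps, num_classes)
print(greedy_ctc_decode(scores[0]))                         # indices into the assumed charset
```

Decoding the per-timestep outputs with CTC is what removes the need for explicit character segmentation, which is the key difference from the traditional segment-then-match pipeline criticized in the abstract.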

