
JBE, vol. 26, no. 4, pp.453-462, July, 2021

DOI: https://doi.org/10.5909/JBE.2021.26.4.453

Improved Method of License Plate Detection and Recognition using Synthetic Number Plate

Il-Sik Chang and Gooman Park

Corresponding author e-mail: gmpark@seoul.ac.kr

Abstract:

A large amount of license plate data is required for vehicle number recognition, and the data must be balanced across plate designs from the oldest to the most recent. However, it is difficult to obtain real data spanning that full range. To address this problem, deep-learning studies of license plate recognition generate synthetic license plates. Because synthetic data differ from real data, various data augmentation techniques are used to narrow the gap. Existing augmentation pipelines simply apply operations such as brightness change, rotation, affine transformation, blur, and noise. In this paper, we additionally apply a style transformation method that converts synthetic data into the style of real-world data. Furthermore, real license plate images are noisy when captured from a distance or in dark environments, so recognizing characters directly from such input carries a high chance of misrecognition. To improve character recognition, we apply DeblurGAN-v2 as an image quality enhancement step, increasing the accuracy of license plate recognition. YOLO-v5 is used as the deep learning model for both license plate detection and license plate number recognition. To evaluate the performance of the synthetic license plate data, we construct a test set from license plates we collected ourselves. License plate detection without style conversion recorded 0.614 mAP; after applying the style transformation, detection performance improved to 0.679 mAP. In addition, the successful detection rate was 0.872 without image enhancement and 0.915 with image enhancement, confirming the improvement.
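The conventional augmentations the abstract lists (brightness, rotation, affine transformation, blur, noise) can be sketched in plain NumPy. This is an illustrative sketch, not the authors' implementation: the function names and parameters are assumptions, only the brightness, noise, and blur operations are shown (rotation and affine warps would typically use an image library), and the "plate" is a random stand-in for a rendered synthetic plate image.

```python
import numpy as np

def augment_brightness(img, factor):
    """Scale pixel intensities by `factor`; clip back into the uint8 range."""
    return np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8)

def augment_noise(img, sigma, rng):
    """Add zero-mean Gaussian noise, mimicking dark or distant captures."""
    noisy = img.astype(np.float32) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def augment_blur(img, k):
    """Box blur with a k x k kernel, applied separably (rows, then columns)."""
    out = img.astype(np.float32)
    kernel = np.ones(k, dtype=np.float32) / k
    for axis in (0, 1):
        out = np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode="same"), axis, out)
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
plate = rng.integers(0, 256, size=(32, 96), dtype=np.uint8)  # stand-in plate
augmented = augment_blur(augment_noise(augment_brightness(plate, 1.2), 8.0, rng), 3)
```

Chaining the operations, as in the last line, is the usual way such pipelines are applied; the paper's point is that these geometric and photometric transforms alone leave a style gap that the added style transformation is meant to close.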



Keyword: Synthetic license plate, Data augmentation, Style transformation, DeblurGANv2, YOLO-V5

References:
[1] J. Van Hulse, T. M. Khoshgoftaar, A. Napolitano, "Experimental perspectives on learning from imbalanced data," in Proceedings of the ACM International Conference on Machine Learning, New York, pp. 935-942, 2007.
[2] Jae-Hyeon Lee, Sung-Man Cho, Seung-Ju Lee, Cheong-Hwa Kim, Goo-Man Park, "License Plate Recognition System Using Synthetic Data," pp. 107-115, 2020, doi:10.5573/ieie.2020.57.1.107.
[3] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, et al., "Domain-Adversarial Training of Neural Networks," Journal of Machine Learning Research, vol. 17, pp. 1-35, 2016.
[4] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, et al., "Generative Adversarial Nets," Advances in Neural Information Processing Systems, 2014.
[5] Connor Shorten, Taghi M. Khoshgoftaar, "A Survey on Image Data Augmentation for Deep Learning," Journal of Big Data, 2019.
[6] Terrance DeVries, Graham W. Taylor, "Improved Regularization of Convolutional Neural Networks with Cutout," arXiv preprint arXiv:1708.04552, 2017.
[7] Hongyu Guo, Yongyi Mao, Richong Zhang, "MixUp as Locally Linear Out-of-Manifold Regularization," AAAI, 2019.
[8] Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, Youngjoon Yoo, "CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features," ICCV, 2019.
[9] Dan Hendrycks, Norman Mu, Ekin D. Cubuk, Barret Zoph, Justin Gilmer, Balaji Lakshminarayanan, "AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty," ICLR, 2020.
[10] Orest Kupyn, Tetiana Martyniuk, Junru Wu, Zhangyang Wang, "DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better," ICCV, 2019.
[11] Glenn Jocher, YOLOv5, https://github.com/ultralytics/yolov5
[12] Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik, "Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation," arXiv:1311.2524v5, Oct 2014.
[13] Ross Girshick, "Fast R-CNN," arXiv:1504.08083v2, Sep 2015.
[14] Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun, "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks," arXiv:1506.01497v3, Jan 2016.
[15] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg, "SSD: Single Shot MultiBox Detector," arXiv:1512.02325v5, Dec 2016.
[16] Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi, "You Only Look Once: Unified, Real-Time Object Detection," arXiv:1506.02640v5, May 2016.
[17] Joseph Redmon, Ali Farhadi, "YOLO9000: Better, Faster, Stronger," CVPR, 2017.
[18] Joseph Redmon, Ali Farhadi, "YOLOv3: An Incremental Improvement," arXiv:1804.02767, Apr 2018.
[19] Alexey Bochkovskiy, Chien-Yao Wang, Hong-Yuan Mark Liao, "YOLOv4: Optimal Speed and Accuracy of Object Detection," arXiv:2004.10934, Apr 2020.
[20] Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, "Image Style Transfer Using Convolutional Neural Networks," CVPR, 2016.
[21] Xun Huang, Serge Belongie, "Arbitrary Style Transfer in Real-Time with Adaptive Instance Normalization," ICCV, 2017.
[22] Jun-Yan Zhu, Taesung Park, Phillip Isola, Alexei A. Efros, "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks," ICCV, 2017.
[23] Yu-Jin Lee, Sang-Joon Kim, Gyeong-Moo Park, Goo-Man Park, "Comparison of Number Plate Recognition Performance of Synthetic Number Plate Generator Using 2D and 3D Rotation," The Korean Society of Broadcast Engineers, pp. 141-144, 2020.
[24] Ujjwal Saxena, Automold Road Augmentation Library, https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library
[25] Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, Wenzhe Shi, "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network," CVPR, 2017.
[26] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, Kyoung Mu Lee, "Enhanced Deep Residual Networks for Single Image Super-Resolution," CVPR, 2017.
[27] Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Yu Qiao, Chen Change Loy, "ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks," ECCV, 2018.
[28] Orest Kupyn, Volodymyr Budzan, et al., "DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks," CVPR, 2018.
[29] Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning," AAAI, 2017.
[30] Mark Sandler, Andrew Howard, Menglong Zhu, et al., "MobileNetV2: Inverted Residuals and Linear Bottlenecks," CVPR, 2018.
