JBE, vol. 23, no. 6, pp.760-767, November, 2018
Night-to-Day Road Image Translation with Generative Adversarial Network for Driver Safety Enhancement
Namhyun Ahn and Suk-Ju Kang
Corresponding author e-mail: firstname.lastname@example.org
The advanced driver assistance system (ADAS) is a major technology in the intelligent vehicle field. ADAS techniques can be separated into two classes: methods that directly control the movement of the vehicle, and methods that indirectly provide convenience to the driver. In this paper, we propose a novel system that gives visual assistance to the driver by translating a night road image into a day road image. We use black box images capturing the front road view of the vehicle as inputs. Each black box image is cropped into three parts, which are simultaneously translated into day images by the proposed image translation module. The translated parts are then recombined into an image of the original size. The experimental results show that the proposed method generates realistic images and outperforms conventional algorithms.
Keywords: Image translation, Image enhancement, Deep learning, Cycle consistency, Generative adversarial network
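The crop–translate–recombine pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `translate_night_to_day` is a hypothetical placeholder standing in for the trained generator network, and the assumption of three equal-width vertical strips is ours for illustration.

```python
import numpy as np

def translate_night_to_day(patch: np.ndarray) -> np.ndarray:
    """Stand-in for the trained night-to-day generator (a GAN-based
    image translation network in the paper). Here it is a hypothetical
    placeholder that simply brightens the patch."""
    return np.clip(patch.astype(np.float32) * 1.8, 0, 255).astype(np.uint8)

def process_black_box_frame(frame: np.ndarray) -> np.ndarray:
    """Crop a front-view frame into three parts, translate each part,
    then recombine the results into an image of the original size."""
    h, w, _ = frame.shape
    # Assumption: three equal-width vertical strips.
    bounds = [0, w // 3, 2 * w // 3, w]
    crops = [frame[:, bounds[i]:bounds[i + 1]] for i in range(3)]
    # Translate each crop independently (done in parallel in the paper).
    translated = [translate_night_to_day(c) for c in crops]
    # Recollect the translated crops into a full-size day image.
    return np.concatenate(translated, axis=1)

# Example: a dark synthetic "night" frame.
night = np.random.randint(0, 60, size=(120, 360, 3), dtype=np.uint8)
day = process_black_box_frame(night)
```

Splitting the frame lets each crop be processed at a manageable resolution and in parallel before reassembly.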