
JBE, vol. 26, no. 1, pp.88-98, January, 2021

DOI: https://doi.org/10.5909/JBE.2021.26.1.88

Color Noise Detection and Image Restoration for Low Illumination Environment

Gyoheak Oh, Jaelin Lee, and Byeungwoo Jeon

Corresponding author e-mail: bjeon@skku.edu

Abstract:

Recently, crime prevention and culprit identification by CCTV, even in low-illumination environments, have become increasingly important. In low-light conditions, CCTV systems capture images under infrared lighting, since it is unobtrusive to the human eye. Although infrared lighting makes it possible to capture images with abundant fine texture information, it cannot capture color information, which is essential for identifying objects or persons in CCTV images. In this paper, we propose a method to recover color information through a DCGAN from an image captured by CCTV under infrared lighting in a low-illumination environment, together with a method to remove color noise from the recovered color image.
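The abstract does not specify the color-noise filter, but one common choice for chroma noise of this kind is a median filter applied only to the chroma (Cb/Cr) channels of the recovered image, leaving the luma channel, which carries the fine NIR texture, untouched. The sketch below is a minimal illustration under that assumption (a 3×3 window, channels as flat row-major lists), not the authors' actual method:

```python
import statistics

def median_filter_channel(channel, h, w, k=3):
    """Apply a k x k median filter to one image channel.

    channel: flat row-major list of pixel values, length h*w.
    Border pixels use the clipped (smaller) neighborhood. k is assumed odd.
    """
    r = k // 2
    out = [0] * (h * w)
    for y in range(h):
        for x in range(w):
            # Gather the valid neighborhood around (x, y).
            window = [
                channel[yy * w + xx]
                for yy in range(max(0, y - r), min(h, y + r + 1))
                for xx in range(max(0, x - r), min(w, x + r + 1))
            ]
            out[y * w + x] = statistics.median_low(window)
    return out

def denoise_chroma(luma, cb, cr, h, w):
    """Median-filter only the chroma planes; keep luma (NIR detail) intact."""
    return luma, median_filter_channel(cb, h, w), median_filter_channel(cr, h, w)

# Usage: a 3x3 image whose Cb plane has one color-noise spike at the center.
luma = [10] * 9
cb = [128] * 9
cb[4] = 255          # isolated chroma outlier
cr = [128] * 9
luma2, cb2, cr2 = denoise_chroma(luma, cb, cr, 3, 3)
```

The spike in `cb` is replaced by the neighborhood median while `luma` passes through unchanged, which is the point of restricting the filter to the chroma planes.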



Keywords: Machine learning, DCGAN, color, filter, Near-Infrared


Copyright ⓒ 2012 The Korean Institute of Broadcast and Media Engineers
All Rights Reserved