
JBE, vol. 27, no. 5, pp. 808-811, September 2022

DOI: https://doi.org/10.5909/JBE.2022.27.5.808

Algorithm for Improving Visibility under Ambient Lighting Using Deep Learning

Hee Jin Lee and Byung Cheol Song

Corresponding Author E-mail: bcsong@inha.ac.kr

Abstract:

A display viewed under strong ambient lighting is perceived as darker than it actually is. Existing software-based solutions are limited in that they apply image enhancement regardless of the ambient lighting level, or improve luminance without a corresponding improvement in chrominance. This paper therefore proposes a deep-learning-based visibility enhancement algorithm that adapts to the measured ambient lighting value, together with an equation that restores the chrominance best matched to the enhanced luminance. The algorithm takes the ambient lighting value along with the input image, then applies a deep learning model and the chrominance restoration equation to generate an enhanced image that minimizes the difference between the degradation-modeled version of the enhanced image and the input image. Qualitative evaluation, based on a comparison of degradation-modeled images, shows that the proposed algorithm performs well in improving visibility under strong ambient lighting.



Keywords: visibility improvement, human visual system, ambient lighting, deep learning
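
Below is a minimal, hypothetical sketch of the training objective described in the abstract, not the authors' implementation: a network (here a toy EnhanceNet) enhances an image conditioned on an ambient lighting value, and the loss penalizes the difference between a degradation model applied to the enhanced image and the original input. The degradation function, the lux normalization constant, and all module and variable names are assumptions made for illustration; the paper's chrominance restoration equation is not reproduced here.

import torch
import torch.nn as nn

class EnhanceNet(nn.Module):
    """Toy stand-in for the paper's deep model; the real architecture is not given in the abstract."""
    def __init__(self):
        super().__init__()
        # Input: 3 RGB channels plus 1 channel broadcasting the ambient lighting value.
        self.body = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, img, lux):
        lux_map = lux.view(-1, 1, 1, 1).expand(-1, 1, *img.shape[-2:])
        return self.body(torch.cat([img, lux_map], dim=1))

def degrade(img, lux, max_lux=10000.0):
    # Assumed degradation model: ambient light compresses contrast toward mid-gray
    # in proportion to the normalized lux value. The paper's actual model may differ.
    k = (lux / max_lux).clamp(0, 1).view(-1, 1, 1, 1)
    return (1.0 - k) * img + k * 0.5

model = EnhanceNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

img = torch.rand(2, 3, 64, 64)        # normalized RGB batch
lux = torch.tensor([3000.0, 8000.0])  # measured ambient lighting per sample (lux)

enhanced = model(img, lux)            # enhancement conditioned on ambient lighting
loss = nn.functional.l1_loss(degrade(enhanced, lux), img)
loss.backward()
opt.step()

In this sketch the loss is self-supervised: no ground-truth enhanced image is needed, because the enhanced output is judged by how closely its degraded version reproduces the input under the assumed ambient-lighting degradation.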



