
JBE, vol. 24, no. 2, pp. 217-226, March 2019

DOI: https://doi.org/10.5909/JBE.2019.24.2.217

Image Filtering Method for an Effective Inverse Tone-mapping

Rahoon Kang, Bumjun Park, and Jechang Jeong

Corresponding Author (C.A.) E-mail: jjeong@hanyang.ac.kr

Abstract:

In this paper, we propose a filtering method that improves the results of inverse tone-mapping by using a guided image filter. Inverse tone-mapping techniques convert low dynamic range (LDR) images into high dynamic range (HDR) images, and many recent algorithms perform this conversion from a single LDR image using a convolutional neural network (CNN). Among them is an algorithm that restores pixel information with a CNN trained to reconstruct saturated regions; however, it neither suppresses noise in the non-saturated regions nor recovers detail in the saturated regions. The proposed algorithm applies a weighted guided image filter (WGIF) to the input image to suppress noise in the non-saturated regions and restore detail in the saturated regions, and then feeds the filtered image to the CNN to improve the quality of the final HDR image. When HDR quantitative image-quality metrics are measured, the proposed algorithm achieves higher scores than the existing algorithms.
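As a rough illustration of the pipeline described in the abstract, the sketch below (Python/NumPy, not the authors' code) applies a plain guided filter with self-guidance to each channel of the LDR input before the image would be passed to the HDR-reconstruction CNN. The paper uses a weighted guided image filter (WGIF), which additionally modulates the regularization term with an edge-aware weight; the function names, the per-channel self-guidance, and the radius/eps values here are illustrative assumptions, not the paper's settings.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def guided_filter(guide, src, radius=8, eps=1e-3):
        """Plain guided filter (self-guided when guide is src).

        guide, src: 2-D float arrays in [0, 1]; radius: box-window radius;
        eps: regularization trading edge preservation against smoothing.
        """
        size = 2 * radius + 1
        mean_I = uniform_filter(guide, size)
        mean_p = uniform_filter(src, size)
        corr_I = uniform_filter(guide * guide, size)
        corr_Ip = uniform_filter(guide * src, size)

        var_I = corr_I - mean_I * mean_I
        cov_Ip = corr_Ip - mean_I * mean_p

        a = cov_Ip / (var_I + eps)       # local linear coefficient per window
        b = mean_p - a * mean_I

        mean_a = uniform_filter(a, size)
        mean_b = uniform_filter(b, size)
        return mean_a * guide + mean_b   # edge-preserving smoothed output

    def preprocess_for_itm(ldr, radius=8, eps=1e-3):
        """Hypothetical preprocessing step: filter the LDR input per channel
        before feeding it to the HDR-reconstruction CNN."""
        ldr = ldr.astype(np.float64)
        out = np.stack(
            [guided_filter(ldr[..., c], ldr[..., c], radius, eps)
             for c in range(ldr.shape[-1])],
            axis=-1)
        return np.clip(out, 0.0, 1.0)

In the proposed method it is the filtered image, rather than the raw LDR frame, that the CNN receives: smoothing in the non-saturated regions suppresses noise that the network would otherwise propagate, while the edge-preserving behavior keeps structure near the saturated regions available for the network to expand.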

 



Keyword: Inverse Tone-mapping, HDR, CNN, Deep Learning, Guided Image Filter

