
JBE, vol. 25, no. 5, pp.655-664, September, 2020

DOI: https://doi.org/10.5909/JBE.2020.25.5.655

MPEG-I RVS Software Speed-up for Real-time Application

Heejune Ahn and Myeong-jin Lee

Corresponding author e-mail: heejune@seoultech.ac.kr

Abstract:

Free viewpoint image synthesis is one of the key technologies in the MPEG-I (Immersive) standard. RVS (Reference View Synthesizer), developed and used by the MPEG group, is a DIBR (Depth-Image-Based Rendering) program that generates an image at a virtual (intermediate) viewpoint from multiple input viewpoints. RVS uses a computer-graphics mesh-surface method and outperforms earlier pixel-based approaches by 2.5 dB or more. Although its OpenGL version is about 10 times faster than the non-OpenGL one, it still falls short of real time, running at 0.75 fps on two 2K-resolution input views. In this paper, we analyze the internals of the RVS implementation and restructure it, achieving a 34-fold speed-up and real-time performance (22-26 fps) through three key improvements: 1) reuse of OpenGL buffer and texture objects, 2) parallelization of file I/O and OpenGL execution, and 3) parallelization of the GPU shader programs and buffer transfers.



Keywords: MPEG-I, RVS (Reference View Synthesizer), OpenGL, Real-time Optimization, Free viewpoint

