
JBE, vol. 25, no. 5, pp.798-807, September, 2020

DOI: https://doi.org/10.5909/JBE.2020.25.5.798

Fusing Algorithm for Dense Point Cloud in Multi-view Stereo

Hyeon-Deok Han and Jong-Ki Han

Corresponding Author E-mail: hjk@sejong.edu

Abstract:

As digital camera technologies have advanced, 3D images can be constructed from pictures captured by multiple cameras. The 3D image data are represented as a point cloud, which consists of the 3D coordinates of the data points and their associated attributes. Various techniques have been proposed to construct point cloud data; among them, Structure-from-Motion (SfM) and Multi-view Stereo (MVS) are representative image-based technologies in this field. According to previous research, the point clouds generated by SfM and MVS may be sparse, because the estimated depth information may be incorrect and unreliable data are removed. In this paper, we propose an efficient algorithm that enhances the point cloud so that the density of the generated point cloud increases. Simulation results show that the proposed algorithm outperforms conventional algorithms both objectively and subjectively.
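The MVS pipeline described above produces per-view depth maps that are back-projected into 3D and merged into a single point cloud. The following is a minimal, illustrative sketch of that fusing step (it is not the paper's proposed algorithm); the function names, the pinhole camera model, and the voxel-grid deduplication step are all assumptions made for illustration.

```python
import numpy as np

def depth_to_points(depth, K, R, t):
    """Back-project a depth map into world-space 3D points.

    depth : (H, W) array of per-pixel depths (0 marks invalid pixels)
    K     : (3, 3) pinhole intrinsic matrix
    R, t  : world-to-camera rotation (3, 3) and translation (3,)
    """
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]          # pixel coordinate grids
    valid = depth > 0
    # Pixel -> camera-space ray, scaled by the measured depth.
    pix = np.stack([u[valid], v[valid], np.ones(valid.sum())])
    cam = np.linalg.inv(K) @ pix * depth[valid]
    # Camera -> world: X_w = R^T (X_c - t).
    return (R.T @ (cam - t[:, None])).T

def fuse_point_cloud(views, voxel=0.01):
    """Concatenate per-view points, then deduplicate on a voxel grid.

    views : iterable of (depth, K, R, t) tuples, one per camera.
    """
    pts = np.vstack([depth_to_points(d, K, R, t) for d, K, R, t in views])
    keys = np.round(pts / voxel).astype(np.int64)   # quantize to voxels
    _, idx = np.unique(keys, axis=0, return_index=True)
    return pts[idx]                                  # one point per voxel
```

In this sketch, points falling in the same voxel (e.g. the same surface seen from two cameras) collapse to a single sample, which is the simplest form of redundancy removal; real MVS fusers additionally filter points whose depths are inconsistent across views.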



Keywords: Point Cloud, Depth Information, Multi-view Stereo

