2016

Multiple model fusion in 3-D reconstruction: Illumination and scale invariance

DOI: 10.16511/j.cnki.qhdxxb.2016.21.046

Keywords: 3-D model registration, structure from motion, scaled-PCA-ICP


Abstract:

Internet photo collections can be visualized in 3-D by using structure-from-motion methods to reconstruct the images into 3-D point clouds, which lets users move freely through the scene and inspect both the point clouds and the source images. However, because illumination conditions differ greatly across Internet photographs of the same scene, traditional methods often fail to reconstruct a single unified point cloud; instead they produce multiple independent point clouds, one per cluster of illumination conditions. This paper presents a 3-D point cloud registration framework that fuses these illumination-separated point clouds into a unified model. First, the point clouds are described by their 3-D geometric features rather than 2-D image features, which removes the influence of illumination differences on registration. Second, a scaled-PCA-ICP registration algorithm is proposed to handle the large scale variance between point clouds. Experiments on two datasets demonstrate the effectiveness of the method.
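The abstract's key algorithmic point is that a gross scale normalization can be applied before an ICP-style refinement. The paper's actual scaled-PCA-ICP algorithm is not reproduced here; the Python below is only a minimal illustrative sketch that estimates a relative scale from PCA variances and then runs a basic point-to-point ICP loop. All helper names (pca_scale, best_rigid_transform, scaled_icp) are hypothetical.

import numpy as np
from scipy.spatial import cKDTree

def pca_scale(points):
    # Rough per-cloud scale: square root of the total PCA variance.
    # (Hypothetical heuristic, not the paper's exact scale estimator.)
    centered = points - points.mean(axis=0)
    return np.sqrt(np.trace(np.cov(centered.T)))

def best_rigid_transform(src, dst):
    # Least-squares rotation and translation between matched point sets
    # (Kabsch/Umeyama-style closed form via SVD).
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def scaled_icp(src, dst, iters=30):
    # Toy scaled ICP: pre-scale src by a PCA-derived factor, then iterate
    # nearest-neighbor matching and rigid alignment against dst.
    s = pca_scale(dst) / pca_scale(src)
    cur = src * s
    tree = cKDTree(dst)
    for _ in range(iters):
        _, idx = tree.query(cur)             # closest dst point for each src point
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return s, cur                            # estimated scale and aligned cloud

In the paper's setting the scale estimate comes from the registration framework itself; the point of the sketch is only that normalizing scale once up front lets a standard ICP variant align point clouds whose absolute scales differ.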

