
Rapid 3D Modeling and Parts Recognition on Automotive Vehicles Using a Network of RGB-D Sensors for Robot Guidance

DOI: 10.1155/2013/832963


Abstract:

This paper presents an approach for the automatic detection and fast 3D profiling of lateral body panels of vehicles. The work introduces a method to integrate raw streams from depth sensors into the task of 3D profiling and reconstruction, along with a methodology for the extrinsic calibration of a network of Kinect sensors. This sensing framework is intended to rapidly provide a robot with enough spatial information to interact with automobile panels using various tools. When a vehicle is positioned inside the defined scanning area, a collection of reference parts on the bodywork is automatically recognized from a mosaic of color images collected by a network of Kinect sensors distributed around the vehicle, and a global frame of reference is established. Sections of the depth information on one side of the vehicle are then collected, aligned, and merged into a global RGB-D model. Finally, a 3D triangular mesh modelling the body panels of the vehicle is automatically built. The approach has applications in the intelligent transportation industry, automated vehicle inspection, quality control, automatic car wash systems, automotive production lines, and scan alignment and interpretation.

1. Introduction

Robot manipulation and navigation require efficient methods for representing and interpreting the surrounding environment. Industrial robots, which work in controlled environments, are typically designed to perform only repetitive, preprogrammed tasks. Robots working in dynamic environments, however, demand reliable methods for interpreting their surroundings and are subject to severe time constraints. Most existing solutions for robotic environment representation and interpretation rely on high-cost 3D profiling cameras, scanners, sonars, or combinations of these, which often result in lengthy acquisition and slow processing of massive amounts of information. The high acquisition speed of the Kinect technology meets the requirements for rapidly acquiring models over large volumes, such as that of an automotive vehicle. The sensor's performance, affordability, and growing adoption in robotic applications supported its selection for developing a robotic inspection station operating under multisensory visual guidance. The method presented in this work uses a set of properly calibrated Kinect depth sensors to collect visual information as well as 3D points from different regions of a vehicle's bodywork. A dedicated calibration methodology is presented to achieve accurate alignment between the respective point clouds and textured images acquired by the sensors.
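To make the data flow concrete, the following is a minimal sketch, not the authors' code, of how per-sensor extrinsics are typically applied once a network calibration (e.g., against a shared checkerboard target) has produced a rotation and translation for each Kinect. All names and numeric values below are illustrative assumptions.

    # Minimal sketch (illustrative, not the paper's implementation): apply
    # each sensor's extrinsic pose to bring its depth points into the common
    # vehicle frame established by the network calibration.
    import numpy as np

    def to_global(points, R, t):
        """Map an (N, 3) cloud from a sensor frame into the global frame."""
        return points @ R.T + t

    # Hypothetical calibration results for a two-sensor network.
    R1, t1 = np.eye(3), np.zeros(3)
    R2 = np.array([[ 0.0, 0.0, 1.0],
                   [ 0.0, 1.0, 0.0],
                   [-1.0, 0.0, 0.0]])   # 90-degree rotation about Y
    t2 = np.array([2.0, 0.0, 0.0])      # 2 m offset along X

    cloud1 = np.random.rand(1000, 3)    # stand-ins for real depth data
    cloud2 = np.random.rand(1000, 3)
    merged = np.vstack([to_global(cloud1, R1, t1),
                        to_global(cloud2, R2, t2)])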
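Once the clouds share a frame, overlapping sections still need fine alignment before they are merged into the global RGB-D model. A standard way to perform such a refinement is classic point-to-point ICP (Besl and McKay); the sketch below is one generic formulation, not the paper's specific registration pipeline, and the function and variable names are assumptions.

    # Point-to-point ICP sketch: iteratively match closest points and solve
    # the rigid fit in closed form (Kabsch/SVD). Assumes a coarse initial
    # alignment from the extrinsic calibration.
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(src, dst, iters=30):
        """Refine the rigid transform aligning src (N, 3) onto dst (M, 3)."""
        R, t = np.eye(3), np.zeros(3)
        tree = cKDTree(dst)                 # fixed target cloud
        cur = src.copy()
        for _ in range(iters):
            _, idx = tree.query(cur)        # closest-point correspondences
            matched = dst[idx]
            mu_s, mu_d = cur.mean(0), matched.mean(0)
            H = (cur - mu_s).T @ (matched - mu_d)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # no reflections
            R_k = Vt.T @ D @ U.T            # rigid rotation for this step
            t_k = mu_d - R_k @ mu_s
            cur = cur @ R_k.T + t_k
            R, t = R_k @ R, R_k @ t + t_k   # accumulate overall transform
        return R, t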
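For the final step, a lateral body panel is approximately a height field over the vehicle's side plane, so a 2D Delaunay triangulation of the projected points, lifted back to 3D, yields the triangular mesh. The sketch below assumes that convention; the axis assignment is chosen for illustration only.

    import numpy as np
    from scipy.spatial import Delaunay

    def panel_mesh(points):
        """Triangulate an (N, 3) panel cloud; returns triangle indices (T, 3)."""
        # Illustrative convention: X runs along the vehicle, Z is height,
        # and Y is the depth away from the side panel being profiled.
        tri = Delaunay(points[:, [0, 2]])   # 2D triangulation in the X-Z plane
        return tri.simplices

The returned simplices index into the original 3D points, so the lifted mesh can be exported directly (e.g., as a PLY file) for downstream simplification or robot guidance.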

