Mobile Imaging and Computing for Intelligent Structural Damage Inspection

DOI: 10.1155/2014/483729


Abstract:

Optical imaging is a commonly used technique in civil engineering for archiving damage scenes and, more recently, for image analysis-based damage quantification. However, its limitations are evident when applied in the field, the most significant being the lack of real-time computing and processing capability. The advancement of mobile imaging and computing technologies provides a promising opportunity to change this norm. This paper first provides a timely introduction to state-of-the-art mobile imaging and computing technologies for the purpose of engineering application development. We then propose a mobile imaging and computing (MIC) framework for conducting intelligent condition assessment of constructed objects, which features in situ imaging and real-time damage analysis. This framework synthesizes advanced mobile technologies with three innovative features: (i) context-enabled image collection, (ii) interactive image preprocessing, and (iii) real-time image analysis and analytics. Through performance evaluation and field experiments, this paper demonstrates the feasibility and efficiency of the proposed framework.

1. Introduction

1.1. Background and Rationale

Sensing-based inspection technologies are recognized as critical components of solutions for quantitative and risk-informed condition assessment of buildings, bridges, and other civil infrastructure systems [1, 2]. Among the numerous sensing technologies, visual inspection is commonly used in engineering practice for archiving damage scenes and patterns. For example, visual inspection is considered the predominant approach to condition assessment for the majority of bridge inventories in the United States [3]. In these practices, visual inspection is often accompanied by the use of a digital camera for digital archival through photographing (viz., optical imaging). The fundamental basis of optical imaging is that, by measuring the photonic energy emanating from distant objects that are degraded or damaged, disturbed spatial or spectral patterns on the surfaces of the objects are recorded in a digital format (i.e., as two-dimensional images) [4]. In practice, a large percentage of damage patterns that are visible on the surfaces of structural or geotechnical members (e.g., cracking, spalling, deformation, or collapse-induced debris) can be captured using a commercial digital camera. In recent years, many research endeavors have attempted to empower visual inspection and optical imaging with quantitative image analysis; hence,
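To make the quantitative image analysis mentioned above concrete, the following is a minimal sketch of a crack-detection pipeline of the kind the cited literature pursues, written against the Python bindings of the OpenCV library [23, 24] (OpenCV 4.x assumed). The function name detect_surface_cracks, the Canny thresholds, and the 3:1 elongation cutoff are illustrative assumptions for this sketch, not the authors' implementation.

import cv2

def detect_surface_cracks(image_path, canny_low=50, canny_high=150):
    """Return contours of elongated, crack-like features in a photograph."""
    # Load the photograph as a single-channel intensity image.
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise IOError("could not read " + image_path)

    # Histogram equalization (cf. [46]) compensates for uneven field lighting.
    equalized = cv2.equalizeHist(gray)

    # Mild Gaussian smoothing suppresses surface texture before edge detection.
    blurred = cv2.GaussianBlur(equalized, (5, 5), 0)

    # Canny edge detection (cf. [48]); the thresholds are assumed values that
    # would need tuning for each camera and surface type.
    edges = cv2.Canny(blurred, canny_low, canny_high)

    # Border following (cf. [53]) groups edge pixels into candidate contours.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    # Cracks are thin and long, so keep only elongated bounding boxes;
    # the 3:1 aspect-ratio cutoff is an illustrative assumption.
    cracks = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if max(w, h) >= 3 * max(1, min(w, h)):
            cracks.append(contour)
    return cracks

In a mobile imaging and computing setting, a routine of this kind would run on the device immediately after capture, letting the inspector judge in the field whether a photograph supports quantification or should be retaken.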

References

[1]  F. N. Catbas and A. E. Aktan, “Condition and damage assessment: issues and some promising indices,” Journal of Structural Engineering, vol. 128, no. 8, pp. 1026–1036, 2002.
[2]  B. R. Ellingwood, “Risk-informed condition assessment of civil infrastructure: state of practice and research issues,” Structure and Infrastructure Engineering, vol. 1, no. 1, pp. 7–18, 2005.
[3]  M. Moore, B. Phares, B. Graybeal, D. Rolander, and G. Washer, Reliability of Visual Inspection for Highway Bridges, Volume I, Federal Highway Administration, 2001.
[4]  M. J. Olsen, Z. Chen, T. Hutchinson, and F. Kuester, “Optical techniques for multiscale damage assessment,” Geomatics, Natural Hazards and Risk, vol. 4, no. 1, pp. 49–70, 2013.
[5]  C. Koch and I. Brilakis, “Pothole detection in asphalt pavement images,” Advanced Engineering Informatics, vol. 25, no. 3, pp. 507–515, 2011.
[6]  Z. Chen and T. C. Hutchinson, “Image-based framework for concrete surface crack monitoring and quantification,” Advances in Civil Engineering, vol. 2010, Article ID 215295, 18 pages, 2010.
[7]  S. R. Cruz-Ramírez, Y. Mae, T. Arai, T. Takubo, and K. Ohara, “Vision-based hierarchical recognition for dismantling robot applied to interior renewal of buildings,” Computer-Aided Civil and Infrastructure Engineering, vol. 26, no. 5, pp. 336–355, 2011.
[8]  T. Nishikawa, J. Yoshida, T. Sugiyama, and Y. Fujino, “Concrete crack detection by multiple sequential image filtering,” Computer-Aided Civil and Infrastructure Engineering, vol. 27, no. 1, pp. 29–47, 2012.
[9]  M. R. Jahanshahi and S. F. Masri, “Adaptive vision-based crack detection using 3D scene reconstruction for condition assessment of structures,” Automation in Construction, vol. 22, pp. 567–576, 2012.
[10]  H. Cheng and C. Glazier, Automated Real-Time Pavement Crack Detection/Classification System, IDEA Program, Transportation Research Board, 2002.
[11]  S. Ghanta, R. Birken, and J. Dy, “Automatic road surface defect detection from grayscale images,” in Proceedings of the SPIE 8347, Nondestructive Characterization for Composite Materials, Aerospace Engineering, Civil Infrastructure, and Homeland Security, vol. 83471E, 2012.
[12]  C. Balaguer, A. Giménez, J. M. Pastor, V. M. Padrón, and M. Abderrahim, “Climbing autonomous robot for inspection applications in 3D complex environments,” Robotica, vol. 18, no. 3, pp. 287–297, 2000.
[13]  J.-K. Oh, G. Jang, S. Oh et al., “Bridge inspection robot system with machine vision,” Automation in Construction, vol. 18, no. 7, pp. 929–941, 2009.
[14]  S.-N. Yu, J.-H. Jang, and C.-S. Han, “Auto inspection system using a mobile robot for detecting concrete cracks in a tunnel,” Automation in Construction, vol. 16, no. 3, pp. 255–261, 2007.
[15]  C. Eschmann, C.-M. Kuo, C.-H. Kuo, and C. Boller, “Unmanned aircraft systems for remote building inspection and monitoring,” in Proceedings of the 6th European Workshop on Structural Health Monitoring (EWSHM '12), pp. 1179–1186, Dresden, Germany, July 2012.
[16]  S. Rathinam, Z. W. Kim, and R. Sengupta, “Vision-based monitoring of locally linear structures using an unmanned aerial vehicle,” Journal of Infrastructure Systems, vol. 14, no. 1, pp. 52–63, 2008.
[17]  N. Metni and T. Hamel, “A UAV for bridge inspection: visual servoing control law with orientation limits,” Automation in Construction, vol. 17, no. 1, pp. 3–10, 2007.
[18]  G. H. Forman and J. Zahorjan, “The challenges of mobile computing,” Computer, vol. 27, no. 4, pp. 38–47, 1994.
[19]  Mobithinking, “Global mobile statistics 2013 Part A: mobile subscribers; handset market share; mobile operators,” 2013, http://mobithinking.com/mobile-marketing-tools/latest-mobile-stats/a.
[20]  K. Pulli, W.-C. Chen, N. Gelfand et al., “Mobile visual computing,” in Proceedings of the International Symposium on Ubiquitous Virtual Reality (ISUVR '09), pp. 3–6, July 2009.
[21]  Gartner, Inc., “Gartner says worldwide sales of mobile phones,” 2013, http://www.gartner.com/newsroom/id/2017015.
[22]  D. Ehringer, “The Dalvik virtual machine architecture,” Tech. Rep., 2010.
[23]  G. Bradski and A. Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library, O'Reilly Media, 2008.
[24]  OpenCV, “Open Source Computer Vision,” 2013, http://opencv.org/.
[25]  W.-H. Cho and T.-C. Kim, “Image enhancement technique using color and edge features for mobile systems,” in Digital Photography VII, Proceedings of SPIE, San Francisco, Calif, USA, January 2011.
[26]  T. Yeh, K. Grauman, K. Tollmar, and T. Darrell, “A picture is worth a thousand keywords: image-based object search on a mobile platform,” in Proceedings of the Extended Abstracts on Human Factors in Computing Systems (CHI '05), pp. 2025–2028, 2005.
[27]  R. Andrade, A. V. Wangenheim, and M. K. Bortoluzzi, “Wireless and PDA: a novel strategy to access DICOM-compliant medical data on mobile devices,” International Journal of Medical Informatics, vol. 71, no. 2-3, pp. 157–163, 2003.
[28]  Y. Kondo, “Medical image transfer for emergency care utilizing internet and mobile phone,” Nippon Hoshasen Gijutsu Gakkai Zasshi, vol. 58, no. 10, pp. 1393–1401, 2002.
[29]  Z. Tu and R. Li, “Automatic recognition of civil infrastructure objects in mobile mapping imagery using a markov random field model,” in Proceedings of the 19th Congress of ISPRS, pp. 16–23, 2000.
[30]  C. Tao, R. Li, and M. A. Chapman, “Automatic reconstruction of road centerlines from mobile mapping image sequences,” Photogrammetric Engineering and Remote Sensing, vol. 64, no. 7, pp. 709–716, 1998.
[31]  D. Wagner, G. Reitmayr, A. Mulloni, T. Drummond, and D. Schmalstieg, “Real-time detection and tracking for augmented reality on mobile phones,” IEEE Transactions on Visualization and Computer Graphics, vol. 16, no. 3, pp. 355–368, 2010.
[32]  Y. Liu, J. Yang, and M. Liu, “Recognition of QR Code with mobile phones,” in Proceedings of the Chinese Control and Decision Conference (CCDC '08), pp. 203–206, Yantai, China, July 2008.
[33]  A. Sun, Y. Sun, and C. Liu, “The QR-code reorganization in illegible snapshots taken by mobile phones,” in Proceedings of the International Conference on Computational Science and Its Applications (ICCSA '07), pp. 532–538, Kuala Lumpur, Malaysia, August 2007.
[34]  J. J. Hull, X. Liu, B. Erol, J. Graham, and J. Moraleda, “Mobile image recognition: architectures and tradeoffs,” in Proceedings of the 11th Workshop on Mobile Computing Systems Applications, pp. 84–88, 2010.
[35]  T. Nagayama, A. Miyajima, S. Kimura, Y. Shimada, and Y. Fujino, “Road condition evaluation using the vibration response of ordinary vehicles and synchronously recorded movies,” in Proceedings of the SPIE Smart Structures and Materials + Nondestructive Evaluation and Health Monitoring, p. 86923A, March 2013.
[36]  R. Bhoraskar, N. Vankadhara, B. Raman, and P. Kulkarni, “Wolverine: traffic and road condition estimation using smartphone sensors,” in Proceedings of the 4th International Conference on Communication Systems and Networks (COMSNETS '12), pp. 1–6, Bangalore, India, January 2012.
[37]  D. A. Johnson and M. M. Trivedi, “Driving style recognition using a smartphone as a sensor platform,” in Proceedings of the 14th IEEE International Intelligent Transportation Systems Conference (ITSC '11), pp. 1609–1615, Washington, DC, USA, October 2011.
[38]  A. H. Behzadan, Z. Aziz, C. J. Anumba, and V. R. Kamat, “Ubiquitous location tracking for context-specific information delivery on construction sites,” Automation in Construction, vol. 17, no. 6, pp. 737–748, 2008.
[39]  K. C. Yeh, M. H. Tsai, and S. C. Kang, “The iHelmet: an AR-nhanced wearable display for BIM information,” in Mobile and Pervasive Computing in Construction, pp. 149–168, Wiley-Blackwell, 2012.
[40]  Z. Chen, R. R. Derakhshani, C. Halmen, and J. T. Kevern, “A texture-based method for classifying cracked concrete surfaces from digital images using neural networks,” in Proceedings of the International Joint Conference on Neural Network (IJCNN '11), pp. 2632–2637, San Jose, Calif, USA, August 2011.
[41]  D. A. Forsyth and J. Ponce, Computer Vision: A Modern Approach, Prentice Hall, New York, NY, USA, 2002.
[42]  Z. Zhang, “A flexible new technique for camera calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, 2000.
[43]  CMU Sphinx, “Open source toolkit for speech recognition,” 2011, http://cmusphinx.sourceforge.net/.
[44]  A. K. Jain and F. Farrokhnia, “Unsupervised texture segmentation using Gabor filters,” Pattern Recognition, vol. 24, no. 12, pp. 1167–1186, 1991.
[45]  L. A. Vese and T. F. Chan, “A multiphase level set framework for image segmentation using the Mumford and Shah model,” International Journal of Computer Vision, vol. 50, no. 3, pp. 271–293, 2002.
[46]  H. D. Cheng and X. J. Shi, “A simple and effective histogram equalization approach to image enhancement,” Digital Signal Processing: A Review Journal, vol. 14, no. 2, pp. 158–170, 2004.
[47]  D. Comaniciu and P. Meer, “Mean shift analysis and applications,” in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV '99), pp. 1197–1203, September 1999.
[48]  J. Canny, “A computational approach to edge detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–698, 1986.
[49]  R. Koren and Y. Yitzhaky, “Automatic selection of edge detector parameters based on spatial and statistical measures,” Computer Vision and Image Understanding, vol. 102, no. 2, pp. 204–213, 2006.
[50]  Y. Yitzhaky and E. Peli, “A method for objective edge detection evaluation and detector parameter selection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 8, pp. 1027–1033, 2003.
[51]  S. Kichenassamy, A. Kumar, P. Olver, A. Tannenbaum, and A. Yezzi, “Gradient flows and geometric active contour models,” in Proceedings of the 5th International Conference on Computer Vision, pp. 810–815, 1995.
[52]  V. Caselles, R. Kimmel, and G. Sapiro, “Geodesic active contours,” International Journal of Computer Vision, vol. 22, no. 1, pp. 61–79, 1997.
[53]  S. Suzuki and K. Abe, “Topological structural analysis of digitized binary images by border following,” Computer Vision, Graphics and Image Processing, vol. 30, no. 1, pp. 32–46, 1985.
[54]  Google, Find Content to Reuse, Google, 2013, https://support.google.com/websearch/answer/29508.
[55]  Z. Chen, “UMKC Concrete Damage Imagery Database,” 2013, http://lasir.umkc.edu/cdid/.
