OALib Journal
ISSN: 2333-9721
Hardware Architecture for Real-Time Computation of Image Component Feature Descriptors on a FPGA

DOI: 10.1155/2014/815378


Abstract:

This paper describes a hardware architecture for real-time image component labeling and the computation of image component feature descriptors. These descriptors are object-related properties used to describe each image component. Embedded machine vision systems demand robust performance, power efficiency, and minimum area utilization, depending on the deployed application. In the proposed architecture, the hardware modules for component labeling and feature calculation run in parallel. A CMOS image sensor (MT9V032), operating at a maximum clock frequency of 27 MHz, was used to capture the images. The architecture was synthesized and implemented on a Xilinx Spartan-6 FPGA. The developed architecture is capable of processing 390 video frames per second at a frame size of 640 × 480 pixels. Dynamic power consumption is 13 mW at 86 frames per second.

1. Introduction

Computation of regional descriptors based on gray value, color, texture, and geometrical features for segmented image components is a basic step in many machine vision systems. The need to detect image objects and compute their features arises in a large number of applications, for example, medical science [1], optical navigation [2], thin-film inspection [3], and smart cameras [4], as well as industrial process monitoring [5]. Hardware architecture design for machine vision systems is therefore of great importance and remains an active research topic. The architecture developed here ensures a high frame rate, low latency, modularity, and low power consumption. It is suitable for smart camera applications, where only refined, processed information is sent over a low-bandwidth communication channel rather than whole video frames. Such a smart camera can form part of a visual sensor network, sending processed results to any other node or to the base station.
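The paper itself gives no code, so as a software analogue of the labeling and feature-computation stages described above, the following is a minimal, hypothetical sketch: single-pass connected-component labeling with on-the-fly accumulation of per-component descriptors (area, centroid, bounding box). All function and variable names here are assumptions for illustration, not the paper's actual FPGA design.

```python
def label_and_describe(image):
    """Label 4-connected foreground components of a binary image and
    accumulate per-component feature descriptors (area, centroid,
    bounding box) in a single raster scan, merging equivalent
    provisional labels with a small union-find structure."""
    h, w = len(image), len(image[0])
    parent = {}   # union-find over provisional labels
    feats = {}    # label -> [area, sum_x, sum_y, min_x, min_y, max_x, max_y]

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    labels = [[0] * w for _ in range(h)]
    next_label = 1
    for y in range(h):
        for x in range(w):
            if not image[y][x]:
                continue
            left = labels[y][x - 1] if x > 0 else 0
            up = labels[y - 1][x] if y > 0 else 0
            if left and up:
                labels[y][x] = left
                union(left, up)          # two regions touch: record equivalence
            elif left or up:
                labels[y][x] = left or up
            else:
                labels[y][x] = next_label
                parent[next_label] = next_label
                next_label += 1
            # accumulate features for this pixel's provisional label
            f = feats.setdefault(labels[y][x], [0, 0, 0, x, y, x, y])
            f[0] += 1; f[1] += x; f[2] += y
            f[3] = min(f[3], x); f[4] = min(f[4], y)
            f[5] = max(f[5], x); f[6] = max(f[6], y)

    # fold provisional-label features into their root labels
    merged = {}
    for lab, f in feats.items():
        r = find(lab)
        m = merged.setdefault(r, [0, 0, 0, f[3], f[4], f[5], f[6]])
        m[0] += f[0]; m[1] += f[1]; m[2] += f[2]
        m[3] = min(m[3], f[3]); m[4] = min(m[4], f[4])
        m[5] = max(m[5], f[5]); m[6] = max(m[6], f[6])

    return {
        r: {"area": a, "centroid": (sx / a, sy / a),
            "bbox": (x0, y0, x1, y1)}
        for r, (a, sx, sy, x0, y0, x1, y1) in merged.items()
    }
```

Note that streaming FPGA labelers typically replace the software union-find with a hardware merger/equivalence table and process pixels as they arrive from the sensor, which is what allows the labeling and feature modules to run in parallel as the paper describes.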
Machine vision algorithms are often divided into the following steps, as shown in Figure 1 [6]. Video is acquired from an image sensor during image acquisition. Image objects are extracted from the preprocessed video data during segmentation. During labeling, pixels belonging to the same image component are assigned a unique label. During feature extraction, an image component is described, for example, in terms of region features such as ellipse, square, or circle parameters. Components can also be described in terms of gray-value features such as mean gray value or position. These features are sometimes also referred to as descriptors. Feature information can then be
