oalib

Search Results: 1 - 10 of 8024 matches for "feature extraction"
All listed articles are free for downloading (OA Articles)
Determinants in Human Gait Recognition  [PDF]
Tahir Amin, Dimitrios Hatzinakos
Journal of Information Security (JIS) , 2012, DOI: 10.4236/jis.2012.32009
Abstract: Human gait is a complex phenomenon involving the simultaneous motion of various parts of the body in three-dimensional space. The dynamics of different body parts translate its center of gravity from one point to another in the most efficient way. Both body dynamics and the static parameters of different body parts contribute to gait recognition. Studies have assessed the discriminatory power of static and dynamic features. The current research literature, however, lacks work on the comparative significance of dynamic features drawn from different parts of the body. This paper sheds some light on the recognition performance of dynamic features extracted from different parts of the human body in an appearance-based setup.
A Survey on Different Feature Extraction and Classification Techniques Used in Image Steganalysis  [PDF]
John Babu, Sridevi Rangu, Pradyusha Manogna
Journal of Information Security (JIS) , 2017, DOI: 10.4236/jis.2017.83013
Abstract: Steganography is the process of hiding data in a public digital medium for secret communication. The image in which the secret data are hidden is termed the stego image. Detecting hidden embedded data in an image is the foundation of blind image steganalysis. Appropriate selection of the cover file type and composition contributes to successful embedding. A large number of steganalysis techniques are available for detecting steganography in images. The performance of a steganalysis technique depends on its ability to extract discriminative features that capture the statistical changes an embedded payload causes in the image. The difficulty in blind image steganalysis is that no knowledge is available about which steganographic technique, if any, was applied to the image. This paper surveys various steganalysis methods, filtering-based preprocessing methods, feature extraction methods, and machine-learning-based classification methods for the proper identification of steganography in images.
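The survey's point that steganalysis rests on features sensitive to embedding noise can be illustrated with a minimal sketch: a first-order high-pass residual suppresses image content, and a truncated residual histogram serves as the feature vector. The kernel, truncation threshold `t`, and bin layout here are illustrative choices, not any specific method from the surveyed literature.

```python
import numpy as np

def residual_features(image, t=3):
    """Simple residual-histogram features for steganalysis.

    A horizontal first-order difference acts as a high-pass filter, so
    the residual statistics reflect embedding noise more than content.
    The truncation threshold t is an illustrative choice.
    """
    img = np.asarray(image, dtype=np.int64)
    # First-order horizontal residual: r[i, j] = x[i, j+1] - x[i, j]
    residual = img[:, 1:] - img[:, :-1]
    # Truncate to [-t, t], as is common in rich-model-style features
    residual = np.clip(residual, -t, t)
    hist, _ = np.histogram(residual, bins=np.arange(-t, t + 2))
    return hist / hist.sum()  # normalised (2t + 1)-dim feature vector
```

In a full steganalyser, feature vectors like this from cover and stego training images would be fed to a classifier such as an SVM or an ensemble of base learners.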
Face Recognition Feature Comparison Based SVD and FFT  [PDF]
Lina Zhao, Wanbao Hu, Lihong Cui
Journal of Signal and Information Processing (JSIP) , 2012, DOI: 10.4236/jsip.2012.32035
Abstract: SVD and FFT are both efficient tools for image analysis and face recognition. In this paper, we first study the role of SVD and FFT in both fields. Then the decomposition information from SVD and FFT is compared. Next, a new viewpoint, that the singular value matrix contains the illumination information of the image, is proposed and finally verified by experiments on the ORL face database.
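The two decompositions the abstract compares can be combined into one descriptor, as in this minimal sketch: the leading singular values summarise global intensity/illumination structure, and the central FFT magnitudes capture coarse frequency content. The choice of `k` and the 4x4 low-frequency crop are illustrative, not the authors' exact configuration.

```python
import numpy as np

def svd_fft_features(face, k=8):
    """Concatenate leading singular values with low-frequency FFT magnitudes."""
    face = np.asarray(face, dtype=np.float64)
    # np.linalg.svd returns singular values in descending order
    s = np.linalg.svd(face, compute_uv=False)[:k]
    # Centre the spectrum so low frequencies sit in the middle
    spectrum = np.fft.fftshift(np.abs(np.fft.fft2(face)))
    h, w = spectrum.shape
    low = spectrum[h // 2 - 2:h // 2 + 2, w // 2 - 2:w // 2 + 2].ravel()
    feat = np.concatenate([s, low])
    return feat / np.linalg.norm(feat)  # unit-length feature vector
```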
Validation of High-Density Airborne LiDAR-Based Feature Extraction Using Very High Resolution Optical Remote Sensing Data  [PDF]
Shridhar D. Jawak, Satej N. Panditrao, Alvarinho J. Luis
Advances in Remote Sensing (ARS) , 2013, DOI: 10.4236/ars.2013.24033
Abstract: This work uses a canopy height model (CHM) based workflow for individual tree crown delineation from LiDAR point cloud data in an urban environment and evaluates its accuracy using very high-resolution panchromatic (PAN) and 8-band WorldView-2 imagery. LiDAR point cloud data were used to detect tree features by classifying point elevation values. The workflow includes resampling the LiDAR point cloud to generate a raster surface or digital terrain model, generating hill-shade and intensity images, extracting a digital surface model, generating a bare-earth digital elevation model, and extracting tree features. Scene-dependent extraction criteria were employed to improve tree feature extraction. The LiDAR-based refining/filtering techniques used for bare-earth layer extraction were crucial for improving the subsequent tree feature extraction. The PAN-sharpened WV-2 image (0.5 m spatial resolution) used to assess the accuracy of the LiDAR-based tree features indicated an accuracy of 98%. Based on these inferences, we conclude that LiDAR-based tree feature extraction is a promising application for characterizing vegetation in urban settings.
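The core of a CHM-based workflow is the raster arithmetic between the surface and bare-earth models, which can be sketched as follows. The 2 m minimum tree height is an illustrative, scene-dependent threshold of the kind the abstract mentions.

```python
import numpy as np

def canopy_height_model(dsm, dtm, min_tree_height=2.0):
    """Derive a canopy height model and a tree mask from LiDAR rasters.

    dsm: digital surface model (first-return elevations)
    dtm: bare-earth digital terrain model
    Cells below min_tree_height metres are treated as ground or shrubs.
    """
    chm = dsm - dtm                # per-cell height above ground
    chm = np.clip(chm, 0.0, None)  # snap negative noise to zero
    tree_mask = chm >= min_tree_height
    return chm, tree_mask
```

Individual crowns would then be delineated on the CHM, e.g. by local-maximum detection and watershed segmentation, before validation against the PAN-sharpened imagery.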

Target Image Classification through Encryption Algorithm Based on the Biological Features  [PDF]
Zhiwu Chen, Qing E. Wu, Weidong Yang
International Journal of Intelligence Science (IJIS) , 2015, DOI: 10.4236/ijis.2015.51002
Abstract: To classify and identify biological images effectively, this paper studies the characteristics inherent in biological features, gives an encryption algorithm, and presents a biological classification algorithm based on the encryption process. By studying the compositional characteristics of the palm, the paper uses the biological classification algorithm to classify and recognize palms, improves the accuracy and efficiency of existing biological classification and recognition approaches, and compares it experimentally with the main existing approaches to palm classification. Experimental results show that this approach achieves a better classification effect, faster computing speed, and a classification rate that is on average 1.46% higher than those of the main classification approaches.
A Multiple Random Feature Extraction Algorithm for Image Object Tracking  [PDF]
Lan-Rong Dung, Shih-Chi Wang, Yin-Yi Wu
Journal of Signal and Information Processing (JSIP) , 2018, DOI: 10.4236/jsip.2018.91004
Abstract: This paper proposes an object-tracking algorithm with multiple randomly generated features. We mainly address the inconsistent performance of compressive tracking. In compressive tracking, image features are generated by random projection, so the resulting features depend on the random numbers drawn and the results differ between executions. If the salient features of the target are not captured, the tracker is likely to fail, making the tracking results inconsistent across runs. The proposed algorithm tracks with a number of different image features and chooses the best tracking result by measuring similarity with the target model, which reduces the chance that the target location is determined by poor image features. In this paper, we use the Bhattacharyya coefficient to choose the best tracking result. The experimental results show that the proposed algorithm can greatly reduce tracking errors: the best performance improvements in terms of center location error, bounding-box overlap ratio, and success rate are from 63.62 pixels to 15.45 pixels, from 31.75% to 64.48%, and from 38.51% to 82.58%, respectively.
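The selection step the abstract describes reduces to computing the Bhattacharyya coefficient between each candidate's histogram and the target model and keeping the highest scorer. A minimal sketch, with histogram construction left out:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two histograms.

    Both inputs are normalised internally; the result lies in [0, 1],
    with 1 meaning identical distributions.
    """
    p = np.asarray(p, dtype=np.float64)
    q = np.asarray(q, dtype=np.float64)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

def best_track(target_hist, candidate_hists):
    """Index of the candidate result most similar to the target model."""
    scores = [bhattacharyya(target_hist, h) for h in candidate_hists]
    return int(np.argmax(scores))
```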
Real-Time Static Hand Gesture Recognition for American Sign Language (ASL) in Complex Background  [PDF]
Jayashree R. Pansare, Shravan H. Gawande, Maya Ingle
Journal of Signal and Information Processing (JSIP) , 2012, DOI: 10.4236/jsip.2012.33047
Abstract: Hand gestures are a powerful means of communication among humans, and sign language is the most natural and expressive way of communication for deaf and mute people. In this work, a real-time hand gesture system is proposed. The experimental setup uses a fixed-position, low-cost web camera with 10-megapixel resolution mounted on top of a computer monitor, which captures snapshots in the Red-Green-Blue (RGB) color space from a fixed distance. The work is divided into four stages: image preprocessing, region extraction, feature extraction, and feature matching. The first stage converts the captured RGB image into a binary image using a gray-level threshold, removes noise with median [medfilt2] and Gaussian filters, and applies morphological operations. The second stage extracts the hand region using blob analysis, crops the region of interest, and applies Sobel edge detection to the extracted region. The third stage produces a feature vector consisting of the centroid and area of the edge map, which the fourth stage compares with the feature vectors of a training dataset of gestures using Euclidean distance. The least Euclidean distance identifies the best-matching gesture, which is displayed as an ASL alphabet letter or a meaningful word using file handling. This paper includes experiments on 26 static hand gestures corresponding to the letters A-Z. The training dataset consists of 100 samples of each ASL symbol under different lighting conditions and with different hand sizes and shapes. The gesture recognition system can reliably recognize single-hand gestures in real time and achieves a 90.19% recognition rate against complex backgrounds with minimum-possible constraints.
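The four-stage pipeline above can be sketched with NumPy stand-ins. The paper uses MATLAB's medfilt2 and a Sobel operator; here a hand-rolled 3x3 median and `np.gradient` take their places, and the binarisation threshold and edge threshold are illustrative.

```python
import numpy as np

def gesture_features(gray):
    """Sketch of the pipeline: threshold -> median filter -> edges -> features."""
    binary = (gray > gray.mean()).astype(np.float64)   # stage 1: binarise
    # 3x3 median filter for salt-and-pepper noise
    padded = np.pad(binary, 1, mode="edge")
    h, w = binary.shape
    windows = np.stack([padded[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    smooth = np.median(windows, axis=0)
    gy, gx = np.gradient(smooth)                       # stand-in for Sobel
    edges = np.hypot(gx, gy) > 0.25                    # stage 2: edge map
    ys, xs = np.nonzero(edges)
    if xs.size == 0:
        return np.zeros(3)
    # Stage 3: centroid and area of the edge map as the feature vector
    return np.array([xs.mean(), ys.mean(), float(xs.size)])

def classify(feature, templates):
    """Stage 4: nearest training template by Euclidean distance."""
    dists = [np.linalg.norm(feature - np.asarray(t)) for t in templates]
    return int(np.argmin(dists))
```

A real system would hold 100 templates per ASL symbol and map the winning template's index back to its letter.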
Content Based Video Retrieval
B. V. Patel, B. B. Meshram
International Journal of Multimedia & Its Applications , 2012,
Abstract: Content-based video retrieval is an approach for facilitating the searching and browsing of large video collections over the World Wide Web. In this approach, video analysis is conducted on low-level visual properties extracted from video frames. We believe that an effective video retrieval system must take visual perception into account, and we conjectured that a technique employing multiple features for indexing and retrieval would be more effective in video discrimination and search tasks. To validate this claim, content-based indexing and retrieval systems were implemented using color histograms, various texture features, and other approaches. Videos were stored in an Oracle 9i database, and a user study measured the correctness of responses.
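Color-histogram indexing of the kind the abstract mentions can be sketched as follows: each frame is quantised to a joint RGB histogram, and retrieval ranks stored frames by histogram intersection with the query. The 4-bins-per-channel quantisation and the intersection measure are illustrative choices, not necessarily the authors'.

```python
import numpy as np

def frame_histogram(frame, bins=4):
    """Joint RGB histogram of an H x W x 3 uint8 frame (bins**3 dims)."""
    quant = (frame.astype(np.int64) * bins) // 256       # 0..bins-1 per channel
    codes = quant[..., 0] * bins * bins + quant[..., 1] * bins + quant[..., 2]
    hist = np.bincount(codes.ravel(), minlength=bins ** 3)
    return hist / hist.sum()

def retrieve(query_hist, index):
    """Rank indexed frames by histogram intersection; return the best match."""
    scores = [np.minimum(query_hist, h).sum() for h in index]
    return int(np.argmax(scores))
```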
Edge-Based Feature Extraction Method and Its Application to Image Retrieval
G. Ohashi, Y. Shimodaira
Journal of Systemics, Cybernetics and Informatics , 2003,
Abstract: We propose a novel feature extraction method for content-based image retrieval using graphical rough sketches. The proposed method extracts features based on the shape and texture of objects. This edge-based feature extraction method works by representing the relative positional relationships between edge pixels, and has the advantage of being shift-, scale-, and rotation-invariant. To verify its effectiveness, we applied the proposed method to 1,650 images from the Hamamatsu-city Museum of Musical Instruments and 5,500 images from the Corel Photo Gallery. The results verified that the proposed method is an effective tool for accurate retrieval.
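One way relative positional relationships between edge pixels can yield the claimed invariances is sketched below: pairwise distances are unchanged by translation and rotation, and dividing by their mean removes scale. This signature is an illustrative analogue of the idea, not the authors' exact descriptor.

```python
import numpy as np

def edge_shape_signature(edge_points, bins=8):
    """Histogram of pairwise edge-pixel distances, normalised by their mean."""
    pts = np.asarray(edge_points, dtype=np.float64)
    diff = pts[:, None, :] - pts[None, :, :]
    d = np.hypot(diff[..., 0], diff[..., 1])
    d = d[np.triu_indices(len(pts), k=1)]   # unique pairs only
    d = d / d.mean()                        # removes scale
    hist, _ = np.histogram(d, bins=bins, range=(0.0, 3.0))
    return hist / hist.sum()
```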
Online Fingerprint Verification Algorithm and Distributed System  [PDF]
Ping Zhang, Xi Guo, Jyotirmay Gadedadikar
Journal of Signal and Information Processing (JSIP) , 2011, DOI: 10.4236/jsip.2011.22011
Abstract: In this paper, a novel online fingerprint verification algorithm and distributed system are proposed. First, fingerprint acquisition, image preprocessing, and feature extraction are conducted on workstations. Then the extracted features are transmitted over the internet. Finally, fingerprint verification is performed on a server through a web-based database query. For fingerprint feature extraction, a template is imposed on the fingerprint image to calculate the type and direction of minutiae. A data structure for the feature set is designed to accurately match minutiae between the test fingerprint and the references in the database, and an elastic structural feature-matching algorithm is employed for verification. The proposed matching algorithm is insensitive to fingerprint image distortion, scale, and rotation. Experimental results demonstrate that the matching algorithm is robust even on poor-quality fingerprint images. Clients can remotely use ADO.NET on their workstations to verify a test fingerprint and manipulate the fingerprint feature database on the server through the internet. The proposed system performed well on a benchmark fingerprint dataset.
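A minutia record and a tolerance-based matcher can be sketched as below. This greedy nearest-compatible matcher is a simplified stand-in for the paper's elastic structural matching, and the distance and angle tolerances are illustrative.

```python
import numpy as np

# A minutia is (x, y, type, direction):
# type 0 = ridge ending, 1 = bifurcation; direction in radians.

def match_minutiae(probe, reference, dist_tol=8.0, angle_tol=0.3):
    """Fraction of minutiae with a compatible counterpart (score in [0, 1])."""
    matched = 0
    used = set()
    for (x, y, t, a) in probe:
        for i, (rx, ry, rt, ra) in enumerate(reference):
            if i in used or t != rt:
                continue  # each reference minutia pairs at most once
            close = np.hypot(x - rx, y - ry) <= dist_tol
            # Wrap the angle difference into (-pi, pi] before comparing
            aligned = abs((a - ra + np.pi) % (2 * np.pi) - np.pi) <= angle_tol
            if close and aligned:
                matched += 1
                used.add(i)
                break
    return matched / max(len(probe), len(reference))
```

In the distributed setting the abstract describes, only compact feature sets like these, not raw images, would cross the network to the verification server.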
Copyright © 2008-2017 Open Access Library. All rights reserved.