Search Results: 1 - 10 of 5576 matches for "Feature Fusion"
Medical image fusion based on pulse coupled neural networks and multi-feature fuzzy clustering  [PDF]
Xiaoqing Luo, Xiaojun Wu
Journal of Biomedical Science and Engineering (JBiSE) , 2012, DOI: 10.4236/jbise.2012.512A111
Abstract:

Medical image fusion plays an important role in clinical applications such as image-guided surgery, image-guided radiotherapy, noninvasive diagnosis, and treatment planning. To retain useful information and obtain more reliable results, a novel medical image fusion algorithm based on pulse coupled neural networks (PCNN) and multi-feature fuzzy clustering is proposed. It exploits multiple image features and combines the advantages of local-entropy-based and variance-of-local-entropy-based PCNN. Experimental results indicate that, compared with other fusion methods, the proposed method better preserves image detail, is more robust, and significantly improves visual quality with less information distortion.
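
As a rough illustration of the local-entropy cue this method builds on, the sketch below fuses two pre-registered images by weighting each pixel toward the source with higher local entropy. It is only a minimal stand-in: the PCNN firing maps and the multi-feature fuzzy clustering of the actual algorithm are not reproduced, and the window size and weighting rule are arbitrary assumptions.

```python
# Minimal sketch: local-entropy-guided fusion of two pre-registered images.
# Illustrates only the "local entropy" feature, not the full PCNN pipeline.
import numpy as np

def local_entropy(img, win=9, bins=32):
    """Shannon entropy of the grey-level histogram in a sliding window."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    ent = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + win, j:j + win]
            hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            ent[i, j] = -np.sum(p * np.log2(p))
    return ent

def fuse(img_a, img_b, win=9):
    """Weight each pixel toward the source with richer local structure."""
    ea, eb = local_entropy(img_a, win), local_entropy(img_b, win)
    w = ea / (ea + eb + 1e-12)          # soft weight in [0, 1]
    return w * img_a + (1.0 - w) * img_b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct  = rng.random((64, 64))          # stand-ins for registered CT / MRI slices
    mri = rng.random((64, 64))
    fused = fuse(ct, mri)
    print(fused.shape, float(fused.min()), float(fused.max()))
```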

A new approach for HIV-1 protease cleavage site prediction combined with feature selection  [PDF]
Yao Yuan, Hui Liu, Guangtao Qiu
Journal of Biomedical Science and Engineering (JBiSE) , 2013, DOI: 10.4236/jbise.2013.612144
Abstract:

Acquired immunodeficiency syndrome (AIDS) is a fatal disease that severely threatens human health; human immunodeficiency virus (HIV) is its pathogen. Investigating HIV-1 protease cleavage sites can help researchers find or develop protease inhibitors that restrain the replication of HIV-1 and thus resist AIDS. Feature selection is a new approach to the HIV-1 protease cleavage site prediction task and is the key point of our research. Compared with previous work, our approach has several advantages. First, a filter method is used to eliminate redundant features. Second, besides traditional orthogonal encoding (OE), two newly proposed kinds of features, extracted by applying principal component analysis (PCA) and the non-linear Fisher transformation (NLF) to the AAindex database, are used; both are shown to perform better than OE. Third, the data set is greatly expanded, to 1922 samples. To further improve prediction performance, we optimize the SVM parameters so that the classifier obtains better predictive capability, and we fuse the three kinds of features to ensure a comprehensive feature representation. To evaluate the method thoroughly, five performance measures, more than in previous work, are used for a complete comparison. The experimental results show that our method outperforms the state-of-the-art method, indicating that feature selection combined with feature fusion and classifier parameter optimization can effectively improve HIV-1 cleavage site prediction. Moreover, our work can assist the future development of HIV-1 protease inhibitors.
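
The sketch below illustrates two of the ingredients named above, orthogonal encoding of octamer peptides and SVM parameter optimization via grid search, on synthetic data. The AAindex-derived PCA/NLF features, the filter-based feature selection, and the real 1922-sample dataset are not reproduced, so this is an assumed minimal pipeline rather than the authors' implementation.

```python
# Sketch: orthogonal encoding (OE) of octamer peptides + SVM grid search.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def orthogonal_encode(octamer):
    """One-hot encode an 8-residue peptide into a 160-dimensional vector."""
    vec = np.zeros((8, 20))
    for pos, aa in enumerate(octamer):
        vec[pos, AA_INDEX[aa]] = 1.0
    return vec.ravel()

# Toy peptides standing in for the cleavage-site dataset.
rng = np.random.default_rng(0)
peptides = ["".join(rng.choice(list(AMINO_ACIDS), 8)) for _ in range(200)]
labels = rng.integers(0, 2, size=200)          # 1 = cleaved, 0 = not cleaved

X = np.array([orthogonal_encode(p) for p in peptides])

# Classifier parameter optimization (C, gamma) via cross-validated grid search.
grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]},
    cv=5,
)
grid.fit(X, labels)
print("best params:", grid.best_params_, "cv accuracy:", grid.best_score_)
```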

 

Learning Multi-Modality Features for Scene Classification of High-Resolution Remote Sensing Images  [PDF]
Feng’an Zhao, Xiongmei Zhang, Xiaodong Mu, Zhaoxiang Yi, Zhou Yang
Journal of Computer and Communications (JCC) , 2018, DOI: 10.4236/jcc.2018.611018
Abstract:
Scene classification of high-resolution remote sensing (HRRS) images is an important research topic and has been applied broadly in many fields. Deep learning has shown high potential in this domain owing to its powerful ability to characterize complex patterns. However, deep learning methods omit some global and local information of the HRRS image. To this end, in this article we adopt explicit global and local information to complement deep models. Specifically, we use a patch-based MS-CLBP method to acquire global and local representations, and we treat a pre-trained CNN model as a feature extractor, extracting deep hierarchical features from its fully connected layers. After Fisher vector (FV) encoding, we obtain the holistic visual representation of the scene image. We view scene classification as a reconstruction procedure: we train one class-specific stacked denoising autoencoder (SDAE) per class and classify a test image according to its reconstruction error. Experimental results show that our combined method outperforms state-of-the-art deep learning classification methods without employing fine-tuning.
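
A minimal sketch of the classification-by-reconstruction idea follows: one denoising autoencoder per class, with the test sample assigned to the class whose autoencoder reconstructs it with the lowest error. It substitutes scikit-learn's MLPRegressor for a genuine stacked denoising autoencoder and random vectors for the MS-CLBP/CNN/Fisher-vector features, so it only illustrates the decision rule, not the authors' model.

```python
# Sketch: per-class denoising autoencoders, classification by reconstruction error.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_classes, dim = 3, 64

# Synthetic "holistic visual representations", one cluster per class.
centers = rng.normal(size=(n_classes, dim))
train = {c: centers[c] + 0.1 * rng.normal(size=(100, dim)) for c in range(n_classes)}

# One class-specific denoising autoencoder: learn to map noisy input -> clean input.
models = {}
for c, X in train.items():
    noisy = X + 0.05 * rng.normal(size=X.shape)
    ae = MLPRegressor(hidden_layer_sizes=(32, 16, 32), max_iter=2000, random_state=0)
    ae.fit(noisy, X)
    models[c] = ae

def classify(x):
    """Assign x to the class whose autoencoder reconstructs it best."""
    errors = {c: np.mean((m.predict(x[None, :]) - x) ** 2) for c, m in models.items()}
    return min(errors, key=errors.get)

test = centers[1] + 0.1 * rng.normal(size=dim)
print("predicted class:", classify(test))   # expected: 1
```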
A Multi-Channel Fusion Based Newborn Seizure Detection  [PDF]
Malarvili BalaKrishnan, Paul Colditz, Boualem Boashash
Journal of Biomedical Science and Engineering (JBiSE) , 2014, DOI: 10.4236/jbise.2014.78055
Abstract: We propose and compare two multi-channel fusion schemes that utilize the information extracted from simultaneously recorded multiple newborn electroencephalogram (EEG) channels for seizure detection. The first approach, multi-channel feature fusion, concatenates the EEG feature vectors obtained independently from the different channels into a single feature vector. The second approach, multi-channel decision (classifier) fusion, combines the independent decisions of the different channels into an overall decision on whether a newborn EEG seizure is present. The first approach suffers from high dimensionality; to overcome this, three dimensionality reduction techniques, based on the sum, Fisher's linear discriminant, and symmetrical uncertainty (SU), were considered. Feature fusion based on the SU technique outperformed the other two. It was also shown that feature fusion, developed on the premise that the recorded EEG channels are inter-dependent, was superior to independent decision fusion.
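
The sketch below contrasts the two schemes compared in this paper on synthetic per-channel features: feature fusion by concatenation feeding one classifier, versus decision fusion by majority vote over per-channel classifiers. The logistic-regression classifiers, feature dimensions, and data are illustrative assumptions; the SU-based dimensionality reduction is omitted.

```python
# Sketch: multi-channel feature fusion vs. multi-channel decision fusion.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_channels, n_epochs, n_feat = 4, 300, 10

# Per-channel feature matrices and seizure / non-seizure labels.
y = rng.integers(0, 2, size=n_epochs)
channels = [rng.normal(size=(n_epochs, n_feat)) + y[:, None] * 0.8
            for _ in range(n_channels)]

# (1) Feature fusion: concatenate channel features into one long vector.
X_fused = np.hstack(channels)                     # shape (n_epochs, 40)
clf_fused = LogisticRegression(max_iter=1000).fit(X_fused, y)

# (2) Decision fusion: independent per-channel classifiers, majority vote.
clfs = [LogisticRegression(max_iter=1000).fit(Xc, y) for Xc in channels]
votes = np.stack([c.predict(channels[i]) for i, c in enumerate(clfs)])
majority = (votes.mean(axis=0) >= 0.5).astype(int)

print("feature fusion train acc :", clf_fused.score(X_fused, y))
print("decision fusion train acc:", (majority == y).mean())
```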
Survey of Flame Detection Based on Video  [PDF]
吴茜茵, 严云洋, 杜静, 刘以安
Computer Science and Application (CSA) , 2013, DOI: 10.12677/CSA.2013.38059
Abstract:
Traditional sensor-based fire detection systems can no longer meet practical needs. With the development of computer technology and digital image processing, video flame detection has received wide attention as an effective new technology for early fire detection. This paper introduces the video flame detection pipeline and analyzes the image characteristics of flames, including static characteristics within a single frame and dynamic characteristics across multiple frames. Typical feature extraction algorithms are discussed, multi-feature fusion algorithms are classified and compared, and the development trend of video flame detection is outlined.
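
As a toy example of fusing one static and one dynamic flame cue of the kind surveyed here, the sketch below combines a simple RGB colour rule with frame differencing. The thresholds and the logical-AND fusion are assumptions chosen for illustration; practical detectors add flicker, contour, and texture features and more sophisticated fusion.

```python
# Sketch: fuse a static colour cue and a dynamic motion cue for flame detection.
import numpy as np

def flame_color_mask(frame, r_thresh=130):
    """Static cue: flame-like pixels tend to satisfy R > G > B with a bright R."""
    r, g, b = (frame[..., k].astype(int) for k in range(3))
    return (r > r_thresh) & (r > g) & (g > b)

def motion_mask(prev_gray, cur_gray, diff_thresh=20):
    """Dynamic cue: pixels whose intensity changed between consecutive frames."""
    return np.abs(cur_gray - prev_gray) > diff_thresh

def detect_flame(prev_frame, cur_frame, min_pixels=50):
    """Fuse the two cues with a logical AND and threshold the candidate region size."""
    gray_prev = prev_frame.mean(axis=2)
    gray_cur = cur_frame.mean(axis=2)
    fused = flame_color_mask(cur_frame) & motion_mask(gray_prev, gray_cur)
    return fused.sum() >= min_pixels, fused

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.integers(0, 255, size=(120, 160, 3), dtype=np.uint8)
    cur = prev.copy()
    cur[40:60, 60:80] = [220, 140, 40]       # inject a flame-coloured moving patch
    alarm, mask = detect_flame(prev, cur)
    print("flame detected:", alarm, "| candidate pixels:", int(mask.sum()))
```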
Feature Level Fusion of Multimodal Biometrics for Personal Authentication: A Review
Dapinder Kaur, Gaganpreet Kaur
International Journal of Computers & Technology , 2013,
Abstract: User verification systems that rely on a single biometric indicator often have to contend with noisy sensor data, restricted degrees of freedom, non-universality of the biometric trait, and unacceptable error rates, hence the need for multimodal biometric systems. A multimodal biometric system combines different biometric traits and provides better recognition performance than systems based on a single trait or modality. This paper reviews multimodal biometric modalities and discusses the various techniques used in feature-level fusion, with the objective of improving performance and robustness.
A Survey of Decision Fusion and Feature Fusion Strategies for Pattern Classification
Mangai Utthara, Samanta Suranjana, Das Sukhendu, Chowdhury Pinaki
IETE Technical Review , 2010,
Abstract: For any pattern classification task, an increase in data size, number of classes, dimension of the feature space, or interclass separability affects the performance of any classifier. A single classifier is generally unable to handle the wide variability and scalability of the data in any problem domain. Most modern pattern classification techniques combine multiple classifiers and fuse their decisions, often using only a selected set of features appropriate for the task. Selecting a useful set of features and discarding those that do not provide class separability are addressed by feature selection and fusion. This paper reviews the different techniques and algorithms used in decision fusion and feature fusion strategies for pattern classification. Prominent techniques for decision fusion, feature selection, and feature fusion are surveyed separately, and the fusion techniques are categorized by their applicability and the methodology adopted for classification. We also propose a novel framework combining decision fusion and feature fusion to increase classification performance. Experiments on three benchmark datasets demonstrate the robustness of combining feature fusion and decision fusion techniques.
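
A small example of the decision-fusion strategy discussed in this survey is sketched below: several base classifiers are trained on the same data and their class decisions are combined by majority (hard) voting. The choice of base classifiers and the toy dataset are assumptions; the paper's own combined feature/decision-fusion framework is not reproduced.

```python
# Sketch: decision fusion by majority vote over heterogeneous base classifiers.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

base = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("svm", SVC()),
    ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
]

# Hard voting = majority vote over the individual class decisions.
fused = VotingClassifier(estimators=base, voting="hard").fit(X_tr, y_tr)

for name, clf in base:
    print(f"{name:5s} alone : {clf.fit(X_tr, y_tr).score(X_te, y_te):.3f}")
print(f"fused vote : {fused.score(X_te, y_te):.3f}")
```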
Feature-Level based Video Fusion for Object Detection
Anjali Malviya, S. G. Bhirud
International Journal of Computer Science Issues , 2011,
Abstract: Fusion of three-dimensional data from multiple sensors has gained momentum, especially in surveillance applications, as promising results have been obtained in moving object detection. Several approaches to fusing visual and infrared video have been proposed in the recent literature, mainly comprising pixel-based methodologies. Surveillance is a major application of video fusion, and night-time object detection is one of the most important issues in automatic video surveillance. In this paper we analyze the suitability of a feature-level video fusion technique that overcomes the drawbacks of pixel-based fusion techniques for object detection.
Intramodal Feature Fusion Using Wavelet for Palmprint Authentication
K. Krishneswari, S. Arumugam
International Journal of Engineering Science and Technology , 2011,
Abstract: Palmprint recognition has attracted many researchers in recent years due to the richness of its features. In this work, the palmprint authentication system is divided into palmprint acquisition, preprocessing, feature extraction, feature fusion, and matching. In the preprocessing stage we employ a modified technique to extract the ROI, which is further enhanced using adaptive histogram equalization. In feature extraction, single-sample representation has become a bottleneck for high performance, so we propose an intramodal feature fusion for palmprint authentication. The proposed system extracts multiple features, texture (Gabor), line, and appearance (PCA), from the preprocessed palmprint images. The feature vectors obtained from the different approaches have different dimensions, and features from the same image may be correlated; therefore, we propose a wavelet-based fusion technique with a mean-max fusion rule to fuse the extracted features. Finally, the fused feature vector is matched against the stored template using an NN classifier. The proposed approach is validated on the PolyU palmprint database of 200 users, and the experimental results illustrate that the feature-level fusion improves recognition accuracy significantly.
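
The sketch below shows one plausible reading of the wavelet mean-max fusion rule: decompose two equal-length feature vectors with a discrete wavelet transform, average the approximation coefficients, keep the larger-magnitude detail coefficients, and reconstruct. The wavelet, decomposition level, and stand-in feature vectors are assumptions; this is not the authors' implementation.

```python
# Sketch: wavelet-domain mean-max fusion of two feature vectors (requires PyWavelets).
import numpy as np
import pywt

def mean_max_fuse(feat_a, feat_b, wavelet="db2", level=2):
    ca = pywt.wavedec(feat_a, wavelet, level=level)
    cb = pywt.wavedec(feat_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                 # mean rule on approximation coefficients
    for da, db in zip(ca[1:], cb[1:]):              # max-magnitude rule on detail coefficients
        fused.append(np.where(np.abs(da) >= np.abs(db), da, db))
    return pywt.waverec(fused, wavelet)[: len(feat_a)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gabor_feat = rng.normal(size=128)               # stand-in texture feature
    pca_feat = rng.normal(size=128)                 # stand-in appearance feature
    fused_feat = mean_max_fuse(gabor_feat, pca_feat)
    print(fused_feat.shape)                         # (128,)
```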
Multimodal Biometric System Using Face-Iris Fusion Feature
Zhifang Wang, Erfu Wang, Shuangshuang Wang, Qun Ding
Journal of Computers , 2011, DOI: 10.4304/jcp.6.5.931-938
Abstract: With their widening application, unimodal biometric systems have to contend with a variety of problems such as background noise, signal noise and distortion, and environment or device variations; multimodal biometric systems have been proposed to solve these problems. This paper proposes a novel multimodal biometric system using a fused face-iris feature. Face and iris features are first extracted separately and then fused at the feature level. Existing feature-level schemes such as the sum rule and weighted sum rule are inefficient in complicated conditions, so we adopt an efficient serial feature-level fusion scheme for iris and face. The algorithm normalizes the original iris and face features with a z-score model, eliminating the imbalance in magnitude and distribution between the two kinds of feature vectors, and then concatenates the normalized feature vectors serially. The proposed algorithm is tested on the CASIA iris database and two face databases (the ORL and Yale databases). Experimental results show the effectiveness of the proposed algorithm.
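
The serial fusion rule described here is simple enough to sketch directly: z-score normalize the face and iris feature matrices separately, then concatenate them column-wise. The feature dimensions and values below are synthetic stand-ins for the real face and iris features. Normalizing per modality before concatenation keeps the modality with larger-magnitude features from dominating subsequent matching.

```python
# Sketch: z-score normalization followed by serial (concatenation) feature fusion.
import numpy as np

def z_score(features):
    """Normalize each feature dimension to zero mean and unit variance."""
    mu = features.mean(axis=0)
    sigma = features.std(axis=0) + 1e-12
    return (features - mu) / sigma

rng = np.random.default_rng(0)
face = rng.normal(loc=50.0, scale=10.0, size=(100, 64))   # stand-in face features
iris = rng.normal(loc=0.2, scale=0.05, size=(100, 96))    # stand-in iris features

# Serial rule: concatenate the per-modality normalized feature vectors.
fused = np.hstack([z_score(face), z_score(iris)])
print(fused.shape)   # (100, 160)
```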