oalib
Search Results: 1 - 10 of 100 matches for " "
All listed articles are free for downloading (OA Articles)
A Novel Fusion Method for Semantic Concept Classification in Video  [cached]
Li Tan,Yuanda Cao,Minghua Yang,Jiong Yu
Journal of Software , 2009, DOI: 10.4304/jsw.4.9.968-975
Abstract: Semantic concept classification is a critical task for content-based video retrieval. Traditional machine learning methods focus on increasing the accuracy of individual classifiers or models, and face the problems of inducing new data errors and algorithmic complexity. Recent research shows that fusion strategies from ensemble learning are promising for improving classification performance, so some researchers have begun to focus on ensembles of multiple classifiers. The most widely known ensemble learning method is the AdaBoost algorithm. However, when applied to video data, it encounters severe difficulties, such as visual feature diversity and sparse concepts. In this paper, we propose a novel fusion method based on the CACE (Combined Adaboost Classifier Ensembles) algorithm. We categorize the visual features by different granularities and define a pair-wise feature diversity measurement, then construct simple classifiers based on the feature diversity, and use a modified AdaBoost to fuse the classifier results. The CACE algorithm makes our method outperform the standard AdaBoost algorithm as well as many other fusion methods. Experimental results on TRECVID 2007 show that our method is an effective and relatively robust fusion method.
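The abstract only describes the fusion idea at a high level, so here is a minimal Python sketch of that general pattern: one simple classifier per visual-feature group, fused by an AdaBoost-style weighted vote. The feature-group split, the decision-tree weak learner and the validation-set weighting are illustrative assumptions, not the authors' CACE algorithm.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_fused_ensemble(X_train, y_train, X_val, y_val, feature_groups):
    """Train one simple classifier per visual-feature group and give it an
    AdaBoost-style confidence weight from its validation error."""
    members = []
    for cols in feature_groups:                     # e.g. colour, texture, motion columns
        clf = DecisionTreeClassifier(max_depth=3).fit(X_train[:, cols], y_train)
        err = np.clip(np.mean(clf.predict(X_val[:, cols]) != y_val), 1e-6, 1 - 1e-6)
        alpha = 0.5 * np.log((1 - err) / err)       # higher weight for lower error
        members.append((cols, clf, alpha))
    return members

def predict_fused(members, X, classes):
    """Weighted vote of the per-group classifiers."""
    votes = np.zeros((X.shape[0], len(classes)))
    for cols, clf, alpha in members:
        pred = clf.predict(X[:, cols])
        for j, c in enumerate(classes):
            votes[:, j] += alpha * (pred == c)
    return np.asarray(classes)[votes.argmax(axis=1)]
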
Dimensionality Reduction and Classification Feature Using Mutual Information Applied to Hyperspectral Images: A Filter Strategy Based Algorithm  [PDF]
Elkebir Sarhrouni,Ahmed Hammouch,Driss Aboutajdine
Computer Science , 2012,
Abstract: Hyperspectral image (HSI) classification is a high-level technical remote sensing tool. The goal is to reproduce a thematic map that will be compared with a reference ground truth map (GT), constructed by inspecting the region. The HSI contains more than a hundred bidirectional measures, called bands (or simply images), of the same region, taken at juxtaposed frequencies. Unfortunately, some bands contain redundant information, others are affected by noise, and the high dimensionality of the features lowers the classification accuracy. The problem is how to find the good bands for classifying the pixels of the regions. Some methods use Mutual Information (MI) and a threshold to select relevant bands, without treating redundancy. Others control and eliminate redundancy by selecting the band with the top-ranked MI; if its neighbors have nearly the same MI with the GT, they are considered redundant and discarded. This is the main drawback of this method, because it forfeits the advantage of hyperspectral images: some precious information can be discarded. In this paper we accept useful redundancy: a band contains useful redundancy if it contributes to producing an estimated reference map that has higher MI with the GT. To control redundancy, we introduce a complementary threshold added to the last retained value of MI. This process is a filter strategy; it achieves good classification accuracy at low cost, but is less performant than a wrapper strategy.
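As a rough illustration of the filter idea (rank each band by its MI with the GT and apply a selection threshold), a short Python sketch follows. The discretization, bin count and threshold handling are assumptions, and the paper's "useful redundancy" test on the estimated reference map is not reproduced here.

import numpy as np
from sklearn.metrics import mutual_info_score

def select_bands_by_mi(cube, gt, mi_threshold, n_bins=64):
    """cube: (rows, cols, n_bands) hyperspectral image; gt: (rows, cols) ground-truth map.
    Rank each band by its mutual information with the GT (after discretizing the
    band into n_bins levels) and keep the bands whose MI exceeds the threshold."""
    labels = gt.ravel()
    scores = []
    for b in range(cube.shape[2]):
        band = cube[:, :, b].ravel()
        codes = np.digitize(band, np.histogram_bin_edges(band, bins=n_bins))
        scores.append(mutual_info_score(labels, codes))
    order = np.argsort(scores)[::-1]                      # best bands first
    return [int(b) for b in order if scores[b] >= mi_threshold]
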
Dimensionality Reduction and Classification Feature Using Mutual Information Applied to Hyperspectral Images: A Wrapper Strategy Algorithm Based on Minimizing the Error Probability Using the Inequality of Fano  [PDF]
Elkebir Sarhrouni,Ahmed Hammouch,Driss Aboutajdine
Computer Science , 2012,
Abstract: In the feature classification domain, the choice of data widely affects the results. For hyperspectral images, not all bands carry useful information; some bands are irrelevant, like those affected by various atmospheric effects (see Figure 4), and they decrease the classification accuracy. Redundant bands also exist that complicate the learning system and produce incorrect predictions [14]. Even when the bands contain enough information about the scene, they may not predict the classes correctly if the dimension of the image space (see Figure 3) is so large that many samples are needed to detect the relationship between the bands and the scene (the Hughes phenomenon) [10]. We can reduce the dimensionality of hyperspectral images by selecting only the relevant bands (feature selection or subset selection methodology), or by extracting, from the original bands, new bands containing the maximal information about the classes, using any logical or numerical functions (feature extraction methodology) [11][9]. Here we focus on feature selection using mutual information. Hyperspectral images have three advantages over multispectral images [6].
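For reference, a small Python sketch of how an error-probability bound derived from Fano's inequality can be computed from mutual information estimates; this uses a standard weakened form of the inequality and is not necessarily the exact criterion of the paper.

import numpy as np
from sklearn.metrics import mutual_info_score

def fano_error_lower_bound(labels, band_codes):
    """Weakened form of Fano's inequality: P_e >= (H(C|X) - 1) / log2(|C|),
    with H(C|X) = H(C) - I(C;X) estimated from discretized band values.
    Band subsets that keep this bound low are preferred. Assumes >= 2 classes."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    h_c = -np.sum(p * np.log2(p))                              # class entropy, in bits
    i_cx = mutual_info_score(labels, band_codes) / np.log(2)   # nats -> bits
    h_c_given_x = max(h_c - i_cx, 0.0)
    return max((h_c_given_x - 1.0) / np.log2(len(counts)), 0.0)
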
Cost-sensitive AdaBoost Algorithm for Multi-class Classification Problems
多分类问题代价敏感AdaBoost算法

FU Zhong-Liang,
付忠良

自动化学报 , 2011,
Abstract: To solve the cost-merging problem that arises when multi-class cost-sensitive classification is reduced to two-class cost-sensitive classification, a cost-sensitive AdaBoost algorithm that can be applied directly to multi-class classification is constructed. The proposed algorithm is similar to the real AdaBoost algorithm in its flow and error estimation formula. When the costs are equal, it becomes a new real AdaBoost algorithm for multi-class classification that guarantees the training error of the combined classifier decreases as the number of trained classifiers increases. The new real AdaBoost algorithm does not require every classifier to be independent; that is, the independence condition of the classifiers can be derived from the new algorithm, instead of being a prerequisite as in the current real AdaBoost algorithm for multi-class classification. The experimental results show that the new algorithm always drives the classification result toward the class with the smallest cost, while existing multi-class cost-sensitive learning algorithms may fail when the costs of being erroneously classified into other classes are imbalanced and the average cost of every class is equal. This research approach provides a new idea for constructing ensemble learning algorithms, and an AdaBoost algorithm for multi-label classification is given that is easy to implement and approximately minimizes the classification error rate.
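To make the idea concrete, here is a generic Python sketch of cost-sensitive multi-class boosting, in which a SAMME-style sample-weight update is scaled by a per-class misclassification cost so that errors on expensive classes grow faster. This is an illustrative stand-in, not the update rule constructed in the paper.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def cost_sensitive_boost(X, y, class_cost, n_rounds=20):
    """class_cost: dict mapping class label -> cost of misclassifying that class."""
    classes = np.unique(y)
    K = len(classes)
    w = np.ones(len(y)) / len(y)
    cost = np.array([class_cost[c] for c in y])
    ensemble = []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        miss = stump.predict(X) != y
        err = np.dot(w, miss) / w.sum()
        if err <= 0 or err >= 1 - 1 / K:                 # SAMME validity condition
            break
        alpha = np.log((1 - err) / err) + np.log(K - 1)
        w *= np.exp(alpha * miss * cost)                 # cost-scaled reweighting
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble, classes
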
Advances in Feature Selection with Mutual Information  [PDF]
Michel Verleysen,Fabrice Rossi,Damien François
Mathematics , 2009, DOI: 10.1007/978-3-642-01805-3_4
Abstract: The selection of features that are relevant to a prediction or classification problem is important in many domains involving high-dimensional data. Selecting features helps fight the curse of dimensionality, improves the performance of prediction or classification methods, and aids interpretation of the application. In a nonlinear context, mutual information is widely used as a relevance criterion for features and sets of features. Nevertheless, it suffers from at least three major limitations: mutual information estimators depend on smoothing parameters, there is no theoretically justified stopping criterion for the greedy feature selection procedure, and the estimation itself suffers from the curse of dimensionality. This chapter shows how to deal with these problems. The first two are addressed by resampling techniques that provide a statistical basis for selecting the estimator parameters and for stopping the search procedure. The third is addressed by modifying the mutual information criterion into a measure of how complementary (and not only informative) features are for the problem at hand.
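A minimal Python sketch of the resampling-based stopping idea: greedy forward selection that stops when the best remaining feature's estimated MI is no longer significantly larger than what label-shuffled data produces. The marginal-MI scoring, the permutation count and the significance level are simplifying assumptions, not the chapter's exact procedure.

import numpy as np
from sklearn.feature_selection import mutual_info_classif

def forward_select_with_permutation_stop(X, y, n_perm=100, alpha=0.05, seed=None):
    rng = np.random.default_rng(seed)
    remaining = list(range(X.shape[1]))
    selected = []
    while remaining:
        mi = mutual_info_classif(X[:, remaining], y, random_state=0)
        best = int(np.argmax(mi))
        # resampling-based stopping rule: compare against MI under permuted labels
        null = [mutual_info_classif(X[:, [remaining[best]]], rng.permutation(y),
                                    random_state=0)[0] for _ in range(n_perm)]
        if mi[best] <= np.quantile(null, 1 - alpha):
            break
        selected.append(remaining.pop(best))
    return selected
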
基于多模式弱分类器的AdaBoost-Bagging车辆检测算法
AdaBoost-Bagging vehicle detection algorithm based on multi-mode weak classifier  [PDF]

蔡英凤,袁朝春
- , 2015,
Abstract: 针对现有车辆检测算法在实际复杂道路情况下对车辆有效检测率不高的问题,提出了融合多模式弱分类器,并以AdaBoost-Bagging集成为强分类器的车辆检测算法。结合判别式模型善于利用较多的特征形成较好决策边界和生成式模型善于利用较少的特征排除大量负样本的优点,以Haar特征训练判别式弱分类器,以HOG特征训练生成式弱分类器,以AdaBoost算法为桥梁,采用泛化能力强的Bagging学习器集成算法得到AdaBoost-Bagging强分类器,利用Caltech1999数据库和实际道路图像对检测算法进行了验证。验证结果表明:相比于单模式弱分类器,AdaBoost-Bagging强分类器在分类能力和处理时间上均具有优越性,表现为较高的检测率与较低的误检率,分别为95.7%、0.000 27%,每帧图像的检测时间较少,为25 ms; 与传统级联AdaBoost分类器相比,AdaBoost-Bagging强分类器虽然增加了12%的检测时间和30%的训练时间,但检测率提升了1.8%,误检率降低了0.000 06%; 本文算法的检测性能显著优于基于Haar特征的AdaBoost分类器算法、基于HOG特征的SVM分类器算法、基于HOG特征的DPM分类器算法,具有较佳的车辆检测效果。
Focusing on the problem that the vehicle detection rate of existing vehicle detection algorithms is low in real, complex road environments, a vehicle detection algorithm was proposed in which multi-mode weak classifiers are integrated into a strong classifier by the AdaBoost-Bagging method. In the algorithm, the discriminative model can form a fine decision boundary by using more features, and the generative model can eliminate many negative examples by using fewer features. To combine the advantages of the two models, a discriminative weak classifier was trained on Haar features and a generative weak classifier was trained on HOG features. With the AdaBoost algorithm as a bridge, the AdaBoost-Bagging strong classifier was obtained by using the Bagging ensemble learning algorithm, which has strong generalization ability. The vehicle detection algorithm was tested on the Caltech1999 dataset and real road images. The test results indicate that, compared with single-mode weak classifiers, the AdaBoost-Bagging strong classifier is superior in both classification ability and processing time, achieving a high detection rate of 95.7%, a low false detection rate of 0.000 27%, and a short detection time of 25 ms per frame. Compared with the traditional cascade AdaBoost classifier, the AdaBoost-Bagging strong classifier increases the detection time by 12% and the training time by 30%, but the detection rate increases by 1.8% and the false detection rate decreases by 0.000 06%. The proposed algorithm is significantly better than the Haar feature-based AdaBoost classifier, the HOG feature-based SVM classifier and the HOG feature-based DPM classifier, and achieves a better vehicle detection effect. 3 tabs, 8 figs, 25 refs
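A hedged Python sketch of one branch of this pipeline: HOG features feeding weak learners that are boosted with AdaBoost and then bagged. The Haar-feature branch, the multi-mode fusion and all training parameters are omitted or assumed; this only shows how bagging boosted learners can be composed with standard scikit-learn and scikit-image components.

import numpy as np
from skimage.feature import hog
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def hog_features(gray_images):
    """gray_images: iterable of equally sized grayscale crops (e.g. 64x64)."""
    return np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in gray_images])

def train_adaboost_bagging(gray_images, labels):
    X = hog_features(gray_images)
    boosted = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1), n_estimators=50)
    model = BaggingClassifier(boosted, n_estimators=10)   # bag the boosted learners
    return model.fit(X, labels)
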
Network traffic classification based on GA-CFS and AdaBoost algorithm
基于GA-CFS和AdaBoost算法的网络流量分类

LA Ting-ting,SHI Jun,
剌婷婷,师军

计算机应用研究 , 2012,
Abstract: The selection of feature attributes plays an important role in network traffic classification. This paper applies a method that uses the CFS algorithm as the fitness function of an improved genetic algorithm (GA-CFS) in order to extract the main flow statistical attributes from a space of 249 attributes, selecting 18 flow attributes as the best feature subset. Finally, it uses the AdaBoost algorithm to boost a series of weak classifiers into a strong classifier, carrying out the classification of the network traffic and studying it further. The experimental results indicate that the GA-CFS and AdaBoost algorithms achieve higher classification precision than the weak classifiers alone.
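To illustrate the GA-CFS step, here is a toy Python sketch: a genetic algorithm over binary feature masks whose fitness is the CFS merit. The merit formula is the standard CFS expression, but the absolute Pearson correlation is used as a simple stand-in for the usual symmetrical-uncertainty measure, and the GA operators and parameters are assumptions, not those of the paper.

import numpy as np

def cfs_merit(X, y, mask):
    """CFS merit of a subset: k*r_cf / sqrt(k + k*(k-1)*r_ff); y must be numeric."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return 0.0
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in idx])
    if idx.size == 1:
        return r_cf
    r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                    for i, a in enumerate(idx) for b in idx[i + 1:]])
    k = idx.size
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def ga_cfs_select(X, y, pop_size=30, generations=40, p_mut=0.02, seed=None):
    """Toy GA evolving binary feature masks with the CFS merit as fitness."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    population = rng.random((pop_size, n)) < 0.5
    for _ in range(generations):
        fitness = np.array([cfs_merit(X, y, m) for m in population])
        a, b = rng.integers(0, pop_size, (2, pop_size))          # tournament selection
        winners = np.where((fitness[a] >= fitness[b])[:, None],
                           population[a], population[b])
        children = winners.copy()
        for i in range(0, pop_size - 1, 2):                      # one-point crossover
            cut = rng.integers(1, n)
            children[i, cut:], children[i + 1, cut:] = winners[i + 1, cut:], winners[i, cut:]
        children ^= rng.random((pop_size, n)) < p_mut            # bit-flip mutation
        population = children
    fitness = np.array([cfs_merit(X, y, m) for m in population])
    return np.flatnonzero(population[np.argmax(fitness)])
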
An Efficient Approach for Segmentation, Feature Extraction and Classification of Audio Signals  [PDF]
Muthumari Arumugam, Mala Kaliappan
Circuits and Systems (CS) , 2016, DOI: 10.4236/cs.2016.74024
Abstract: Due to the presence of non-stationarities and discontinuities in the audio signal, segmentation and classification of audio signals is a challenging task. Automatic music classification and annotation is still considered challenging due to the difficulty of extracting and selecting the optimal audio features. Hence, this paper proposes an efficient approach for segmentation, feature extraction and classification of audio signals. Enhanced Mel Frequency Cepstral Coefficient (EMFCC)-Enhanced Power Normalized Cepstral Coefficients (EPNCC) based feature extraction is applied to extract features from the audio signal. Then, multi-level classification is performed to classify the audio signal as a musical or non-musical signal. The proposed approach achieves better performance in terms of precision, Normalized Mutual Information (NMI), F-score and entropy. The PNN classifier shows high False Rejection Rate (FRR), False Acceptance Rate (FAR), Genuine Acceptance Rate (GAR), sensitivity, specificity and accuracy with respect to the number of classes.
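EMFCC/EPNCC are the paper's enhanced features, so the following Python sketch uses plain MFCCs (via librosa) as a stand-in and an SVM in place of the paper's multi-level classifier, only to show the overall feature-extraction-then-classification shape of such a pipeline; all parameters are assumptions.

import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_vector(path, n_mfcc=13):
    """Mean and standard deviation of each MFCC over time, as a fixed-length vector."""
    y, sr = librosa.load(path, sr=None)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([m.mean(axis=1), m.std(axis=1)])

def train_music_vs_nonmusic(paths, labels):
    X = np.array([mfcc_vector(p) for p in paths])
    return SVC(kernel="rbf").fit(X, labels)
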
Mood Classification of Music Using AdaBoost
基于AdaBoost的音乐情绪分类

Wang Lei,Du Li-min,Wang Jin-lin,
王磊,杜利民,王劲林

电子与信息学报 , 2007,
Abstract: With the rapid development of streaming media applications, automatic classification of audio signals has become a hot topic in research and engineering. Since mood classification of music involves the integrated representation and classification of the social and natural properties of music, mechanism selection and architecture optimization should be carried out on the basis of different traditional music representations and classification methods. This paper discusses the formation of weak classifiers in the AdaBoost algorithm based on the K-L transform and GMM training, and realizes mood classification of music with a multi-layer classifier architecture. The experiments classify 163 songs into four mood classes: calm, sad, exciting and pleasant, with 97.5% accuracy on training data and 93.9% accuracy on test data, which demonstrates the feasibility and potential value of this method.
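A hedged Python sketch of the kind of weak classifier described here: PCA as the K-L transform followed by one GMM per mood class, predicting the class whose GMM gives the highest log-likelihood. The AdaBoost combination and the multi-layer architecture are omitted, and the dimensionality and mixture size are assumptions.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

class PcaGmmWeakClassifier:
    """One GMM per class over K-L-transformed (PCA) features."""
    def __init__(self, n_components=20, n_mixtures=4):
        self.pca = PCA(n_components=n_components)
        self.n_mixtures = n_mixtures

    def fit(self, X, y):
        y = np.asarray(y)
        Z = self.pca.fit_transform(X)
        self.classes_ = np.unique(y)
        self.gmms_ = {c: GaussianMixture(self.n_mixtures).fit(Z[y == c])
                      for c in self.classes_}
        return self

    def predict(self, X):
        Z = self.pca.transform(X)
        scores = np.column_stack([self.gmms_[c].score_samples(Z)
                                  for c in self.classes_])
        return self.classes_[scores.argmax(axis=1)]
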
联合互信息水下目标特征选择算法
Joint Mutual Information Feature Selection for Underwater Acoustic Targets
 [PDF]

申昇,杨宏晖,王芸,潘悦,唐建生
- , 2015,
Abstract: 在特征选择算法中,穷举特征选择算法可选择出最优特征子集,但由于计算量过高而在实际中不可实现。针对计算成本和最优特征子集搜索之间的平衡问题,提出一种新的用于水下目标识别的联合互信息特征选择算法。这个算法的核心思想是:利用顺序向前特征搜索机制,在选择出与类别具有最大互信息特征的条件下,选择具有更多互补分类信息的特征,从而达到快速去除噪声特征和冗余特征及提高识别性能的目的。利用4类实测水下目标数据进行仿真实验,结果表明:在支持向量机识别正确率几乎不变的情况下,联合互信息特征选择方法可以减少87%的特征,分类时间降低58%。与基于支持向量机和遗传算法结合的特征选择方法相比,可以选出更少的特征,特征子集具有更好的泛化性能。
An exhaustive feature selection algorithm can find the optimal feature subset of an underwater acoustic target, but it cannot be used in engineering practice because of its excessive computational cost. To balance computational cost against the search for the optimal feature subset, we propose a new joint mutual information feature selection (JMIFS) algorithm. Its core idea is: using a sequential forward feature search, first select the feature that carries the largest mutual information with the class labels, and then select the features that contribute the most complementary classification information, so as to quickly remove noisy and redundant features and improve recognition performance. We run simulation experiments on measured data from four classes of underwater acoustic targets. The results show preliminarily that, with the recognition accuracy of the SVM classifier declining by only 1%, the JMIFS algorithm can remove about 87% of the features and reduce the classification time by 58%. Compared with the hybrid feature selection method that combines an SVM with a genetic algorithm, it selects fewer features, and the resulting feature subset has better generalization performance.
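A rough Python analogue of the sequential forward, joint-mutual-information idea: the first feature maximizes I(f; y), and each later feature maximizes the sum over already selected features s of I((f, s); y), estimated on discretized features. The discretization and pair-coding are assumptions; this is not the authors' exact JMIFS algorithm.

import numpy as np
from sklearn.metrics import mutual_info_score

def _discretize(x, bins=8):
    return np.digitize(x, np.histogram_bin_edges(x, bins=bins))

def jmi_forward_select(X, y, n_select, bins=8):
    """Sequential forward selection with a joint-mutual-information score."""
    Xd = np.column_stack([_discretize(X[:, j], bins) for j in range(X.shape[1])])
    selected = [int(np.argmax([mutual_info_score(y, Xd[:, j])
                               for j in range(Xd.shape[1])]))]
    while len(selected) < n_select:
        best, best_score = None, -np.inf
        for j in range(Xd.shape[1]):
            if j in selected:
                continue
            # unique joint code of candidate j with each already selected feature s
            score = sum(mutual_info_score(y, Xd[:, j] * (Xd[:, s].max() + 1) + Xd[:, s])
                        for s in selected)
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
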