%0 Journal Article
%T A Faster Gradient Ascent Learning Algorithm for Nonlinear SVM
%A Catalina-Lucia Cocianu
%A Luminita State
%A Marinela Mircea
%A Panayiotis Vlamos
%J ISRN Applied Mathematics
%D 2013
%R 10.1155/2013/520635
%X We propose a refined gradient ascent method with heuristic parameters for solving the dual problem of nonlinear SVM. Aiming at a better fit to the particular training sequence, the proposed refinement uses heuristically established weights to correct the search direction at each step of the learning algorithm, which evolves in the feature space. We propose three variants for computing the correcting weights, whose effectiveness is analyzed experimentally in the final part of the paper. The tests showed good convergence properties; moreover, the proposed modified variants exhibited higher convergence rates than Platt's SMO algorithm. The experimental analysis aimed to derive conclusions on the recognition rate as well as on the generalization capacity. The learning phase of the SVM involved linearly separable samples randomly generated from Gaussian distributions, as well as the WINE and WDBC datasets. For the artificial data, generalization capacity was evaluated by several tests performed on new linearly/nonlinearly separable data drawn from the same classes. The tests yielded high recognition rates (about 97%) on the artificial datasets and even higher recognition rates on the WDBC dataset.

1. Introduction

According to the theory of SVMs, while traditional techniques for pattern recognition attempt to optimize performance in terms of the empirical risk, SVMs minimize the structural risk, that is, the probability of misclassifying yet-to-be-seen patterns for a fixed but unknown probability distribution of the data [1–4]. The most distinctive and attractive features of this classification paradigm are its ability to condense the information contained in the training set and its use of families of decision surfaces of relatively low Vapnik-Chervonenkis dimension. SVM approaches to classification lead to convex optimization problems, typically quadratic problems in a number of variables equal to the number of examples, and these optimization problems become challenging when the number of data points exceeds a few thousand. To make SVM training more practical, several algorithms have been developed, such as Vapnik's chunking and Osuna's decomposition [1, 5]. They make the training of SVM feasible by breaking the large QP problem into a series of smaller QP problems and optimizing only a subset of training data patterns at each step. Because the subset of training data patterns optimized at each step is called the working set, these approaches are referred to as the working set methods.
%U http://www.hindawi.com/journals/isrn.applied.mathematics/2013/520635/
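
The abstract describes gradient ascent on the dual problem of the nonlinear SVM, with heuristically weighted corrections to the search direction. As a rough point of reference only, the Python/NumPy sketch below implements plain projected gradient ascent on the kernel SVM dual with box constraints; the RBF kernel, step size eta, penalty C, and the omission of the bias/equality constraint are simplifying assumptions of this sketch, and the paper's heuristic correcting weights are not reproduced.

import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Gram matrix of the Gaussian (RBF) kernel between the rows of X and Y.
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def dual_gradient_ascent(X, y, C=1.0, eta=0.01, n_iter=1000, gamma=0.5):
    # Projected gradient ascent on the kernel SVM dual:
    #   maximize  sum(alpha) - 0.5 * alpha^T Q alpha,  Q_ij = y_i y_j K(x_i, x_j)
    #   subject to 0 <= alpha_i <= C  (bias term / equality constraint omitted).
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    Q = (y[:, None] * y[None, :]) * K
    alpha = np.zeros(n)
    for _ in range(n_iter):
        grad = 1.0 - Q @ alpha                       # gradient of the dual objective
        alpha = np.clip(alpha + eta * grad, 0.0, C)  # ascent step + box projection
    return alpha

def predict(alpha, y, X_train, X_new, gamma=0.5):
    # Sign of f(x) = sum_i alpha_i y_i K(x_i, x); labels y are assumed in {-1, +1}.
    return np.sign(rbf_kernel(X_new, X_train, gamma) @ (alpha * y))

For example, with labels y in {-1, +1} and training points as the rows of X, calling alpha = dual_gradient_ascent(X, y) and then predict(alpha, y, X, X_test) classifies new points; the paper's variants differ by reweighting the gradient direction at each step before the projection.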