%0 Journal Article
%T A Novel Neuron in Kernel Domain
%A Zahra Khandan
%A Hadi Sadoghi Yazdi
%J ISRN Signal Processing
%D 2013
%R 10.1155/2013/748914
%X The kernel-based neural network (KNN) is proposed as a neuron suitable for online learning with adaptive parameters. This neuron, with an adaptive kernel parameter, can classify data accurately without resorting to a multilayer error-backpropagation neural network. The proposed method, whose core is the kernel least-mean-square algorithm, reduces memory requirements through a sparsification technique, and its kernel can spread adaptively. Our experiments reveal that this method is much faster and more accurate than previous online learning algorithms. 1. Introduction. The adaptive filter is at the heart of most neural networks [1]. The least-mean-square (LMS) method and its kernel-based variants are online, iteratively trained methods that reduce the mean squared error toward the optimal Wiener weights. Owing to its simple implementation [1], LMS became a natural candidate for online kernel-based learning. Kernel-based learning [2] uses Mercer kernels to produce nonlinear versions of conventional linear methods. After the introduction of the kernel, the kernel least-mean-square (KLMS) algorithm [3, 4] was proposed; it solves the LMS problem in a reproducing kernel Hilbert space (RKHS) [3] using a stochastic gradient methodology. KNN combines kernel abilities with LMS features, easy learning over varied patterns, and the capabilities of traditional neurons. Experimental results show that, with suitable parameters, this classifier performs better than other online kernel methods. Two main drawbacks of kernel-based methods are selecting proper values for the kernel parameters and series expansions whose size equals the number of training data, which makes them unsuitable for online applications.
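The KLMS idea summarized above (stochastic-gradient LMS carried out in an RKHS, where each training sample becomes a kernel center weighted by the scaled prediction error) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class and parameter names are ours, and the sparsification step that bounds memory growth is omitted.

```python
import numpy as np


def gaussian_kernel(x, y, width):
    """Gaussian (RBF) kernel with the given width (sigma)."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * width ** 2))


class KLMS:
    """Kernel least-mean-square: LMS via stochastic gradient in an RKHS.

    The predictor is a kernel expansion over past inputs; each new sample
    is stored as a center with coefficient eta * (a-priori error), so the
    expansion grows with the data unless sparsification is applied.
    """

    def __init__(self, eta=0.5, width=0.5):
        self.eta = eta        # learning rate (step size)
        self.width = width    # Gaussian kernel width
        self.centers = []     # stored input vectors
        self.coeffs = []      # eta * error for each center

    def predict(self, x):
        return sum(a * gaussian_kernel(c, x, self.width)
                   for c, a in zip(self.centers, self.coeffs))

    def update(self, x, y):
        e = y - self.predict(x)                 # a-priori error
        self.centers.append(np.asarray(x, dtype=float))
        self.coeffs.append(self.eta * e)
        return e
```

On a simple nonlinear target the per-sample error shrinks as centers accumulate, which is the online-learning behavior the abstract refers to.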
This paper concentrates only on the Gaussian kernel (for reasons similar to those discussed in [5]), although KNN can use other kernels as well. In [6], the role of the kernel width in the smoothness of the performance surfaces is analyzed. Determining the width of the Gaussian kernel in kernel-based methods is very important: controlling the kernel width helps control the learning rate and the tradeoff between overfitting and underfitting. Cross-validation is one of the simplest ways to tune this parameter, but it is costly and cannot be used for datasets with too many classes; in [7], the parameters are therefore chosen using a subset of the data with a small number of classes. In some methods, a genetic algorithm [8] or grid search [5] is used to determine proper values for such parameters. However, in all the mentioned methods, the kernel width is chosen as a
%U http://www.hindawi.com/journals/isrn.signal.processing/2013/748914/
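The cross-validated width selection mentioned above can be illustrated with a minimal leave-one-out grid search for a Gaussian-kernel (Nadaraya-Watson) regressor. This is a sketch under our own assumptions: the data, candidate widths, and function names are hypothetical and are not taken from the paper.

```python
import numpy as np


def loo_error(X, y, width):
    """Leave-one-out squared error of a Nadaraya-Watson (Gaussian
    kernel) regressor on 1-D data, for a candidate kernel width."""
    sq = (X[:, None] - X[None, :]) ** 2
    K = np.exp(-sq / (2.0 * width ** 2))
    np.fill_diagonal(K, 0.0)                      # exclude held-out point
    pred = K @ y / (K.sum(axis=1) + 1e-12)        # guard against underflow
    return np.mean((pred - y) ** 2)


# Hypothetical 1-D dataset and candidate widths, for illustration only.
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=100)
y = np.sin(2 * X) + 0.1 * rng.standard_normal(100)
widths = [0.05, 0.1, 0.3, 1.0, 3.0]
best = min(widths, key=lambda w: loo_error(X, y, w))
```

A very large width oversmooths toward the global mean while a very small one overfits the noise, which is the overfitting/underfitting tradeoff the text describes; the grid search picks an intermediate width.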