We consider several modifications of the neural gas algorithm. First, fuzzy assignments as known from fuzzy c-means and neighborhood cooperativeness as known from self-organizing maps and neural gas are combined to obtain a basic Fuzzy Neural Gas. Further, a kernel variant and a simulated annealing approach are derived. Finally, we introduce a fuzzy extension of the ConnIndex to obtain an evaluation measure for clusterings based on fuzzy vector quantization.

1. Introduction

Prototype-based vector quantization (VQ) is an established method for clustering and compressing very large data sets. Prototype-based implies that the data are represented by a much smaller number of prototypes. Well-known methods are c-means [1], self-organizing maps (SOM) [2], and neural gas (NG) [3]. These methods have in common that each data point is uniquely assigned to its closest prototype; therefore, they are also called crisp vector quantizers. Yet, in practical applications, clusters often overlap, which makes them hard to separate. For this kind of data, fuzzy vector quantization algorithms have been developed, for example, fuzzy c-means (FCM) [4] and fuzzy SOM (FSOM) [5]. Here, each data point can be partially assigned to every prototype. The FSOM extends the FCM by taking neighborhood cooperativeness into account. Yet, as is common for SOM, this neighborhood is bound to an external topological structure such as a grid.

In this paper we combine FCM with NG, thus exploiting the advantages of each: fuzziness from FCM and dynamic neighborhood cooperativeness without structural restrictions from NG. We call the new approach Fuzzy Neural Gas (FNG). Besides its basic functionality, we also introduce several variants of FNG. First, we propose kernel fuzzy neural gas (KFNG), where differentiable kernels are used to adapt the metric. This allows the algorithm to operate in the same structural space as support vector machines (SVM) [6], which are known to deliver respectable results [7].
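To make the combination concrete, the following is a minimal batch-style sketch of the FNG idea: FCM-like fuzzy memberships are computed per data point, and the prototype update is additionally weighted by an NG-style neighborhood factor derived from each prototype's distance rank. This is an illustrative, hypothetical implementation (function name, parameter choices, and the exact weighting are our assumptions, not the paper's definitive cost function).

```python
import numpy as np

def fuzzy_neural_gas(X, n_prototypes=5, m=2.0, sigma=1.0, n_iter=50, seed=0):
    """Illustrative sketch of a batch Fuzzy Neural Gas update.

    Combines FCM-style fuzzy memberships with an NG-style rank-based
    neighborhood weighting; the paper's exact formulation may differ.
    """
    rng = np.random.default_rng(seed)
    # initialize prototypes on randomly chosen data points
    W = X[rng.choice(len(X), n_prototypes, replace=False)].astype(float)
    for _ in range(n_iter):
        # squared Euclidean distances between all data points and prototypes
        d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=-1)
        # FCM-style fuzzy memberships u_ij with fuzzifier m (rows sum to one)
        inv = (d2 + 1e-12) ** (-1.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
        # NG-style neighborhood weight from each prototype's distance rank
        ranks = np.argsort(np.argsort(d2, axis=1), axis=1)
        h = np.exp(-ranks / sigma)
        # prototype update weighted by fuzziness and neighborhood cooperativeness
        coef = (u ** m) * h
        W = (coef[:, :, None] * X[:, None, :]).sum(axis=0) / coef.sum(axis=0)[:, None]
    return W, u
```

In practice sigma (the neighborhood range) would be annealed toward zero during training, so that the update gradually reduces to a plain fuzzy c-means step; the constant sigma above is kept only for brevity.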
In [6], it has been shown that this modified optimization space is equivalent and isometric to a reproducing kernel Hilbert or Banach space, which proves to be beneficial for unsupervised VQ and hence also for FNG. For another variant of FNG, we were inspired by simulated annealing (SA), a method that allows temporary deterioration of an optimization process in order to stabilize its long-term behavior. To obtain an SA-like approach, we introduce negative learning and call the new method Pulsing Neural Gas (PNG). The idea can also be transferred to FNG, resulting in Pulsing Fuzzy Neural Gas (PFNG). Clustering in
T. M. Martinetz, S. G. Berkovich, and K. J. Schulten, “‘Neural-gas’ network for vector quantization and its application to time-series prediction,” IEEE Transactions on Neural Networks, vol. 4, no. 4, pp. 558–569, 1993.
J. C. Dunn, “A fuzzy relative of the ISODATA process and its use in detecting compact well-separated clusters,” Journal of Cybernetics, vol. 3, no. 3, pp. 32–57, 1973.
N. B. Karayiannis and J. C. Bezdek, “An integrated approach to fuzzy learning vector quantization and fuzzy c-means clustering,” IEEE Transactions on Fuzzy Systems, vol. 5, no. 4, pp. 622–628, 1997.
T. Villmann and S. Haase, “A note on gradient based learning in vector quantization using differentiable kernels for Hilbert and Banach spaces,” Machine Learning Report 01/2012, University of Bielefeld, Bielefeld, Germany, 2012.
K. Taşdemir and E. Merényi, “A validity index for prototype-based clustering of datasets with complex structures,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 41, no. 4, pp. 1039–1053, 2011.
T. Villmann, T. Geweniger, M. Kästner, and M. Lange, “Fuzzy neural gas for unsupervised vector quantization,” in Artificial Intelligence and Soft Computing, L. Rutkowski, M. Korytkowski, R. Scherer, R. Tadeusiewicz, L. Zadeh, and J. Zurada, Eds., vol. 7267 of Lecture Notes in Computer Science, pp. 350–358, Springer, Heidelberg, Germany, 2012.
J. C. Bezdek, “A convergence theorem for the fuzzy ISODATA clustering algorithms,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 2, no. 1, pp. 1–8, 1980.
T. Villmann, S. Haase, and M. Kästner, “Gradient based learning in vector quantization using differentiable kernels,” in Advances in Self-Organizing Maps, P. A. Estévez, J. C. Príncipe, and P. Zegers, Eds., vol. 198 of Advances in Intelligent Systems and Computing, pp. 193–204, Springer, Berlin, Germany, 2013.
B. Frénay and M. Verleysen, “Parameter-insensitive kernel in extreme learning for non-linear support vector regression,” Neurocomputing, vol. 74, no. 16, pp. 2526–2531, 2011.
C. Williams, “Computing with infinite networks,” in Advances in Neural Information Processing Systems, vol. 9, pp. 295–301, MIT Press, Cambridge, Mass, USA, 1996.
T. Geweniger, M. Kästner, M. Lange, and T. Villmann, “Modified conn-index for the evaluation of fuzzy clusterings,” in Proceedings of the European Symposium on Artificial Neural Networks (ESANN '12), 2012.
D. Landgrebe, Signal Theory Methods in Multispectral Remote Sensing, Wiley Series in Remote Sensing and Image Processing, John Wiley and Sons, Hoboken, NJ, USA, 2003.
G. Camps-Valls and L. Bruzzone, “Kernel-based methods for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 6, pp. 1351–1362, 2005.
M. F. Augusteijn, K. A. Shaw, and R. J. Watson, “A study of neural network input data for ground cover identification in satellite images,” in Proceedings of the International Conference on Artificial Neural Networks (ICANN '93), S. Gielen and B. Kappen, Eds., pp. 1010–1013, Springer, London, UK, 1993.
L. Fischer, M. Lange, M. Kästner, and T. Villmann, “Accelerated vector quantization by pulsing neural gas,” Machine Learning Reports 6 (MLR-04-2012), 2012, http://www.techfak.uni-bielefeld.de/~fschleif/mlr/mlr.