
Error-Free Training via Information Structuring in the Classification Problem

DOI: 10.4236/jilsa.2018.103005, PP. 81-92

Keywords: Classification Algorithms, Granular Computing, Invariants of Matrix Data, Data Processing



The present paper solves the training problem, which constitutes the initial phase of the classification problem, using the method of data matrix invariants. The method amounts to an approximate “slicing” of the information contained in the problem, which structures that information. The range of each feature is divided into an equal number of intervals, and lists of the objects falling into each interval are constructed. Each object is then identified by its set of interval numbers (indices), one per feature. Assuming that the feature values within any interval are approximately equal, we compute frequency features for the objects of each class, equal to the frequencies of the corresponding indices. These features allow the frequency of any object with respect to each class to be calculated as the sum of the frequencies of its indices; for any number of intervals, the object is assigned to the class with the maximum frequency. If the features contain no repeated values, the training error rate tends to zero as the number of intervals tends to infinity. If this condition is not satisfied, the features should first be randomized.
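The binning-and-frequency scheme described in the abstract can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the equal-width binning via `np.linspace`/`np.digitize`, and the per-class normalization of index counts are all choices made here for concreteness.

```python
import numpy as np

def train(X, y, n_intervals):
    """Divide each feature's range into n_intervals equal-width intervals
    and record, for each class, the frequency of every (feature, interval)
    index among the training objects of that class."""
    n_samples, n_features = X.shape
    classes = np.unique(y)
    # Interval edges per feature, spanning the observed range.
    edges = [np.linspace(X[:, j].min(), X[:, j].max(), n_intervals + 1)
             for j in range(n_features)]
    # Each object is identified by its set of interval numbers (indices).
    idx = np.column_stack([
        np.clip(np.digitize(X[:, j], edges[j][1:-1]), 0, n_intervals - 1)
        for j in range(n_features)])
    # Frequency of each index within each class.
    freq = {c: np.zeros((n_features, n_intervals)) for c in classes}
    for c in classes:
        idx_c = idx[y == c]
        for j in range(n_features):
            counts = np.bincount(idx_c[:, j], minlength=n_intervals)
            freq[c][j] = counts / max(len(idx_c), 1)
    return edges, freq, classes

def predict(X, edges, freq, classes, n_intervals):
    """Score each object against each class as the sum of the frequencies
    of its indices; assign the class with the maximum score."""
    n_features = X.shape[1]
    idx = np.column_stack([
        np.clip(np.digitize(X[:, j], edges[j][1:-1]), 0, n_intervals - 1)
        for j in range(n_features)])
    # scores[i, k] = sum over features j of freq[class_k][j, idx[i, j]]
    scores = np.stack([freq[c][np.arange(n_features), idx].sum(axis=1)
                       for c in classes], axis=1)
    return classes[np.argmax(scores, axis=1)]
```

On well-separated data the summed index frequencies of the correct class dominate, so training objects are recovered without error, consistent with the abstract's claim that the error rate falls as the number of intervals grows.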



