Abstract:
In practice, some information systems are based on dominance relations, and the values of the decision attribute are fuzzy, so it is meaningful to study attribute reduction in ordered decision tables with fuzzy decision. In this paper, upper and lower approximation reductions are proposed for this kind of complicated decision table, and some of their important properties are discussed. The judgement theorems and discernibility matrices associated with the two reductions are obtained, from which a theory of attribute reduction in ordered decision tables with fuzzy decision is derived. Moreover, a rough set approach to upper and lower approximation reduction in such tables is presented as well. An example illustrates the validity of the approach, and the results show that it is an efficient tool for knowledge discovery in ordered decision tables with fuzzy decision.

1. Introduction

Rough set theory, first proposed by Pawlak in the early 1980s [1], describes knowledge via set-theoretic analysis based on an equivalence classification of the universe of discourse. It provides a theoretical foundation for inference and reasoning in data analysis and has extensive applications in artificial intelligence and knowledge acquisition. A primary use of rough set theory is to reduce the number of attributes in databases, thereby improving the performance of applications in a number of respects, including speed, storage, and accuracy. For a data set with discrete attribute values, this can be done by removing redundant attributes and finding a subset of the original attributes that is the most informative. As is well known, an information system usually has more than one reduct, which means that the set of rules derived from knowledge reduction is not unique. In practice, it is desirable to obtain the most concise set of rules.
Therefore, people have been attempting to find the minimal reduct of an information system, that is, a reduct containing the smallest number of attributes. Unfortunately, finding the minimal reduct of an information system has been proven to be an NP-hard problem. Recently, some new theories and reduction methods have been developed, and many types of knowledge reduction have been proposed in the rough set literature [2–8]. Possible rules and possible reducts have been proposed as a way to deal with inconsistency in an inconsistent decision table [9]. Approximation rules [10] are also used as an alternative to possible rules. On the other hand, generalized decision rules and
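The classical rough set machinery the introduction refers to (equivalence classes and approximations of a target set) can be illustrated with a small sketch; the function names and the toy table below are our own, not from the paper:

```python
# Sketch of Pawlak's lower/upper approximations over a toy information table.
from itertools import groupby

def partition(universe, attrs, value_of):
    """Group objects into equivalence classes by their values on attrs."""
    key = lambda x: tuple(value_of(x, a) for a in attrs)
    objs = sorted(universe, key=key)
    return [set(g) for _, g in groupby(objs, key=key)]

def lower_upper(universe, attrs, value_of, target):
    """Return (lower, upper) approximations of the target set of objects."""
    lower, upper = set(), set()
    for block in partition(universe, attrs, value_of):
        if block <= target:        # block entirely inside target
            lower |= block
        if block & target:         # block overlaps target
            upper |= block
    return lower, upper

# Toy table: object -> attribute values (purely illustrative)
table = {1: {'a': 0, 'b': 0}, 2: {'a': 0, 'b': 0},
         3: {'a': 1, 'b': 0}, 4: {'a': 1, 'b': 1}}
val = lambda x, a: table[x][a]
lo, up = lower_upper(set(table), ['a', 'b'], val, {1, 3})
# lo == {3}, up == {1, 2, 3}: objects 1 and 2 are indiscernible,
# so the target {1, 3} cannot be described exactly.
```

The gap between `up` and `lo` is precisely the boundary region that attribute reduction must preserve.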

Abstract:
There are some defects in decomposing the attribute set of a decision table based on attribute significance. This paper introduces a novel approach to feature decomposition of decision tables based on decision making. It analyzes the differences between the quality of approximation and attribute significance with respect to decision making. Using rough set theory, the relationships between condition attributes and decision attributes are considered so as to increase the attribute classification rate in the decision table, and a new attribute selection criterion is proposed. The decision table is then decomposed with this attribute selection criterion.

Abstract:
Decision tree techniques are widely used in the classification field of data mining. Research on using decision trees to mine valuable information from consistent decision tables (where samples with identical condition attribute values have identical decision values) is relatively mature, while data mining with decision trees over inconsistent decision tables (where samples with identical condition attribute values have different decision values) is a current research focus. Based on the idea of greedy algorithms, this paper proposes a decision tree analysis method for inconsistent decision tables. First, a many-valued decision approach is used to handle the inconsistent decision table, converting it into a many-valued decision table (in which a set represents the multiple decision values of a sample); then, following the idea of greedy selection, a greedy selection strategy is designed using an impurity function and uncertainty-related indices; finally, a decision tree construction algorithm based on greedy selection is designed to build the decision tree. An example shows that the proposed weight and greedy selection index can generate a smaller decision tree than the existing weighted-max greedy selection index.
Decision trees are a widely used technique to discover patterns from consistent data sets. But if the data set is inconsistent, containing groups of examples with equal values of the condition attributes but different decisions (values of the decision attribute), then discovering the essential patterns or knowledge from the data set is challenging. Based on the greedy algorithm, we propose a new approach to constructing a decision tree for an inconsistent decision table. First, the inconsistent decision table is transformed into a many-valued decision table. After that, we develop a greedy algorithm using a "weighted sum" as the impurity and uncertainty measure to construct a decision tree for inconsistent decision tables. An illustrative example shows that our "weighted sum" measure reduces the size of the constructed decision tree better than the existing "weighted max" measure.
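The transformation step described above can be sketched as follows. The helper names and the exact form of the "weighted sum" measure below are illustrative assumptions, since the abstract does not give the formula; here it simply counts decisions beyond one per many-valued row:

```python
# Sketch: collapse an inconsistent decision table into a many-valued one,
# where each distinct condition tuple keeps the set of decisions observed.
from collections import defaultdict

def to_many_valued(rows):
    """rows: list of (condition_tuple, decision). Returns dict cond -> set of decisions."""
    mv = defaultdict(set)
    for cond, dec in rows:
        mv[cond].add(dec)
    return dict(mv)

def weighted_sum(mv_table):
    """Illustrative uncertainty measure: total extra decisions across all rows."""
    return sum(len(decs) - 1 for decs in mv_table.values())

# Rows 0 and 1 are inconsistent: same conditions (0, 0), different decisions.
rows = [((0, 0), 'yes'), ((0, 0), 'no'), ((1, 0), 'yes'), ((1, 1), 'no')]
mv = to_many_valued(rows)
# mv == {(0, 0): {'yes', 'no'}, (1, 0): {'yes'}, (1, 1): {'no'}}
```

A greedy tree builder would then split on the attribute whose partition minimizes this measure summed over the resulting branches.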

Abstract:
By now, positive-region-based attribute reduction is one of the most popular approaches to attribute reduction. Inconsistent objects may be present in real-world decision tables, and as the number of attributes decreases during the reduction process, new inconsistent objects may also appear. For a positive-region-based attribute reduction algorithm, inconsistent objects cannot provide any useful information; therefore, deleting those objects from the decision table changes neither the positive regions nor the final result of the reduction, and this operation may improve the efficiency of the algorithm considerably. However, most current positive-region-based attribute reduction algorithms have not addressed this problem: they use all objects in the domain to calculate the positive regions and obtain the reduction results. To solve this problem, we define the notions of the reconstructed consistent decision table and the reconstructed consistent decision sub-table. The aim of introducing these two notions is to delete the inconsistent objects in the original decision table and obtain a consistent decision table during the reduction process. By virtue of the two notions, we propose a novel positive-region-based attribute reduction algorithm. Experimental results on real data sets demonstrate that our algorithm obtains smaller reducts and higher classification accuracies than traditional algorithms, and its time complexity is relatively low.
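A minimal sketch of the positive-region idea the abstract relies on (identifiers and the toy table are our own): objects outside the positive region are exactly the inconsistent ones, which can therefore be dropped without changing the result.

```python
# Sketch: compute the positive region of a decision table with respect to
# a subset of condition attributes, given by their column indices.
from collections import defaultdict

def positive_region(rows, cond_idx):
    """rows: list of (condition_values, decision). Returns indices of consistent rows."""
    groups = defaultdict(set)
    for vals, dec in rows:
        groups[tuple(vals[i] for i in cond_idx)].add(dec)
    # A row is in the positive region iff its condition class has one decision.
    return [i for i, (vals, dec) in enumerate(rows)
            if len(groups[tuple(vals[j] for j in cond_idx)]) == 1]

# Rows 0 and 1 collide on conditions (0, 0) with different decisions,
# so only rows 2 and 3 belong to the positive region.
rows = [((0, 0), 'p'), ((0, 0), 'n'), ((1, 0), 'p'), ((1, 1), 'n')]
pos = positive_region(rows, [0, 1])
# pos == [2, 3]
```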

Abstract:
In order to correct some problems in computing the core of a decision table, an improved binary discernibility matrix definition based on the distribution function for computing the core was put forward in this paper. The improved binary discernibility matrix is not only small in scale but also suitable for computing the core of any decision table. A new algorithm for attribute reduction in inconsistent decision tables was also presented, based on first obtaining the core. By merely adding appropriate rows to the improved binary discernibility matrix used for computing the core, the attribute reduction can be obtained via logical operations. The absorption law was used in the attribute reduction, which greatly improved its efficiency.
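For reference, the classical (unimproved) discernibility-matrix route to the core can be sketched as below; the paper's improved binary matrix differs from this, and all names and the toy table here are illustrative:

```python
# Sketch: classical discernibility entries over pairs of objects with different
# decisions; the core is the set of attributes occurring as singleton entries,
# since removing such an attribute makes some pair indiscernible.
def discernibility_core(rows, attrs):
    """rows: list of (dict attr -> value, decision). Returns the core attribute set."""
    core = set()
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            (vi, di), (vj, dj) = rows[i], rows[j]
            if di == dj:
                continue                       # same decision: nothing to discern
            differing = [a for a in attrs if vi[a] != vj[a]]
            if len(differing) == 1:            # singleton entry -> core attribute
                core.add(differing[0])
    return core

rows = [({'a': 0, 'b': 0}, 'p'),
        ({'a': 1, 'b': 0}, 'n'),
        ({'a': 1, 'b': 1}, 'n')]
core = discernibility_core(rows, ['a', 'b'])
# core == {'a'}: only 'a' separates the first two objects.
```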

Abstract:
The disadvantages of the classical rough reduction algorithm for decision tables were analyzed. Based on recent work on the rough entropy of knowledge, a new decision information entropy was proposed that separates consistent objects from inconsistent objects, and a new significance measure for an attribute was defined. A judgment theorem with respect to knowledge reduction based on this entropy was obtained. Condition attributes were evaluated for their significance to the decision classes, and a heuristic algorithm was proposed. Theoretical analysis shows that the proposed heuristic information is better and more efficient than the alternatives, and experimental results confirm the validity of the heuristic algorithm in searching for the minimal or an optimal reduction.
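The classical conditional entropy underlying such significance heuristics can be sketched as follows; the paper's refined entropy, which separates consistent from inconsistent objects, is not reproduced here, and the names below are our own:

```python
# Sketch: conditional entropy H(D | B) of the decision D given condition
# attributes B (selected by column indices); lower values mean B determines
# the decision more completely.
import math
from collections import Counter, defaultdict

def cond_entropy(rows, cond_idx):
    """rows: list of (condition_values, decision)."""
    n = len(rows)
    blocks = defaultdict(list)
    for vals, dec in rows:
        blocks[tuple(vals[i] for i in cond_idx)].append(dec)
    h = 0.0
    for decs in blocks.values():
        p_block = len(decs) / n
        for count in Counter(decs).values():
            p = count / len(decs)
            h -= p_block * p * math.log2(p)
    return h

rows = [((0, 0), 'p'), ((0, 0), 'n'), ((1, 0), 'p'), ((1, 1), 'n')]
h_ab = cond_entropy(rows, [0, 1])  # 0.5: one block of two is still mixed
h_a  = cond_entropy(rows, [0])     # 1.0: both blocks are mixed
```

A heuristic reduction algorithm would greedily add the attribute whose inclusion lowers this entropy the most.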

Abstract:
In this paper we provide a simple random-variable example of inconsistent information, and analyze it using three different approaches: Bayesian, quantum-like, and negative probabilities. We then show that, at least for this particular example, both the Bayesian and the quantum-like approaches have less normative power than the negative probabilities one.

Abstract:
A new rough set-based approach for acquiring ordering rules in vague decision tables is presented. An order relation among vague values is constructed according to the degree of suitability with which an alternative satisfies the decision-maker's requirements, on the basis of which the vague decision table is transformed into a binary decision table. Optimal rules can then be induced using rough set theory. Finally, the rules induced from the binary decision table are transformed into ordering rules in the vague decision table. Simulation results show that the method is effective.

Abstract:
The great quantity and complexity of data are difficulties in the analysis of decision tables. Decomposition is an effective tool for dealing with large decision tables; it can improve the efficiency and quality of data analysis. This paper discusses the problems in analyzing large decision tables and the necessity of decomposition, and proposes three standards for evaluating decomposition methods. Typical methods for the decomposition of decision tables are analyzed and compared, and several problems are pointed out for further research.

Abstract:
In most synthetic evaluation systems and decision-making systems, data are represented by objects and attributes of objects together with a degree of belief. Formally, these data can be abstracted into the form (objects, attributes, P), where P represents a kind of degree of belief between objects and attributes such that P is a basic probability assignment. In this paper, we provide a kind of probability information system to describe these data and then employ rough set theory to extract probability decision rules. By an extension of Dempster-Shafer evidence theory, we can obtain the probabilities of the antecedents and conclusions of probability decision rules. Furthermore, we analyze the consistency of probability decision rules and, based on this consistency, provide an inference method for probability decision rules, which can be used to decide the class of a new object. In conclusion, the inference method of this paper deals not only with precise information but also with imprecise or uncertain information.
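The Dempster-Shafer combination that the paper extends can be illustrated with a minimal sketch of Dempster's rule; the frame, masses, and names below are our own toy example, not the paper's extension:

```python
# Sketch of Dempster's rule of combination for two basic probability
# assignments (mass functions), represented as dicts frozenset -> mass.
def combine(m1, m2):
    """Combine two mass functions over the same frame of discernment."""
    raw, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                raw[inter] = raw.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb        # mass assigned to the empty set
    k = 1.0 - conflict                     # normalization constant
    return {s: v / k for s, v in raw.items()}

A, B = frozenset({'p'}), frozenset({'n'})
theta = A | B                              # full frame: total ignorance
m1 = {A: 0.6, theta: 0.4}
m2 = {A: 0.5, B: 0.3, theta: 0.2}
m = combine(m1, m2)
# Combined masses renormalize over the conflict 0.18: m[A] == 0.62 / 0.82.
```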