Abstract:
Axiomatization has been widely used for testing logical implications. This paper suggests a non-axiomatic method, the chase, to test if a new dependency follows from a given set of probabilistic dependencies. Although the chase computation may require exponential time in some cases, this technique is a powerful tool for establishing nontrivial theoretical results. More importantly, this approach provides valuable insight into the intriguing connection between relational databases and probabilistic reasoning systems.
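The paper's chase tests implication of probabilistic dependencies; for the special case of functional dependencies, the chase reduces to the classical attribute-closure test, which gives a compact illustration of the idea. A minimal sketch in Python (the function names `closure` and `implies` are my own, not from the paper):

```python
def closure(attrs, fds):
    """Compute the closure of a set of attributes under a set of
    functional dependencies; each FD is a pair (lhs, rhs) of frozensets."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # apply an FD whose left side is already in the closure
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return frozenset(result)

def implies(fds, fd):
    """Test whether the FD lhs -> rhs follows from the given set fds."""
    lhs, rhs = fd
    return rhs <= closure(lhs, fds)
```

For example, with A -> B and B -> C, the test confirms A -> C but rejects C -> A.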

Abstract:
One of the most important aspects of any treatment of uncertain information is the rule of combination for updating degrees of uncertainty. The theory of belief functions uses the Dempster rule to combine two belief functions defined by independent bodies of evidence. However, with limited dependency information about the accumulated belief, the Dempster rule may lead to unsatisfactory results. The present study suggests a method for determining the accumulated belief based on the premise that the information gain from the combination process should be minimal. This method provides a mechanism that is equivalent to the Bayes rule when all the conditional probabilities are available, and to the Dempster rule when the normalization constant is equal to one. The proposed principle of minimum information gain is shown to be equivalent to the maximum entropy formalism, a special case of the principle of minimum cross-entropy. The application of this principle results in a monotonic increase in belief with the accumulation of consistent evidence. The suggested approach may provide a more reasonable criterion for identifying conflicts among various bodies of evidence.
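The Dempster rule referred to above combines two mass functions by intersecting focal elements, multiplying masses, and renormalizing by the non-conflicting mass. A minimal sketch in Python (the dictionary-based representation and function name are illustrative choices of mine):

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset focal
    elements to masses summing to 1) by Dempster's rule."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    k = 1.0 - conflict  # the normalization constant
    return {s: m / k for s, m in combined.items()}
```

When the normalization constant k equals one, no mass is discarded as conflict, which is the situation the abstract singles out.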

Abstract:
This paper studies the connection between probabilistic conditional independence in uncertain reasoning and data dependency in relational databases. As a demonstration of the usefulness of this preliminary investigation, an alternative proof is presented refuting the conjecture of Pearl and Paz that probabilistic conditional independencies have a complete axiomatization.

Abstract:
This paper discusses a method for implementing a probabilistic inference system based on an extended relational data model. This model provides a unified approach for a variety of applications, such as dynamic programming, solving sparse linear equations, and constraint propagation. In this framework, the probability model is represented as a generalized relational database, and subsequent probabilistic requests can be processed as standard relational queries. Conventional database management systems can thus be easily adapted for implementing such an approximate reasoning system.
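The correspondence between probabilistic operations and relational queries can be illustrated directly: marginalization is a projection that sums the probability column, and combination of factors is a natural join that multiplies probabilities. A minimal sketch in Python, with the representation (a list of row dictionaries carrying a distinguished 'p' column) chosen purely for illustration:

```python
def marginalize(rows, keep):
    """Project onto the `keep` attributes and sum the probability
    column 'p' -- the relational analogue of marginalization."""
    out = {}
    for row in rows:
        key = tuple((a, row[a]) for a in keep)
        out[key] = out.get(key, 0.0) + row['p']
    return [dict(key, p=p) for key, p in out.items()]

def join(rows1, rows2):
    """Natural join on shared attributes, multiplying the
    probability columns -- the relational analogue of combination."""
    out = []
    for r1 in rows1:
        for r2 in rows2:
            shared = (set(r1) & set(r2)) - {'p'}
            if all(r1[a] == r2[a] for a in shared):
                merged = {**r1, **r2}
                merged['p'] = r1['p'] * r2['p']
                out.append(merged)
    return out
```

Joining two one-variable factors builds a joint relation; marginalizing it back recovers the original marginals.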

Abstract:
In learning belief networks, the single-link lookahead search is widely adopted to reduce the search space. We show that there exists a class of probabilistic domain models which display a special pattern of dependency. We analyze the behavior of several learning algorithms using different scoring metrics, such as the entropy, conditional independence, minimal description length, and Bayesian metrics. We demonstrate that the single-link lookahead search procedures employed in these algorithms cannot learn these models correctly. Thus, when the underlying domain model actually belongs to this class, the use of a single-link search procedure will result in the learning of an incorrect model, which may lead to inference errors when the model is used. Our analysis suggests that if prior knowledge about a domain does not rule out the possible existence of these models, a multi-link lookahead search or other heuristics should be used for the learning process.

Abstract:
This paper introduces a qualitative measure of ambiguity and analyzes its relationship with other measures of uncertainty. Probability measures relative likelihoods, while ambiguity measures the vagueness surrounding those judgments. Ambiguity is an important representation of uncertain knowledge. It deals with a type of uncertainty different from that modeled by subjective probability or belief.

Abstract:
The compatibility of quantitative and qualitative representations of beliefs has been studied extensively in probability theory. Only recently has this important topic been considered in the context of belief functions. In this paper, the compatibility of various quantitative belief measures and qualitative belief structures is investigated. The four classes of belief measures considered are: the probability function, the monotonic belief function, Shafer's belief function, and Smets' generalized belief function. The analysis of their individual compatibility with different belief structures not only provides a sound b

Abstract:
It is well known that the notion of (strong) conditional independence (CI) is too restrictive to capture independencies that hold only in certain contexts. This kind of contextual independency, called context-specific independence (CSI), can be used to facilitate the acquisition, representation, and inference of probabilistic knowledge. In this paper, we suggest the use of contextual weak independence (CWI) in Bayesian networks. It should be emphasized that the notion of CWI is a more general form of contextual independence than CSI. Furthermore, if CSI holds for all contexts, then it reduces to strong CI. On the other hand, if CWI holds for all contexts, then it reduces to weak independence (WI), which is a more general noncontextual independency than strong CI. More importantly, complete axiomatizations are studied both for the class of WI alone and for the class of CI and WI together. Finally, WI is shown to be a necessary and sufficient condition for ensuring consistency in granular probabilistic networks.
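Contextual independence of the kind discussed above can be checked directly from a joint distribution: for a fixed context C = c, test whether P(x, y | c) factorizes as P(x | c) P(y | c). A minimal sketch for the strong contextual case, with the three-variable dictionary representation and function name being my own illustration (the paper's CWI is a weaker notion, which this check does not capture):

```python
from collections import defaultdict

def independent_in_context(joint, context, tol=1e-9):
    """joint maps (x, y, c) -> P(X=x, Y=y, C=c).
    Return True iff X and Y are independent given C = context."""
    pc = 0.0
    px = defaultdict(float)
    py = defaultdict(float)
    pxy = defaultdict(float)
    for (x, y, c), p in joint.items():
        if c != context:
            continue
        pc += p          # P(C = context)
        px[x] += p       # unnormalized P(X = x, C = context)
        py[y] += p
        pxy[(x, y)] += p
    if pc == 0.0:
        return True  # vacuously independent in an impossible context
    # compare P(x, y | c) with P(x | c) * P(y | c) for all value pairs
    return all(abs(pxy[(x, y)] / pc - (px[x] / pc) * (py[y] / pc)) <= tol
               for x in px for y in py)
```

A distribution can then pass this test in one context and fail it in another, which is exactly the situation that strong CI cannot represent.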

Abstract:
In an adaptive population which models financial markets and distributed control, we consider how the dynamics depends on the diversity of the agents' initial preferences of strategies. When the diversity decreases, more agents tend to adapt their strategies together. This change in the environment results in dynamical transitions from vanishing to non-vanishing step sizes. When the diversity decreases further, we find a cascade of dynamical transitions for the different signal dimensions, supported by good agreement between simulations and theory. Moreover, the signal with the largest step size at the steady state is likely to be the initial signal.

Abstract:
Adaptive populations, such as those in financial markets and distributed control, can be modeled by the Minority Game. We consider how their dynamics depends on the agents' initial preferences of strategies when the agents use linear or quadratic payoff functions to evaluate their strategies. We find that the fluctuations of the population making certain decisions (the volatility) depend on the diversity of the distribution of the initial preferences of strategies. When the diversity decreases, more agents tend to adapt their strategies together. In systems with linear payoffs, this results in dynamical transitions from vanishing volatility to non-vanishing volatility. For low signal dimensions, the dynamical transitions for the different signals do not take place at the same critical diversity; rather, a cascade of dynamical transitions takes place when the diversity is reduced. In contrast, no phase transitions are found in systems with quadratic payoffs. Instead, a basin boundary of attraction separates two groups of samples in the space of the agents' decisions: initial states inside this boundary converge to small volatility, while those outside diverge to large volatility. Furthermore, when the preference distribution becomes more polarized, the dynamics becomes more erratic. All the above results are supported by good agreement between simulations and theory.
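The Minority Game with linear payoffs can be sketched in a few dozen lines. The following is a minimal illustration only: all initial strategy scores are set to zero (the diversity of initial preferences studied in the abstract would correspond to biasing these scores), and the parameter names and history-update convention are my own choices, not the paper's:

```python
import random

def minority_game(n_agents=101, memory=3, n_strategies=2,
                  steps=2000, seed=0):
    """Minimal Minority Game: agents score strategies with the
    linear payoff -a_i(t) * A(t) and play their best-scoring one.
    Returns the volatility sigma^2 / N over the final half of the run."""
    rng = random.Random(seed)
    n_hist = 2 ** memory
    # each strategy is a fixed table: history index -> action in {-1, +1}
    strategies = [[tuple(rng.choice((-1, 1)) for _ in range(n_hist))
                   for _ in range(n_strategies)]
                  for _ in range(n_agents)]
    scores = [[0.0] * n_strategies for _ in range(n_agents)]
    history = rng.randrange(n_hist)
    attendance = []
    for _ in range(steps):
        actions = [strategies[i][max(range(n_strategies),
                                     key=lambda s: scores[i][s])][history]
                   for i in range(n_agents)]
        a_total = sum(actions)  # attendance A(t); minority side wins
        attendance.append(a_total)
        # linear payoff: reward strategies that chose the minority side
        for i in range(n_agents):
            for s in range(n_strategies):
                scores[i][s] -= strategies[i][s][history] * a_total
        # append the winning bit to the public history (one convention)
        history = ((history << 1) | (1 if a_total > 0 else 0)) % n_hist
    tail = attendance[steps // 2:]
    mean = sum(tail) / len(tail)
    return sum((a - mean) ** 2 for a in tail) / len(tail) / n_agents
```

Sweeping a bias on the initial `scores` would be the natural place to probe the diversity-driven transitions described above.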