Belief Revision in the GOAL Agent Programming Language
DOI: 10.1155/2013/632319

Abstract: Agents in a multiagent system may in many cases find themselves in situations where inconsistencies arise. In order to deal with these properly, a good belief revision procedure is required. This paper illustrates the usefulness of such a procedure: a belief revision algorithm is considered as a means of dealing with inconsistencies, and belief revision is examined in relation to the GOAL agent programming language.

1. Introduction

When designing artificial intelligence, it is desirable to mimic the human way of reasoning as closely as possible in order to obtain an intelligence that, although artificial, is realistic. This includes the ability not only to construct a plan for solving a given problem but also to adapt the plan or discard it in favor of a new one. In these situations the environment in which the agents act should be considered as dynamic and complex as the world it represents. This will lead to situations where an agent's beliefs may be inconsistent and need to be revised. An important issue in modern artificial intelligence is therefore that of belief revision.

This paper presents an algorithm for belief revision proposed in [1] and shows examples of situations where belief revision is desirable in order to avoid inconsistencies in an agent's knowledge base. The agent programming language GOAL will be introduced, and belief revision will be discussed in this context. Finally, the belief revision algorithm used in this paper will be compared to other approaches for dealing with inconsistency.

2. Motivation

In many situations, assumptions are made in order to optimize and simplify an artificially intelligent system. This often leads to solutions that are elegant and in which planning can be done without too many complications. However, such systems tend to be difficult to realize in the real world, simply because the assumptions made are too restrictive to model the complex real world. The first thing to notice when modeling intelligence is that human thought is itself often inconsistent, as considered in [2]. That work also considers an example of an expert system from [3], where a classical logical representation of the experts' statements leads to inconsistency when one attempts to reason with it. From this, one can see that experts in a field do not necessarily agree with one another, and that inconsistencies must be taken into account in order to reason properly with their statements. This is an example of a situation where cause and effect in the real world cannot be uniquely defined.
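To make this kind of inconsistency concrete, consider a schematic illustration (a hypothetical sketch, not the specific expert system example from [3]): two experts contribute rules about the same observed fact $p$, and the resulting knowledge base is

    Expert 1:     $p \rightarrow q$
    Expert 2:     $p \rightarrow \lnot q$
    Observation:  $p$

From $p$ and the two rules, modus ponens yields both $q$ and $\lnot q$, so the combined knowledge base is classically inconsistent; by the principle of explosion, every formula then becomes derivable. A belief revision step is therefore needed before any further reasoning can take place.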