%0 Journal Article
%T Representing Images' Meanings by Associative Values with Given Lexicons Considering the Semantic Tolerance Relation
%A Ying Dai
%J Advances in Multimedia
%D 2011
%I Hindawi Publishing Corporation
%R 10.1155/2011/786427
%X An approach to representing the meanings of images by associative values with lexicons is proposed. To this end, the semantic tolerance relation model (STRM), which reflects the degree of tolerance between defined lexicons, is generated, and two factors, semantic relevance (SR) and visual similarity (VS), are involved in generating the associative values. Furthermore, the algorithm for calculating associative values using pixel-based bidirectional associative memories (BAMs) in combination with the STRM, which is easy to implement, is described. Experimental results of multi-lexicon-based retrieval by individuals show the effectiveness and efficiency of the proposed method in finding the expected images, as well as the improvement in retrieval accuracy gained by incorporating SR with VS in representing the meanings of images.

1. Introduction
With the technological advances in digital imaging, networking, and data storage, more and more people communicate with one another and express themselves by sharing images, videos, and other media online. However, it is difficult to fully utilize the semantic messages that images and videos carry, because the nature of the concepts regarding images in many domains is imprecise, and the interpretation of what makes images/videos similar is ambiguous and subjective at the level of human perception. Accordingly, some techniques focus on annotating images by folksonomy (Flickr, del.icio.us). But the deliberately idiosyncratic annotation induced by folksonomies risks decreasing a system's performance as an information retrieval utility. In Xie et al. [1], by examining the effects of different choices of lexicons and input detectors, the conclusion is reached that using more concepts than necessary can hurt performance. Thus, much research aims at automatic image/video annotation or representation based on the semantic messages that images carry. To avoid the expense and limitations of textual annotations on images, there is considerable interest in efficient database access via perceptual and other automatically extractable attributes of images. However, most current retrieval systems rely only on low-level image features such as color and texture, whereas human users think in terms of concepts [2–5]. Usually, relevance feedback is the only attempt to close the semantic gap between user and system. Recently, there has been much research on reducing the semantic gap between users and retrieval systems that arises from the different levels of abstraction employed by humans and machines. In Rogowitz [6], how human
%U http://www.hindawi.com/journals/am/2011/786427/
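
Note: the abstract invokes bidirectional associative memories (BAMs) without defining them. For orientation, below is a minimal Python sketch of a classical Kosko-style BAM, the general mechanism the abstract builds on. This is not the paper's pixel-based BAM/STRM algorithm: the toy bipolar patterns and the pairing of "image-feature" vectors with "lexicon" vectors are illustrative assumptions only.

    # Minimal sketch of a classical Kosko-style bidirectional associative
    # memory (BAM). Assumed/illustrative only; it does not reproduce the
    # paper's pixel-based BAM combined with the STRM.
    import numpy as np

    def train_bam(xs, ys):
        """Hebbian storage: W is the sum of outer products of bipolar pairs."""
        return sum(np.outer(x, y) for x, y in zip(xs, ys))

    def recall(W, x, steps=10):
        """Bidirectional recall: iterate x -> y -> x until the state is stable."""
        x = np.sign(x)
        for _ in range(steps):
            y = np.sign(x @ W)        # forward pass (feature layer -> lexicon layer)
            x_new = np.sign(y @ W.T)  # backward pass (lexicon layer -> feature layer)
            # Caveat: np.sign(0) == 0; fine for this toy, but a production BAM
            # would keep the previous unit state on ties.
            if np.array_equal(x_new, x):
                break
            x = x_new
        return x, y

    # Toy usage: two bipolar pattern pairs (hypothetical image features <-> lexicon bits).
    xs = [np.array([1, -1, 1, -1]), np.array([-1, 1, -1, 1])]
    ys = [np.array([1, 1, -1]),     np.array([-1, -1, 1])]
    W = train_bam(xs, ys)
    _, y = recall(W, np.array([1, -1, 1, 1]))  # noisy probe of xs[0]
    print(y)  # recovers ys[0]: [ 1  1 -1]

In this scheme, recall from a corrupted input still converges to the stored pair, which is why BAMs suit associating noisy visual evidence with discrete lexicon entries; the paper's contribution, per the abstract, is weighting such associations by semantic relevance and visual similarity under the STRM.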