Representing Images' Meanings by Associative Values with Given Lexicons Considering the Semantic Tolerance Relation

DOI: 10.1155/2011/786427


Abstract:

An approach to representing the meanings of images by associative values with given lexicons is proposed. To this end, a semantic tolerance relation model (STRM) that reflects the degree of tolerance between the defined lexicons is generated, and two factors, semantic relevance (SR) and visual similarity (VS), are used in generating the associative values. An algorithm for calculating the associative values with pixel-based bidirectional associative memories (BAMs) in combination with the STRM, which is easy to implement, is then described. Experimental results of multilexicon-based retrieval by individual users show the effectiveness and efficiency of the proposed method in finding the expected images, as well as the improvement in retrieval accuracy gained by incorporating SR with VS in representing the meanings of images.

1. Introduction

With technological advances in digital imaging, networking, and data storage, more and more people communicate with one another and express themselves by sharing images, videos, and other forms of media online. However, it is difficult to fully utilize the semantic messages that images and videos carry, because the concepts associated with images in many domains are inherently imprecise, and the judgment of which images or videos are similar is ambiguous and subjective at the level of human perception. Accordingly, some techniques annotate images by folksonomy (e.g., Flickr, del.icio.us), but the deliberately idiosyncratic annotation that folksonomies induce risks degrading a system's performance as an information retrieval utility. In Xie et al. [1], by examining the effects of different choices of lexicons and input detectors, the conclusion is reached that more concepts than necessary can hurt performance. Thus, much research aims at automatic annotation or representation of images and videos based on the semantic messages they carry. To avoid the expense and limitations of textual annotation of images, there is considerable interest in efficient database access through perceptual and other automatically extractable attributes of images. However, most current retrieval systems rely only on low-level image features such as color and texture, whereas human users think in terms of concepts [2–5]. Usually, relevance feedback is the only attempt to close the semantic gap between user and system. Recently, much research has sought to reduce the semantic gap between users and retrieval systems, which operate at different levels of abstraction. In Rogowitz [6], how humans perceive the similarity of images is examined through perceptual experiments.
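The abstract names two ingredients that can be illustrated independently. First, a tolerance relation (the mathematical backbone of the STRM; see also [16]) is reflexive and symmetric but, unlike an equivalence relation, need not be transitive. The sketch below is a minimal illustration only: the pairwise semantic-relevance matrix over the lexicon and the threshold theta are hypothetical stand-ins, not the paper's STRM.

```python
import numpy as np

def tolerance_relation(score, theta=0.5):
    """Turn a hypothetical pairwise relevance matrix (values in [0, 1])
    into a tolerance relation over the lexicon: reflexive and
    symmetric, but not necessarily transitive."""
    R = score >= theta            # relate lexicons whose relevance clears the threshold
    R = R | R.T                   # enforce symmetry
    np.fill_diagonal(R, True)     # enforce reflexivity
    return R
```

Second, the pixel-based BAM used for the associative values builds on Kosko's bidirectional associative memory [20]. As a rough orientation only, the following sketch shows the classical BAM: bipolar pattern pairs (here toy stand-ins for a pixel vector and a lexicon vector) are stored as a sum of outer products, and recall iterates between the two layers until a stable pair is reached. The names, the bipolar encoding, and the tie-handling rule are assumptions; the paper's pixel-based variant and its combination with the STRM differ in detail.

```python
def train_bam(pairs):
    """Store bipolar (+1/-1) pattern pairs as a sum of outer products (Kosko [20])."""
    n, p = pairs[0][0].size, pairs[0][1].size
    M = np.zeros((n, p))
    for x, y in pairs:
        M += np.outer(x, y)
    return M

def recall(M, x):
    """Bidirectional recall: alternate x -> y -> x until a stable pair is reached."""
    def threshold(v, prev):
        # Standard bipolar threshold; keep the previous state on ties (v == 0).
        return np.where(v > 0, 1, np.where(v < 0, -1, prev))
    y = threshold(M.T @ x, np.ones(M.shape[1], dtype=int))
    while True:
        x_new = threshold(M @ y, x)
        y_new = threshold(M.T @ x_new, y)
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            return x_new, y_new
        x, y = x_new, y_new

# Toy usage: a single stored pair is recalled exactly.
x1 = np.array([1, -1, 1, -1, 1, -1])   # stand-in "pixel" pattern
y1 = np.array([1, 1, -1])              # stand-in "lexicon" pattern
M = train_bam([(x1, y1)])
print(recall(M, x1))                   # recovers (x1, y1)
```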

References

[1]  L. Xie, R. Yan, and J. Yang, “Multi-concept learning with large-scale multimedia lexicons,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '08), pp. 2148–2151, October 2008.
[2]  T. Gevers, “Color constant ratio gradients for image segmentation and similarity of texture objects,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '01), vol. I, pp. 8–25, Hawaii, USA, December 2001.
[3]  W. Y. Ma and H. J. Zhang, “Content-based image indexing and retrieval,” in Handbook of Multimedia Computing, CRC Press, New York, NY, USA, 1999.
[4]  Y. Rui and T. Huang, “Optimizing learning in image retrieval,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '00), pp. 236–243, June 2000.
[5]  X. S. Zhou and T. S. Huang, “Small sample learning during multimedia retrieval using BiasMap,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '01), pp. I11–I17, Kauai, Hawaii, USA, December 2001.
[6]  B. E. Rogowitz, “Perceptual image similarity experiments,” in Human Vision and Electronic Imaging III, vol. 3299 of Proceedings of SPIE, San Jose, Calif, USA, January 1998.
[7]  A. Mojsilović, J. Hu, and E. Soljanin, “Extraction of perceptually important colors and similarity measurement for image matching, retrieval, and analysis,” IEEE Transactions on Image Processing, vol. 11, no. 11, pp. 1238–1248, 2002.
[8]  J. Vogel and B. Schiele, “Performance prediction for vocabulary-supported image retrieval,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '01), October 2001.
[9]  A. Mojsilović, J. Gomes, and B. Rogowitz, “Semantic-friendly indexing and querying of images based on the extraction of the objective semantic cues,” International Journal of Computer Vision, vol. 56, no. 1-2, pp. 79–107, 2004.
[10]  A. Wardhani and T. Thomson, “Content based image retrieval using category-based indexing,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME '04), pp. 783–786, Taipei, Taiwan, June 2004.
[11]  Y. Dai and D. Cai, “Imagery-based digital collection retrieval on web using compact perception features,” in Proceedings of the IEEE/WIC/ACM International Conference on Web Intelligence (WI '05), pp. 572–576, September 2005.
[12]  M. Boutell and J. Luo, “Beyond pixels: exploiting camera metadata for photo classification,” Pattern Recognition, vol. 38, no. 6, pp. 935–946, 2005.
[13]  B. Bradshaw, “Semantic based image retrieval: a probabilistic approach,” in Proceedings of the 8th ACM International Conference on Multimedia, pp. 167–176, New York, NY, USA, November 2000.
[14]  J. Yu and Q. Tian, “Toward intelligent use of semantic information on subspace discovery for image retrieval,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME '06), pp. 293–296, Toronto, Canada, July 2006.
[15]  X. Shen, M. Boutell, J. Luo, and C. Brown, “Multi-label machine learning and its application to semantic scene classification,” in Storage and Retrieval Methods and Applications for Multimedia, Proceedings of SPIE, January 2004.
[16]  Y. Dai, “Class-based image representation for Kansei retrieval considering semantic tolerance relation,” Journal of Japan Society for Fuzzy Theory and Intelligent Informatics, vol. 21, no. 2, pp. 184–193, 2009.
[17]  G. Carneiro, A. B. Chan, P. J. Moreno, and N. Vasconcelos, “Supervised learning of semantic classes for image annotation and retrieval,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 3, pp. 394–410, 2007.
[18]  J. Li and J. Z. Wang, “Automatic linguistic indexing of pictures by a statistical modeling approach,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1075–1088, September 2003.
[19]  T. Brants and A. Franz, “Web 1T 5-gram,” Linguistic Data Consortium, 2006, http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2009T13.
[20]  B. Kosko, “Bidirectional associative memories,” IEEE Transactions on Systems, Man and Cybernetics, vol. 18, no. 1, pp. 49–60, 1988.
[21]  “Sozaijiten image book 1,” Datacraft Co., Ltd.
[22]  Video Traxx 1, “Film & video library,” Digital Juice.
