Image segmentation guided by a visual saliency map depends heavily on the quality of the underlying saliency metric. Most existing metrics produce only a coarse saliency map, and a coarse map in turn degrades the segmentation result. This paper presents a randomized visual saliency detection algorithm that quickly generates a detailed saliency map of the same size as the input image. The method meets the real-time requirements of content-based image scaling, and when applied to video saliency detection it requires only a small amount of memory to produce a detailed saliency map. The presented results show that segmentation using the resulting saliency map achieves good segmentation quality.

1. Introduction

In recent years researchers have proposed many content-based image and video scaling methods [1–7]. These methods change the aspect ratio and resolution of an image or video so that it fits the target display device while preserving the critical content. In such content-based scaling, a key problem is how to detect visually salient regions quickly. Existing pixel-based saliency detection methods [8–11] mostly compute a saliency value for each individual pixel, so the large number of pixels leads to a heavy overall computation. Some methods even build high-dimensional feature vectors and search tree structures [8], whose time and space complexity are much higher than those of other methods.
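To make the per-pixel cost concrete, the sketch below computes a brute-force saliency map in which each pixel is scored by its contrast against the mean of its local neighbourhood. This formulation and the function name are illustrative assumptions, not the methods of [8–11]; it only demonstrates why scanning a window at every pixel scales as O(N·w²) in image area.

```python
import numpy as np

def naive_pixel_saliency(gray, w=7):
    """Brute-force per-pixel saliency: each pixel's absolute contrast
    against the mean of its w x w neighbourhood. Illustrative only --
    the inner window scan at every pixel gives O(N * w^2) cost, which
    is the per-pixel expense the introduction refers to.
    """
    h, width = gray.shape
    r = w // 2
    padded = np.pad(gray, r, mode='edge')   # replicate borders
    sal = np.empty_like(gray, dtype=float)
    for y in range(h):                       # one neighbourhood scan ...
        for x in range(width):               # ... per pixel of the image
            patch = padded[y:y + w, x:x + w]
            sal[y, x] = abs(gray[y, x] - patch.mean())
    return sal
```

On a flat image the map is zero everywhere; an isolated bright pixel stands out with the highest score, at the price of a full window scan per pixel.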
Therefore, many existing region-based visual saliency detection methods [8, 9] can only produce relatively coarse saliency results. The methods proposed in [10, 11] analyze the input image in the spectral domain to compute its salient regions, and the method of [12] uses machine learning to obtain the visual saliency regions of the input image. These methods can accurately detect small targets in the original image and are mainly used for target recognition and tracking. The randomized visual saliency detection method proposed here, like the method of [8], is mainly intended for image processing.
References
[1] S. Avidan and A. Shamir, "Seam carving for content-aware image resizing," ACM Transactions on Graphics, vol. 26, no. 3, article 10, 2007.
[2] B. Chen and P. Sen, "Video carving," in Proceedings of Eurographics, Hersonissos, Greece, 2008.
[3] H. Liu, X. Xie, W. Ma, and H. Zhang, "Automatic browsing of large pictures on mobile devices," in Proceedings of the 11th ACM International Conference on Multimedia (MM '03), pp. 148–155, Berkeley, Calif, USA, November 2003.
[4] Y. Pritch, E. Kav-Venaki, and S. Peleg, "Shift-map image editing," in Proceedings of the 12th International Conference on Computer Vision (ICCV '09), pp. 151–158, Kyoto, Japan, October 2009.
[5] M. Rubinstein, A. Shamir, and S. Avidan, "Improved seam carving for video retargeting," ACM Transactions on Graphics, vol. 27, no. 3, article 16, 2008.
[6] A. Santella, M. Agrawala, D. DeCarlo, D. Salesin, and M. Cohen, "Gaze-based interaction for semi-automatic photo cropping," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 771–780, Montreal, Canada, April 2006.
[7] L. Wolf, M. Guttmann, and D. Cohen-Or, "Non-homogeneous content-driven video-retargeting," in Proceedings of the 11th IEEE International Conference on Computer Vision (ICCV '07), pp. 1–6, Rio de Janeiro, Brazil, October 2007.
[8] L. Itti, C. Koch, and E. Niebur, "A model of saliency-based visual attention for rapid scene analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pp. 1254–1259, 1998.
[9] S. Goferman, L. Zelnik-Manor, and A. Tal, "Context-aware saliency detection," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 2376–2383, San Francisco, Calif, USA, June 2010.
[10] X. Hou and L. Zhang, "Saliency detection: a spectral residual approach," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1–8, Minneapolis, Minn, USA, June 2007.
[11] C. L. Guo, Q. Ma, and L. M. Zhang, "Spatio-temporal saliency detection using phase spectrum of quaternion Fourier transform," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, Anchorage, Alaska, USA, June 2008.
[12] T. Judd, K. Ehinger, F. Durand, and A. Torralba, "Learning to predict where humans look," in Proceedings of the 12th International Conference on Computer Vision (ICCV '09), pp. 2106–2113, Kyoto, Japan, October 2009.
[13] C. Barnes, E. Shechtman, A. Finkelstein, and D. B. Goldman, "PatchMatch: a randomized correspondence algorithm for structural image editing," ACM Transactions on Graphics, vol. 28, no. 3, article 24, 2009.