Journal of Nanjing University (Natural Science) ›› 2015, Vol. 51 ›› Issue (1): 125–131.


Salient region extraction based on illumination variation

Ren Yongfeng1,2*, Zhou Jingbo1, Wang Zhijian2

  

  • Published: 2015-01-04 Released online: 2015-01-04
  • About the authors: (1. Faculty of Computer Engineering, Huaiyin Institute of Technology, Huai’an, 223003; 2. College of Computer and Information, Hohai University, Nanjing, 211100)
  • Supported by:
    Natural Science Research General Program of Jiangsu Higher Education Institutions (14KJB520006)

Saliency detection based on the change of light


Ren Yongfeng1,2*, Zhou Jingbo2, Wang Zhijian1

  • Online:2015-01-04 Published:2015-01-04
  • About author:(1. College of Computer and Information, Hohai University, Nanjing, 211100, China;
    2. Faculty of Computer Engineering, Huaiyin Institute of Technology, Huai’an, 223003, China)

摘要 (Abstract): Salient region extraction exploits the characteristics and habits of human vision to locate the regions of an image most likely to attract attention. The technique is widely used throughout visual analysis and has been a research hotspot in recent years. Most current salient-region extraction methods detect saliency on the basis of color contrast; such methods only roughly locate the salient region and lack precision. Illumination should also play an important role when extracting salient regions from an image. To extract salient regions more accurately, this paper proposes a model that incorporates illumination features. First, each image is darkened and brightened in steps to generate images with different illumination characteristics; then the salient region of each image under each illumination condition is computed using manifold ranking; finally, the multiple saliency results are fused to obtain the salient region of the image. Experiments on public image databases show that the proposed algorithm outperforms comparable methods.
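The first step, generating variants of the image under different illumination, can be sketched by scaling the lightness channel in HLS space with Python's standard-library `colorsys` module. This is an illustrative sketch, not the paper's implementation; the function name, pixel values, and scale factors are assumptions.

```python
import colorsys

def vary_lightness(rgb_pixels, scale):
    """Scale the L channel of each pixel in HLS space.

    rgb_pixels: list of (r, g, b) floats in [0, 1].
    scale < 1 dims the image, scale > 1 brightens it.
    """
    out = []
    for r, g, b in rgb_pixels:
        h, l, s = colorsys.rgb_to_hls(r, g, b)
        l = min(1.0, l * scale)          # clamp lightness to the valid range
        out.append(colorsys.hls_to_rgb(h, l, s))
    return out

# A toy two-pixel "image"; generate variants at five illumination levels.
pixels = [(0.8, 0.4, 0.2), (0.1, 0.5, 0.9)]
variants = {s: vary_lightness(pixels, s) for s in (0.5, 0.75, 1.0, 1.25, 1.5)}
```

Because hue and saturation are held fixed, each variant changes only the apparent illumination, which is exactly the degree of freedom the RGB color space does not expose directly.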

Abstract: Saliency detection, the task of locating the objects in an image or video that attract the human visual system, has gained much attention in recent years, and numerous saliency models have been proposed in the literature. Most models, however, neglect the illumination features of an image, which strongly affect the final detection result. In this paper, we propose a novel framework based on different lighting conditions to improve the accuracy of saliency detection. The proposed algorithm has three steps. First, we decrease or increase the illumination of the image step by step to generate images under different lighting conditions. This is done in the HSL color space, because the RGB color space does not represent lightness explicitly; the generated images provide the material for the subsequent steps. Second, each image obtained in step 1 is segmented into superpixels, and a saliency map is generated with manifold ranking, chosen for its effectiveness and efficiency. Manifold ranking exploits the intrinsic manifold structure of the feature space, together with background and foreground cues, to detect the salient region of each image under each lighting condition. Last, the saliency maps generated under the different lighting conditions are fused, using prior knowledge, into a single map. By incorporating illumination features, the proposed method not only enriches the information in the detected salient regions but also improves their accuracy. Experiments on benchmark datasets compare our method with several previous ones; the analysis of the results shows that the proposed saliency detection model outperforms other state-of-the-art algorithms in terms of accuracy and robustness.
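The manifold-ranking step can be illustrated with a toy sketch: query scores y are propagated over a superpixel affinity graph W by iterating f ← αSf + (1 − α)y, where S = D^(-1/2) W D^(-1/2) is the symmetrically normalized affinity matrix. The 5-node chain graph, the single seed node, and α = 0.5 below are illustrative assumptions only; the formulation of Yang et al. uses boundary superpixels as queries and a much larger α.

```python
def manifold_ranking(W, y, alpha=0.5, iters=100):
    """Propagate query scores y over the affinity graph W.

    Iterates f <- alpha * S f + (1 - alpha) * y, where
    S = D^(-1/2) W D^(-1/2); this converges to the closed-form
    ranking (I - alpha * S)^(-1) y up to a constant factor.
    """
    n = len(W)
    d = [sum(row) for row in W]                      # node degrees
    S = [[W[i][j] / ((d[i] * d[j]) ** 0.5) if d[i] and d[j] else 0.0
          for j in range(n)] for i in range(n)]      # normalized affinity
    f = list(y)
    for _ in range(iters):
        f = [alpha * sum(S[i][j] * f[j] for j in range(n)) + (1 - alpha) * y[i]
             for i in range(n)]
    return f

# A chain of five "superpixels"; node 0 is the query.
W = [[0, 1, 0, 0, 0],
     [1, 0, 1, 0, 0],
     [0, 1, 0, 1, 0],
     [0, 0, 1, 0, 1],
     [0, 0, 0, 1, 0]]
scores = manifold_ranking(W, [1, 0, 0, 0, 0])
# Scores decay with graph distance from the query node.
```

Given one such score map per illumination level, the fusion step could be as simple as per-pixel averaging, e.g. `fused = [sum(v) / len(v) for v in zip(*maps)]`; the paper's prior-knowledge fusion is not specified here.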
