Journal of Nanjing University (Natural Science) ›› 2016, Vol. 52 ›› Issue (1): 167–174.


Object-Color-Based Weighted Image Feature Representation

Zhu Jie1,2, Liu Bo3, Chaomurilige1, Yu Jian1*

  • Online: 2016-01-27  Published: 2016-01-27
  • About authors: (1. Beijing Key Laboratory of Traffic Data Analysis and Mining, School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China; 2. Department of Information Management, The Central Institute for Correctional Police, Baoding 071000, China; 3. College of Information Science and Technology, Agricultural University of Hebei, Baoding 071000, China)
  • Supported by:
    National Natural Science Foundation of China (61033013, 61370129, 61375062, 61300072, 61105056, 61402462), Ph.D. Programs Foundation of the Ministry of Education of China (20120009110006), Fundamental Research Funds for the Central Universities and Beijing Municipal Science and Technology Commission Project (Z131110002813118), and Youth Foundation of the Education Department of Hebei Province (QN2015099)
    Received: 2015-06-10
    *Corresponding author, E-mail: jianyu@bjtu.edu.cn

Zhu Jie1,2, Liu Bo3, Chaomurilige1, Yu Jian1*



Abstract: The top-down color attention (CA) method represents an image by using each patch's color attention value as the weight of that patch's shape feature: if the patch's color occurs frequently within its class, the shape feature extracted from the patch is given a large weight; otherwise it is given a small one. However, CA does not take the diversity of object colors into account. To improve its object recognition ability, we propose a mutual-information-based object color selection method that assigns consistently high weights to the patches estimated to lie on the object when representing it. Experiments on the Soccer, Flower 17, and PASCAL VOC Challenge 2007 image sets show that the proposed method achieves good classification results.

Abstract: Visual attention is effective in differentiating an object from its surroundings. The top-down color attention (CA) method uses color to guide attention by means of a top-down, category-specific attention map. CA is built on the assumption that colors which frequently appear in a category are object colors, so patches with these colors receive large weights; each patch's shape feature is weighted by the patch's color attention for image representation. However, CA does not consider the diversity of object colors: object patches with different colors receive different weights in the image representation, and object patches cannot be distinguished from background patches. We assume that object patches appear often in one category and seldom in the remaining categories. To enhance the object recognition capability of CA, we propose an object patch selection method based on the mutual information between a category and a color word, which measures their mutual dependence. A real-world object is usually characterized by many colors, and the most representative ones are the discriminative colors. We therefore propose a discriminative color histogram, which preserves only the most discriminative colors, to evaluate the color difference between two categories. Ranking the mutual information in descending order reveals the importance of each color for a given category: the higher the mutual information between the category and a color word, the more likely that color is to be an object color of the category. We rank the mutual information between colors and a category in descending order, and the colors with the highest mutual information are selected as the discriminative colors of that category.
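The selection step described above — scoring each color word by its mutual information with a category and keeping the top-ranked colors — can be sketched as follows. This is a minimal illustration under assumed inputs (`color_hists`, a per-image count of color-word occurrences, and `labels`), not the paper's exact estimator.

```python
import numpy as np

def color_category_mutual_information(color_hists, labels, category):
    """Per-color-word contribution to I(category membership; color word).

    color_hists: (n_images, n_colors) color-word counts per image
    labels:      (n_images,) category label per image
    Returns an array of n_colors nonnegative scores.
    """
    in_cat = labels == category
    counts_in = color_hists[in_cat].sum(axis=0)    # word counts inside the category
    counts_out = color_hists[~in_cat].sum(axis=0)  # word counts outside it
    total = counts_in.sum() + counts_out.sum()
    n_colors = color_hists.shape[1]
    mi = np.zeros(n_colors)
    for counts in (counts_in, counts_out):
        p_c = counts.sum() / total                 # p(in category) / p(not in category)
        for w in range(n_colors):
            p_w = (counts_in[w] + counts_out[w]) / total
            p_cw = counts[w] / total
            if p_cw > 0:
                mi[w] += p_cw * np.log(p_cw / (p_c * p_w))
    return mi

def discriminative_colors(mi, k):
    """Color words ranked by mutual information, top k kept."""
    return np.argsort(mi)[::-1][:k]
```

Each `mi[w]` equals `p(w) * KL(p(c|w) || p(c))`, so a color word concentrated in one category scores high while a color spread evenly across categories scores near zero.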
Our image representation is based on BOW. Object information in the histogram is important for classification, and all object patches should be equally important for finding the object regions. To this end, we use the attention values as the weights of the object patches in the histogram representation, and the final image representation is obtained by concatenating the class-specific histograms. Results are reported on the Soccer, Flower 17, and PASCAL VOC Challenge 2007 data sets, and the experiments demonstrate that the proposed feature fusion method obtains satisfactory results on all three.
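The representation step — weighting each patch's shape word by its attention value and concatenating the class-specific histograms — can be sketched as below. The function names and the L1 normalization are illustrative assumptions, not details from the paper.

```python
import numpy as np

def weighted_bow_histogram(shape_words, patch_weights, vocab_size):
    """Weighted bag-of-words histogram for one class.

    shape_words:   (n_patches,) int index of each patch's shape word
    patch_weights: (n_patches,) attention weight of each patch for this class
    """
    hist = np.zeros(vocab_size)
    # Accumulate each patch's weight into its shape-word bin
    # (np.add.at handles repeated indices correctly).
    np.add.at(hist, shape_words, patch_weights)
    s = hist.sum()
    return hist / s if s > 0 else hist

def class_specific_representation(shape_words, weights_per_class, vocab_size):
    """Final image vector: one weighted histogram per class, concatenated."""
    return np.concatenate([weighted_bow_histogram(shape_words, w, vocab_size)
                           for w in weights_per_class])
```

With C classes and a vocabulary of V shape words, the final vector has length C * V; a patch on the estimated object contributes strongly to its own class's sub-histogram and weakly elsewhere.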
