Web image concept annotation with better understanding of tags and visual features
Authors
Abstract
This paper focuses on improving the semi-manual method for web image concept annotation. By thoroughly studying the characteristics of tags and visual features, we propose the Grouping-Based-Precision & Recall-Aided (GBPRA) feature selection strategy for concept annotation. Specifically, for visual features, we construct a more robust middle-level feature by concatenating the k-NN results for each type of visual feature. For tags, we construct a concept-tag co-occurrence matrix, from which the probability of an image belonging to a certain concept can be calculated. By analyzing the tags' quality and the groupings' semantic depth, we propose a grouping-based feature selection method; by studying the tags' distribution, we adopt Precision and Recall as complementary indicators for feature selection. In this way, the advantages of both tags and visual features are exploited. Experimental results show that our method achieves very high Average Precision, which greatly facilitates the annotation of large-scale web image datasets.

This paper extends our previous work presented at the Workshop on Web-Scale Multimedia Corpus, 2009. doi:10.1016/j.jvcir.2010.08.005
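The abstract describes two main ingredients: a middle-level visual descriptor built by concatenating per-feature-type k-NN results, and a tag-based concept probability computed from a concept-tag co-occurrence matrix. The sketch below is a minimal illustration of how these two components might be implemented; all function names, array shapes, and the simple normalization choices are assumptions for exposition, not the authors' actual implementation.

```python
# Illustrative sketch (not the authors' code) of two ideas from the abstract:
# (1) a middle-level feature that concatenates per-feature-type k-NN outputs,
# (2) a concept probability estimated from a concept-tag co-occurrence matrix.
import numpy as np

def knn_concept_histogram(query, train_feats, train_labels, n_concepts, k=5):
    """Concept histogram over the k nearest training images for ONE type of
    visual feature (e.g. color, texture). Labels are assumed to be integers
    in [0, n_concepts)."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    neighbors = np.argsort(dists)[:k]
    hist = np.zeros(n_concepts)
    for idx in neighbors:
        hist[train_labels[idx]] += 1.0
    return hist / k

def middle_level_feature(query_feats, train_sets, train_labels, n_concepts, k=5):
    """Concatenate the k-NN concept histograms of every visual feature type
    into one middle-level descriptor for the query image."""
    parts = [knn_concept_histogram(q, feats, train_labels, n_concepts, k)
             for q, feats in zip(query_feats, train_sets)]
    return np.concatenate(parts)

def concept_prob_from_tags(image_tags, cooc, tag_index):
    """Estimate P(concept | image tags) from a concept-tag co-occurrence
    matrix `cooc` of shape (n_concepts, n_tags) by summing the columns of
    the image's tags and normalizing."""
    scores = np.zeros(cooc.shape[0])
    for tag in image_tags:
        j = tag_index.get(tag)
        if j is not None:
            scores += cooc[:, j]
    total = scores.sum()
    return scores / total if total > 0 else scores
```

In the paper, the tag-based scores and the visual middle-level feature are then combined through the grouping-based, Precision-and-Recall-aided selection strategy; that combination step is specific to the GBPRA method and is omitted from this sketch.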
Similar resources
Tags Re-ranking Using Multi-level Features in Automatic Image Annotation
Automatic image annotation is a process in which computer systems automatically assign textual tags related to the visual content of a query image. Inappropriate tags generated by users, as well as images that carry no tags at all, are among the challenges in this field and have a negative effect on query results. In this paper, a new method is presented for automatic image...
CNRS - TELECOM ParisTech at ImageCLEF 2013 Scalable Concept Image Annotation Task: Winning Annotations with Context Dependent SVMs
In this paper, we describe the participation of CNRS TELECOM ParisTech in the ImageCLEF 2013 Scalable Concept Image Annotation challenge. This edition promotes the use of many contextual cues attached to visual content. Image collections are supplied with visual features as well as tags taken from different sources (web pages, etc.). Our framework is based on training support vector machines (...
Scalable Image Annotation by Summarizing Training Samples into Labeled Prototypes
As the number of images grows, it is essential to provide fast search methods and intelligent filtering of images. To handle images in large datasets, relevant tags are assigned to each image to describe its content. Automatic Image Annotation (AIA) aims to automatically assign a group of keywords to an image based on its visual content. AIA frameworks have two main sta...
Image Object Retrieval Using Semantic Feature Discovery and Tags
With the exponential growth of Web 2.0 applications and services, tags used to describe image content have spread widely over the Web. Because human-made tags are often noisy and general, how to use these tags for image retrieval tasks is a trending research field. Since low-level visual features can provide valuable information, they...
Co-occurrence Models for Image Annotation and Retrieval
We present two models for content-based automatic image annotation and retrieval in web image repositories, based on the co-occurrence of tags and visual features in images. In particular, we show how additional measures can be taken to address the noisy and limited tagging problems in datasets such as Flickr to improve performance. As in many state-of-the-art works, an image is represent...
Journal: J. Visual Communication and Image Representation
Volume 21, Issue -
Pages -
Publication year: 2010