creators_name: Wu, Lei
creators_name: Yang, Linjun
creators_name: Yu, Nenghai
creators_name: Hua, Xian-Sheng
type: conference_item
datestamp: 2009-04-06 19:09:24
lastmod: 2009-04-29 12:20:56
metadata_visibility: show
title: Learning to Tag
ispublished: pub
full_text_status: public
pres_type: paper
abstract: Social tagging provides valuable and crucial information for large-scale web image retrieval. It is ontology-free and easy to obtain; however, irrelevant tags frequently appear, and users typically do not tag all semantic objects in an image, a problem also known as semantic loss. To reduce noise and compensate for the semantic loss, tag recommendation has been proposed in the literature. However, current recommendation methods simply rank related tags based on the single modality of tag co-occurrence over the whole dataset, ignoring other modalities such as visual correlation. This paper proposes a multi-modality recommendation approach based on both tag and visual correlation, and formulates tag recommendation as a learning problem. Each modality is used to generate a ranking feature, and the RankBoost algorithm is applied to learn an optimal combination of the ranking features from the different modalities. Experiments on Flickr data demonstrate the effectiveness of this learning-based multi-modality recommendation strategy.
date: 2009-04
pagerange: 361-361
event_title: 18th International World Wide Web Conference
event_location: Madrid, Spain
event_dates: April 20th-24th, 2009
event_type: conference
refereed: TRUE
citation: Wu, Lei and Yang, Linjun and Yu, Nenghai and Hua, Xian-Sheng (2009) Learning to Tag. In: 18th International World Wide Web Conference, April 20th-24th, 2009, Madrid, Spain.
document_url: http://www2009.eprints.org/37/1/p361.pdf
document_url: http://www2009.eprints.org/37/2/WWW2009_Learning2Tag.pdf
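
The abstract describes learning an optimal combination of per-modality ranking features with RankBoost. The sketch below is a minimal, hypothetical illustration of that idea under assumptions not stated in the record: the candidate tags, their tag co-occurrence and visual correlation scores, and the preference pairs are all invented for demonstration, and the weak rankers are simple feature thresholds. It follows the standard RankBoost procedure rather than the paper's exact formulation.

```python
# Minimal RankBoost sketch for combining two ranking features
# (tag co-occurrence and visual correlation). All candidate tags,
# feature values, and preference pairs below are hypothetical.
import math

def rankboost(features, pairs, rounds=10):
    """features: per-candidate feature vectors (one row per candidate tag).
    pairs: (worse, better) index pairs -- 'better' should rank higher.
    Returns a scoring function H(i) = sum_t alpha_t * h_t(i)."""
    # Uniform initial distribution over the crucial pairs.
    D = {p: 1.0 / len(pairs) for p in pairs}
    n_feats = len(features[0])
    weak_rankers = []  # (alpha, feature_index, threshold)

    for _ in range(rounds):
        best = None  # (|r|, r, feature_index, threshold)
        for f in range(n_feats):
            for theta in sorted({row[f] for row in features}):
                h = lambda i, f=f, t=theta: 1.0 if features[i][f] >= t else 0.0
                r = sum(D[(w, b)] * (h(b) - h(w)) for (w, b) in pairs)
                if best is None or abs(r) > best[0]:
                    best = (abs(r), r, f, theta)
        _, r, f, theta = best
        if abs(r) >= 1.0 - 1e-12:          # clamp a perfect weak ranker
            r = math.copysign(1.0 - 1e-12, r)
        alpha = 0.5 * math.log((1 + r) / (1 - r))
        weak_rankers.append((alpha, f, theta))
        # Re-weight the pairs the chosen weak ranker still orders poorly.
        h = lambda i: 1.0 if features[i][f] >= theta else 0.0
        Z = 0.0
        for (w, b) in pairs:
            D[(w, b)] *= math.exp(alpha * (h(w) - h(b)))
            Z += D[(w, b)]
        for p in D:
            D[p] /= Z

    def score(i):
        return sum(a * (1.0 if features[i][f] >= t else 0.0)
                   for (a, f, t) in weak_rankers)
    return score

# Hypothetical candidate tags with [tag co-occurrence, visual correlation] scores.
candidates = ["beach", "sky", "car", "sand"]
feats = [[0.8, 0.7], [0.6, 0.9], [0.2, 0.1], [0.7, 0.6]]
# Preference pairs (worse, better) induced by an assumed relevance labelling.
prefs = [(2, 0), (2, 1), (2, 3)]
score = rankboost(feats, prefs)
ranked = sorted(range(len(candidates)), key=score, reverse=True)
print([candidates[i] for i in ranked])
```

In this toy setup the learned combination ranks the relevant candidates ("beach", "sky", "sand") above the irrelevant one ("car"); in the paper's setting the preference pairs would come from labelled Flickr tag relevance rather than hand-picked examples.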