title: Learning to Tag
creator: Wu, Lei
creator: Yang, Linjun
creator: Yu, Nenghai
creator: Hua, Xian-Sheng
description: Social tagging provides valuable and crucial information for large-scale web image retrieval. It is ontology-free and easy to obtain; however, irrelevant tags frequently appear, and users typically will not tag all semantic objects in an image, a problem known as semantic loss. To reduce noise and compensate for semantic loss, tag recommendation has been proposed in the literature. However, current recommendation methods simply rank related tags based on a single modality, tag co-occurrence over the whole dataset, ignoring other modalities such as visual correlation. This paper proposes a multi-modality recommendation approach based on both tag and visual correlation, and formulates tag recommendation as a learning problem. Each modality is used to generate a ranking feature, and the RankBoost algorithm is applied to learn an optimal combination of these ranking features from the different modalities. Experiments on Flickr data demonstrate the effectiveness of this learning-based multi-modality recommendation strategy.
date: 2009-04
type: Conference or Workshop Item
type: PeerReviewed
format: application/pdf
identifier: http://www2009.eprints.org/37/1/p361.pdf
format: application/pdf
identifier: http://www2009.eprints.org/37/2/WWW2009_Learning2Tag.pdf
identifier: Wu, Lei and Yang, Linjun and Yu, Nenghai and Hua, Xian-Sheng (2009) Learning to Tag. In: 18th International World Wide Web Conference, April 20th-24th, 2009, Madrid, Spain.
relation: http://www2009.eprints.org/37/