MLIA at ImageCLEF 2014 Scalable Concept Image Annotation Challenge

Research output: Contribution to journal › Conference article

1 Citation (Scopus)

Abstract

In this paper, we propose a large-scale image annotation system for the ImageCLEF 2014 Scalable Concept Image Annotation task. This year's task concentrated on developing annotation algorithms that rely only on data obtained automatically from the web. Since sophisticated SVM-based annotation techniques were widely applied in last year's task (ImageCLEF 2013), we again adopt SVM-based annotation techniques this year and focus our effort on obtaining more accurate concept assignments for the training images. More specifically, we propose a two-fold scheme for assigning concepts to the unlabeled training images: (1) a traditional process that stems the web data extracted for each training image and assigns concepts on the textual side, based on the appearance of each concept in the stemmed text; (2) an additional process that leverages the deep convolutional network toolbox OverFeat to predict labels (ImageNet nouns) for each training image on the visual side, and then maps the predicted labels to ImageCLEF concepts through WordNet synonym and hyponym relations. Finally, the concepts allocated to each training image are generated by fusing the results of the two assignment processes. Experimental results show that the proposed concept assignment scheme effectively improves on the assignments produced by traditional textual processing alone and allocates reasonable concepts to the training images. Consequently, with an efficient SVM solver based on Stochastic Gradient Descent, our annotation system achieves competitive performance in the annotation task.
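
The two-fold assignment described above lends itself to a short illustration. The Python sketch below is not the authors' code: it assumes NLTK's WordNet interface for the synonym/hyponym mapping, the inputs clef_concepts, overfeat_labels (OverFeat's ImageNet noun predictions for one image) and textual_concepts (concepts matched in the image's stemmed web text) are hypothetical placeholders, and the fusion step is shown as a simple union of the two assignments, which is only one plausible reading of it.

# Minimal sketch under the assumptions noted above.
# Requires: pip install nltk, then nltk.download('wordnet').
from nltk.corpus import wordnet as wn

def wordnet_terms(concept):
    """The concept word itself, its WordNet synonyms, and all hyponym lemmas (noun senses only)."""
    terms = {concept.lower()}
    for synset in wn.synsets(concept, pos=wn.NOUN):
        # the synset plus every hyponym synset reachable from it
        for s in [synset] + list(synset.closure(lambda x: x.hyponyms())):
            terms.update(name.lower().replace('_', ' ') for name in s.lemma_names())
    return terms

def visual_assignment(overfeat_labels, clef_concepts):
    """Keep each ImageCLEF concept whose WordNet neighbourhood contains a predicted ImageNet noun."""
    predicted = {label.lower() for label in overfeat_labels}
    return {c for c in clef_concepts if wordnet_terms(c) & predicted}

def fuse(textual_concepts, visual_concepts):
    """Fusion step sketched as a union: keep concepts supported by either process."""
    return set(textual_concepts) | set(visual_concepts)

# Hypothetical example for a single training image.
clef_concepts = ['dog', 'car', 'water']
overfeat_labels = ['golden retriever', 'tennis ball']  # OverFeat top predictions
textual_concepts = ['water']                           # matched in the stemmed web text
print(fuse(textual_concepts, visual_assignment(overfeat_labels, clef_concepts)))
# prints {'dog', 'water'}: 'golden retriever' is a WordNet hyponym of 'dog'

For the final classifiers, the abstract only states that an SVM solver based on Stochastic Gradient Descent is used; as a stand-in, a hinge-loss SGD classifier such as scikit-learn's SGDClassifier could be trained per concept on the fused labels:

# Stand-in for the paper's SGD-based SVM solver; features and labels are placeholders.
import numpy as np
from sklearn.linear_model import SGDClassifier

X = np.random.rand(200, 128)            # placeholder visual features for 200 training images
y = np.random.randint(0, 2, size=200)   # placeholder binary labels for one concept after fusion
clf = SGDClassifier(loss='hinge', alpha=1e-4)
clf.fit(X, y)
scores = clf.decision_function(X)       # per-image confidence for this concept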

Original language: English
Pages (from-to): 411-420
Number of pages: 10
Journal: CEUR Workshop Proceedings
Volume: 1180
Publication status: Published - Jan 1 2014
Event: 2014 Cross Language Evaluation Forum Conference, CLEF 2014 - Sheffield, United Kingdom
Duration: Sep 15 2014 - Sep 18 2014

All Science Journal Classification (ASJC) codes

  • Computer Science (all)

Cite this

MLIA at ImageCLEF 2014 Scalable Concept Image Annotation Challenge. / Xu, Xing; Shimada, Atsushi; Taniguchi, Rin-Ichiro.

In: CEUR Workshop Proceedings, Vol. 1180, 01.01.2014, p. 411-420.

Research output: Contribution to journal › Conference article

@article{4fc15d9d5a8546b685691f1746c07619,
title = "MLIA at imageCLFE 2014 scalable concept image annotation challenge",
abstract = "In this paper, we propose a large-scale image annotation system for the ImageCLEF 2014 Scalable Concept Image Annotation task. The annotation task, of this year, concentrated on developing annotation algorithms that rely only on data obtained automatically from the web. Since the sophisticated SVM based annotation techniques had been widely applied in the task last year (ImageCLEF 2013), for the task this year, we also adopt the SVM based annotation techniques and put our effort mainly on obtaining more accurate concepts assignment for training images. More specifically, we proposed a two-fold scheme to assign concepts to unlabeled training images: (1) A traditional process which stems the extracted web data of each training image from textual aspect, and make concepts assignment based on the appearance of each concept. (2) An additional process which leverages the deep convolutional network toolbox Overfeat to predict labels (in ImageNet nouns) for each training image from visual aspect, then the predicted tags are mapped to concepts in ImageCLEF based on WordNet synonyms and hyponyms with semantic relations. Finally, the allocated concepts for each training image are generated based on a fusion step of the two-fold concepts assignment processes. Experimental results show that the proposed concepts assignment scheme is efficient to improve the assignment results of traditional textual processing and to allocate reasonable concepts for training images. Consequently, with an efficient SVMs solver based on S-tochastic Gradient Descent, our annotation systems achieves competitive performance in the annotation task.",
author = "Xing Xu and Atsushi Shimada and Rin-Ichiro Taniguchi",
year = "2014",
month = "1",
day = "1",
language = "English",
volume = "1180",
pages = "411--420",
journal = "CEUR Workshop Proceedings",
issn = "1613-0073",
publisher = "CEUR-WS",

}

TY - JOUR

T1 - MLIA at ImageCLEF 2014 Scalable Concept Image Annotation Challenge

AU - Xu, Xing

AU - Shimada, Atsushi

AU - Taniguchi, Rin-Ichiro

PY - 2014/1/1

Y1 - 2014/1/1

AB - In this paper, we propose a large-scale image annotation system for the ImageCLEF 2014 Scalable Concept Image Annotation task. This year's task concentrated on developing annotation algorithms that rely only on data obtained automatically from the web. Since sophisticated SVM-based annotation techniques were widely applied in last year's task (ImageCLEF 2013), we again adopt SVM-based annotation techniques this year and focus our effort on obtaining more accurate concept assignments for the training images. More specifically, we propose a two-fold scheme for assigning concepts to the unlabeled training images: (1) a traditional process that stems the web data extracted for each training image and assigns concepts on the textual side, based on the appearance of each concept in the stemmed text; (2) an additional process that leverages the deep convolutional network toolbox OverFeat to predict labels (ImageNet nouns) for each training image on the visual side, and then maps the predicted labels to ImageCLEF concepts through WordNet synonym and hyponym relations. Finally, the concepts allocated to each training image are generated by fusing the results of the two assignment processes. Experimental results show that the proposed concept assignment scheme effectively improves on the assignments produced by traditional textual processing alone and allocates reasonable concepts to the training images. Consequently, with an efficient SVM solver based on Stochastic Gradient Descent, our annotation system achieves competitive performance in the annotation task.

UR - http://www.scopus.com/inward/record.url?scp=84961358991&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84961358991&partnerID=8YFLogxK

M3 - Conference article

AN - SCOPUS:84961358991

VL - 1180

SP - 411

EP - 420

JO - CEUR Workshop Proceedings

JF - CEUR Workshop Proceedings

SN - 1613-0073

ER -