Learning unified binary codes for cross-modal retrieval via latent semantic hashing

Xing Xu, Li He, Atsushi Shimada, Rin-ichiro Taniguchi, Huimin Lu

Research output: Contribution to journal › Article › peer-review

45 Citations (Scopus)

Abstract

Nowadays the amount of multimedia data such as images and text is growing exponentially on social websites, driving the demand for effective and efficient cross-modal retrieval. Cross-modal hashing methods have attracted considerable attention recently, as they learn efficient binary codes for heterogeneous data and thereby enable large-scale similarity search. Generally, to construct the cross-correlation between different modalities, these methods seek a joint abstraction space onto which the heterogeneous data can be projected; a quantization rule then converts the abstraction representations to binary codes. However, these methods may not effectively bridge the semantic gap through the latent abstraction space, because they fail to capture latent information shared between heterogeneous data. In addition, most of them apply the simplest quantization scheme (i.e. the sign function), which can cause information loss in the abstraction representation and result in inferior binary codes. To address these challenges, in this paper we present a novel cross-modal hashing method that generates unified binary codes combining different modalities. Specifically, we first extract semantic features from the image and text modalities to capture latent information. These semantic features are then projected onto a joint abstraction space. Finally, the abstraction space is rotated to produce better unified binary codes with much less quantization loss, while preserving the locality structure of the projected data. We integrate these binary code learning procedures into an iterative algorithm for optimal solutions. Moreover, we further exploit the available class label information to reduce the semantic gap between modalities and benefit the binary code learning.
Extensive experiments on four multimedia datasets show that the proposed binary coding schemes outperform several state-of-the-art methods in cross-modal scenarios.
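The abstract's final step, rotating the joint abstraction space to reduce quantization loss before applying the sign function, is closely related to iterative quantization (ITQ). As a rough illustration only (the function name, shapes, and iteration count below are assumptions, not the authors' exact algorithm), a rotation that alternates between updating binary codes and solving an orthogonal Procrustes problem can be sketched as:

```python
import numpy as np

def itq_rotation(V, n_iter=50, seed=0):
    """Illustrative ITQ-style rotation: learn an orthogonal R that
    reduces the quantization loss ||B - V R||_F, where V is the
    (n, c) real-valued data already projected into a joint space.
    This is a sketch of the general technique, not the paper's method."""
    rng = np.random.default_rng(seed)
    c = V.shape[1]
    # Start from a random orthogonal rotation.
    R, _ = np.linalg.qr(rng.standard_normal((c, c)))
    for _ in range(n_iter):
        # Fix R: the loss-minimizing binary codes are the signs.
        B = np.sign(V @ R)
        # Fix B: orthogonal Procrustes update via the SVD of B^T V.
        U, _, Vt = np.linalg.svd(B.T @ V)
        R = Vt.T @ U.T
    return np.sign(V @ R), R
```

Each iteration cannot increase the quantization loss, so the rotated codes tend to preserve more of the projected representation than directly thresholding with the sign function.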

Original language: English
Pages (from-to): 191-203
Number of pages: 13
Journal: Neurocomputing
Volume: 213
DOIs
Publication status: Published - Nov 12 2016

All Science Journal Classification (ASJC) codes

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence

