On-vehicle videos localization using geometric and spatio-temporal information

Kazuma Fukumoto, Hiroshi Kawasaki, Shintaro Ono, Hiroshi Koyasu, Katsushi Ikeuchi

Research output: Contribution to conference › Paper › peer-review

Abstract

Recently, a number of studies have been conducted to reconstruct actual cities on computers for purposes such as web services, ITS, disaster analysis, and landscape simulation. Furthermore, with the spread of on-vehicle video cameras, it has become common to share on-vehicle videos on websites. If the locations of these videos are available, the data can be used efficiently for virtual city construction. In this paper, we propose a method to localize anonymous on-vehicle videos uploaded to the web by using a video matching technique based on the Temporal Height Image (THI), Affine SIFT, and Bag of Features (BoF). The THI retains relative building heights extracted from temporal image sequences, and Affine SIFT realizes robust matching under variations in both camera speed and driving lane. Finally, the BoF representation enables stable matching at low computational cost. We conducted several experiments using real image sequences of an actual city to demonstrate the effectiveness of the proposed method.
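To illustrate the Bag of Features step mentioned in the abstract, the following is a minimal sketch of how local descriptors can be quantized against a codebook and two sequences compared via histogram similarity. The codebook, descriptor dimensionality, and synthetic data below are assumptions for illustration only, not the paper's actual THI/Affine SIFT features.

```python
import numpy as np

def bof_histogram(descriptors, codebook):
    """Quantize local descriptors to their nearest codeword and
    return an L2-normalized word-frequency histogram."""
    # Pairwise distances between each descriptor and each codeword
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = np.argmin(dists, axis=1)          # hard vector quantization
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)

def cosine_similarity(h1, h2):
    # With L2-normalized histograms, the dot product is the cosine similarity
    return float(np.dot(h1, h2))

# Synthetic example: a 4-word codebook and two sets of 8-D "descriptors"
# drawn near the same codewords (standing in for two views of one street)
rng = np.random.default_rng(0)
codebook = rng.normal(size=(4, 8))
query = codebook[[0, 0, 1, 2]] + 0.01 * rng.normal(size=(4, 8))
match = codebook[[0, 1, 0, 2]] + 0.01 * rng.normal(size=(4, 8))

h_q = bof_histogram(query, codebook)
h_m = bof_histogram(match, codebook)
print(cosine_similarity(h_q, h_m))  # near 1.0: same word distribution
```

Comparing fixed-length histograms instead of raw descriptor sets is what gives BoF matching its low computational cost: similarity between two videos reduces to a single dot product per frame pair.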

Original language: English
Publication status: Published - Jan 1 2013
Externally published: Yes
Event: 20th Intelligent Transport Systems World Congress, ITS 2013 - Tokyo, Japan
Duration: Oct 14 2013 - Oct 18 2013

Other

Event: 20th Intelligent Transport Systems World Congress, ITS 2013
Country: Japan
City: Tokyo
Period: 10/14/13 - 10/18/13

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Automotive Engineering
  • Control and Systems Engineering
  • Electrical and Electronic Engineering
  • Mechanical Engineering
  • Transportation
  • Computer Networks and Communications
  • Computer Science Applications

