Frame rate stabilization by variable resolution shape reconstruction for on-line free-viewpoint video generation

Rui Nabeshima, Megumu Ueda, Daisaku Arita, Rin-ichiro Taniguchi

Research output: Contribution to journal › Conference article

Abstract

Recently, research aimed at showing real-world objects from arbitrary viewpoints has been growing steadily. The processing pipeline is divided into three stages: 3D shape reconstruction by the visual cone intersection method, conversion of the 3D shape representation from a voxel form into a triangular patch form, and coloring of the triangular patches. As the surface area of the object grows, the frame rate decreases, since the processing time of the conversion and coloring stages depends on the number of triangular patches. A stable frame rate is essential for on-line distribution of free-viewpoint video. To solve this problem, we propose a new method that adapts the spatial resolution during the 3D shape reconstruction step, thereby stabilizing the number of triangular patches and the frame rate. This is achieved by raising the spatial resolution step by step and stopping the process according to a time criterion. The reconstruction uses an octree-based visual cone intersection method. Experimental results show that the proposed method makes the frame rate more stable.
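
The coarse-to-fine, octree-based visual cone intersection with a time criterion described above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the Node structure, the bounding-box silhouette test in classify_against_silhouettes (which assumes binary silhouette masks, 3x4 projection matrices, and an object in front of every camera), and all parameter names are introduced here for illustration only.

import time
from dataclasses import dataclass
from typing import Tuple

import numpy as np

INSIDE, OUTSIDE, BOUNDARY = 0, 1, 2

@dataclass
class Node:
    center: Tuple[float, float, float]  # cube center in world coordinates
    half_size: float                    # half of the cube edge length

def classify_against_silhouettes(node, silhouettes, projections):
    # Project the cube's eight corners with each (assumed) 3x4 projection
    # matrix and compare the image-space bounding box with the binary
    # silhouette mask. A simplified, conservative test, not the authors'
    # exact intersection test; the cube is assumed to lie in front of
    # every camera.
    cx, cy, cz = node.center
    h = node.half_size
    corners = np.array([[cx + sx * h, cy + sy * h, cz + sz * h, 1.0]
                        for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
    label = INSIDE
    for mask, P in zip(silhouettes, projections):
        uvw = corners @ P.T                       # homogeneous image coordinates
        uv = uvw[:, :2] / uvw[:, 2:3]
        u0, v0 = np.floor(uv.min(axis=0)).astype(int)
        u1, v1 = np.ceil(uv.max(axis=0)).astype(int)
        rows, cols = mask.shape
        u0, u1 = np.clip([u0, u1], 0, cols - 1)
        v0, v1 = np.clip([v0, v1], 0, rows - 1)
        window = mask[v0:v1 + 1, u0:u1 + 1]
        if window.size == 0 or not window.any():
            return OUTSIDE                        # outside some visual cone
        if not window.all():
            label = BOUNDARY                      # straddles a silhouette edge
    return label

def reconstruct(root, silhouettes, projections, time_budget_s, max_depth=8):
    # Coarse-to-fine octree refinement with a per-frame time budget: the
    # spatial resolution is raised level by level, and refinement stops as
    # soon as the budget is spent, which bounds the number of surface voxels
    # (and hence triangular patches) produced for the frame.
    start = time.perf_counter()
    frontier = [root]   # nodes still to be classified at the current level
    occupied = []       # nodes accepted as part of the reconstructed shape
    for depth in range(max_depth + 1):
        boundary = []
        for node in frontier:
            label = classify_against_silhouettes(node, silhouettes, projections)
            if label == INSIDE:
                occupied.append(node)
            elif label == BOUNDARY:
                boundary.append(node)
            # OUTSIDE nodes are discarded.
        out_of_time = time.perf_counter() - start > time_budget_s
        if depth == max_depth or out_of_time or not boundary:
            # Keep the boundary nodes at the current (possibly coarse)
            # resolution so a complete shape is returned within the budget.
            occupied.extend(boundary)
            return occupied
        # Refine: split every remaining boundary node into its eight octants.
        frontier = []
        for node in boundary:
            q = node.half_size / 2.0
            cx, cy, cz = node.center
            frontier.extend(Node((cx + dx, cy + dy, cz + dz), q)
                            for dx in (-q, q) for dy in (-q, q) for dz in (-q, q))
    return occupied

In this sketch the object surface is approximated by the boundary nodes kept when the time budget runs out; the later stages (voxel-to-triangular-patch conversion and patch coloring) then operate on a roughly constant number of elements per frame, which is what stabilizes the frame rate.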

Original language: English
Pages (from-to): 81-90
Number of pages: 10
Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 3852 LNCS
Publication status: Published - 2006
Event: 7th Asian Conference on Computer Vision, ACCV 2006 - Hyderabad, India
Duration: Jan 13, 2006 - Jan 16, 2006

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science (all)
