Modeling large-scale indoor scenes with rigid fragments using RGB-D cameras

Diego Thomas, Akihiro Sugimoto

Research output: Contribution to journal › Article › peer-reviewed

3 Citations (Scopus)


Hand-held consumer depth cameras have become a commodity tool for constructing 3D models of indoor environments in real time. Many methods have recently been proposed to fuse low-quality depth images into a single dense, high-fidelity 3D model. Nonetheless, dealing with large-scale scenes remains challenging. In particular, small errors due to imperfect camera localization accumulate, and at large scale this results in dramatic deformations of the built 3D model. These deformations have to be corrected whenever possible (for example, when a loop closure exists). To facilitate such correction, we use a structured 3D representation in which points are clustered into the planar patches that compose the scene. We then propose a two-stage framework to build a detailed, large-scale 3D model in real time. The first stage (local mapping) generates local structured 3D models with rigidity constraints from short subsequences of RGB-D images. The second stage (global mapping) aggregates all local 3D models into a single global model in a geometrically consistent manner. Thanks to our structured 3D representation, minimizing deformations of the global model reduces to re-positioning the planar patches of the local models, which allows efficient yet accurate computation. Our experiments on real data confirm the effectiveness of the proposed method.
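The abstract does not spell out the underlying computations, but its two key ingredients, extracting planar patches from a point cluster and rigidly re-positioning patches between fragments, admit standard closed-form solutions. The sketch below is an illustrative toy (not the paper's implementation): `fit_plane` estimates a patch's centroid and normal by PCA, and `rigid_align` recovers the rigid transform between matched point sets of two fragments with the Kabsch algorithm.

```python
import numpy as np

def fit_plane(points):
    """Fit a planar patch to an (N, 3) point cluster via PCA.

    Returns the patch centroid and unit normal (the direction of
    least variance of the centered points).
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def rigid_align(src, dst):
    """Kabsch algorithm: rigid (R, t) minimizing ||R @ src_i + t - dst_i||^2.

    src, dst are (N, 3) arrays of corresponding points (e.g. matched
    patch centroids of two local fragments).
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

Because only a handful of patch parameters (rather than every point) are re-positioned, such closed-form updates stay cheap even for large scenes, which is consistent with the efficiency argument made in the abstract.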

Journal: Computer Vision and Image Understanding
Publication status: Published - Apr 1 2017

All Science Journal Classification (ASJC) codes

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
