TY - GEN
T1 - Simultaneous entire shape registration of multiple depth images using depth difference and shape silhouette
AU - Ushinohama, Takuya
AU - Sawai, Yosuke
AU - Ono, Satoshi
AU - Kawasaki, Hiroshi
N1 - Publisher Copyright:
© Springer International Publishing Switzerland 2015.
PY - 2015
Y1 - 2015
N2 - This paper proposes a method for simultaneous global registration of multiple depth images obtained from multiple viewpoints. Unlike previous methods, the proposed method fully utilizes a silhouette-based cost function that takes out-of-view and non-overlapping regions into account, as well as depth differences in overlapping areas. By combining these cost functions with a recent powerful meta-heuristic, self-adaptive Differential Evolution, the method reconstructs the entire shape from a relatively small number (three or four) of depth images, which do not contain enough overlapping regions for Iterative Closest Point even if they are prealigned. In addition, to make the technique applicable not only to time-of-flight sensors but also to projector-camera systems, whose silhouettes are deficient due to occlusions, we propose a simple solution based on color-based silhouettes. Experimental results show that the proposed method can reconstruct the entire shape from only three depth images for both synthetic and real data. The influence of noise and inaccurate silhouettes is also evaluated.
AB - This paper proposes a method for simultaneous global registration of multiple depth images obtained from multiple viewpoints. Unlike previous methods, the proposed method fully utilizes a silhouette-based cost function that takes out-of-view and non-overlapping regions into account, as well as depth differences in overlapping areas. By combining these cost functions with a recent powerful meta-heuristic, self-adaptive Differential Evolution, the method reconstructs the entire shape from a relatively small number (three or four) of depth images, which do not contain enough overlapping regions for Iterative Closest Point even if they are prealigned. In addition, to make the technique applicable not only to time-of-flight sensors but also to projector-camera systems, whose silhouettes are deficient due to occlusions, we propose a simple solution based on color-based silhouettes. Experimental results show that the proposed method can reconstruct the entire shape from only three depth images for both synthetic and real data. The influence of noise and inaccurate silhouettes is also evaluated.
UR - http://www.scopus.com/inward/record.url?scp=84945945341&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84945945341&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-16808-1_31
DO - 10.1007/978-3-319-16808-1_31
M3 - Conference contribution
AN - SCOPUS:84945945341
SN - 9783319168074
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 457
EP - 472
BT - Computer Vision - ACCV 2014 - 12th Asian Conference on Computer Vision, Revised Selected Papers
A2 - Yang, Ming-Hsuan
A2 - Saito, Hideo
A2 - Cremers, Daniel
A2 - Reid, Ian
PB - Springer Verlag
T2 - 12th Asian Conference on Computer Vision, ACCV 2014
Y2 - 1 November 2014 through 5 November 2014
ER -