TY - GEN
T1 - Multi-layered contents generation from real world scene by three-dimensional measurement
AU - Kim, M. K.
AU - Nakajima, Y.
AU - Takeshita, T.
AU - Onogi, S.
AU - Mitsuishi, M.
AU - Matsumoto, Y.
PY - 2009/10/12
Y1 - 2009/10/12
N2 - In this paper, we propose a method to automatically create multi-layered contents from a real-world scene based on Depth from Focus and Spatio-Temporal Image Analysis. Since the contents are generated as a layer representation directly from the real world, the point of view can be changed freely, which reduces the labor and cost of creating three-dimensional (3-D) contents with Computer Graphics. To extract layers from the real images, Depth from Focus is used for stationary objects and Spatio-Temporal Image Analysis is used for moving objects. We selected these two methods because of the stability of the system: the Depth from Focus method does not require correspondence-point search, and Spatio-Temporal Image Analysis also has a relatively simple computing algorithm. We performed an experiment to extract layer contents from stationary and moving objects automatically, and the feasibility of the method was confirmed.
AB - In this paper, we propose a method to automatically create multi-layered contents from a real-world scene based on Depth from Focus and Spatio-Temporal Image Analysis. Since the contents are generated as a layer representation directly from the real world, the point of view can be changed freely, which reduces the labor and cost of creating three-dimensional (3-D) contents with Computer Graphics. To extract layers from the real images, Depth from Focus is used for stationary objects and Spatio-Temporal Image Analysis is used for moving objects. We selected these two methods because of the stability of the system: the Depth from Focus method does not require correspondence-point search, and Spatio-Temporal Image Analysis also has a relatively simple computing algorithm. We performed an experiment to extract layer contents from stationary and moving objects automatically, and the feasibility of the method was confirmed.
UR - http://www.scopus.com/inward/record.url?scp=70349705950&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=70349705950&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:70349705950
SN - 9789898111692
T3 - VISAPP 2009 - Proceedings of the 4th International Conference on Computer Vision Theory and Applications
SP - 109
EP - 112
BT - VISAPP 2009 - Proceedings of the 4th International Conference on Computer Vision Theory and Applications
T2 - 4th International Conference on Computer Vision Theory and Applications, VISAPP 2009
Y2 - 5 February 2009 through 8 February 2009
ER -