Virtual Scene Control Using Human Body Postures

Satoshi Yonemoto, Rin Ichiro Taniguchi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

This paper describes a vision-based 3D real-virtual interaction that enables realistic avatar motion control, in which the virtual camera is controlled by the body posture of the user. The human motion analysis method is implemented by blob tracking. A physically-constrained motion synthesis method generates realistic motion from a limited number of blobs. Our framework utilizes virtual scene contexts as a priori knowledge. To make the virtual scene more realistic, beyond the limitations of real-world sensing, we use a framework that augments reality in the virtual scene by simulating various events of the real world. Concretely, we suppose that a virtual environment can provide action information for the avatar. Third-person viewpoint control coupled with body postures is also realized to directly access virtual objects.
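The abstract's blob-tracking step can be illustrated with a minimal sketch: given a binary foreground mask (e.g., from colour segmentation), label connected regions and report one centroid per blob. This is a generic flood-fill labelling demo for illustration only — the function name `blob_centroids`, the 4-connectivity choice, and the `min_area` filter are assumptions, not the authors' actual method.

```python
import numpy as np

def blob_centroids(mask, min_area=2):
    """Label 4-connected foreground blobs in a boolean mask and
    return the (row, col) centroid of each blob above min_area."""
    labels = np.zeros(mask.shape, dtype=int)
    next_label = 0
    centroids = []
    for y in range(mask.shape[0]):
        for x in range(mask.shape[1]):
            if mask[y, x] and labels[y, x] == 0:
                # New blob: flood-fill with an explicit stack.
                next_label += 1
                labels[y, x] = next_label
                stack, pixels = [(y, x)], []
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_label
                            stack.append((ny, nx))
                if len(pixels) >= min_area:
                    ys, xs = zip(*pixels)
                    centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centroids

# Toy frame with two separated foreground blobs.
frame = np.zeros((6, 8), dtype=bool)
frame[1:3, 1:3] = True   # blob A
frame[4:6, 5:8] = True   # blob B
print(blob_centroids(frame))  # → [(1.5, 1.5), (4.5, 6.0)]
```

In a real pipeline, each frame's centroids would be matched to the previous frame's (e.g., by nearest neighbour) to track blobs over time; the paper then drives an articulated avatar from these tracked blobs via physically-constrained motion synthesis.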

Original language: English
Title of host publication: 2003 Conference on Computer Vision and Pattern Recognition Workshop, CVPRW 2003
Publisher: IEEE Computer Society
ISBN (Electronic): 0769519008
DOI: 10.1109/CVPRW.2003.10054
Publication status: Published - Jan 1, 2003
Event: Conference on Computer Vision and Pattern Recognition Workshop, CVPRW 2003 - Madison, United States
Duration: Jun 16, 2003 - Jun 22, 2003

Publication series

Name: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
Volume: 5
ISSN (Print): 2160-7508
ISSN (Electronic): 2160-7516

Other

Other: Conference on Computer Vision and Pattern Recognition Workshop, CVPRW 2003
Country: United States
City: Madison
Period: 6/16/03 - 6/22/03

Fingerprint

  • Motion control
  • Virtual reality
  • Cameras
  • Motion analysis

All Science Journal Classification (ASJC) codes

  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering

Cite this

Yonemoto, S., & Taniguchi, R. I. (2003). Virtual Scene Control Using Human Body Postures. In 2003 Conference on Computer Vision and Pattern Recognition Workshop, CVPRW 2003 [4624316] (IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops; Vol. 5). IEEE Computer Society. https://doi.org/10.1109/CVPRW.2003.10054

Virtual Scene Control Using Human Body Postures. / Yonemoto, Satoshi; Taniguchi, Rin Ichiro.

2003 Conference on Computer Vision and Pattern Recognition Workshop, CVPRW 2003. IEEE Computer Society, 2003. 4624316 (IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops; Vol. 5).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Yonemoto, S & Taniguchi, RI 2003, Virtual Scene Control Using Human Body Postures. in 2003 Conference on Computer Vision and Pattern Recognition Workshop, CVPRW 2003., 4624316, IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, vol. 5, IEEE Computer Society, Conference on Computer Vision and Pattern Recognition Workshop, CVPRW 2003, Madison, United States, 6/16/03. https://doi.org/10.1109/CVPRW.2003.10054
Yonemoto S, Taniguchi RI. Virtual Scene Control Using Human Body Postures. In 2003 Conference on Computer Vision and Pattern Recognition Workshop, CVPRW 2003. IEEE Computer Society. 2003. 4624316. (IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops). https://doi.org/10.1109/CVPRW.2003.10054
Yonemoto, Satoshi ; Taniguchi, Rin Ichiro. / Virtual Scene Control Using Human Body Postures. 2003 Conference on Computer Vision and Pattern Recognition Workshop, CVPRW 2003. IEEE Computer Society, 2003. (IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops).
@inproceedings{f437e29cdb7f482085dc500fa13f3962,
title = "Virtual Scene Control Using Human Body Postures",
abstract = "This paper describes a vision-based 3D real-virtual interaction that enables realistic avatar motion control, in which the virtual camera is controlled by the body posture of the user. The human motion analysis method is implemented by blob tracking. A physically-constrained motion synthesis method generates realistic motion from a limited number of blobs. Our framework utilizes virtual scene contexts as a priori knowledge. To make the virtual scene more realistic, beyond the limitations of real-world sensing, we use a framework that augments reality in the virtual scene by simulating various events of the real world. Concretely, we suppose that a virtual environment can provide action information for the avatar. Third-person viewpoint control coupled with body postures is also realized to directly access virtual objects.",
author = "Satoshi Yonemoto and Taniguchi, {Rin Ichiro}",
year = "2003",
month = "1",
day = "1",
doi = "10.1109/CVPRW.2003.10054",
language = "English",
series = "IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops",
publisher = "IEEE Computer Society",
booktitle = "2003 Conference on Computer Vision and Pattern Recognition Workshop, CVPRW 2003",
address = "United States",
}

TY - GEN

T1 - Virtual Scene Control Using Human Body Postures

AU - Yonemoto, Satoshi

AU - Taniguchi, Rin Ichiro

PY - 2003/1/1

Y1 - 2003/1/1

N2 - This paper describes a vision-based 3D real-virtual interaction that enables realistic avatar motion control, in which the virtual camera is controlled by the body posture of the user. The human motion analysis method is implemented by blob tracking. A physically-constrained motion synthesis method generates realistic motion from a limited number of blobs. Our framework utilizes virtual scene contexts as a priori knowledge. To make the virtual scene more realistic, beyond the limitations of real-world sensing, we use a framework that augments reality in the virtual scene by simulating various events of the real world. Concretely, we suppose that a virtual environment can provide action information for the avatar. Third-person viewpoint control coupled with body postures is also realized to directly access virtual objects.

AB - This paper describes a vision-based 3D real-virtual interaction that enables realistic avatar motion control, in which the virtual camera is controlled by the body posture of the user. The human motion analysis method is implemented by blob tracking. A physically-constrained motion synthesis method generates realistic motion from a limited number of blobs. Our framework utilizes virtual scene contexts as a priori knowledge. To make the virtual scene more realistic, beyond the limitations of real-world sensing, we use a framework that augments reality in the virtual scene by simulating various events of the real world. Concretely, we suppose that a virtual environment can provide action information for the avatar. Third-person viewpoint control coupled with body postures is also realized to directly access virtual objects.

UR - http://www.scopus.com/inward/record.url?scp=84954414002&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84954414002&partnerID=8YFLogxK

U2 - 10.1109/CVPRW.2003.10054

DO - 10.1109/CVPRW.2003.10054

M3 - Conference contribution

AN - SCOPUS:84954414002

T3 - IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops

BT - 2003 Conference on Computer Vision and Pattern Recognition Workshop, CVPRW 2003

PB - IEEE Computer Society

ER -