Vision-based 3D direct manipulation interface for smart interaction

Satoshi Yonemoto, Rin-ichiro Taniguchi

Research output: Contribution to journal › Article

6 Citations (Scopus)

Abstract

This paper describes a real-time interaction system that enables 3D direct manipulation. Our purpose is to map human action in the real world seamlessly into virtual environments. With the aim of making computing systems better suited to users, we have developed a vision-based 3D direct manipulation interface that serves as a smart pointing device. Our system performs human motion analysis by 3D blob tracking, and human figure motion synthesis to generate realistic motion from a limited number of blobs. To realize smart interaction, we assume that virtual objects in the virtual environment can afford human figure actions; that is, the virtual environment provides action information for a human figure model, or avatar. Extending this affordance-based approach, the system can also employ scene constraints in the virtual environment to generate more realistic motion.
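
The abstract outlines a three-stage pipeline: 3D blob tracking, figure motion synthesis from a limited number of blobs, and affordance-driven refinement under scene constraints. The Python sketch below illustrates one way such a pipeline could be wired together; it is only an assumption-based illustration, not the authors' implementation, and all class and function names (Blob, Affordance, synthesize_pose, apply_affordance) are hypothetical.

    # Hypothetical sketch of the pipeline described in the abstract:
    # 3D blob tracking -> figure motion synthesis -> affordance-driven action.
    from dataclasses import dataclass
    from typing import Dict, List, Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class Blob:
        """A tracked 3D blob (e.g. head or hand) from the vision front end."""
        label: str
        position: Vec3

    @dataclass
    class Affordance:
        """Action information a virtual object provides to the avatar."""
        action: str            # e.g. "grasp"
        target_position: Vec3  # where the avatar's end effector should go

    def synthesize_pose(blobs: List[Blob]) -> Dict[str, Vec3]:
        """Expand a limited set of blob positions into a fuller joint map.
        Joints without a tracked blob are naively interpolated here."""
        joints = {b.label: b.position for b in blobs}
        if "head" in joints and "hand_r" in joints:
            hx, hy, hz = joints["head"]
            rx, ry, rz = joints["hand_r"]
            # crude elbow estimate halfway between head and right hand
            joints["elbow_r"] = ((hx + rx) / 2, (hy + ry) / 2, (hz + rz) / 2)
        return joints

    def apply_affordance(joints: Dict[str, Vec3], aff: Affordance) -> Dict[str, Vec3]:
        """Snap the relevant end effector to the object's afforded target,
        acting as a simple scene constraint on the synthesized motion."""
        constrained = dict(joints)
        if aff.action == "grasp":
            constrained["hand_r"] = aff.target_position
        return constrained

    if __name__ == "__main__":
        blobs = [Blob("head", (0.0, 1.7, 0.0)), Blob("hand_r", (0.4, 1.1, 0.3))]
        pose = synthesize_pose(blobs)
        pose = apply_affordance(pose, Affordance("grasp", (0.5, 1.0, 0.4)))
        print(pose)

In this sketch the affordance merely supplies a target position for the avatar's hand, standing in for the richer action information the paper attributes to virtual objects.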

Original language: English
Pages (from-to): 655-658
Number of pages: 4
Journal: Proceedings - International Conference on Pattern Recognition
Volume: 16
Issue number: 2
Publication status: Published - Dec 1 2002

All Science Journal Classification (ASJC) codes

  • Computer Vision and Pattern Recognition
