We report on our progress in implementing a body language user interface (BLUI) for artistic computer interaction, i.e., a human/computer interaction based on an analysis of how an artist uses her body in the act of creation. The mechanisms of visual perception that are called upon in the imaginative process of artistic creation use the same tactile and kinesthetic pathways and structures in the brain that are employed when we manipulate the 3D world. We see, in fact, with our sensual bodies as well as with our eyes. Our interface is built on an analysis of pointing and gesturing and how they relate to the perception of form in space. We analyze a 3D skeletal representation of the user in the spatial and temporal domains as a tool for recognizing the gestures of drawing, picking, and grabbing. Using two synchronous TV cameras, we have videotaped an environment into which an artist moves, assumes a canonical (Da Vinci) pose, and subsequently makes a series of simple gestures. The video images are processed to generate an animated 3D skeleton that corresponds to the skeleton of the artist. The locus of the path taken by the drawing hand is the source of a trace of particles. Our presentation shows the two simultaneous videos, the associated animated 3D skeleton, that skeleton as an instance of motion capture for a constrained model of a human skeleton, and the trace of the path taken by the drawing hand.
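The pipeline described above (two synchronized camera views, a reconstructed 3D skeleton, and a particle trace following the drawing hand) can be illustrated with a minimal sketch. The code below is not the authors' implementation: the camera projection matrices, the per-frame 2D hand detections, and the function names are hypothetical, and it shows only how a single tracked point could be lifted to 3D from two views and accumulated frame by frame as a particle trace.

```python
# Minimal sketch, not the authors' method: linear (DLT) triangulation of a
# tracked hand point from two synchronized camera views, with the resulting
# 3D locus accumulated as a particle trace. All inputs are hypothetical.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Triangulate one point seen in two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : (u, v) image coordinates of the same point in each view
    Returns the 3D point in world coordinates.
    """
    # Build the standard DLT system A @ X = 0 and take its null vector.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Hypothetical projection matrices for the two synchronized TV cameras
# (identical intrinsics, right camera translated along the x axis).
P_left = np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# Hypothetical per-frame 2D detections of the drawing hand in each view.
left_track = [(0.10, 0.20), (0.12, 0.22), (0.15, 0.25)]
right_track = [(0.08, 0.20), (0.10, 0.22), (0.13, 0.25)]

# The locus of the drawing hand becomes the source of a particle trace:
# each frame contributes one particle at the triangulated 3D position.
particle_trace = [
    triangulate(P_left, P_right, np.array(l), np.array(r))
    for l, r in zip(left_track, right_track)
]

for frame, p in enumerate(particle_trace):
    print(f"frame {frame}: drawing hand at {np.round(p, 3)}")
```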
Arthur William Brody, Coert Olmsted, "Body language user interface (BLUI)," Proc. SPIE 3299, Human Vision and Electronic Imaging III, (17 July 1998); https://doi.org/10.1117/12.320130