This research studies Virtual Reality simulation for collaborative interaction, so that people in different places can interact with one object concurrently. Our focus is the real-time handling of inputs from multiple users, where the object's behavior is determined by the combination of those inputs. The issues addressed in this research are: 1) the effects of using haptics in collaborative interaction, and 2) the possibilities of collaboration between users in different environments. We conducted user tests on our system in several cases: 1) comparison of collaborative interaction with and without haptics over a LAN, 2) comparison of collaborative interaction with and without haptics over the Internet, and 3) analysis of collaborative interaction between non-immersive and immersive display environments. The case studies cover two kinds of interaction: collaborative authoring of a 3D model by two users, and collaborative haptic interaction by multiple users. In Virtual Dollhouse, users can observe the laws of physics while constructing a dollhouse from existing building blocks under gravity. In Virtual Stretcher, multiple users can collaborate on moving a stretcher together while feeling each other's haptic motions.
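The combination of concurrent inputs described above can be sketched minimally as follows. This is an illustrative assumption only: the abstract does not specify the input model, and all names here (such as `combine_inputs` and the simple force summation) are hypothetical.

```python
# Hypothetical sketch: combining concurrent force inputs from several
# users into one net force applied to a shared object (e.g., a stretcher).
from dataclasses import dataclass

@dataclass
class Vec2:
    x: float
    y: float

def combine_inputs(forces):
    """Combine concurrent per-user forces into one net force (simple sum)."""
    net = Vec2(0.0, 0.0)
    for f in forces:
        net.x += f.x
        net.y += f.y
    return net

# Two users push the shared object in different directions.
user_a = Vec2(1.0, 0.0)
user_b = Vec2(0.5, 0.5)
net = combine_inputs([user_a, user_b])
print(net.x, net.y)  # -> 1.5 0.5
```

In a real system the combination rule could be more elaborate (weighted averaging, constraint solving), but summation is the simplest physically plausible choice.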
Virtual Reality simulation enables an immersive 3D experience of a Virtual Environment. A simulation-based Virtual Environment can be used to map real-world phenomena onto a virtual experience. With a reconfigurable simulation, users can adjust the parameters of the objects involved and observe the different effects of different configurations. This concept is suitable for classroom learning of the laws of physics. This research studies the Virtual Reality simulation of Newtonian physics on rigid-body objects. With network support, collaborative interaction is enabled so that people in different places can interact with the same set of objects in an immersive Collaborative Virtual Environment. The taxonomy of interaction at different levels of collaboration distinguishes interaction with distinct objects from interaction with the same object; the latter is subdivided into same object - sequentially, same object - concurrently - same attribute, and same object - concurrently - distinct attributes. The case studies are two kinds of user interaction: destroying and creating a set of arranged rigid bodies. In Virtual Domino, users can observe the laws of physics while applying force to domino blocks in order to destroy the arrangements. In Virtual Dollhouse, users can observe the laws of physics while constructing a dollhouse from existing building blocks under gravity.
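The rigid-body behavior under gravity that users observe in Virtual Dollhouse can be sketched with a minimal explicit-Euler integration step. This is an illustrative assumption, not the paper's actual simulation engine; all names and parameters are hypothetical.

```python
# Minimal sketch: a building block falling under gravity until it rests
# on the floor, integrated with an explicit Euler step.
G = -9.81   # gravitational acceleration (m/s^2)
DT = 0.01   # simulation timestep (s)

def drop_block(y0, floor=0.0, steps=1000):
    """Integrate a block's vertical fall; stop when it reaches the floor."""
    y, vy = y0, 0.0
    for _ in range(steps):
        vy += G * DT
        y += vy * DT
        if y <= floor:
            return floor  # block comes to rest on the floor
    return y

print(drop_block(2.0))  # -> 0.0 (the block lands)
```

A production physics engine would add collision response and rigid-body rotation, but the reconfigurable parameters the abstract mentions (e.g., gravity, timestep) appear here as the constants `G` and `DT`.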
This paper presents a simple and easy-to-use method for obtaining a 3D textured model. To convey reality, we need to integrate 3D models with real scenes. Most 3D modeling methods use two data-acquisition devices: one for capturing the 3D model and another for obtaining realistic textures. In our case, the former is a 2D laser range-finder and the latter a common camera. Our algorithm consists of building a measurement-based 2D metric map acquired by the laser range-finder, texture acquisition and stitching, and texture-mapping onto the corresponding 3D model. The algorithm is implemented with a laser sensor for obtaining the 2D/3D metric map and two cameras for gathering textures. Our geometric 3D model consists of planes that model the floor and walls; the geometry of the planes is extracted from the 2D metric-map data. Textures for the floor and walls are generated from images captured by two IEEE 1394 cameras with wide fields of view. Image stitching and cropping are used to generate texture images that correspond to the 3D model.
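One step of the pipeline described above, turning 2D metric-map geometry into 3D wall planes, can be sketched as follows. The segment representation and the fixed wall height are assumptions for illustration; the paper does not specify these details.

```python
# Hypothetical sketch: extruding wall segments from a 2D metric map
# into 3D wall quads (four vertices each) at an assumed uniform height.
def extrude_walls(segments, height=2.5):
    """Turn 2D wall segments ((x1,y1),(x2,y2)) into 3D quads."""
    quads = []
    for (x1, y1), (x2, y2) in segments:
        quads.append([
            (x1, y1, 0.0), (x2, y2, 0.0),       # bottom edge on the floor
            (x2, y2, height), (x1, y1, height), # top edge at wall height
        ])
    return quads

# One wall segment, as it might come from a laser-scanned metric map.
walls = extrude_walls([((0.0, 0.0), (4.0, 0.0))])
print(len(walls), len(walls[0]))  # -> 1 4
```

Each quad would then receive a stitched texture image cropped to the wall's extent.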
The algorithm is applied to two cases: a corridor and a four-walled space such as a room of a building. The generated 3D map model of the indoor environment is exported in VRML format and can be viewed in a web browser with a VRML plug-in. The proposed algorithm can be applied to a 3D model-based remote surveillance system over the Web.
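A textured wall plane of the kind the abstract describes could be emitted as a VRML 2.0 `Shape` node, which a VRML plug-in can display. This sketch is illustrative only; the texture filename and node layout are assumptions, not the paper's actual output.

```python
# Hypothetical sketch: serializing one textured wall quad as a minimal
# VRML 2.0 file containing a single textured IndexedFaceSet.
def wall_to_vrml(quad, texture="wall.jpg"):
    pts = ", ".join(f"{x} {y} {z}" for x, y, z in quad)
    return (
        "#VRML V2.0 utf8\n"
        "Shape {\n"
        f'  appearance Appearance {{ texture ImageTexture {{ url "{texture}" }} }}\n'
        "  geometry IndexedFaceSet {\n"
        f"    coord Coordinate {{ point [ {pts} ] }}\n"
        "    coordIndex [ 0, 1, 2, 3, -1 ]\n"
        "  }\n"
        "}\n"
    )

quad = [(0, 0, 0), (4, 0, 0), (4, 0, 2.5), (0, 0, 2.5)]
print(wall_to_vrml(quad).splitlines()[0])  # -> #VRML V2.0 utf8
```

The `#VRML V2.0 utf8` header line is what identifies the file to browsers and plug-ins.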
MPEG-4 BIFS and MPEG-4 LASeR are parts of the MPEG-4 standard for describing multimedia scenes in binary format. Scenes are multimedia presentations consisting of text, graphics, animation, and media content such as images, video, and audio. BIFS and LASeR scenes can be written in XML format and then encoded into binary format for consumption by MPEG-4 terminals. While BIFS is a stable standard, LASeR is an emerging standard being newly developed for lightweight applications on constrained terminals such as mobile phones, which is why the LASeR specification is much simpler than that of BIFS. In this paper, we present a transcoding method for converting the BIFS XML format into the LASeR XML format. We analyze the differences between the BIFS and LASeR specifications and propose a set of mapping rules for the conversion. The transcoding is done using an XSLT processor that converts BIFS XML into the corresponding LASeR XML. This text-to-text conversion is incorporated into our transcoding system, which also adapts media objects in the scenes, such as images and videos. The motivation of this paper is to enable transcoding from BIFS to LASeR so that MPEG-4 content authors can distribute their existing content to mobile devices without reauthoring it or learning a new standard.
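The paper's transcoder applies its mapping rules with an XSLT processor; the underlying idea of rule-based element renaming can be sketched in Python instead. The specific rules below (`Rectangle` to `rect`, `Text` to `text`) are invented for illustration and are not the paper's actual mapping table.

```python
# Hypothetical sketch: a rule table mapping BIFS-style XML element names
# to LASeR-style ones, applied recursively over a scene tree. The real
# system expresses such rules as XSLT templates.
import xml.etree.ElementTree as ET

MAPPING = {"Rectangle": "rect", "Text": "text"}  # invented example rules

def transcode(elem):
    """Recursively rename BIFS-style elements to LASeR-style ones."""
    out = ET.Element(MAPPING.get(elem.tag, elem.tag), elem.attrib)
    out.text = elem.text
    for child in elem:
        out.append(transcode(child))
    return out

bifs = ET.fromstring('<Scene><Rectangle size="100 50"/></Scene>')
laser = transcode(bifs)
print(ET.tostring(laser, encoding="unicode"))
```

A full transcoder also has to restructure attributes and drop BIFS features that have no LASeR equivalent, which is why the paper analyzes the differences between the two specifications before defining its mapping rules.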