
3 Virtual Reality Training System

We have developed a 3D Virtual Reality Training System (VRTS). It is a desktop-based virtual environment used to train students in the assembly of a 3-phase step-down transformer. The VRTS is a room-like structure that contains all 3D components of the transformer, as shown in Fig. 2. These components are designed in 3D Studio Max and then loaded and placed in the virtual environment. The high quality of these models increases the realism of the environment. Just as in a real environment, the user can navigate and can select and manipulate objects. Whenever a user selects an object, he/she is provided with audio/visual information about that object, i.e., its name, properties, and function. Interaction with the VRTS is carried out via ARToolKit [9] markers, which are printed patterns that provide six degrees of freedom.

3.1 Software Architecture of VRTS

The complete model of the system is shown in Fig. 3. This model represents the working mechanism of the VRTS and consists of the following principal modules.

Fig. 2. Overview of the VRTS environment

Fig. 3. Software architecture of VRTS

CAD 3D Models. The whole environment and all parts of the 3-phase step-down transformer were first designed in the 3D Studio Max 2009 package. These high-quality objects were then translated to the .obj file format along with color, material, and texture information. The .obj files are then passed to the OpenGL Loader module.

OpenGL Loader. This module translates the .obj files into the VR environment and places each object at its specific position in the VR environment.
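The text does not give the loader's implementation, but the core of any .obj loader is parsing vertex and face records. The following is a minimal illustrative sketch (function names and the subset of the format handled are assumptions; a real loader, including VRTS's, would also read normals, texture coordinates, and materials):

```python
# Minimal sketch of .obj parsing: vertex positions ("v") and faces ("f").
# Illustrative only -- not the authors' actual loader.

def parse_obj(lines):
    """Parse an iterable of .obj lines into vertex and face lists."""
    vertices, faces = [], []
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":      # vertex position: x y z
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":    # face: 1-based vertex indices, possibly "v/vt/vn"
            faces.append(tuple(int(tok.split("/")[0]) - 1 for tok in parts[1:]))
    return vertices, faces

obj_text = ["v 0 0 0", "v 1 0 0", "v 0 1 0", "f 1 2 3"]
verts, faces = parse_obj(obj_text)
print(len(verts), faces)   # 3 [(0, 1, 2)]
```

Once parsed, each object's vertices would be uploaded to OpenGL buffers and drawn at its assigned position in the scene.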

Computer Vision Module (CVM). To keep the VRTS simple and realistic, and to reduce its cost and complexity so that it can easily be adopted and used in many organizations, we use a computer-vision-based interaction system. This system has three main components: (i) ARToolKit markers, (ii) the ARToolKit library, and (iii) a video camera. ARToolKit markers are special black-and-white patterns printed on paper that can be detected by an ordinary camera. An algorithm developed using the ARToolKit library analyzes the input stream captured by the video camera to detect a marker. Once a marker is detected, its position and orientation are estimated and passed to the main VRTS module.

– 3D Pointer Mapping. The 3D pointer is a virtual hand that represents the presence of the user in the virtual environment and is used for interaction with the VE (see Fig. 2). The physical pose of the marker in the real environment is mapped onto the pose of the 3D pointer in the VE.
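The marker-to-pointer mapping above can be sketched as a simple 6-DOF pose copy. The class, field names, and scale factor below are illustrative assumptions (the text does not specify how real-world coordinates are scaled into the VE):

```python
# Sketch: mapping a tracked marker's 6-DOF pose onto the 3D pointer
# (virtual hand). Names and the scale factor are assumptions.

class Pointer3D:
    def __init__(self, scale=1.0):
        self.scale = scale                       # real-to-virtual distance scaling
        self.position = (0.0, 0.0, 0.0)
        self.orientation = (0.0, 0.0, 0.0)       # Euler angles, degrees

    def update_from_marker(self, marker_pos, marker_rot):
        """Copy the marker's estimated pose onto the virtual hand each frame."""
        self.position = tuple(self.scale * c for c in marker_pos)
        self.orientation = marker_rot

hand = Pointer3D(scale=2.0)
hand.update_from_marker((0.1, 0.2, 0.5), (0.0, 90.0, 0.0))
print(hand.position)   # (0.2, 0.4, 1.0)
```

Calling `update_from_marker` once per detected camera frame keeps the virtual hand tracking the physical marker in real time.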

User Interaction Module (UIM). The UIM controls the different operations in the virtual environment. It allows the 3D pointer (virtual hand) to navigate and interact with the virtual environment, and it handles collision detection between the virtual hand and objects, inter-object collision detection, and object selection and manipulation.

– Navigation

The user (represented by the virtual hand) can move (navigate) freely in all directions in the VR environment. The virtual hand is mapped to the marker: whenever the user moves the marker with his/her hand in the real environment, the virtual hand follows its motion in the virtual environment dynamically in real time. The camera also moves along with the virtual hand in the virtual environment.

– Selection and Manipulation Module

Selection and manipulation are the most important operations in any virtual environment. An object must first be selected by the virtual hand before any manipulation can be performed on it. Manipulation may consist of changing the behavior of the object, e.g., changing its position. ARToolKit markers are used for navigation, identification, and selection of objects in the virtual environment, with the virtual hand following the movement of the real-world marker. A single marker is used for free navigation and for the identification of objects (Fig. 4(a)). If an object collides/intersects with the virtual hand while only this single marker is visible, the audio/visual information related to that object is presented. If a second marker is also made visible to the camera while the virtual hand collides/intersects with an object, the virtual hand picks/grabs that object (see Fig. 4(b)). In this way the user can select, move, and rotate the object dynamically. To release the object, the user simply makes the second marker invisible to the camera. Fig. 5 shows the algorithm for interaction using the markers.
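The one-marker/two-marker interaction logic described above can be sketched as a per-frame decision function. The function and state names are illustrative assumptions, not the authors' code, but the branching mirrors the rules in the text: one visible marker with contact gives information, a second visible marker grabs, and hiding the second marker releases.

```python
# Sketch of the marker-visibility interaction rules (cf. Fig. 5).
# Names are illustrative assumptions.

def interaction_step(marker1_visible, marker2_visible, hand_collides, held):
    """Decide the action for one frame; returns (action, held_object)."""
    if not marker1_visible:
        return "idle", held            # no tracking: nothing happens
    if held is not None:
        if marker2_visible:
            return "move", held        # keep manipulating the grabbed object
        return "release", None         # hiding marker 2 drops the object
    if hand_collides:
        if marker2_visible:
            return "grab", "collided_object"   # both markers + contact: pick up
        return "show_info", None       # single marker + contact: audio/visual info
    return "navigate", None            # free navigation

print(interaction_step(True, False, True, None))    # ('show_info', None)
print(interaction_step(True, True, True, None))     # ('grab', 'collided_object')
print(interaction_step(True, False, False, "coil")) # ('release', None)
```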

Fig. 4. Interaction via ARToolKit markers: (a) single marker; (b) two markers, both visible

Fig. 5. Flow diagram for object selection and manipulation using markers

Collision Detection. Collision detection is one of the most important issues in complex virtual assembly environments, and a variety of techniques exist for it. VRTS detects collisions by computing the distance between the centers of objects. The system performs different actions when a collision occurs in the virtual environment: if the virtual hand collides with an object, the system either presents the audio/visual information related to that object or selects it; if an already-selected object collides with any other object in the environment, the selected object is blocked from moving further.
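The distance-between-centers test can be sketched as a bounding-sphere check. The per-object radii are an assumption; the text does not state what threshold VRTS uses for the center distance.

```python
import math

# Sketch: collision by distance between object centers, with assumed
# bounding radii per object. Illustrative only.

def centers_collide(pos_a, radius_a, pos_b, radius_b):
    """Objects collide when their centers are closer than the sum of radii."""
    return math.dist(pos_a, pos_b) <= radius_a + radius_b

hand = ((0.0, 0.0, 0.0), 0.5)
coil = ((0.3, 0.4, 0.0), 0.2)   # center distance 0.5 <= 0.5 + 0.2
print(centers_collide(*hand, *coil))   # True
```

A check like this would run every frame for the virtual hand against nearby objects, and for a held object against the rest of the scene to block further motion on contact.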

Audio and Visual Information. The system provides audio/visual information as cognitive aids to the user. The objective of this information is to enhance the user's learning about the system and its objects. When the virtual hand touches an object in the virtual environment, information related to that object is provided to the user in both audio and textual form (see Fig. 6). This information is stored in audio/textual databases.
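A minimal sketch of such a cognitive-aid lookup, keyed by object name, is shown below. The entries, file names, and structure are illustrative assumptions; the text does not describe the databases' actual schema.

```python
# Sketch: audio/textual cue lookup triggered on touch. All entries
# and file names are illustrative assumptions.

OBJECT_INFO = {
    "primary_coil": {
        "text": "Primary winding of the 3-phase step-down transformer.",
        "audio": "primary_coil.wav",
    },
    "core": {
        "text": "Laminated iron core that carries the magnetic flux.",
        "audio": "core.wav",
    },
}

def on_touch(object_name):
    """Return (text, audio clip) to present when the hand touches an object."""
    info = OBJECT_INFO.get(object_name)
    if info is None:
        return None
    return (info["text"], info["audio"])

print(on_touch("core"))
```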
