Applications of Virtual Reality
Edited by Cecília Sik Lányi
Published by InTech
Janeza Trdine 9, 51000 Rijeka, Croatia
As for readers, this license allows users to download, copy and build upon published
chapters even for commercial purposes, as long as the author and publisher are properly
credited, which ensures maximum dissemination and a wider impact of our publications.
Notice
Statements and opinions expressed in the chapters are those of the individual contributors
and not necessarily those of the editors or publisher. No responsibility is accepted for the
accuracy of information contained in the published chapters. The publisher assumes no
responsibility for any damage or injury to persons or property arising out of the use of any
materials, instructions, methods or ideas contained in the book.
Preface
Virtual Reality technology is currently used in a broad range of applications. The best
known are games, movies, simulations, and therapy. From a manufacturing standpoint,
attractive applications include training, education, collaborative work and learning.
This book provides an up-to-date discussion of current research in Virtual Reality and its
applications. It describes the state of the art in Virtual Reality and points out the many
areas where work remains to be done. We have chosen certain areas to cover in this book
which we believe will have a significant potential impact on Virtual Reality and its
applications.
1. The book describes and evaluates the current state-of-the-art in the field of Virtual
Reality.
2. It also presents several applications of Virtual Reality in the fields of learning
environments, simulations, industrial applications, data mining and ergonomic
design.
3. Contributors to the book are leading researchers from academia and practitioners
from industry.
This book provides a definitive resource for a wide variety of people including
academicians, designers, developers, educators, engineers, practitioners, researchers,
and graduate students.
We would like to thank the authors for their contributions. Without their expertise and
effort, this book would never have been born. The InTech staff also deserves our sincere
recognition for their support throughout the project.
Finally, the editor would like to thank her husband Ferenc Sik and her sons András Sik and
Gergely Sik for their patience, and Professor János Schanda, the head of the editor's
laboratory, for giving her the freedom of research.
1. Introduction
The production paradigm has been changing since Henry Ford's "we believe that no factory is
large enough to make two kinds of products" (Ford, H., 1926). With its Scion brand, Toyota
joined the race to offer customers an increasing product variety, a trend which has
characterized the automotive industry throughout the last decades (Lee, H. et al., 2005).
This development has been driven by two factors. On the demand side, customization is
driven by the improved competitive position of companies which address individual
customers' needs (Kotler, P., 1989). On the supply side, customization strategies have been
significantly promoted, if not made possible at all, by advances in product design and
manufacturing as well as information technology (Da Silveira, G. et al., 2001). Based on these
advances it became possible to respond quickly to customer orders by combining
standardized modules and cutting down costs.
As the design of a production line is a complex and systematic project, many scholars have
applied computer-aided design to each unit of production line design. Sang Hyeok Han et al.
(S.H. Han et al., 2011) used MAXScript in 3D Studio Max to automate the visualization
process as a post-simulation tool, sharing interactive information between simulation and
visualization; the approach was applied to the production line of modular buildings, with
lean, simulation and visualization outputs in the form of animation. Thomas Volling
(Thomas Volling & Thomas S. Spengler, 2011) provided a model and simulation of
order-driven planning policies in build-to-order automobile production, comprising separate
interlinked quantitative models for order promising and master production scheduling, and
evaluated both models in a dynamic setting. Yong-Sik Kim (Yong-Sik Kim et al., 2006)
proposed a virtual reality module that uses a commercial virtual manufacturing system
instead of expensive virtual reality equipment as the viewer of an immersive virtual reality
system on a cluster of PCs, and adopts a modified simulation algorithm. Gao Chonghui
(Gao Chonghui et al., 2010) constructed a virtual simulation for automobile panels by
analyzing the motion characteristics of an automatic press line and extracting the
corresponding motion data. These models took good advantage of computer-aided design
technology for
production line design, but these methods cannot model the design process of the whole
production line and cannot perform dynamic analysis of production lines. Because the
varieties and quantities of pistons are constantly changing, the above methods can hardly
verify the running condition of piston production lines effectively and proactively.
As part suppliers for automobile assembly, piston companies face the same problems. For
example, Shandong Binzhou Bohai Piston Co., Ltd. operates more than 70 piston production
lines manufacturing pistons for cars, motorcycles, marine engines, air compressors, chillers,
engineering machinery and agricultural machinery, with sizes ranging from 30 mm to
350 mm. Due to the changing market and customized demand, up to 800 piston types are
produced annually; some piston production lines change product twice per month, some
even more than five times, and production batches range from small to mass. It is therefore
difficult to make production plans quickly under these demands with traditional production
line design methods, and advanced manufacturing technologies and methods are needed to
respond quickly to market changes and customized production.
Production activities on production lines often face design adjustments, and a well-designed
production line can reduce operating and maintenance costs and improve the equipment
capacity factor and the efficiency of the system. Virtual design can provide the model and
analysis tools to rapidly design the piston production line and, in the end, improve design
rationality (Shao Li et al., 2000).
Production lines involve multiple objects and actions with discrete, random, complex and
hierarchical characteristics. Modelling production lines is the foundation of virtual design.
Traditional simulation models mainly focused on the design of algorithms that could be
accepted by the computer, resulting in a variety of simulation algorithms and simulation
software (Zhao Ji et al., 2000; S.B. Yoo et al., 1994; H.T. Papadopolous, C. Heavey & J.
Browne, 1993; Zhang Longxiang, 2007). From the 1980s, thanks to high-level compiled
languages, structured simulation modelling made great progress. Chan and Chan
(F.T.S. Chan & H.K. Chan, 2004) presented a review of discrete event simulation (DES)
applications in scheduling for flexible manufacturing systems (FMS). Ashworth and Carley
(M.J. Ashworth & K.M. Carley, 2007) conducted a review addressing organizational theory
and modelling using agent-based simulation (ABS) and system dynamics (SD). Shafer and
Smunt (S.M. Shafer & T.L. Smunt, 2004), Smith (J.S. Smith, 2003), and Baines and Harrison
(T.S. Baines & D.K. Harrison, 1999) targeted the larger domain of operations management
and applied simulation to it. However, most reviews limited themselves to either a single
technique (DES or SD) or a single application area where more than one technique is used.
However, because the interactivity of structured simulation modelling is poor, it has not
been widely applied. With the development of object-oriented technology, object-oriented
simulation modelling has advanced rapidly. Object-oriented modelling techniques (OMT)
constitute a software approach applying classes, objects, inheritance, packages, collections,
messaging, polymorphism and other concepts. OMT maps the concepts of the problem
domain directly onto objects and the interface definitions between them, and applies
modelling, analysis and maintenance to realistic entities, so that the built model easily
reflects the real objects and is re-configurable, reusable and maintainable. It is also easy to
expand and upgrade, and can significantly reduce the complexity of systems analysis and
development costs. Many different OMT methods have been advanced, such as
OMT/Rumbaugh, OSA/Embley and OOD/Booch (Zhang Longxiang, 2007; Pär Klingstam &
Per Gullander, 1999; Dirk Rantzau et al., 1999).
But when these models are used for dynamic performance analysis of a production line, the
modelling is complex and it is difficult to describe the dynamic characteristics of the
production line quickly and easily, which imposes greater limitations. QUEST, from Deneb,
is a virtual integrated development environment for queue simulation analysis; it is well
suited to simulating and analysing the accuracy of the technological process and its
productivity, in order to improve the design, reduce risk and cost, and make the planned
production line meet the design requirements early in design and implementation, before
investing in real facilities. Combining the advantages of the QUEST virtual simulation
development environment and the Unified Modeling Language (UML), this chapter presents
simulation, analysis and modelling methods for the Virtual Design of Piston Production
Line (VD-PPL) to analyse the static and dynamic characteristics of the piston production
line.
The VD-PPL theoretical model is divided into five levels: Support, Management, Transaction,
Simulation and Decision. The design features and contents of all levels are shown in
Figure 3.1.
Fig. 3.1. Virtual simulation design frame of reconstructing piston production line
Objects communicate through a message-passing mechanism. For example, when the
simulation of the production line is running, machine tools, buffer areas, cutting tools,
measuring tools and other objects interact with each other and dynamic behaviors appear.
As is known, object modeling of the piston production line contains three parts: the
description of object relations, object behavior and object interaction, which together achieve
the mapping from reality to the virtual simulation environment of the piston production
line. Therefore, the VD-PPL modeling process is defined as follows:
1. Establish the physical model of VD-PPL
In the simulation environment, the object model is made to reflect the physical entities of
the real piston production line.
2. Establish the logical model of VD-PPL
The logical model contains a static logic model and a dynamic logic model. The static logic
model describes the internal properties, structure and behavior of the piston production
line, reflecting the static properties of all objects and their relationships.
The dynamic logic model is used to describe the dynamic behaviors and dynamic
interactions on the piston production line. It achieves the description of the line's dynamic
characteristics by adding the simulation clock, event controller and other simulation-driving
mechanisms, which can reproduce the running condition of the piston production line and
yield its simulation results.
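The simulation-driving mechanism just described (a simulation clock advanced by an event controller) can be sketched as a minimal discrete-event loop. This is an illustrative sketch, not QUEST's internal implementation; the class and event names are assumptions. The example drives a single machine that finishes one piston every 47 s:

```python
import heapq

class EventController:
    """Minimal discrete-event engine: a simulation clock plus a time-ordered event queue."""
    def __init__(self):
        self.clock = 0.0          # simulation clock (seconds)
        self._queue = []          # heap of (time, seq, action)
        self._seq = 0             # tie-breaker so the heap never compares actions

    def schedule(self, delay, action):
        """Schedule an action `delay` seconds after the current clock."""
        heapq.heappush(self._queue, (self.clock + delay, self._seq, action))
        self._seq += 1

    def run(self, until):
        """Pop events in time order, advancing the clock to each event time."""
        while self._queue and self._queue[0][0] <= until:
            self.clock, _, action = heapq.heappop(self._queue)
            action()

# Example: a machine that finishes one piston every 47 s
ctrl = EventController()
finished = []

def finish_part():
    finished.append(ctrl.clock)
    ctrl.schedule(47.0, finish_part)   # immediately start the next cycle

ctrl.schedule(47.0, finish_part)
ctrl.run(until=3600.0)                 # one simulated hour
print(len(finished))                   # 76 parts in one hour (3600 // 47)
```

The same clock-plus-event-queue structure underlies most discrete-event simulators; richer models only add more event types (loading, transport, breakdowns) around it.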
3. Behavior diagrams
These describe the dynamic model and the interactions of the objects composing the system,
and include state diagrams and activity diagrams. A state diagram describes all possible
states of an object and the transition conditions between states, and usually supplements
the class diagram; an activity diagram describes activities and the constraint relationships
between them under the requirements of the use cases, and can easily express parallel
activities.
4. Interaction diagrams
Both sequence diagrams and collaboration diagrams describe the interactions between
objects. A sequence diagram shows the dynamic cooperative relationship between objects
over time, while a collaboration diagram emphasizes the collaborative relationships between
objects.
5. Implementation diagrams
These describe the features of the system, and include component diagrams and
deployment (configuration) diagrams.
Although the UML modeling method can describe the object relations of VD-PPL well, the
modeling process is complex, and model implementation is time-consuming and difficult
when UML is used to describe the complicated, discrete object behaviors and interactive
relationships, because of their random, discrete and other characteristics. The QUEST
simulation platform, based on virtual manufacturing technology, not only supports the
physical modeling of resource objects with a good virtual visual interface, but also fully
supports object-oriented discrete/continuous event simulation, making it an important tool
for simulating and analysing the production process. Combining the advantages of QUEST
and UML, this chapter proposes the VD-PPL simulation modeling method shown in Fig. 3.2.
Fig. 3.2. VD-PPL simulation modeling: mapping between the QUEST model of the virtual
piston production line and the UML object definitions, object relationships and static
structure abstraction
In Fig. 3.2, VD-PPL simulation modeling is divided into two parts: virtual physical modeling
and virtual logical modeling. The virtual physical model is the visual appearance of the
logical model in a virtual environment, and it focuses on describing the three-dimensional
geometry corresponding to the physical entities of the real production line. The virtual
physical model is therefore the foundation of the layout design of the piston production line
and of visual simulation. It is divided into virtual static characteristics modeling and
dynamic characteristics modeling. Virtual static characteristics modeling includes the
customization of all the objects on the production line and the description of the
relationships between objects, while the virtual dynamic model describes the dynamic
behavior of each object and the interactions between objects. VD-PPL simulation modeling
focuses on establishing the virtual logical model of the piston production line.
In the VD-PPL modeling process, the following are established with QUEST: 1) the virtual
physical model mapped to the physical entities of the piston production lines; 2) the virtual
logical model of VD-PPL object relationships and object behaviors. The object definitions of
the resources, the object associations, the static structure abstraction and other processes of
the piston production lines are described with UML.
2. Logistics equipment class
Logistics equipment covers the objects that transport and store materials (Source, Sink,
Buffer, etc.). Workers (Labor) who complete the transportation and storage of materials can
also be treated as abstract logistics equipment. The logistics equipment class describes the
properties and methods of the equipment and workers that transport work pieces and
materials, as well as the reconfiguring time, cost, utilization and work piece delivery time of
logistics equipment during reorganization. Following the definition mode of the processing
equipment class, the attributes of the logistics equipment class are also divided into
physical, process, functional and state attributes. The physical attributes include equipment
number, name, cost, geometry, size, color and so on. The process attributes include
utilization, delivery times, reconfiguring time, delivery speed, acceleration/deceleration,
etc. The functional attributes include the maximum specifications and length of work pieces
or materials, the maximum transportation quantity of work pieces, the maximum work
piece capacity and so on. The UML class diagram of the logistics equipment is shown in
Fig. 3.5(b). The AGV, conveyor, robot and storage classes are derived from the logistics
equipment class.
3. Auxiliary equipment class
Auxiliary equipment mainly serves measurement and other tasks that ensure the processes
are completed smoothly and accurately. The auxiliary equipment class is the abstraction of
such equipment, and its attributes include device identification, name, size, cost, measured
items, measurement accuracy, measuring time, reconfiguring time and so on. Its behavioral
methods obtain the cost, the reconfiguring time and the measuring time of the auxiliary
equipment. The UML class diagram of the measuring equipment class is shown in
Fig. 3.5(c). According to the different measured items, measuring device, roughness
measurement, sensor type and other classes are derived from the auxiliary equipment class.
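The class hierarchy described in the preceding sections can be sketched in code as follows. This is a simplified illustration of the OMT idea, not the actual QUEST implementation; the attribute names are condensed versions of those listed above:

```python
class Equipment:
    """Base class: attributes shared by all resource objects."""
    def __init__(self, ident, name, size, cost):
        # physical attributes
        self.ident, self.name, self.size, self.cost = ident, name, size, cost
        self.state = "idle"              # state attribute

    def reconfiguring_time(self):        # overridden by each equipment type
        raise NotImplementedError

class LogisticsEquipment(Equipment):
    """Transport/storage of work pieces and materials."""
    def __init__(self, ident, name, size, cost, speed, capacity, reconfig_time):
        super().__init__(ident, name, size, cost)
        self.speed = speed               # process attribute: delivery speed (m/s)
        self.capacity = capacity         # functional attribute: max work pieces
        self._reconfig_time = reconfig_time

    def reconfiguring_time(self):
        return self._reconfig_time

    def delivery_time(self, distance):
        return distance / self.speed

class AGV(LogisticsEquipment):           # derived class, as in Fig. 3.5(b)
    pass

class AuxiliaryEquipment(Equipment):
    """Measuring and other auxiliary devices, as in Fig. 3.5(c)."""
    def __init__(self, ident, name, size, cost, measure_time, reconfig_time):
        super().__init__(ident, name, size, cost)
        self.measure_time = measure_time
        self._reconfig_time = reconfig_time

    def reconfiguring_time(self):
        return self._reconfig_time

agv = AGV("A1", "AGV-1", 1.2, 50000, speed=1.0, capacity=4, reconfig_time=300)
print(agv.delivery_time(30.0))           # 30.0 s for a 30 m route
```

Inheritance gives exactly the reusability and reconfigurability argued for above: a new conveyor or sensor class only adds what differs from its parent.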
Fig. 3.5. UML class diagrams: a) manufacturing equipment class; b) logistics equipment
class; c) detecting auxiliary equipment class
The attributes of the control class contain the controller ID, the number of control objects
and the logical calculation priority. Its behavioral methods include the definition of
initialization logic, processing logic, part routing logic, resource selection logic and other
selection modes. The meanings of the logical models are shown in Table 3.1, and the logical
hierarchy of the control class is shown in Fig. 3.7.
Processing logic defines the sequence of processing objects and the proportional relations of
process object handling. Routing logic primarily defines the model of the bottom objects of
the work piece. Queuing logic mainly defines the queuing method.
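These logics behave like interchangeable strategies attached to a controller. A hedged sketch (the function names, the sample process plan and the "least loaded" rule are illustrative assumptions, not QUEST's built-in logics):

```python
from collections import deque

# Queuing logic: how waiting work pieces are ordered (FIFO here; LIFO or
# priority queues would be alternative queuing strategies).
def fifo_queue():
    return deque()

# Routing logic: pick the destination machine for a work piece.
def route_to_least_loaded(part, machines):
    return min(machines, key=lambda m: len(m["queue"]))

# Processing logic: fixed sequence of operations per part type (illustrative).
PROCESS_PLAN = {"piston_100": ["rough_turn", "finish_turn", "bore_pin_hole"]}

machines = [
    {"name": "M1", "queue": fifo_queue()},
    {"name": "M2", "queue": fifo_queue()},
]
machines[0]["queue"].extend(["p1", "p2"])      # M1 already has two parts waiting

target = route_to_least_loaded("p3", machines)
print(target["name"])                           # M2 (shorter queue)
```

Because each logic is a separate, swappable function, a controller can be reconfigured for a new piston type without touching the physical model, which is the point of separating control logic from the equipment classes.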
Fig. 3.8. UML class diagram of equipment logic and AGV/Labor controller logic
The equipment logic is mainly used in loading, processing and unloading the work piece,
for which the equipment needs to complete judgment and decision-making. Its UML class
diagram is described in Fig. 3.8(a). AGV/Labor control logic means that the AGV/Labor
controller sends logic instructions to the AGV/Labor when production resources (such as
equipment) are set on the AGV/Labor; its UML model is shown in Fig. 3.8(b).
The piston production line also includes other logic, such as Initial Logic, Request Input
Logic, Part Input Logic, Request Selection Logic, etc.
The object-oriented modeling method is applied to establish the dynamic model of the
piston production line, so as to simulate and model its dynamic characteristics effectively.
Fig. 3.9. UML state diagram of machine tool when piston is processed
The object relation, behavioral control and object interaction models are established with
UML sequence and collaboration diagrams to fully describe the association relationships of
the various objects during piston processing. Figure 3.10 describes the UML sequence
interaction diagram of the labor controller, labor, buffer and machine tool objects.
After analyzing the interaction behavior of the dynamic models, the virtual dynamic model
of the PPL is established in QUEST; the modeling steps are shown in Fig. 3.11.
Fig. 3.10. UML sequence interaction diagram of all the objects in piston conveying

1. Establishing the physical model
Building the virtual physical model is the basic task of VD-PPL. The physical model is an
abstract description of the real piston production line in a virtual environment. According
to the geometry, the virtual physical models of the resource objects are established with
three-dimensional geometric modeling functions (or with other 3D solid modeling software
and transferred to the QUEST integrated environment through a graphical interface). The
virtual physical model is established as follows:
1. Determine all the equipment modules of the piston production line.
2. According to the geometry, shape and size of the equipment, establish a virtual
physical model library, which makes the physical models reconfigurable and reusable
in the QUEST environment.
3. According to the physical entities of the resource objects, carry out the layout of the
piston production line.
2. Establishing the virtual logic model of VD-PPL
Abstracting from the virtual physical model to the virtual logical model requires completing
the following tasks:
1. Complete the abstract definition and description of the attributes and behaviors of the
virtual physical model. These attributes are used, on the one hand, for simulation and,
on the other hand, for equipment management and the reconfiguration of objects.
2. Define the virtual process model associated with the virtual physical model, so that the
established resource objects are related, the object model has dynamic behaviors, and
virtual machining simulation of the piston production process is achieved.
3. Define the virtual control logic model associated with the virtual physical model, so
that the established model can describe the logical relationships of all object behaviors
and make the dynamic behavior of the model occur orderly and cyclically.
After these steps are finished, the physical model of the production line is turned into a
virtual logic model. The virtual model, with its process and logic, can be used for visual
simulation and dynamic analysis of the piston manufacturing process, supported by the
system simulation strategy and simulation clock.
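The three steps above (physical model, process model, control logic) can be illustrated with a deliberately tiny serial-line sketch. All names, station times and the unbounded-buffer assumption are illustrative; this is not the VD-PPL model itself:

```python
# Step 1 - physical model: resource objects and their layout (geometry elided).
physical_model = {
    "M1": {"type": "lathe", "pos": (0, 0)},
    "B1": {"type": "buffer", "pos": (2, 0)},
    "M2": {"type": "borer", "pos": (4, 0)},
}

# Step 2 - process model: the route a piston follows and the time (s) at each stop.
process_model = [("M1", 40.0), ("B1", 0.0), ("M2", 47.0)]

# Step 3 - control logic: a plain serial rule - a part proceeds to the next
# resource as soon as that resource is free (unbounded buffers assumed).
def simulate(route, horizon):
    """Count parts that complete the whole route within `horizon` seconds."""
    free_at = {name: 0.0 for name, _ in route}   # when each resource is next free
    done = 0
    while True:
        t = 0.0                                  # a new part is always ready at the source
        for name, dur in route:
            t = max(t, free_at[name]) + dur      # wait for the resource, then occupy it
            free_at[name] = t
        if t > horizon:
            break
        done += 1
    return done

print(simulate(process_model, horizon=3600.0))   # 75 parts in the first hour
```

Note that the 47 s station paces the whole line: after the first part clears M1, output arrives every 47 s, which is exactly the bottleneck behavior analysed later in this chapter.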
of the roof surface, fine boring of the pin hole, rolling of the pin hole, and boring of the
decompression chamber of the pin hole. The parameters of all the processes are shown in
Table 4.1.
Because the piston type on one production line may change 3 to 5 times monthly, the
traditional piston production line design method cannot respond rapidly, and the designed
process times are unbalanced, resulting in serious work-in-process accumulation in the line
and seriously unbalanced labor and machine utilization. Aimed at these problems, the
re-design goals are set as follows: the piston diameter is between 100 and 130 mm, the
monthly production capacity of the line should be no less than 22,000 pieces (two shifts), the
cycle time should be no more than 47 s, the operators should number no more than 9, and
the machine tools no more than 16.
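These targets can be cross-checked with a quick capacity calculation. The shift length (8 h) and working days per month (22) are assumptions not stated in the text:

```python
cycle_time = 47          # s, maximum allowed cycle time
shift_hours = 8          # assumed shift length
shifts_per_day = 2       # two shifts, as stated in the design goals
working_days = 22        # assumed working days per month

daily_output = shifts_per_day * shift_hours * 3600 // cycle_time
monthly_output = daily_output * working_days
print(daily_output, monthly_output)   # 1225 26950
```

Under these assumptions a 47 s cycle yields about 26,950 pieces per month, so the 22,000-piece target leaves some margin for changeovers and downtime.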
a) Virtual boring machine model BH30; b) virtual boring machine model BH20
the middle part (these stations are reserved for some special processing procedures, but
those machines have now been moved to special equipment areas). As the machine pitch is
too large and the machine positions are not optimized, area is wasted in the line, and
walking distance and labor intensity increase.
Fig. 4.2. Virtual layout and the connection way of process logic of equipment
Buffer    B1   B2   B3   B4   B5   B6   B7   B8   B9   B10
1 hour    14    0    9    0    0   13    1    1    0    1
1 shift   90    0   58    1   97   85    0    0    0    0
1 week   440    0  270    0  469  436    1    1    0    0
Table 4.2. Accumulated number of parts in each buffer
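The large week-scale accumulations in B1, B3, B5 and B6 reflect a steady rate mismatch between the processes upstream and downstream of those buffers. The implied drift per hour can be estimated from the table (the 80 working hours per week, two 8-hour shifts over five days, is an assumption):

```python
# Parts accumulated in selected buffers (values taken from Table 4.2)
acc = {"B1": {"1 hour": 14, "1 shift": 90, "1 week": 440},
       "B5": {"1 hour": 0,  "1 shift": 97, "1 week": 469}}

WEEK_H = 80   # assumed: 2 shifts x 8 h x 5 days of production per week

# Steady-state accumulation rate = parts piled up per working hour
rates = {buf: row["1 week"] / WEEK_H for buf, row in acc.items()}
print({b: round(r, 1) for b, r in rates.items()})   # {'B1': 5.5, 'B5': 5.9}
```

A sustained pile-up of five to six parts per hour confirms that the stations feeding these buffers run persistently faster than the stations draining them, consistent with the bottleneck diagnosis that follows.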
Figure 4.13 shows the relationship between the process time of each machine and the real
cycle time. There is a sudden change of output between the processes operated by workers
L1 and L2; the same change occurs between L3 and L4, L5 and L6, and L7 and L6. It is also
shown that the cycle time differs between processes: the process operated by L1 is the
shortest, about 35 s, which means that this process has great redundancy. Sudden changes
of process time occur at the fine/rough turning of the combustion chamber operated by L6
and the finish turning of the cylindrical surface operated by L7; those process times are
closest to the entire cycle time, which suggests that these processes may be the most serious
bottlenecks in the system.
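The diagnosis above amounts to comparing each station's process time with the line cycle time: the slowest station paces the line, and the gap to the cycle time measures each station's redundancy. In the sketch below only the 35 s figure for L1 comes from the text; the other process times are illustrative placeholders:

```python
# Process times per station (s); 35.0 for L1 is from the text, the rest are
# illustrative placeholders, not measured values.
process_time = {"L1": 35.0, "L2": 41.0, "L3": 44.0, "L6": 46.5, "L7": 46.8}

cycle_time = max(process_time.values())        # the slowest station paces the line
bottleneck = max(process_time, key=process_time.get)
slack = {k: cycle_time - v for k, v in process_time.items()}

print(bottleneck, cycle_time)                  # L7 46.8
print(round(slack["L1"], 1))                   # 11.8 s of redundancy at L1
```

Stations whose slack is near zero (L6, L7 here) are the bottleneck candidates, which is exactly the pattern reported for the combustion chamber and cylindrical turning processes.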
Fig. 4.13. Part output, cycle time (s) and process time (s) of each process versus simulation
time (s)
3. Equipment utilization
Equipment utilization reflects the load of each machine. Machine utilization under different
simulation times is shown in Fig. 4.14. When the line is in a steady state, machine utilization
does not change over time on the whole, but the differences in utilization between the
machine tools are large; that is, the balance of this production line is very poor. The
utilizations of M1, M3, M4, M6, M7, M8, M10, M11 and M12 are much higher, and the
utilization of the finish turning of the cylindrical surface machined by M11 and M12 is the
highest, up to 94.6%. Considering the output of the whole production line, the cycle time
and the machine utilizations, it can be diagnosed that the finish turning of the outside
round is the most serious bottleneck process. When adjusting and redesigning the
production line, the adjustment should begin from this process. Constrained by the
bottleneck process, the utilizations of M13-M16 are relatively low. If the bottleneck
processes are adjusted, the utilizations of M13-M16 can be significantly improved,
achieving logistics balance and increasing the output of the whole production line.
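The poor balance described here can be quantified with the usual indicators: the mean utilization across machines and the spread between the busiest and idlest machine. In this sketch only the 94.6% value for M11/M12 is from the text; the low post-bottleneck values are illustrative:

```python
# Utilization in percent; M11/M12 from the text, M13/M14 illustrative of the
# low values reported downstream of the bottleneck.
utilization = {"M11": 94.6, "M12": 94.6, "M13": 52.0, "M14": 48.0}

mean_u = sum(utilization.values()) / len(utilization)
# Imbalance indicator: spread between the busiest and idlest machine
imbalance = max(utilization.values()) - min(utilization.values())

print(round(mean_u, 1), round(imbalance, 1))   # 72.3 46.6
```

A spread above 40 percentage points signals a badly balanced line: capacity is trapped at the bottleneck while downstream machines starve, matching the M13-M16 observation above.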
Fig. 4.14. Equipment utilization (%) of machines M1-M16 at 1 hour, 1 shift and 1 week
simulation times
Fig. 4.15. Labor utilization (%) of workers L1-L11 at 1 hour, 1 shift and 1 week simulation
times
4. Labor utilization
Labor utilization reflects the labor intensity during piston manufacturing. The utilization of
each worker at different simulation times is shown in Fig. 4.15. The labor utilization of L2
(responsible for the finish turning of the ring groove) is the highest, up to 23.9%, followed
by that of L5. However, the labor utilization of the bottleneck process operated by L7
(responsible for the finish turning of the cylindrical surface) is lower, because the percentage
of machine processing time is larger, which decreases the percentage of the worker's
operating time. Under this condition, this worker is often not allowed to operate other
machines, in order to guarantee the machining precision of the bottleneck process. The
labor utilization of L10 (responsible for the pin-hole rolling process) is also low, which
indicates that this worker has the capacity to operate other machines, allowing the number
of workers to be reduced.
The analysis of the cycle time, the balance of the process times, and the utilizations of
machines and labor shows that the most serious bottleneck process in this production line is
the fine turning of the cylindrical surface, followed by the rough and fine turning of the
combustion chamber. The production line's performance is improved evidently after taking
the following optimization measures:
1. Increase the feed of the finish turning of the cylindrical surface. Increasing the feed to
0.12 mm/r saves 22.7 s of process time.
2. Combine the fine and rough turning of the combustion chamber to reduce the auxiliary
time, saving 9.5 s. The process time of the combined fine and rough turning of the
combustion chamber is then 72.06 s, but the process needs two devices.
Fig. 4.17. Walking distance (m) of workers L1-L11 before and after optimization
With the above methods, the virtual dynamic logic model was modified and the redesigned
model was simulated again. The piston production per shift on this line increases from 336
to 516, and the most serious bottleneck processes are weakened. Figures 4.18 and 4.19 show
the machine and labor utilization before and after eliminating the bottleneck. After taking
the bottleneck-reducing measures, the machine and labor utilization of each process after
the bottleneck is improved, while the utilizations of machine and labor before the
bottleneck are reduced, moving the production line towards balance.
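The reported gain, from 336 to 516 pistons per shift, corresponds to roughly a 54% increase in throughput; the implied effective cycle times can be checked directly (the 8-hour shift length is an assumption):

```python
before, after = 336, 516                 # pistons per shift (from the text)
shift_s = 8 * 3600                       # assumed 8-hour shift, in seconds

gain = (after - before) / before * 100   # percentage throughput improvement
cycle_before = shift_s / before          # effective seconds per piston, before
cycle_after = shift_s / after            # effective seconds per piston, after

print(round(gain, 1))                                  # 53.6
print(round(cycle_before, 1), round(cycle_after, 1))   # 85.7 55.8
```

Under the assumed shift length, the effective cycle time drops from about 85.7 s to about 55.8 s per piston, consistent with the bottleneck relief described above.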
Fig. 4.18. Utilization (%) of machine tools M1-M16 before and after optimization
Fig. 4.19. Utilization (%) of labor L1-L11 before and after optimization
5. Conclusions
Compared with the traditional design method for production lines, virtual design of the
production line integrates many advanced technologies and applies unified manufacturing
process modeling, analysis, dynamic optimization and other advanced design tools to
obtain multiple-objective optimal results. Meanwhile, it is easier to obtain the corresponding
design and decision-making data. Based on the simulation results, methods to redesign the
real production line's parameters can be found quickly in the early design stage, so this
design method is a promising and powerful tool for manufacturing planning.
With object-oriented technology, combining the advantages of QUEST and UML, the
simulation model of the piston production line is established, mapping from the real to the
virtual environment with UML class diagrams of the physical equipment, process, logic
control and system interaction classes. The virtual model of this production line is divided
into two steps: static modeling, which describes the hierarchy of the various resource
objects in the piston production line and other static characteristics, and dynamic modeling,
which is defined with behavioral and control logic processes.
With this virtual design method, an instance of the piston production line is used not only
to present the virtual design procedure but also to compare design results quickly with the
virtual model. The simulation results show that the optimized production line can greatly
reduce labor intensity, improve equipment utilization, decrease the layout area and make
all processes more balanced. With this method, all design procedures are completed in a
virtual environment with a short design cycle before the real production line is carried out.
This makes it easy to avoid wasting resources and makes the system design more reliable
and effective during the initial planning of the piston production line, saving design cost
and time and improving design success.
6. References
Da Silveira, G., Borenstein, D. & Fogliatto, F.S. (2001). Mass customization. International
Journal of Production Economics, Vol. 72, No. 1, (June 2001), pp. 1-13. ISSN 0925-5273
1. Introduction
This book focuses on virtual reality. In the context of design, virtual reality is an emerging
technology that not only allows designers and other stakeholders to gain a three-
dimensional appreciation of the artifact being designed, but also has the potential to
significantly alter the manner in which design occurs. Internet-based technologies have
made it possible for designers in different locations to collaborate in developing and refining
their designs. Virtual reality has contributed to this environment (Maher, 2005) by allowing
designers in geographically-dispersed locations to interact with each other. Software
applications have been developed to assist and facilitate these collaborative activities
(including Shyamsundar and Gadh (2001) and Lau, Mak and Lu (2003)) but comparatively
speaking, little research has been conducted into the people-related issues of collaboration
via the Internet. Some of these are the issues addressed in this chapter.
Recent developments in virtual communication technologies have the potential to
dramatically improve collaboration in the construction industry (Gameson & Sher, 2002).
Furthermore, virtual teams hold significant promise for organizations that implement them because they "enable unprecedented levels of flexibility and responsiveness" (Powell, Piccoli, & Ives, 2004, p. 6). Some authors observe that virtual teams are here to stay (Bell & Kozlowski, 2002) and that organisations will be forced to embrace virtual collaboration to enhance their competitiveness (Abuelmaatti & Rezgui, 2008, p. 351). Indeed, current research proposes that "(g)lobally disbursed project teams are the new norm in every industry today" (Daim et al., 2012). However, the skills required to work productively in virtual environments have been theoretically defined but not assessed in the real world.
Indeed, many of the studies that have been conducted into virtual teamwork (e.g. Hatem, Kwan and Miles (2011) and Rezgui (2007)) have involved tertiary-level students. Abuelmaatti and Rezgui (2008) consider that the challenges of virtual teamwork in the real world substantially outweigh the relative ease with which academics can research and develop virtual team solutions. Furthermore, the differences between virtual and face-to-face teamwork mean that an overt and explicit effort is needed to design new work processes to make it successful (Nunamaker, Reinig, & Briggs, 2009).
Our studies were part of a project which examined the use of information and computer
technologies (ICTs) to facilitate design / construction team interactions. They were funded
by the Australian Cooperative Research Centre for Construction Innovation (Maher, 2002)
and focused on the early stages of design / construction collaboration where designs for a
building are created, developed and revised. Three aspects of collaboration in virtual
environments were investigated: (i) the technological processes that enable effective
collaboration using these technologies; (ii) the models that allow disciplines to share their
views in a synchronous virtual environment; (iii) the generic skills used by individuals and
teams when engaging with high-bandwidth ICT. The last of these strands was investigated by the authors and is reported on here. Details of the other strands of this project may be found at the project website (Maher, 2002) and in other publications (Bellamy, Williams, Sher, Sherratt, & Gameson, 2005; Sherratt, Sher, Williams, & Gameson, 2010).
2. Virtual teamwork
There are numerous definitions of teams. For this paper teams are defined as "a cluster of two or more people usually occupying different roles and skill levels that interact adaptively, interdependently, and dynamically towards a common and valued goal" (Salas, Shawn Burke, & Cannon-Bowers, 2000, p. 341). At present the term "virtual teams" is used by different authors to mean different things. A more detailed exploration of the various facets of virtual teams is provided by Dubé and Paré (2004) and is summarised in Table 1. A number of other researchers have outlined the characteristics of or factors relating to virtual teams, e.g. Berry (2011); Schumacher, Cardinal and Bocquet (2009) as well as the
3. Generic skills
There is still much discussion about the core set of knowledge, skills and attitudes that
constitute teamwork (Salas, et al., 2000). We sought to contribute to this debate by
identifying the skills that transferred from a traditional face to face (F2F) environment and
the ones that required refining for virtual environments. Furthermore, we wished to identify
if virtual teamworkers needed any new skills. As a starting point, we investigated the
generic skills workers acquire and use on a daily basis. Generic skills are defined by Salas et al. (2000, p. 344) as the knowledge, skills and attitudes that a team member possesses when completing a task or communicating with fellow members, whether in a co-located or virtual environment. Generic skills influence both individuals and teams; they are skills which are "transportable and applicable across teams" (Salas, et al., 2000, p. 344). A review of generic skills (Cannon-Bowers, Tannenbaum, Salas, & Volpe, 1995) was used to identify those which are used by design team members and is summarised in Table 2.
To examine the skills designers use it is necessary to understand the content of their
interactions. A number of techniques facilitate such insights including Protocol Analysis and
Content Analysis. Protocol Analysis attempts to infer cognitive processes by examining
verbal interactions (Ericsson & Simon, 1993) but has been found to be a limited means of identifying non-verbal design cognition. Even where some comparisons are discovered, a large degree of interpretation is required (Cross, Christiaans, & Dorst, 1996). The subjectivity of the analysis and the length of time required to complete it also call into question the appropriateness of this method.
Content Analysis, according to Wallace (1987), involves coding transcripts of communications in terms of frequency analyses because the underlying assumption is that "the verbal content produced by the individual is representative of the thought processes at work in his or her mind" (p. 121).
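As a minimal illustration of this frequency-based approach, coded utterances can be tallied per behavioural code. The transcript, tags and codes below are invented examples, not data from the study.

```python
from collections import Counter

# Hypothetical coded transcript: each utterance carries a behavioural-marker
# code assigned by a rater (utterances and code assignments are illustrative).
coded_transcript = [
    ("Here's the plan for the ground floor.", "A11"),
    ("I've updated the sketch with the new wing.", "B21"),
    ("Can you send me the site survey?", "C11"),
    ("Let's go with the curved facade.", "D11"),
    ("Can you send the client brief too?", "C11"),
]

# Frequency analysis: how often each code occurs across the session.
frequencies = Counter(code for _, code in coded_transcript)
print(frequencies["C11"])  # → 2
```

The resulting counts per condition are what later feed the statistical comparisons of behaviour frequencies across the three settings.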
Several content analysis techniques were used to identify and interpret these thought
processes and thereby to investigate the generic skills our participants used. We explored
micro-level communication processes because these can provide valuable insights to
managers and researchers alike about "how to read the health of teams" (Kanawattanachai & Yoo, 2002, p. 210). We identified quantitative content analysis as an effective means of identifying the generic skills of designers. This necessitated the development of a framework by which our data could be coded. Behavioural marker studies (Klampfer, et al., 2001; Carthey, de Leval, Wright, Farewell, & Reason, 2003) provided a template for our generic skills coding framework. Behavioural markers are "observable non-technical aspects of individual and team performance" (Carthey et al., 2003, p. 411) which are related to the effectiveness of an individual and team. The methods for creating behavioural markers informed the development of our framework. In accordance with Klampfer et al.'s (2001) recommendations, we devised a system that provided simple, clear markers, used appropriate professional terminology, and emphasised observable behaviours rather than
In addition to the Generic Skills analysis presented here, three other techniques were used to
analyse the data:
1. Bales's Interaction Process Analysis (Bales, 1951) - to analyse the interactions between
design team members, so that aspects such as decision-making, communication and
control could be examined.
2. A Communication Technique Framework (Williams & Cowdroy, 2002) to investigate
the techniques which the designers used to communicate.
3. Linguistic analyses - to evaluate the communication occurring in teamwork. The
approach adopted was derived from systemic functional linguistic theory (Halliday &
Matthiessen, 2004).
The aims of this study were to identify and examine the generic skills which facilitate
teamwork in three settings, ranging from face-to-face to 3D virtual environments. The
teamwork which we studied occurred during the conceptual stages of designing
construction projects.
3.2 Participants
It is often the case that design team members are drawn from different
backgrounds/cultures, ages, and experience (Marchman III, 1998), especially in multi-
disciplinary design teams collaborating on an entire project. Stratified purposive sampling
(Rice & Ezzy, 1999) was therefore used to select a heterogeneous group of ten participants.
This method of sampling ensured that the diversity of the participants was reflective, as far
as possible, of the actuality of design teams in the real world. Participants were both male
and female, of varying ages, cultures and had differing levels of experience and influence,
ranging from higher management to junior staff. Due to constraints imposed by the funding
body, recruitment of participants was limited to organisations within the Cooperative
Research Centre for Construction Innovation (CRC-CI). The pool of eligible participants was
further constrained by work pressures eventually resulting in participants being recruited
solely from the discipline of architecture.
3.3 Task
Data were collected in three experimental conditions:
Traditional face-to-face collaborative design between the design team members
(including interactions such as talking and sketching).
                                 DEGREE OF COMPLEXITY
                                 LOW <------------------------------> HIGH
Degree of reliance on ICT        Low            Varies         High
ICT availability                 High           X              Low
Members' ICT proficiency         High           X              Low
Team size                        Small          X              Large
Geographic dispersion            Local          X              Global
Task or project duration         Long term      X              Short term
Prior shared work experience     Extensive      X              None
Members' assignments             Full-time      X              Part-time
Membership stability             Stable         X              Fluid
Task interdependence             Low            X              High
Cultural diversity               Homogeneous    Varies         Heterogeneous
Table 3. Typical characteristics of the virtual teams engaged in this project (adapted from Dubé & Paré, 2004)
All tasks were conducted in an identical sequence (i.e. participants first worked face-to-
face, then used a whiteboard and finally designed in the 3D virtual world). This
procedure was prescribed by our research directorate and was designed primarily for the
first two strands of our overall research project (Maher, 2002). We are conscious that
participants may have become familiar with aspects of the tasks that they were asked to
complete, and may also have become fatigued (Pring, 2005). As the designers gained
experience of working together, one would assume they would be able to work more
effectively over time. If this is so, their final collaboration would have been the optimal one
and this would have occurred when they were designing in the 3D virtual world.
Conversely, if they had become fatigued or bored, their last task performed would have
been the one most affected. It is thus not possible to determine whether sequence affected
the outcomes of this research.
The resulting scores were statistically analysed using a repeated-measures ANOVA parametric test to establish the differences between participants' performance on the three tasks (traditional face-to-face design, virtual design using an electronic whiteboard, and
virtual design using a high bandwidth 3D virtual world) (Riedlinger, Gallois, McKay, &
Pittam, 2004). The results of the ANOVA tests were interpreted using Mauchly's Test of
Sphericity which examines the covariance of the dependent samples. The data were also
examined to determine which shift in condition (i.e. face-to-face to whiteboard or
whiteboard to 3D virtual world) was responsible for any significance. SPSS Version 12 was
used for all statistical analyses.
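For readers without SPSS, a one-way repeated-measures ANOVA of this kind reduces to partitioning sums of squares across conditions, subjects and error. The sketch below uses invented behaviour-frequency scores for three teams across the three conditions, not data from the study.

```python
# Hypothetical scores: team -> frequencies in
# (face-to-face, whiteboard, 3D virtual world) order.
data = {
    "team1": [6, 4, 2],
    "team2": [8, 5, 2],
    "team3": [7, 6, 5],
}

scores = list(data.values())
n = len(scores)          # number of subjects (teams)
k = len(scores[0])       # number of conditions
grand = sum(sum(row) for row in scores) / (n * k)

cond_means = [sum(row[j] for row in scores) / n for j in range(k)]
subj_means = [sum(row) / k for row in scores]

# Partition the total sum of squares into condition, subject and error terms.
ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
ss_total = sum((x - grand) ** 2 for row in scores for x in row)
ss_error = ss_total - ss_cond - ss_subj

df_cond, df_error = k - 1, (k - 1) * (n - 1)
F = (ss_cond / df_cond) / (ss_error / df_error)
print(round(F, 2))  # → 12.0
```

Removing the subject (team) sum of squares from the error term is what distinguishes the repeated-measures design from a between-subjects ANOVA, and is why the same teams must appear in every condition.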
Inter-rater reliability between the two raters was established for the generic skills coding scheme on a
35-minute session using Noldus Observer Pro. Point-by-point agreement was 81% and 80% on
the frequency of coding strings and frequency and sequence of the coding strings, respectively.
These were both at or above the minimum acceptable level of 80% (Kazdin, 1982).
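Point-by-point agreement of the kind reported above is simply the proportion of coding positions on which the two raters assign the same code. The coding strings below are invented examples.

```python
# Hypothetical coding strings from two raters over the same ten utterances.
rater_a = ["A11", "B21", "C11", "D11", "C11", "B21", "A11", "D11", "B52", "C11"]
rater_b = ["A11", "B21", "C11", "D11", "B33", "B21", "A11", "D11", "B52", "B21"]

# Count positions where both raters assigned the same code.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
agreement_pct = 100 * agreements / len(rater_a)
print(agreement_pct)  # → 80.0
```

At 80%, this invented example just meets the minimum acceptable level cited above (Kazdin, 1982); the sequence-sensitive variant additionally requires the codes to agree in order.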
4. Results
4.1 Generic skills
The generic skill Shared Situational Awareness increased significantly (F(2, 8) = 4.903, p < .05).
The Within-Subject Contrasts test indicated a significant difference between face-to-face and
whiteboard conditions (F(1, 4) = 19.478, p < .05).
For the skill of Decision Making, there was a significant decrease (F(2, 8) = 42.431, p < .001) in
frequency as the design conditions moved from low to high bandwidth conditions. The
Within-Subject Contrasts test demonstrated a significant difference between both the face-
to-face to whiteboard and whiteboard to 3D virtual world (F(1, 4) = 120.274, p < .001 and
F(1, 4) = 8.685, p < .05 respectively).
For the skill of Task Management, the decrease in frequency from face-to-face to whiteboard
approached significance (F(1, 4) = 4.799, p < .1).
A11 (Outlines and describes the plan/brief for the design indicative of Task
Management). There was a significant decrease (F(2, 8) = 9.021, p < .05) in the incidence
of this behaviour from low to high bandwidth levels. The Within-Subjects Contrasts
test indicates that the move from face-to-face to whiteboard was significant (F(1, 4) =
7.943, p < .05).
Fig. 2. Frequency of significant observable behaviours A11, B21, B33, C11, D11, in 3 conditions
B21 (Gives updates and reports key events, demonstrating Team Working). This
behaviour increased significantly (F(2, 6) = 6.343, p < .05) as the design process moved
from low to high bandwidth. Furthermore, the difference between face-to-face and
whiteboard conditions was significant (F(1, 3) = 16.734, p < .05).
B33 (States case for instruction and gives justification, also demonstrating Team
Working). The movement from low to high bandwidth demonstrated a significant
decrease in this behaviour (F(2, 6) = 5.362, p < .05). The difference between whiteboard and 3D virtual world approached significance (F(1, 3) = 5.642, p = .098).
C11 (Asks for documents and/or information regarding a design indicating Shared
Situational Awareness). This increased significantly as the design process moved from
low to high bandwidth (F(2, 8) = 5.526, p < .05). The Within-Subjects Contrasts test
showed a significant change (F(1, 4) = 15.751, p < .05) for the shift from face-to-face to
whiteboard conditions.
D11 (Discusses design options with clients/other designers demonstrating Decision-
Making). As the design collaborators shifted from low to high bandwidth, the frequency
of the behaviour decreased significantly (F(2, 8) = 25.383, p < .001). In addition,
significant differences were also found between face-to-face and whiteboard and
whiteboard and 3D virtual world (F(1, 4) = 46.24, p < .05 and F(1, 4) = 8.095, p < .05,
respectively).
In addition, two other statistical results from the generic skill Team Working are worth
noting:-
B23 (Communicates design plans and relevant information to relevant members). The Within-Subjects Contrasts test indicates that the mean frequency of B23 reduced significantly (F(1, 4) = 23.774, p < .05) between the face-to-face and whiteboard conditions.
B52 (Reassures/Encourages) was the only observable behaviour that approached significance (F(2, 8) = 3.462, p < .1). The decrease in frequency of this behaviour between whiteboard and 3D virtual world (F(1, 4) = 5.956, p < .1) is also approaching significance.
Changes in the incidence of the remaining observable behaviours were non-significant, in some cases due to limited or non-existent data.
5. Discussion
This study examined the generic skills of five design teams in three settings: face-to-face and
two levels of virtual technology (viz. whiteboard and 3D virtual world). The behaviours
underpinning the generic skills designers use during the conceptual stages of a variety of
projects were recorded and analysed. The major findings were a significant increase in the
frequency of Shared Situational Awareness and a significant decrease in Decision Making as
bandwidth conditions increased.
process of working in a face-to-face team. This increase may be due to the challenge of
deciphering the ambiguity of remote communication (Nunamaker, et al., 2009). In addition,
Berry (2011) reported that social communication in virtual environments tends to occur
more slowly at first. Therefore, even if the amount of communication is similar, the rate may
be different. Further research into team communication in these environments may
elucidate this issue.
Virtual teams have a greater risk of communication breakdown due to the difficulties of
establishing shared context of meaning (Bjørn & Ngwenyama, 2009). This breakdown can
cause substantial difficulties as team members struggle to communicate and work with each
other. This may also increase project delivery risks (Daim, et al., 2012). This increased need
to establish a shared awareness suggests that design collaborators became unsure of their
interpretation of communication and so requested additional confirmation. We suggest that
design collaborators need to supply more detailed descriptions of what they are proposing
or attempting to do and continually relate this to the specific task at hand. Additionally,
Nunamaker et al. (2009) have also recommended having clear rules and expectations when
using certain types of technology and also having a clear definition of effective work
completion. Virtual environments make it possible to communicate but the efficiency of
such interactions and the level of shared understanding between individuals is not always
assured. A way to enrich such communications is to use multiple communication channels
or modes simultaneously (Gay & Lentini, 1995; Kayworth & Leidner, 2000). Instead of
relying on a single mode of communication it is advantageous to support such
communication with artefacts, such as sketches, as designers do in face-to-face situations.
Verbal commentary is another way to enhance virtual communication. Where these
environments support audio communication, verbal commentary and / or explanation
provides valuable supplementary support. Berry (2011) suggests that virtual team members
should be encouraged to seek out information when misunderstandings occur. We also
recommend that multiple modes of communication be used concurrently to increase shared
understanding between design team members in virtual conditions.
not be acknowledged and / or explored, and that as a consequence, the quality of their
solutions may suffer. It is therefore important for designers working in virtual contexts to
recognise the potential limitations of their solutions, and to challenge the proposals of their
colleagues.
6. Limitations
The following are the main limitations of this study:
Whilst the number of interactions analysed was large, the number of design teams
analysed was relatively small (5). Each set of design tasks took 3.5 to 4 hours (including
training and preparation) and proved challenging to organise. The fact that only five
design teams took part is indicative of the difficulties involved in arranging the
sessions. Although the number of teams was relatively small, the use of purposive
sampling has permitted an exploration of the diverse nature of design teams.
The data were collected under laboratory conditions. Because of confidentiality and
logistics, it was not possible to video designers working at their normal place of work,
nor was it possible to record their work on real-life design projects. Although the
designs the participants were asked to work on were fictitious, they represented
realistic design projects. It is difficult to determine the relative differences in complexity
between the five projects provided.
Due to the fact that participants were selected from a restricted pool of design
professionals, all participants were from one discipline (architecture). Whilst our results
may reflect the teamwork culture of the architectural profession, multi-disciplinary
design teams may have experienced even more difficulty in exercising generic skills in
virtual environments.
8. Conclusions
The major conclusion drawn from our analysis of design collaboration is that there are
significant differences for the generic skills profiles between the three operational
conditions; face-to-face, whiteboard and 3D virtual world. This was true for the overall
design activity of the five teams. As Daim et al. (2012) conclude, "the basic fundamentals of team building are still valid, but new dimensions of technology and global economy are making matters complicated and challenging for the managers" (p. 9). While it is clear that
the introduction of virtual technologies has implications for designers, the challenges are not
solely technical. Ebrahim, Ahmed and Taha (2009) consider that the successful implementation of virtual teamwork is "more about processes and people than about technology" (p. 2663). However, technology has traditionally been the focus of investigation in virtual teamwork, without taking into account social and economic considerations.
Changing Skills in Changing Environments: Skills Needed in Virtual Construction Teams
9. Acknowledgments
We wish to acknowledge the Cooperative Research Centre for Construction Innovation
(CRC-CI), part of the Australian Government's CRC program, who funded this project
(Project 2002-024-B), and Tom Bellamy, who was one of our researchers.
Finally, sections of this paper are based on the following publication:
Sher, W., Sherratt, S., Williams, A., & Gameson, R. (2009). Heading into new virtual
environments: what skills do design team members need? Journal of Information
Technology in Construction, 14, 17-29.
10. References
Abuelmaatti, A., & Rezgui, Y. (2008). Virtual teamworking: Current issues and directions for
the future. In L. M. Camarinha-Matos & W. Picard (Eds.), Pervasive Collaborative
Networks (Vol. 283, pp. 351-360). Boston: Springer.
Activeworlds Corporation (2008). Active Worlds. Retrieved 28 Sept 2008, from http://www.activeworlds.com/
Bales, R. (1951). Interaction process analysis. Cambridge: Addison-Wesley Press Inc.
Bell, B. S., & Kozlowski, S. W. J. (2002). A typology of virtual teams: Implications for
effective leadership. Group and organization management, 27(1), 14 - 49.
Bellamy, T., Williams, A., Sher, W., Sherratt, S., & Gameson, R. (2005). Design
communication: issues confronting both co-located and virtual teams. Paper
presented at the 21st Association of Researchers for Construction Management
Salas, E., Shawn Burke, C., & Cannon-Bowers, J. A. (2000). Teamwork: emerging principles.
International Journal of Management Reviews, 2(4), 339-356.
Schumacher, M., Cardinal, J. S.-l., & Bocquet, J.-C. (2009). Towards a methodology for
managing competencies in virtual teams - a systemic approach. Paper presented at
the IFIP AICT 307 - PRO-VE 2009.
Sherratt, S., Sher, W., Williams, A., & Gameson, R. (2010). Communication in construction
design teams: Moving into the virtual world. In R. Taiwo (Ed.), Handbook of
Research on Discourse Behavior and Digital Communication: Language Structures
and Social Interaction. Hershey, PA: IGI Global.
Shyamsundar, N., & Gadh, R. (2001). Internet-based collaborative product design with
assembly features and virtual design spaces. Computer-Aided Design, 33(9), 637-
651.
Wallace, W. (1987). The influence of design team communication content upon the
architectural decision making process in the pre-contract design stages. Edinburgh, UK: Department of Building, Heriot-Watt University.
Williams, A., & Cowdroy, R. (2002). How designers communicate ideas to each other in
design meetings. Paper presented at the Design 2002, International Design
Conference.
Williams, A., & Sher, W. (2007, 12 - 15 Nov). The Alignment of Research, Education and
Industry Application. Paper presented at the IASDR 07: International Association
of Societies of Design Research, Hong Kong Polytechnic University.
1. Introduction
The use of new information technologies and software makes it possible to solve problems connected with raising work efficiency in a company (Hannelore, 1999). The first information on the use of information technologies in the sewing industry, particularly in construction design, appeared in the early 1970s, but the first publications on computer-aided design software appeared only in the 1990s. At present most companies use computer-aided software.
Modern computer-aided design software makes it possible to avoid small operations and manual work, to raise precision and productivity, and to organize the information flow (Beazley, 2003). The use of garment design systems eliminates the time-consuming manual preparation of patterns, creation of layouts and relocation of written information. The computer systems are meant for the execution of every single process and the integration of all processes into one joint flow, for the organization of logistics and the mobility of work tasks.
The computerization of different processes in the garment industry is necessary to reduce product costs and raise competitiveness (Kang, 2000).
Computer systems allow making two dimensional as well as three dimensional product
illustrations and visualizations (D'Apuzzo, 2009; Lectra, 2009). It is possible to create
computer aided garment constructions, as well as gradations, and create a virtual first
pattern of the model - such computer aided operations significantly decrease the time
consumption and cost necessary to design a product. The costs of the product itself can be
calculated with the help of the product management systems following the development
parameters, the layout of patterns, textile expenditure, model complexity and specification,
as well as previous experience of the company stored in a database.
Although computer systems significantly facilitate the development of a product, the
knowledge and skill of the user are still very important. One of the most important garment
creation stages is constructing.
Constructing is the reproduction of a spatial model (clothing) on a plane (construction); this transformation has to be reversible: when the parts of the construction are joined, a garment is created. The creation of the drafts of the construction is the most complicated and responsible stage of garment design, because the surface layout of a complicated spatial shape that does not yet exist has to be created (drawn) (Vilumsone, 1993; Koblakova, 1988). One of the most topical problems in garment design has always been the search for design methods that are scientifically grounded, precise, and as economical as possible in time and labour. Several factors depend on a precise development of the garment surface layout: material expenditure, garment fit quality, labour intensity, and the aesthetic and hygienic characteristics of the finished product.
In traditional mass production the volumes of series keep decreasing, production becomes more flexible, the choice of goods expands, and wear time decreases. Alongside serial production, individual production is becoming more and more popular. The current economic situation shifts the search for labour more and more to the East, but the creation of individually oriented products could make it possible to maintain jobs and production units in Europe. People are willing to pay more for this type of clothing and to receive it within the shortest possible time. The promotion of individualized production is thereby driven by both social and economic factors.
Non-contact anthropometric data acquisition methods are currently used to solve the problem of acquiring the client's measurements for individualized production, yet the spread of individualized production is still limited by the uniformity of the assortment, the labour intensity of designing, the uncertainty of the result of the construction and the complexity of the constructing tasks when creating an individual product for each customer (D'Apuzzo, 2008; Fan, 2004).
In turn, the potential of virtual reality is used to create e-store offers that are more attractive to customers, to create virtual twins, to fit models virtually and to reflect individual garment features.
photograph/sketch. A new fabric is spread over the fabric in the image in a way that the
direction of the pattern conforms with the pattern direction of the fragment defined with the
help of a net structure (Figure 1). In case of a complicated model the preparation of the
image for fabric spreading can be quite labour-intensive. Nevertheless it pays off since
after that a large variety of patterns and colours can be tested within a very short period of
time.
There are several other 3D design developments, both early-stage and finished, whose usage is limited by different factors: assortment, segmentation of products, and the fiction of 3D designing, since all changes are actually made in a 2D environment (Vilumsone, 2007).
A structural scheme of the production process (Fig. 4), identifying the processes of typical production, has been developed with the goal of determining the mutual relationship of the production preparation processes and the structure of the informative and software means. It has been concluded that no matter what level of CAD/CAM system is used, its usage provides a faster development of the product and shortens the working process. A complete 3D design process would exclude the various working stages connected with constructing and constructive modelling, 3D imitation and the creation of a virtual prototype.
Virtual Garment Creation
Fig. 2. Garment imitation: a) LECTRA 3D Fit, b) BERNINA My Label
Comparative table: garment CAD systems (Assol, Lectra, Assyst, Gerber, Optitex, Bernina and Staprim) compared on: individualization of the mannequin — feminine type, sex (feminine, several types, masculine), parametric mannequins, traditional measurements, projection measurements, integration from a 3D scan, virtual movement, and imitation of change of postures; creation of garment shape — designing of apparel parts on a 3D mannequin, definition of an intermediate layer (ease allowance) by projection distances or traditional ease allowances, and usage of finished apparel parts via 3D construction templates or sewing and try-on using 2D templates; and control of the visual characteristics of a garment — fabric characteristics (elasticity, drapery, structure, stiffness control), visual characteristics of the fabric (colour/pattern, size of pattern, texture) and placement of decorative elements.
The comparative table shows that although most systems strive to use some of the 3D design and/or fitting stages, most of them are made for 2D pattern fitting, whereas the actual indications of 3D designing would be the creation of garment patterns on the surface of a 3D mannequin and the definition of ease allowances by setting a projection space between the garment and the mannequin. The systems reviewed in the table can be briefly described as follows:
Using the OptiTex 3D Garment Draping and 3D Visualization software system, designers, pattern makers and retailers can visualize patterns and change textures, colors, logos and buttons instantly in 3D. It is possible to use the modeling system software to analyze fabric behavior, proof fitting assumptions and support the product development process. It also provides a tool for sales and merchandising, allowing users to create 3D catalogs.
In the 3D CAD system Staprim the patterns of clothes are created automatically by laying out on a plane the surface of a model constructed from three photos (Vilumsone, 2008; Razdomakhin, 2003 & 2006). This makes it possible to solve a number of essentially important engineering problems, for instance: to achieve a high-quality layout of a product on a human body; to carry out maximum computerization of the clothes design process from the idea up to the layout of patterns; and to evaluate the created (virtual) model of a product before the manufacturing stage by rendering the image on a screen. The computerization of the process from the idea to a layout of a
template designing, gradation, layout and other modules. The designers of the systems have the possibility to continue improving the existing, proven modules and to develop new ones. This does not require a fundamentally new template designing process; rather, it allows using the pattern making and gradation methods that have developed over centuries and that companies use relatively successfully to create the contours of the garment details of a particular assortment, despite the considerable share of uncertainties and subjective solutions. Like all creative processes, the creation of the shape of a garment (whether in 2D or 3D) is very difficult to formalize. The contours of details drawn intuitively or developed manually in the pin-up process by skilful designers or constructors are entered into the computer system for further processing. The necessity of including a digitizer module in industrial CAD systems stems from the inability to precisely forecast the shape of a garment using 2D template systems.
The virtual fitting of a model is visually very attractive for the designer as well as for the consumer, thanks to the imitation of the physical properties of textiles as well as the imitation of patterns, colour and texture. The effect of reality is becoming more and more convincing, and the developers of the systems keep offering new and more convenient tools; some have even implemented movements of the mannequin. Nevertheless, the virtual sewing procedures for more complex models need to be improved in almost all existing systems. The main problems are connected with defining the connectable layers, determining the tuck-up and roll-up parts of a garment, characterizing the multi-colour qualities of a fabric, the thickness of layers and the position of padding.
So far the 3D designing systems have coped better with designing products and developing layouts of details for close-fitting models, where the apparel is smaller than (or the same size as) the given layout of the mannequin's surface. CAD Assol and Optitex can be mentioned as examples.
Research on creating the surface of a garment at a particular distance from the surface of a mannequin is being carried out to make it possible to design a broad assortment of apparel. The STAPRIM software has been on the CAD system market since 1995. The developers of this system were the first able to define projection spaces between the surface of the garment and the mannequin, connect them with traditional tailor measurements, and transfer them into standard and individual patterns. Although the carcass-type (wireframe) representation of the mannequin and garment does not produce the realistic impression characteristic of the fitting systems, it is informative, and the automatically acquired detail contours are mutually perfectly coordinated and ensure the set (fit) visible in the virtual image.
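The idea of defining ease allowances as projection distances can be illustrated with a small geometric sketch. This is not Staprim's actual algorithm, and the contour and ease value are invented for the example: each point of a mannequin cross-section is pushed outward along its approximate normal by the projection distance, yielding the corresponding garment contour.

```python
import math

def offset_cross_section(points, ease):
    """Offset a closed, counter-clockwise 2D cross-section outward.

    points -- (x, y) vertices of the mannequin contour
    ease   -- projection distance (ease allowance) between body and garment
    The outward normal at each vertex is approximated from the chord
    joining its two neighbours.
    """
    n = len(points)
    garment = []
    for i in range(n):
        x0, y0 = points[i - 1]          # previous vertex
        x2, y2 = points[(i + 1) % n]    # next vertex
        # For a CCW contour, (dy, -dx) of the chord points outward
        nx, ny = (y2 - y0), -(x2 - x0)
        length = math.hypot(nx, ny) or 1.0
        x1, y1 = points[i]
        garment.append((x1 + ease * nx / length, y1 + ease * ny / length))
    return garment

# A square "torso" section offset by a 2-unit projection ease allowance
body = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
shell = offset_cross_section(body, 2.0)
```

Real systems work on full 3D surfaces and let the ease vary around the body, but the principle of connecting a projection distance to the garment surface is the same.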
Such a system could be very suitable for the creation of different uniforms, since it allows creating well-fitting constructions for different individual figures, but the result provided by the system is a basic construction and does not provide for the full design of the special features of a model. When importing this construction into any other system, the model construction and pattern designing process has to be started anew. Therefore it is advisable to develop an algorithm providing the inheritance of detail size and shape for individual figures up to the level of finished patterns (as, for example, in the GRAFIS software).
The developers of CAD Assol (Russia) also report the existence of such a module. In their informative materials they demonstrate examples of all types of 3D CAD apparel developed by means of AutoDesk.
Which company's CAD system is better? It is wrong to state the question this way, and not just because it wouldn't be correct. All CAD systems, i.e. the CAD systems of the various companies, are actually much alike: all of them computerize the same or very similar flat (plane-based) methods for creating patterns of clothes, and it is difficult to disagree with this circumstance. As to the layout of patterns, there are some distinctive features between the systems, but these are never long-term, considering the constant development of the software of all companies. Certainly there are differences in the choice of toolkits and in the solutions of some parts of the systems, but after some period of time similar solutions appear in the other systems as well. The preference is given by the user, who studies the systems of various companies and chooses the most convenient one for the particular assortment and for him- or herself. Certainly the greatest, and maybe even the crucial, impact is made by the pricing policies of the different companies. But again, it is not that simple: we cannot say that everything that is more expensive is better, just as we cannot say the contrary, that everything that is cheaper is worse (Razdomakhin, 2006).
Any scanning device is equipped with optical (light) appliances to ensure non-contact measuring. Such optical measurement acquisition devices can be divided into categories: photogrammetry, silhouette methods, laser scanning, light projection, electromagnetic waves and hybrid methods (Vanags, 2003; Winsborough, 2001; Xu, 2002; Youngsook, 2005; D'Apuzzo, 2008).
Each method has its advantages and disadvantages (Wen, 2007; Devroye, 1998). Although laser scanning has been recognized as the most precise method and yields the most extensive results (human body measurement data, a 3D virtual mannequin, a reflection of the actual texture, surface relief measurements, etc.), the light projection method is used more widely in the garment production industry, since the equipment is much cheaper than a laser scanner.
There are still not enough research results on using virtual mannequins for 3D garment designing. Mostly, 3D scanning results are used to generate tailoring measures for use in traditional or computer-aided constructing methods (DOI, 2005; Dāboliņa, 2007; Dekker, 1999; D'Apuzzo, 2008).
FitMe.Com; Hamamatsu Photonics K.K.; Hwang, 2001). Nevertheless, the scanners have to be improved considerably: the data acquisition time has to be shortened, the way of displaying the scanned data has to be improved, the 3D scanner software has to be improved, etc. (ISPRS, 2009; Istook, 2001; Simmons, 2004; D'Apuzzo, 2003).
The scanning technologies are being improved constantly and prices have fallen considerably compared to previous developments; nevertheless, each system has its individual imperfections (Siegmund, 2007; Sungmin, 2007).
Although it is possible to enumerate the deficiencies of each system, their data precision is sufficient for 3D scanners to be considered appropriate for anthropometrical data acquisition for garment designing.
Two studies connected with the limitations of 3D scanning are described here:
The scanning systems for human body measure acquisition use different data acquisition methods: dynamic range (lights and darks), laser beams, etc. The experiment determined the laser beam reflectivity of different textile materials, and the curve characterising the reflectivity was compared to the diffuse reflectivity curve of Lambert's law (Dāboliņa, 2008).
Studying the light reflections from different fabrics, essential deviations from Lambert's law were observed. Such deviations come from the geometry of the fabric surface (relief, texture, trim). Insignificant deviations from the standard curve can be observed on very smooth (glossy) surfaces and uneven (relief) surfaces. Decorative elements (embroidery, appliqués) can cause a too bright and uneven surface, causing deviations from the standard distribution in these areas. If the underwear is decorated with crystals or other very shiny materials, the reflection curve is very uneven, with several extreme points. Depending on the decoration, the reflectivity of different textile materials varies from diffuse to mirror reflectivity.
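The comparison against the Lambertian baseline can be sketched numerically: an ideal diffuse surface reflects with intensity proportional to the cosine of the incidence angle, so a large deviation of the measured curve from this baseline flags a problematic fabric. The sample curves and the tolerance below are illustrative, not values from the study.

```python
import math

def lambert_curve(angles_deg, i0=1.0):
    """Ideal diffuse (Lambertian) reflectivity: I = I0 * cos(theta)."""
    return [i0 * math.cos(math.radians(a)) for a in angles_deg]

def max_deviation(measured, ideal):
    """Largest absolute difference between measured and ideal curves."""
    return max(abs(m - i) for m, i in zip(measured, ideal))

TOLERANCE = 0.2  # illustrative threshold, not from the study

angles = [0, 10, 20, 30, 40, 50, 60]
ideal = lambert_curve(angles)

# Hypothetical smooth, matte fabric: close to the Lambertian curve
smooth = [1.00, 0.98, 0.93, 0.85, 0.75, 0.63, 0.49]
# Hypothetical decorated fabric: signal drops almost to zero at 40 degrees
decorated = [1.00, 0.95, 0.90, 0.70, 0.05, 0.55, 0.45]

smooth_ok = max_deviation(smooth, ideal) < TOLERANCE       # True
decorated_ok = max_deviation(decorated, ideal) < TOLERANCE  # False
```

A check of this kind could be used to decide automatically whether a scanned garment region is trustworthy before deriving measurements from it.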
Virtual Garment Creation 63
Fig. 7. An example of the results of the reflectivity study experiment: if the beam falls on a decorative element, deviations from the diffuse distribution can be observed; this can be due to the brightness of the fabric. In this case there are significant changes in the distribution, and a drop of the signal almost to the zero level has been observed (40-degree angle).
As a result of the research it can be concluded that smooth, but not glossy, underwear without any decorative elements has to be chosen for scanning. Underwear with decorative elements that can reflect or refract the ray of light (crystals, glass particles, pearls) should not be used under any circumstances.
An analysis of the oscillations of the human body at rest has been performed, and the significance of these oscillations for 3D anthropometrical measurements has been studied (Fig. 8).
The experiment was performed for three different postures of a person: back view, side view and front view. Since the front and back view analyses did not show any differences, the results are reported for the side view and front view only. Photographing was performed in two cycles.
In the first cycle, differences in posture were evaluated by photographing the person in one and the same posture every three seconds, for each posture. Afterwards the changes in
the posture were analysed. The analysis of the front view and back view postures shows little change in the posture: a person can oscillate in the range from 3 to 12 mm.
Fig. 8. Change of the posture (analysis of five sequential positions): a) side view, b) front view
The side view oscillations are greater; they can vary in the range from 9 to 21 millimetres. Such a difference can be explained by the fact that the ankles are more likely to move back and forth than sideways. The results of the analysis show that the amplitude of the oscillations increases with the height of the measured point above the ground. In this case not only oscillations, but also small changes in the positioning of the body and posture were observed.
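The first-cycle analysis amounts to taking, for each body point, the range of its horizontal position across the sequential photographs. A minimal sketch with invented marker positions (in mm):

```python
def sway_range(positions_mm):
    """Oscillation range of a body point across sequential photos (mm)."""
    return max(positions_mm) - min(positions_mm)

# Hypothetical side-view positions of three markers in five sequential
# photographs, keyed by the marker's height above the ground
side_view = {
    "ankle":    [0, 1, 2, 1, 0],
    "hip":      [0, 4, 9, 6, 2],
    "shoulder": [0, 9, 21, 14, 5],
}

ranges = {part: sway_range(p) for part, p in side_view.items()}
# The range grows with height above the ground, as reported in the text
```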
In the second photo analysis cycle, posture changes were evaluated while the person sequentially stepped off the platform and changed the position of the feet and the body by 90°, as is done in scanning devices requiring a change of posture. The range of the oscillations in the front and back views is from 10 to 40 millimetres, while the side view oscillation range is from 11 to 51 millimetres with the feet and ankles remaining in a fixed position. Such oscillations are characterized not only by the change of posture, but also by the change of the stance and torso position (EN ISO 15536-1; EN ISO 15536-2). The changes can affect the scanning process, causing inaccuracies in the data (Gomes, 2009; Hsueh, 1997; Luhmann, 2006).
3D scanning has several advantages compared to manual measurements: it is fast, consistent, and has a higher precision level. With 3D scanning, no professional knowledge is needed to acquire the measurements, since most of the systems generate the measures of the human body automatically (EN ISO 20685; EN ISO 7250). What makes 3D scanning so attractive is the fact that it is a non-contact method, but it also has disadvantages: most of the systems cannot determine hidden areas (armpits, chin, etc.), and scanning contours can be vague. The latter is mainly caused by the oscillations of the body over time; therefore the scanning time should be shortened as much as possible.
The posture of the human body necessary for scanning changes the external characteristics of the body and makes it deviate from the natural posture. Human Solutions GmbH, one of the world's leading laser scanner producers, solves this problem by scanning people in a free, relaxed, unconstrained posture, separating the extremities from each other and from the torso by means of calculations afterwards. This method can have drawbacks in cases when, due to the weight of the soft tissues, the extremities of the person not only fit close to each other and to the torso but are so close that they misshape each other.
The creation of such a realistic reproduction of the human body allows developing services available in the e-environment. For instance, in the spring of 2011 the company Human Solutions (Germany) presented a virtual mirror that reflects the scanned virtual twin of a person, which can be used to fit the chosen garment and evaluate the set. Although different e-commerce catalogues are available in the computer environment, this type of fitting is a novelty and is expected to be a great success, since the bothersome and exhausting garment fitting process is excluded. Similarly, Human Solutions in cooperation with Assyst have developed Vidya, an innovative 3D system that not only allows virtual sewing, fitting and evaluation of a garment, but also defines the technological placement of seam allowances, the pocket spread, and even evaluates the functionality of button fastening.
7. Conclusions
Pattern making systems have reached a high level of development. Modern computer-aided designing software provides the possibility to avoid small manual operations, to raise precision and productivity, and to organize the information flow. The usage of garment designing systems excludes the time-consuming manual preparation of patterns, creation of layouts and relocation of written information. The computer systems are meant for the execution of every single process and the integration of all processes into one joint flow, for the organization of logistics and the mobility of work tasks. Computer systems allow making two-dimensional as well as three-dimensional product illustrations and visualizations. It is possible to create computer-aided garment constructions as well as gradations, and to create a virtual first pattern of the model; such computer-aided operations significantly decrease the time and cost necessary to design a product. The costs of the product itself can be calculated with the help of the product management systems, following the development parameters, the layout of patterns, textile expenditure, model complexity and specification, as well as the previous experience of the company stored in a database.
Although computer systems significantly facilitate the development of a product, the knowledge and skill of the user are still very important. One of the most important garment creation stages is constructing. Constructing is the reproduction of a spatial model (clothing) on a plane (construction); this transformation has to be reversible: when the parts of the construction are joined, a garment originates. The creation of the drafts of the construction is the most complicated and responsible stage of garment designing, because the surface layout of a non-existent, complicated spatial product has to be created (drawn). One of the most topical problems in garment designing has always been the search for garment designing methods that are scientifically grounded, precise, and as little time- and labour-consuming as possible. Several factors depend on a precise development of the garment surface layout: material expenditure, garment set quality, labour intensity level, and the aesthetic and hygienic characteristics of the finished product.
Specialists in different fields are interested in reproducing the human figure in a virtual environment: designers who use ergonomics information (engineering, interior design), animation creators, as well as medical and apparel specialists. In the production of clothes there is a chain of problems related to the features of the customer's figure, since the goal is maximally conformable clothes. For garment construction it is very important to have exact human body measurements without significant errors. In traditional mass production the volumes of series keep decreasing, production becomes more flexible, the choice of goods expands, and the wear time decreases. Along with serial production, individual production becomes more and more popular. The current economic situation shifts the search for labour more and more to the East, but the creation of individually oriented products could make it possible to maintain workplaces and production units in Europe. People will be willing to pay more for this type of clothing and to receive it in as short a term as possible. Thereby the promotion of individualized production is driven by social as well as economic aspects.
The research of scientists regarding the improvement and development of 3D garment designing proceeds in several directions:
The development of mass customization process schemes;
The development of a virtual twin;
The study of coherence and definition of projection ease allowances;
The improvement of fabric visualization means.
The garment production companies mostly support the development of the company and the introduction of CAD/CAM systems, since these ensure higher product quality, higher productivity, humanization of the working process, a more flexible production process and better process control. Nevertheless, the distribution and introduction of computerized systems in companies of all sizes (small and large) can be a problem because of the system costs as well as the insufficient competence of the employees.
8. References
3D Ouest Ltd. [Live]. - Korux human body laser scanning technology. - www.3douest.com. -
France.
3D-Shape GmbH [Live]. - Body and Face Scan Stripes projection technology. - www.3d-
shape.com. - Germany.
4D Culture Inc. [Live]. - 4D Culture 3D human body laser scanning system. -
www.4Dculture.com. - South Korea.
AceApparel CAD/CAM [Live] // Clothing designing. - The company was established in 2001 by Mo Deok Won. - www.pattern-cad.com.
Assol CAD/CAM [Live] // Clothing designing. - Centre of Applied Computer
Technologies ASSOL; Moscow Institute of Physics and Technology, Dr. Gennadij
Andreev. - www.assol.mipt.ru.
Assyst CAD/CAM [Live] // Clothing designing. - Assyst-bullmer. - www.assyst-intl.com.
Audaces CAD/CAM [Live] // Clothing designing. - fashion technology. -
www.audaces.com.
68 Applications of Virtual Reality
Beazley Alison & Bond Terry Computer-aided pattern design & product development
[Book]. - UK: Blackwell Publishing, 2003. - p. 220. - ISBN 1-4051-0283-7.
Bernina [Live] // Clothing designing. - OptiTex/Siemens#3: FIT Technology Bernina
MyLabel. - www.berninamylabel.com.
Bodymetrics Ltd. [Live]. - 3D body data analysis software. - www.bodymetrics.com. - UK.
Buxton Bernard Dekker Laura, Douros Ioannis, Vassilev Tsvetomir Reconstruction and
Interpretation of 3D Whole Body Surface Images [Report]. - Gower Street London :
Department of Computer Science University College London, 2006. - WC1E 6BT
United Kingdom.
CAD Modelling Ergonomics S.r.l. [Live]. - Anthropometric dummies by silhouette extracting technology. - www.cadmodelling.it.
Comtense CAD/CAM [Live] // Clothing designing. - www.comtense.ru.
Cyberware Inc. [Live]. - Cyberware whole body 3D laser scanner technology. -
www.cyberware.com. - USA.
Dāboliņa I., Blūms J. and Viļumsone A. Lāzera stara plūsmas atstarošanas izpēte veļas materiālos / A study of the laser beam reflections on underwear materials [Magazine] // RTU scientific articles. - Riga: Riga Technical University, 2008. - Part 9: Material Science 3. - p. 62-70. - ISSN 1691-3132.
Dāboliņa Inga and Viļumsone Ausma Trīsdimensiju antropometriskā modelēšana / Three dimensional anthropometrical modelling [Article] // RTU scientific articles, 2007. - Part 9: Material Science, Vol. 2, p. 103-110. - ISSN 1691-3132.
D'Apuzzo Nicola Feasibility study: Full Body Scanning, Virtual-Try-On, Face Scanning, Virtual-Make-Over with application in apparel [Report]. - Zürich, Switzerland: Hometrica Consulting, 2008.
D'Apuzzo Nicola Recent Advances in 3D Full Body Scanning With Applications to Fashion
and Apparel [Article] // Optical 3-D Measurement Techniques IX. - Vienna,
Austria: 2009. - p. 10.
D'Apuzzo Nicola Surface Measurement and Tracking of Human Body Parts from Multi
Station Video Sequences [Report]: Doctoral thesis. - Zurich: Swiss Federal Institute
of Technology Zurich, 2003. - p. 148. - ISSN 0252-93355; ISBN 3-906467-44-9.
Dekker L., Douros I., Buxton B. F., Treleaven P. Building Symbolic Information for 3D Human Body Modelling from Range Data [Article] // Proceedings of the Second International Conference on 3-D Digital Imaging and Modelling. - Ottawa, Canada: IEEE Computer Society, October 1999. - p. 388-397.
Devroye L., Mucke E. P. and Binhai Zhu A Note on Point Location in Delaunay
Triangulations of Random Points [Magazine] // Algorithmica (22). - Springer-Verlag New York Inc, 1998. - p. 477-482.
Digimask Ltd [Live], Face modelling software hybrid technology from images.,
www.digimask.com.
DOI Generating unified model for dressed virtual humans [Report]. -: Visual Comput, 2005,
p. 522 - 531. - DOI 10.1007/s00371-005-0339-6.
EN ISO 15536-1 Ergonomika. Datormanekeni un ķermeņa maketi. 1. daļa: Vispārīgās prasības / Ergonomics - Computer manikins and body templates - Part 1: General requirements [Report]. - Brussels: CEN - European Committee for Standardization, 2005. - ISO 15536-1:2005.
1. Introduction
Spatial navigation is an important cognitive ability that has contributed to the survival of animal species by allowing them to locate sources of food, water and shelter (Epstein,
2008). In order to navigate, animals and humans use environmental landmarks as references
to calculate their own position and the location of targets in the environment. In such cases,
landmarks are used to estimate distances and directions of objects and targets (for a review
of landmark navigation in vertebrates see Rozhok, 2008).
In a classic study, Morris (1981) demonstrated that rats could locate an object that they were
unable to see, hear or touch by using spatial landmarks. In the test situation, rats learned to
escape from the water by swimming to an invisible platform located under the water line.
Subsequent tests done without the platform corroborated those findings. Further research
using a virtual version of the Morris water maze showed that humans use landmarks in a
similar way to locate a hidden goal (Astur, et al., 1998). Similarly, Chamizo et al. (2003) and
Artigas et al. (2005) used virtual reality environments in order to study spatial navigation
characteristics in humans.
Two experimental procedures have been commonly used to study human and animal
spatial navigation based on landmarks. After a training period in which individuals learn to
navigate to locate a target, the experimenter changes the experimental setting by 1)
increasing the distance between landmarks (Tinbergen, 1972) or 2) removing one or more
landmarks from the environment (Collett, et al., 1986).
Collett et al. (1986) studied spatial navigation in gerbils by training them to find hidden food
in an arena. As shown in Figure 1A, food was located between two identical landmarks.
After learning acquisition, trials were done without food. When tested in the presence of
only one landmark (removing procedure), the gerbils searched for food on the left and right
sides of the landmark, maintaining the corresponding distance and direction from it (Figure
1B). In a second test, the distance between landmarks was increased (expanding procedure)
and the gerbils searched for food in two different locations but also maintained distance and
direction with regard to the landmarks (Figure 1C). Collet et al. (1986) concluded that gerbils
calculated the distance and direction of the food by using each landmark independently. In
this case, search behaviour could be explained by a vectorial summation model in which
gerbils use the learned vectors (between the goal and the landmarks) to calculate the
direction and distance of the food. The vectors are added up and the resulting vector
converges on a certain part of the space that corresponds to the location of the food.
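The vector summation account can be sketched numerically. Each landmark stores a learned landmark-to-goal vector; applied at the shifted landmark positions, the two vectors predict two distinct search spots (the gerbils' behaviour), while summing/averaging them converges on a single location. The coordinates are invented for the illustration.

```python
def vec_sub(a, b):
    return (a[0] - b[0], a[1] - b[1])

def vec_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

# Training: food placed between two identical landmarks
lm1, lm2 = (-20.0, 0.0), (20.0, 0.0)
food = (0.0, 15.0)
v1 = vec_sub(food, lm1)  # learned vector: landmark 1 -> food
v2 = vec_sub(food, lm2)  # learned vector: landmark 2 -> food

# Expanding procedure: landmarks moved further apart
lm1_test, lm2_test = (-40.0, 0.0), (40.0, 0.0)

# Using each vector independently predicts two distinct search spots,
# keeping the learned distance and direction from each landmark
spot1 = vec_add(lm1_test, v1)  # (-20.0, 15.0)
spot2 = vec_add(lm2_test, v2)  # (20.0, 15.0)

# The resulting (averaged) vector converges on a single location
middle = ((spot1[0] + spot2[0]) / 2.0, (spot1[1] + spot2[1]) / 2.0)
```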
Fig. 1. Layout of the arena (left panel) and the virtual pool (right panel). In the left panel, landmarks are represented by black dots. The letter "a" shows the experimental setting in the training trials, "b" represents the situation in which one of the landmarks was removed, and "c" shows the condition in which the landmarks were separated. The capital letters "C" and "B" respectively represent the food and the regions where the animals searched for the food during the test. The virtual pool used in this experiment is shown in the right panel (d). A black square represents the platform and the two empty circles represent the landmarks. The virtual pool was divided into 12 sectors. The central region of the pool was defined as the 0th sector.
Subsequent research, however, led to different results. Cheng (1988, 1989) trained pigeons in the presence of two landmarks. After acquisition, the pigeons were tested with the landmarks moved further apart, and it was verified that they searched for the food at the midline between the new positions of the landmarks. Cheng assumed that pigeons navigate using an average vector that indicates the position of the food with regard to the animal.
MacDonald et al. (2004) applied the procedure of separating landmarks in a study with
marmoset monkeys, children and adult humans. They concluded that a middle rule
strategy is the common navigational strategy used by adults. However, children and
primates used a different strategy, preferring to search near the landmarks. From these results, the authors suggested that among vertebrates, adult humans appear to be outliers.
MacDonald et al. (2004) suggest that two strategies can be adopted by the participants in a
virtual reality environment: a) encoding the goal location by means of its spatial relation
with regard to the landmarks (configural strategy); and b) encoding a vector from each
landmark to the goal (elemental strategy). Only the configural strategy elicits the use of the
middle rule, i.e. navigating using the resulting vector as a reference.
Human Visual Field and Navigational Strategies 75
In this study we use a particular virtual reality environment to investigate spatial navigation
in human adults in two viewing conditions. In the first, participants could simultaneously
see both landmarks of the virtual environment which inform about the location of the goal
(simultaneous vision). In the second, participants could see only one landmark at a time
(sequential vision). Basically, conditions differed with regard to the amplitude of the visual
fields, which might influence the strategy adopted by the participant to navigate in the
virtual space and locate the goal. When people have visual access to both landmarks, they
can use all relevant information to navigate. However, when people see only one landmark
at a time, they need to integrate the partial viewings of the environment in order to
reconstruct the visual space. Consequently, simultaneous and sequential vision tasks
involve different cognitive demands.
In order to investigate the spatial learning in those visual conditions we used a virtual
reality model of the Morris water maze (Morris, 1981) adapted for a human task (see
Chamizo, et al., 2003 and Artigas, et al., 2005). Participants were trained to locate a hidden
platform in the presence of two landmarks. In Experiment 1 one of the landmarks was
removed (removing procedure), and in Experiment 2 the distance between landmarks was
increased (increasing procedure). In both experiments participants were tested in
simultaneous and sequential vision conditions. After the training period, the platform was removed and the time spent in each sector of the pool was registered.
We expected that the simultaneous vision of the two landmarks would promote a configural
strategy so that participants would spend more time searching for the platform in the area
between the landmarks (middle rule). The successive viewing task, on the other hand,
would promote the use of an elemental strategy in which participants would spend more
time searching in the adjacent regions, i.e. applying the connecting cues rule (from
landmark to landmark). The results of these two experiments converge in the same
direction, suggesting that different navigational strategies depend on visual conditions.
2. Experiment 1
2.1 Method
2.1.1 Participants
Forty-seven students of psychology at the University of Barcelona ranging in age from 22 to
24 years old took part in the experiment. Twenty-four participants were assigned to Group 1 (simultaneous vision) and performed the location task using a visual field of 90°, which allowed both landmarks to be viewed simultaneously. Twenty-three students were assigned to Group 2 (sequential vision) and performed the task using a visual field of 30°, which allowed only one landmark at a time to be viewed. The study was approved by the
institutional ethical committee of the University of Barcelona in accordance with the ethical
standards laid down in the 1964 Declaration of Helsinki and students received course
credits to take part in the experiment.
monitor. Headphones were used to present sound in the experimental task. The computers
were equipped with an ATI Radeon 9200 Graphics Card, which allows graphics acceleration
and high resolution. Experiments were programmed in C++/Open GL, using a software
interface for 3D graphics, with hardware developed by Silicon Graphics. The program
controlled the virtual environment and the auditory information (background sound and
positive and negative feedback) and registered the time taken to reach the platform. The
positive feedback consisted of a brief three-second song ("That's all folks") and the negative auditory feedback consisted of an unpleasant melody (the sound of mournful bells). The
auditory background sound was slightly unpleasant in order to generate some distress in
the students and reproduce the conditions of an escape task. Participants used the keyboard
arrow keys (up, down, left and right) to navigate in the virtual environment.
The virtual space was an octagonal swimming pool (radius = 100 units) situated in the
middle of the virtual space and surrounded by a pale blue surface (sky). Objects were placed
hanging from an invisible ceiling. A circular platform (radius = 8 units) could be placed in
the pool below the surface of the water (i.e. an invisible platform). Two spheres (pink and
green) were used as landmarks (diameter = 20 units). The pool was divided into 12 sectors of 30°. The platform was placed in sector 8, perpendicular to the landmark (Figure 1d). The
participants started the experiment in the centre of the pool (0th sector).
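The sector bookkeeping used for the dependent variable can be sketched as follows; the radius of the central 0th sector is not stated in the text, so the value below is an assumption.

```python
import math

POOL_RADIUS = 100.0    # pool radius in virtual units (from the text)
CENTER_RADIUS = 15.0   # radius of the central 0th sector -- an assumption

def pool_sector(x, y, center_radius=CENTER_RADIUS):
    """Map a position in the pool to the central region (0) or to one of
    the 12 sectors of 30 degrees, numbered 1..12 counter-clockwise from
    the positive x axis."""
    if math.hypot(x, y) <= center_radius:
        return 0
    angle = math.degrees(math.atan2(y, x)) % 360.0
    return int(angle // 30.0) + 1

pool_sector(0.0, 0.0)    # 0 -- the starting position in the centre
pool_sector(50.0, 10.0)  # 1
pool_sector(-60.0, 5.0)  # 6
```

Classifying every sampled position of a trajectory this way and counting samples per sector gives the time-spent-per-sector measure used in the experiments.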
2.1.3 Procedure
Participants were tested in groups of four individuals and received the following instructions:
In this experiment you will be swimming for a long time in a circular pool from which you
want to escape. You are very tired and you will only rest if you find the floating platform. You
will see the platform only in the first trial. In the following trials, the platform will be hidden
but located in the same position with regard to the other objects. Occasionally only one object
will be presented. Your goal in the task consists of reaching the platform.
Fig. 2. Mean escape latencies of the groups of participants with simultaneous and sequential
vision of the landmarks.
After the training period, participants found themselves in the middle of the pool (0th sector)
facing a different direction each time (NE, SE, SW or NW). In the escape trials the
platform was hidden and in the final test trial the platform was absent. Landmarks
consisted of two similar spheres of different colors, with an angular separation of 90°.
Participants performed 24 trials with an inter-trial interval of 10 s. Participants had 60 s to
find the platform. When they reached the goal, positive feedback was presented (the song
"That's all folks"). Otherwise, if they could not find the platform, a mournful sound was
presented.
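The trial structure (24 trials, a 60-s limit, auditory feedback on success or failure) could be driven by a loop like the following sketch. `reached_platform` and `play_sound` are hypothetical callbacks standing in for the original program's collision test and sound playback; they are not part of the source.

```python
import time

N_TRIALS = 24        # acquisition trials
TRIAL_LIMIT = 60.0   # seconds allowed to find the platform
ITI = 10.0           # inter-trial interval, seconds

def run_trial(reached_platform, play_sound):
    """Run one escape trial and return the escape latency in seconds.

    `reached_platform` polls whether the participant is on the platform;
    `play_sound` plays the positive or negative auditory feedback. Both
    are hypothetical callbacks, not the original program's API.
    """
    start = time.monotonic()
    while time.monotonic() - start < TRIAL_LIMIT:
        if reached_platform():
            play_sound("positive")  # the brief "That's all folks" song
            return time.monotonic() - start
    play_sound("negative")          # the mournful-bells melody
    return TRIAL_LIMIT              # timed out: platform not found
```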
After acquisition, test trials were done. Half of the participants performed the search task
with only one landmark. The other half did the task with both landmarks being presented in
the virtual pool. A measure of interest was the time spent in the relevant sector of the pool
(8th sector) which contained the platform. Participants had 60 s to search, but only the first
20 s were included in data analysis. The time spent in the different sectors of the pool was
taken as a dependent variable.
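The dependent variable, time spent per sector during the first 20 s of search, amounts to binning a sampled trajectory. A minimal sketch follows; the 0.1-s sampling interval and the simple sector classifier are assumptions for illustration.

```python
import math

def sector_of(x, y):
    """Illustrative sector classifier: 12 sectors of 30 degrees each."""
    angle = math.degrees(math.atan2(y, x)) % 360
    return int(angle // 30) + 1

def time_per_sector(trajectory, dt=0.1, window=20.0):
    """Accumulate time spent in each sector over the first `window` s.

    `trajectory` is a sequence of (x, y) positions sampled every `dt`
    seconds; samples beyond the analysis window are discarded, as in
    the procedure described in the text.
    """
    times = {}
    for x, y in trajectory[: round(window / dt)]:
        s = sector_of(x, y)
        times[s] = times.get(s, 0.0) + dt
    return times
```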
Fig. 3. Mean time spent in seven important sectors of the pool for the two groups with
simultaneous and sequential vision of the landmarks. Panel a presents the results of
Experiment 1, in which a landmark was removed during the test trial. Panel b represents
the situation in which the test trial was identical to the training trial. Panel c corresponds
to the results of Experiment 2, in which the distance between the landmarks was increased
in the test trial.
Finally, post hoc tests revealed that in the simultaneous vision group for one landmark,
participants differed only in the time spent between the 7th and 8th sectors [t(11)= 2.82;
p < .017]. However, in the case of the same group with two landmarks, post hoc tests
indicated differences between the 7th and 8th sectors [t(11)= 5.01; p < .001] and between the
8th and 9th sectors [t(11)= 4.51; p < .001]. With regard to the sequential vision group, no
differences for the time spent in sectors 7, 8 and 9 were found.
Since no difference between escape and test trial was found, we considered it would be
interesting to analyze the time spent at the starting point position as a function of the visual
conditions. An ANOVA with groups (simultaneous and sequential vision) and types of
test (one or two landmarks) as factors was conducted, using the time spent in the 0th
sector (starting-point) as the dependent variable. Results revealed that only the factor types
of test was significant [F(1,43)=7.33; p< .010]. Both groups spent more time at the starting
point during the test trial (Figure 4).
In summary, a comparison of the viewing conditions, as we can see in Figure 3, clearly
demonstrates that participants make use of two different strategies depending on whether
they can see both landmarks simultaneously or just one at a time. In the test with one or two
landmarks, the group with simultaneous vision spent more time looking for the goal in the
8th sector, while the subjects of the group with sequential vision spent their time in a
different way, exploring the 8th but also the adjacent 7th and 9th sectors. As we predicted,
participants of the sequential vision group seem to use an elemental strategy, applying the
"connecting cues" rule (from landmark to landmark). In Experiment 2, we investigated the
effect of the expanding procedure on the navigation strategies adopted in the task.
Fig. 4. Mean time spent in the 0th sector (starting point) for each group (simultaneous and
sequential vision) as a function of the type of test trial (with one or two landmarks).
4. Experiment 2
In this experiment we tested human spatial navigation in a virtual environment by
increasing the angular separation between the landmarks (expanding procedure). This
procedure was important for analyzing whether navigational strategy depends on the kind
of manipulation done in the test trial. As in the previous experiment, during the training
trial the angular separation between the landmarks was kept constant (90°), while the visual
field was manipulated: Group 1 = 90° (simultaneous vision); Group 2 = 30° (sequential
vision). In the test trial, half of the participants were submitted to the condition in which the
distance between the landmarks was the same as in the training trials. For the other half we
applied the expanding procedure, changing the angular separation between the landmarks
from 90° to 150°. In the latter condition, we considered that if participants distributed the
search time in sectors 7, 8 and 9, it would suggest that they had adopted a navigational
strategy from "landmark to landmark". On the other hand, if participants spent more time
in the 8th sector (platform) in comparison to the 7th and 9th sectors, it would suggest that
they had adopted the "middle rule" strategy (following an egocentric vector from the
starting point to the platform).
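The two candidate strategies make different predictions that can be made concrete with a small sketch. The landmark bearings used below are illustrative values chosen so that the bisector falls in sector 8; the actual coordinates used in the experiment are not given in the text.

```python
def predicted_sectors(lm1_deg, lm2_deg, sector_width=30):
    """Predicted search sectors under the two candidate strategies.

    `lm1_deg` and `lm2_deg` are landmark bearings from the pool centre
    in degrees. The angle-to-sector mapping is a simplifying assumption.
    """
    def sector(deg):
        return int((deg % 360) // sector_width) + 1
    return {
        # "middle rule": follow the bisector between the landmarks
        "middle_rule": sector((lm1_deg + lm2_deg) / 2),
        # "connecting cues": search near each landmark in turn
        "connecting_cues": sorted({sector(lm1_deg), sector(lm2_deg)}),
    }
```

With landmarks roughly 90° apart and their bisector in sector 8 (e.g. bearings 181° and 269°), both strategies keep the search within sectors 7-9; after expansion to roughly 150° apart (e.g. bearings 151° and 299°), the middle rule still points at sector 8 while the connecting-cues strategy shifts the search to sectors 6 and 10 — the dissociation the test trial exploits.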
4.1 Method
4.1.1 Participants
Sixty-nine students of psychology at the University of Barcelona ranging in age from 22 to
24 years old were randomly assigned to two groups, differing with regard to the visual field
condition. Thirty-three students took part in Group 1 and had visual access to both
landmarks during the session (90° of visual field - simultaneous vision). Thirty-six students
took part in Group 2 and had visual access to only one landmark at a time (30° of visual field
- sequential vision). Volunteers were naive about the hypothesis of the experiment and
received course credits to take part in the research. The experiment was conducted in the
same room as the previous session.
4.1.2 Procedure
Experiment 2 was identical to Experiment 1 except for the use of the "expanding
landmarks" procedure. In the test trial, half of the participants were submitted to the
condition in which the distance between the landmarks was identical to the training trial
(90°). For the other half of the participants, the expanding procedure was applied and the
angular separation was increased (150°). As in Experiment 1, we considered only the first 20 s of
searching in the test trial for data analysis.
the group with sequential vision spent more time in the 10th sector in comparison to the
simultaneous group.
Fig. 5. Mean escape latencies of the two groups with simultaneous and sequential vision in
Experiment 2.
In the expanded landmarks condition (150°) we found differences between simultaneous
and sequential vision for the 6th [t(44)= 3.26; p < .002], 8th [t(44)= 2.21; p < .033] and 10th
sectors [t(44)= 4.89; p < .001]. These data revealed that participants with simultaneous vision
spent more time in the relevant sector (8th), suggesting that they adopted a strategy of
advancing directly to the goal. However, participants with sequential vision and submitted
to the expanding procedure spent more time in the 6th and 10th sectors, located adjacent
to the landmarks. These results suggest that participants used a strategy of navigating from
one landmark to another ("connecting cues" strategy).
As regards the participants in the group with simultaneous vision submitted to non-
expanded landmarks, we verified that they spent more time in the 8th sector compared to
the 6th, 7th, 9th and 10th sectors [t(11)= 4.51; p < .001]. Likewise, in the expanding
procedure, participants spent more time in the 8th sector in comparison to the 6th and
10th sectors [t(21)= 2.91; p < .008]. A further analysis in the simultaneous group,
comparing the two types of test (90° vs. 150°), only showed differences for the 8th sector
[t(32)= 3.49; p < .001].
Our data showed that participants with simultaneous vision spent more time in the relevant
sector (8th) than in the sectors adjacent to the landmarks. For the condition in which the
distance between the landmarks was expanded, we could attribute the non-significant
difference between the 7th, 8th and 9th sectors to a decrease in accuracy in finding the
resulting vector that leads to the goal (platform).
For the group with sequential vision submitted to non-expanded landmarks, we found
significant differences only between the 8th and 6th sectors [t(11)= 2.66; p < .022]. However,
for the participants submitted to the expanding procedure (150°), the time spent in the 8th
sector differed from that spent in sectors 6 and 10 [t(23)= 2.20; p < .038]. Additionally, we
found that the time spent in the 6th sector differed from that spent in the 7th [t(23)= 2.41; p <
.024] and that the time in the 10th sector differed from that in the 7th and 9th sectors
[tmin(23)= 2.62; p < .015]. This showed that participants in the group with sequential vision
predominantly searched for the platform in the 6th and 10th sectors rather than in the
relevant sector (8th). A final analysis in the sequential group, comparing two types of test
trials (90° vs. 150°), showed that only the time spent in the 6th sector differed [t(34)= 3.19; p
< .003]. Generally speaking, these data indicate that participants with sequential vision
submitted to non-expanded landmarks spent approximately the same time in the relevant
(8th) and adjacent (7th and 9th) sectors. However, the group of participants with sequential
vision submitted to the expanded landmarks (150°) spent more time in the sectors adjacent
to the landmarks (6th and 10th) than in the relevant sector (8th). This pattern of results
differs from that found in the group with simultaneous vision, suggesting the use of two
navigational strategies. Analogously to Experiment 1, we predicted that participants with
sequential vision would use an elemental strategy.
Fig. 6. Mean time spent in the 0th sector (starting point) for each group (simultaneous and
sequential vision) as a function of the type of test trial (expanded and non-expanded
landmarks).
Finally, an analysis of variance with groups (simultaneous vs. sequential vision) and
types of test (expanded vs. non-expanded landmarks) as factors was conducted. The time
in the 0th sector (starting point) was taken as the dependent variable. Results revealed a
statistically significant effect of groups [F(1,66)= 5.63; p < .021] and types of test
[F(1,66)= 4.106; p < .047] as well as an interaction between them [F(1,66)= 5.879; p < .018].
A post hoc analysis of the interaction indicated that, only for the group with simultaneous
vision, the distance between the landmarks affected the time spent at the starting point
(Figure 6). In such cases, participants submitted to expanded landmarks (150°) spent more
time (mean = 6.32 s) compared to those submitted to non-expanded landmarks (90°) (mean
= 2.50 s). We can consider two possible explanations for these results. On the one hand, the
change per se in the distance between landmarks promoted a decrease in the generalization
of the response, and on the other, when the landmarks are further apart, the time to define
the correct vector and direction increases.
6. General discussion
In this study we investigated the influences of simultaneous and sequential vision on
human spatial navigation by using two classic procedures adapted from animal research. A
virtual model of the Morris water maze (Morris, 1981) was used in the task, in which
participants had to locate a hidden underwater platform in the pool. Experiments were
designed in order to analyze in test trials the effect of the removal of one of the landmarks
(Experiment 1) and the influence of increasing the distance between them (Experiment 2).
The time spent by the participants in each sector of the pool was registered and taken as a
dependent variable in data analysis.
In Experiment 1 we verified that participants with simultaneous vision spent more time in the
relevant sector (8th) in comparison to other sectors of the pool. However, participants with
sequential vision spent approximately the same time in the relevant and adjacent sectors.
These results indicate that simultaneous vision compared to sequential vision improves
spatial navigation in a virtual environment.
Similarly, in Experiment 2 participants with simultaneous vision also spent more time in the
relevant sector (8th) compared to the other sectors in both conditions (expanded and non-
expanded landmarks). But when participants with sequential vision were submitted to the
condition of expanded landmarks, they spent more time in the 10th and 6th sectors in comparison to
the 7th, 8th and 9th sectors. For the case in which participants with sequential vision were
submitted to non-expanded landmarks (90°), they spent more time in the relevant (8th) and
adjacent (7th and 9th) sectors.
As a whole, these results indicate that participants use different strategies depending on the
visual condition. Participants submitted to simultaneous vision tended to navigate directly
to the relevant sector (8th), indicating that they used a "middle rule" strategy based on
configural learning. However, participants with sequential vision submitted to the
procedure of the removal of one landmark (Experiment 1) spent more time in the three
sectors between the two landmarks (7th, 8th and 9th), suggesting that they navigate from
one landmark to another based on elemental learning. The latter finding is reinforced by
the observation that when the distance between the landmarks was increased, participants
with sequential vision spent more time in the sectors near the landmarks (6th and 10th). This
pattern of results reveals that participants adopted two different strategies. The "middle
rule" strategy takes into account the egocentric direction to determine the correct bisector
between the starting point and the landmarks, while the "landmark to landmark" strategy
takes into account the exocentric direction.
In the context of visual control of the action, Goodale and Humphrey (2001) state that
perceptual tasks differ from motor tasks as a consequence of information processing in
two independent neural pathways: the ventral (related to perception, e.g. object
recognition) and the dorsal (related to directed action, e.g. blind walking). Dissociation
between perception and action can be found in many studies (Goodale & Milner, 1992;
Loomis et al., 1992; Smeets & Brenner, 1995; Philbeck et al., 1997; Fukusima et al., 1997;
Kelly, Loomis & Beal, 2004; Kudoh, 2005). However, there is also evidence against this
dissociation, with many papers trying to reconcile the pathways (Kugler & Turvey, 1987;
Goodale & Humphrey, 2001; Norman, 2002). In this study, the results might sustain the
hypothesis of a dissociation between perception and action, with the "middle rule"
being related to a perceptual strategy, while navigation from landmark to landmark
implies an action strategy.
We verified that the expanding landmarks test helped to clarify the strategies adopted by
the participants with sequential vision. According to Sutherland & Rudy (1989), if the
responses of the participants were based on an elemental learning, they should present a
good performance in the test with two landmarks, but a worse performance in the test with
one landmark. However, if the responses of the participants are based on configural
learning, they should present a good performance in both types of test (removal and
expanding procedure). If we apply those criteria to the data, we conclude that the group
with sequential vision adopted an elemental strategy, whereas the group submitted to
simultaneous vision adopted a configural one. However, we should note that these authors
made this assertion in the context of discriminatory learning and it has not been applied to a
spatial task until now.
These results differ from those obtained by MacDonald et al. (2004), who pointed out that
human adults use a "middle rule" strategy, but that children and other primates do not. Our
results suggest that only participants with simultaneous vision used a "middle rule"
strategy, while participants with sequential vision employed a strategy of advancing from
one landmark to another. In other words, navigation with simultaneous vision follows an
egocentric vector directed to the middle of the landmarks, while sequential vision follows an
exocentric vector that connects landmark to landmark.
Simultaneous visual access to the relevant information enables participants
to integrate the spatial relations more easily and promotes a configural encoding which, in turn,
allows the use of the "middle rule" strategy. Visual access to only one landmark at a time
makes the learning of global spatial relations difficult but promotes the elemental
relationship between landmarks, enabling participants to apply the "connecting landmarks"
rule. However, more research is necessary in order to conclude whether these results could
be attributed to general rules in spatial navigation or to the extensive training in the
sequential condition.
7. Conclusion
In this work we have demonstrated that the amplitude of the visual field can affect the type
of navigational strategy used by humans in a virtual environment. This study shows that
simultaneous access to spatial cues in the training period improves the finding of routes and
pathways which can be relevant when one of the landmarks is missing or when the distance
between the landmarks is increased. Therefore, our study also emphasizes the importance of
the amplitude of the visual field in the exploration of a virtual environment. Indeed, the area
covered by the visual field may facilitate or hinder the integration of available information
that is scattered in the environment. The results of this study can provide guidelines for the
development of virtual environments that are more realistic and compatible with natural
conditions of visual stimulation.
8. Acknowledgment
This research was supported by a grant from the Spanish Ministerio de Ciencia y Tecnologia
(Refs No. PSI2009-11062 /PSIC). We are very grateful to Antonio Alvarez-Artigas and
Victoria Diez-Chamizo for their very valuable help in these experiments.
9. References
Astur, R. S., Ortiz, M. L., & Sutherland, R. J. (1998). A characterization of performance by
men and women in a virtual Morris water task: a large and reliable sex difference.
Behavioural Brain Research, 93(1-2), 185-190.
Chamizo, V. D., Aznar-Casanova, J.A., & Artigas, A.A. (2003). Human overshadowing in a
virtual pool: Simple guidance is a good competitor against locale learning.
Learning and Motivation, 34, 262-281.
Artigas, A. A., Aznar-Casanova, J. A., & Chamizo, V. D. (2005). Effects of absolute proximity
and generalization gradient in humans when manipulating the spatial location of
landmarks. International Journal of Comparative Psychology, 18, 225-239.
Cheng, K. (1988). Some psychophysics of the pigeon's use of landmarks. Journal of
Comparative Physiology A, 162, 815-826.
Cheng, K. (1989). The vector sum model of pigeon landmark use. Journal of Experimental
Psychology: Animal Behavior Processes, 15, 366-375.
Collett, T. S., Cartwright, B. A., & Smith, B. A. (1986). Landmark Learning and Visuospatial
Memories in Gerbils. Journal of Comparative Physiology A - Sensory Neural and
Behavioral Physiology, 158 (6), 835-851.
Epstein, R. A. (2008). Parahippocampal and retrosplenial contributions to human spatial
navigation. Trends in Cognitive Science, 12(10), 388-396.
Fukusima, S.S., Loomis, J. M., & Da Silva, J. A. (1997). Visual perception of egocentric
distance as assessed by triangulation. Journal of Experimental Psychology: Human
Perception and performance, 23(1), 86-100.
Gazzaniga, M. S., Ivry, R., & Mangun, G. R. (2002). Cognitive Neuroscience: The Biology of
the Mind. W.W. Norton, 2nd Edition.
Goodale, M. A., & Humphrey, G. K. (2001). Separate visual systems for action and perception.
In E. B. Goldstein, Blackwell Handbook of Perception, (p. 311-343). Oxford:
Blackwell.
Goodale, M. A. & Milner, A. D. (1992). Separate visual pathways for perception and action.
Trends in Neurosciences, 15 (1), 20-25.
Kelly, J. W., Loomis, J. M., & Beall, A. C. (2004). Judgments of exocentric direction in large-
scale space. Perception, 33, 44-45.
Kudoh, N. (2005). Dissociation between visual perception of allocentric distance and
visually directed walking of its extent. Perception, 34, 1399-1416.
Kugler, P. N., & Turvey, M. T. (1987). Information, Natural law, and the self-assembly of
rhythmic movement. Hillsdale: Erlbaum.
Loomis, J. M. & Knapp, J. M. (2003). Visual perception of egocentric distance in real and virtual
environments. In L. J. Hettinger & M. W. Haas (Eds.), Virtual and adaptive
environments (pp. 21-46). Mahwah: LEA.
Loomis, J. M., Da Silva, J. A., Fujita, N., & Fukusima, S.S. (1992). Visual space perception and
visually directed action. Journal of Experimental Psychology: Human Perception
and Performance, 18(4), 906-921.
MacDonald, S. E., Spetch, M. L., Kelly, D. M., & Cheng, K. (2004). Strategies in landmark use
by children, adults, and marmoset monkeys. Learning and Motivation, 35 (4), 322-
347.
Morris, R. G. M. (1981). Spatial localization does not require the presence of local cues. Learning
and Motivation, 12, 239-260.
Newcombe, N. S. & Huttenlocher, J. (2000). Making space: The development of spatial
representation and reasoning. MIT Press.
Norman, J. (2002). Two visual systems and two theories of perception: An attempt to reconcile
the constructivist and ecological approaches. Behavioral and Brain Sciences, 25 (1),
73-144.
Philbeck, J. W., Loomis, J. M., & Beall, A. C. (1997). Visually perceived location is an
invariant in the control of action. Perception & Psychophysics, 59, 601-612.
Rozhok, A. (2008). Orientation and navigation in vertebrates. Berlin: Springer-Verlag.
Smeets, J. B. J., & Brenner, E. (1995). Perception and action are based on the same visual
information: distinction between position and velocity. Journal of Experimental
Psychology: Human Perception and Performance, 21, 19-23.
Stewart, C. A., & Morris, R. G. M. (1993). The watermaze. In A. Sahgal (Ed.), Behavioural
Neuroscience: A Practical Approach, Volume I (pp. 107-122). Oxford: IRL Press at
Oxford University Press.
Sutherland, R. J., & Rudy, J. W. (1989). Configural association theory: The role of the
hippocampal formation in learning, memory, and amnesia. Psychobiology, 17(2),
129-144.
Tinbergen, N. (1972). The animal in its world: explorations of an ethologist, 1932-1972. Harvard
University Press.
Turano, K. A., Yu, D., Hao, L., & Hicks, J. C. (2005). Optic flow and egocentric-direction
strategies in walking: Central vs peripheral visual field. Vision Research, 45, 3117-
3132.
Wu, B., Ooi T. L., & He Z. J. (2004). Perceiving distance accurately by a directional process of
integrating ground information. Nature, 428 (6978),73-77.
5
Virtual Worlds as an Extended Classroom
1. Introduction
There is a growing trend in education and training towards the use of online and distance
learning courses. This delivery format provides flexibility and accessibility; it is also viewed
as a way to provide education in a more effective way to a broader community. Online
courses are convenient: they are built on the premise of "anyone, anywhere, anytime".
Everyone can participate from home or the workplace.
Online courses can be developed in a variety of ways, for example, using an LMS (Learning
Management System), an LCMS (Learning Content Management System), or a Web 2.0 tool (or some
mixture). These options, however, show limitations in terms of communication and
interaction levels that can be achieved between students. Most learning systems are
asynchronous and don't allow effective real-time interaction, collaboration and
cooperation. Whilst they typically have synchronous chats and whiteboards, these
capabilities are often sterile and don't stimulate the appropriate interactions that enhance
learning. A rich interaction does not necessarily involve just verbal exchange, since there is
a huge learning value to be gained from interacting with the learning content in a more
visual and practical way. For instance, imagine the learning benefits of collaborating on
a 3D construction jointly and in real time, of watching the impact of soil erosion, or of
building and walking inside a heart model or a car engine. All this is possible in a 3D
immersive virtual world. Students can engage at a distance, building content in real time,
collaboratively and interactively. An array of virtual worlds can be found on the net;
however, we have chosen Second Life (SL) to show how teaching and learning
can be enhanced through the use of this platform. Second Life is immersive, enabling
users to interact, communicate and collaborate as if in the real world. SL is a model of
the real world: it features an accurate physics simulation and includes a meteorological
and gravitational system; as such, anything can be modelled and simulated. Each user in
the environment is represented by an avatar with all the features of a human being and
avatars can manipulate the environment. Scientific experiments can be held in a very safe
and controlled environment, and can be directly conducted by the scientist in charge.
Fields such as architecture, history, medicine, biology, sociology, programming and
language learning, among many others, can all be explored and researched through this
virtual world.
2. Virtual worlds
There is some debate about the definition of a virtual world. One thing that can be
said is that a virtual world is not itself a game (Austin & Boulder, 2007), although those
who want to can play games inside the platform. According to Schroeder (2008), virtual worlds are
"persistent virtual environments in which people experience others as being there with them
and where they can interact with them". The keyword is immersion. In a virtual world
people can have immersive experiences. A virtual world can also be described as multi-user,
collaborative or shared virtual environment. Those environments or systems allow users to
experience other participants as being present in the same space, they can interact with each
other; this creates the feeling of "being there together" (Schroeder, 2008). This definition is
focused on sensory experience. In a virtual world, interaction with the world takes place in
real time: when you do something in the world, you can expect feedback almost
immediately. The world is shared, and it is (at least to some degree) persistent (Bartle,
2004), so there is an interaction between users despite not being physically in the same space
(Wankel & Kingsley, 2009), stimulating immersive learning.
American National Standards (2007) defines a virtual world as "a simulated environment
that appears to have the characteristics of some other environment, and in which
participants perceive themselves as interactive parts".
For Bell (2008), a virtual world is "a synchronous, persistent network of people, represented
by avatars and facilitated by computers".
PCMag's encyclopedia defines a virtual world as "a 3D computer environment in which users
are represented on screen as themselves or as made-up characters and interact in real time
with other users. Massively multiuser online games (MMOGs) and worlds such as Second
Life are examples" (PCMag, 2011).
Here is a list of some of the best known virtual worlds: ActiveWorlds, Blue Mars, IMVU,
MOOVE, World of Warcraft, OpenSim and Second Life. Most of these platforms were
created for social and entertainment purposes. Most users use them to meet people, chat
and socialize with others remotely. But their potential is not limited to the social level.
Virtual worlds already have an impact on real-world society, particularly in business, art and
education. For instance, as PCMag notes, "there are countless Second Life cultures
and subcultures organized around arts, sports, games and other areas. Groups can be
formed that simulate mini-companies and mini-communities. Even real companies, such as
Coca-Cola and Adidas, participate in Second Life as a marketing venue. Numerous
universities, including Harvard, Princeton and Vassar, offer online classes. Religious
organizations hold meetings and, starting with the Maldives and Sweden, countries have
created virtual embassies. People find partners, have virtual sex and even get married in
Second Life. In other words, Second Life is the virtual real world" (PCMag, 2011).
Talking about "real" and "virtual" in this context has led to the need to clarify the meaning
of "virtual", recognizing the importance and the impact that preconceptions have on any
learning, teaching or training process. Usually "virtual" is easily related to something that
does not exist, that does not occupy space or consume time, as if it belonged to another
dimension that a person cannot reach because it is not tangible (Bettencourt, 2010). This is a
conception of "virtual" as an antagonism of "real" that can represent an obstacle to
understanding and foreseeing the educational potential of virtual worlds. One must be
Virtual Worlds as an Extended Classroom 91
aware that, in virtual worlds, when the avatar (the person) is, for instance, talking with
another avatar (person), those persons are really talking with each other, consuming time,
discussing ideas, sharing knowledge, thinking, socializing. They are using a virtual medium
to do something very real. The "virtual" has a real expression. The "virtual" must be seen
and understood more as an extension of real life than as something antagonistic to
real life. Like Bettencourt (2010), we agree that this conception is the first condition to ensure
all the educational potentialities of virtual worlds.
Those potentialities are also related to another concept that arises from the immersive
character of virtual worlds and from the fact that virtual worlds were not created with
educational purposes in mind: the natural context of the environment where learning and
teaching experiences occur (Bettencourt, 2010). This concept is not only quite different from
formal, nonformal and informal learning environments, but is also independent of
them. Formal, nonformal and informal learning can be characterized by who has the
control of the educational objectives or competences to achieve or to develop (Cross, 2007).
In a natural context, like virtual worlds, the learning process is individual and occurs
because the learner wants it to, no matter where he/she is; the person learns at his/her own
pace, not necessarily within the context of a school or any educational entity, in an
autonomous way. Skills learned or competences developed tend to be transferable, giving
the learning process a wider dimension.
Although the learning process is individual, it also depends on the socialization of the
person: the more the person socializes, the more he or she will learn. Within the concept of natural
context that we are explaining, apprenticeships are embedded in a community of sharing
(Bettencourt, 2010). This idea is inspired by the communities of practice of Lave and
Wenger (1991) who, already in those years, explained the importance of sharing between
newcomers and old-timers. They wrote: "We place more emphasis on connecting issues of
sociocultural transformation with changing relations between newcomers and old-timers in the
context of a changing shared practice" (p. 49). The changing shared practice is enormously
relevant nowadays. In the natural context that we have been explaining, the learning
processes are focused on communities of sharing, where avatars (persons) learn from one
another. These communities are nonhierarchical and any individual contribution will enrich
the whole community without any kind of value judgment.
Taking into account our experience, the literature review and the research that is being
developed, we believe the virtual world that offers the best features and the greatest potential
for educational use is Second Life. This platform is the best known, the most populated, and it
integrates several important usability features.
We define Second Life as a free-to-use 3D multi-user virtual world: immersive, imagined,
designed, built and created by its users (residents or avatars). It is considered a playground for
our imagination, a limitless platform to design, build, code, perform and collaborate, expanding
the boundaries of creativity. It is a real-life simulator (Loureiro & Bettencourt, 2010). In the
specific case of Second Life, the simulation, emulation and role-playing features have been
widely explored, with good results.
The immersive nature of SL allows students to walk through contents and information - they
can learn by living or experiencing. With a 3D representation of the self - the avatar -
learning can take place in the first person. Features like communication, cooperation,
collaboration, interaction and information sharing operate in real time. Students can learn by
doing, and they can more easily engage with content (Loureiro, Wood & Bettencourt, 2010).
SL is also a major social network and a wide community of practice (Wenger, 1998). SL is not
a game, but it offers the attractiveness of 3D gaming and therefore the sensation of learning
by playing (Loureiro, Wood & Bettencourt, 2010). As Lim (2009) suggested, there are six
learnings that can be applied within SL:
Learning by exploring: students can learn by visiting and exploring buildings, landscapes
and communities that are simulated and modelled.
Learning by collaborating: students can work in teams, collaboratively and in real time,
on common projects or problem-solving tasks; discussions can also be held in groups
and collaboratively.
Learning by being: students can immerse themselves in role-playing and performance
activities; they can also explore the self and experiment with different identities through
avatar customization and by creating different characters.
Learning by building: students can build any kind of object or environment without
restrictions and experience the results in real time.
Learning by championing: students can get involved in activities and causes related to,
and with an impact on, real life (such as cancer campaigns or earthquake victim support).
Learning by expressing: students can show and present their in-world activities to an
outside-world audience, by authoring blogs, machinimas, papers or posters, or by
participating in conferences and meetings.
By exploring those potentialities, virtual classrooms can emerge and learning can be
enhanced.
One particularly interesting feature of version 2.0 of the SL viewer is the possibility of adding
shared media to an object. This means anyone can add web-based media content to the
surface of any object and place it in-world or attach it to an avatar. For instance, it is possible
to be inside SL, running, adding and modifying contents on an external web site while the
audience in-world watches in real time. These tasks can be performed collaboratively.
Another interesting feature, especially for those who use Moodle as an LMS, is the possibility
of connecting and integrating it into SL through Sloodle (Simulation Linked Object
Oriented Dynamic Learning Environment). The use of an LMS in e-learning has limitations,
as students only have to deal with specific activities (Yasar & Adiguzel, 2010), but Sloodle
provides a variety of tools for supporting learning contexts in immersive virtual worlds,
offering experiences and opportunities for students to collaborate, cooperate and interact
with objects. By connecting Moodle and SL it is possible, for instance, to have the same
chat session running in real time on both platforms - students can choose which one to use,
or connect to both. Chat logs are also saved in the Moodle database. A tool called Web-
intercom is really useful to enhance the communication between learners who are
involved in activities within both SL and Moodle (Yasar & Adiguzel, 2010), and it is
important to students as an aide-mémoire, helping them reflect later on their experiences
in the virtual world (Livingstone, Kemp & Edgar, 2008).
Virtual Worlds as an Extended Classroom 93
Another feature of Sloodle is that SL and Moodle accounts can be linked. This provides
better management of students' progress, allowing teachers to track students by their avatar
names (Yasar & Adiguzel, 2010). It is possible to set quizzes - QuizChair - where students
attempt a standard Moodle multiple-choice quiz inside SL, with the answers being stored in
Moodle (Kemp, Livingstone & Bloomfield, 2009, cited in Yasar & Adiguzel, 2010). Students
can create 3D objects and deliver their assignments using the drop-box tool, and teachers can
review their work and provide feedback in Moodle (Livingstone, Kemp & Edgar, 2008).
Sloodle also has the Presenter tool, which can show slides and/or webpages, so students can
share their work in the virtual space or the teacher can present in class. An advantage of
using Presenter is that you don't need to spend money (Linden dollars - the SL currency) on
uploading images (Yasar & Adiguzel, 2010). For those who like to keep closer control over
participants, it is also possible to set (via a Sloodle toolbar) a function to collect a list of
avatars or Moodle users connected at a certain time/date.
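The Web-intercom idea described above - a single persistent chat log fed from both the in-world object and the Moodle chat room - can be sketched in a few lines. This is an illustrative Python sketch only, not the Sloodle API: the names `ChatLog` and `relay_chat` are hypothetical, and a real deployment works through Sloodle's in-world scripted objects talking to Moodle's server-side modules.

```python
# Illustrative sketch of the Web-intercom relay concept (hypothetical names,
# not the real Sloodle API): chat lines from either platform are appended to
# one shared log, so both audiences see the same session and a record survives
# for later reflection.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ChatLog:
    """Stands in for the Moodle-side chat table that Sloodle writes to."""
    entries: list = field(default_factory=list)

    def append(self, source: str, author: str, text: str) -> dict:
        entry = {
            "source": source,   # "secondlife" or "moodle"
            "author": author,   # avatar name or Moodle user name
            "text": text,
            "time": datetime.now(timezone.utc).isoformat(),
        }
        self.entries.append(entry)
        return entry

def relay_chat(log: ChatLog, source: str, author: str, text: str) -> dict:
    """Mirror one chat line into the shared log, whichever side it came from."""
    return log.append(source, author, text)

log = ChatLog()
relay_chat(log, "secondlife", "Avatar Resident", "Hello from in-world")
relay_chat(log, "moodle", "student01", "Hi, reading you in the browser")
print(len(log.entries))  # the shared log now holds both sides of the session
```

Because every line carries its origin and timestamp, the same store doubles as the saved chat log that students can revisit later, which is the aide-mémoire role noted above.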
As a simulation or modeling tool, SL has several uses. Many buildings and cities (some of
which no longer exist in the real world) have been built and can now be visited in SL. The
Sistine Chapel has been modelled in great detail; the aim of this recreation was to explore
the use of virtual reality in the context of learning about art and architecture, allowing
students to experience the context, scale and social aspects of the original monument
(Taylor, 2007). Other examples are the reconstruction of ancient Rome and of the city of
Lisbon before the 1755 earthquake1. The potential is not limited to art, history or
anthropology. An example in the field of physical science is the ease with which molecules
can be modelled and their interactions observed when exposed to physical variables such as
temperature. In the field of medicine, experiments in training students are being conducted
covering biology and doctor/patient role-play. SL is also a great tool for language learning,
with the possibility of direct contact with native speakers, allowing the development of
language skills at both the written and spoken level. And because there are faithful, reliable
constructions of real-world scenarios, direct contact with the culture of a country or
community is a reality easily reachable by every classroom with an Internet connection.
Generally speaking, the use of 3D immersive virtual worlds in education has an important
part to play, since such worlds support multiple learning styles and can be used in any
subject of the curriculum. They allow students to acquire and develop formal competences
and also to develop social and collaborative skills.
In Section 4 we will describe some studies being carried out in Second Life by the authors of
this chapter.
1 http://lisbon-pre-1755-earthquake.org
94 Applications of Virtual Reality
Today's students have grown up surrounded by digital society; to them, traditional teaching
is poorly stimulating, for they are used to using several types of media simultaneously.
Currently there are several gadgets that allow access to the Internet, including over wireless
networks, so more users are connected and able to interact with others. This reality of being
connected to the network, called "always on", leads to the creation of communication
strategies that suit both "digital natives" and "digital immigrants" (Prensky, 2001). It is a
way of communicating in which users spend more time browsing and posting than on
e-mail - a clear shift towards social websites and social networks. There is a clear change in
users' behaviours and, consequently, in the way students act. For today's students, learning
does not necessarily mean sitting in rows at school. Learning is a click away for the digitally
savvy. However, to actually retain knowledge, one needs to acquire digital skills and digital
literacy: to be able to do better research, select information, reflect, collaborate, produce,
share and achieve knowledge.
"Learning in a digital and connected age does not depend on individual knowledge
acquisition, storage, and retrieval; rather, it relies on the connected learning that occurs
through interaction with various sources of knowledge (including the Internet and learning
management systems) and participation in communities of common interest, social
networks and group tasks" (Siemens, 2004). Students need to acquire certain skills and
competences specific to a digital and connected society in order to effectively benefit from
e-government, e-learning and e-health services, and to participate actively in the knowledge
society as co-creators, and not simply consumers, as highlighted by the European e-skills
strategy (McCormack, 2010, p. 27). Besides e-skills and e-literacy competences, soft skills
are also in demand. Many of the mentioned skills and competences can be practiced and
enhanced in social, collaborative and virtual environments. Individuals have access to
communities of practice (Wenger, 1998), virtual worlds with role-play and simulations,
social networks and a wide range of Web 2.0 tools. This access to different online tools
demands a shift in students' profiles and competences.
4.1 The use of Second Life and online tools as an extended classroom
We set up a study in Second Life2 to gain experience of the use of virtual worlds and social
web tools (namely Facebook3 and Diigo4) in learning contexts, in order to bridge the gap
in student-teacher contact. We seek to understand how effective a virtual world, allied with
online tools, is as a proxy for face-to-face interaction.
4.1.1 Methodology
This specific case study emerged from the need to observe some of the variables (cf. Fig. 1)
already identified (Bettencourt, 2009), which are related to:
persons and their motivations - engaging and compelling factors - to register for and
attend virtual worlds
relationships that are established between avatars and persons - in-world relationships
that evolve into out-world relationships
2 http://secondlife.com/
3 http://www.facebook.com
4 http://www.diigo.com
The study components are connected with (i) construction and knowledge sharing, (ii)
interpersonal relationships and (iii) 3D immersive virtual worlds combined with Web 2.0
tools. We seek to understand whether there are best practices for orchestrating learning in
collaborative immersive virtual worlds and Web 2.0 tools, and whether they will enhance
blended learning through knowledge sharing and socialization. The premise is that
socialization is a key factor for collaborative learning. We perceive collaborative learning as
a situation where two or more persons learn, or try to learn, something together
(Dillenbourg, 1999).
The study goals are to:
identify the variables that might influence knowledge sharing
contribute to richer learning contexts through the use of online tools (Diigo, Facebook)
and virtual worlds (Second Life)
provide tutorial support to the night class through a virtual world
encourage collaboration out of hours by providing means for students and teacher to
interact
learn what advantages can be found in an online tutorial implemented in an immersive
virtual world
understand how, and which, students engage with an immersive 3D world and how
effective it is as a proxy for face-to-face interaction
understand how well online tools and virtual worlds promote knowledge sharing and
enhance socialization in order to contribute to classroom cohesion
provide some insights for better online teaching strategies
The participants in this study are Portuguese higher education students from a school of
education. Students belong to two different groups: undergraduate regular day classes and
mature night classes (ages > 23). They follow exactly the same syllabus in an identical
curriculum. This is a non-probabilistic sample for a qualitative study of an inductive and
exploratory nature. The researcher, who is also the teacher, has a dual role in the study as a
participant-observer. Data has been collected by direct observation and through electronic
records; additionally, questionnaires were designed to survey students for feedback.
The teacher meets each class in a common physical space (a traditional classroom) once a
week. The teacher also has some contact hours outside the classroom (support hours).
These support hours suit the regular students very well but do not meet the mature
students' needs, since they attend part time. A night-class student typically has a full-time
job and studies in the evenings and at weekends. The challenge for the teacher is to provide
a way for students to collaborate on coursework, in a tutorial context, making use of the
support hours in a creative way. The main goal of the study is to encourage collaboration
out of hours by providing means for students and teacher to interact. The project is
intended to evaluate the effectiveness of blended learning as a tool to achieve the teaching
goals. Here we see blended learning as learning that combines online and face-to-face
approaches (Heinze & Procter, 2004). Fig. 2 shows how learning was orchestrated in the
extended classroom in order to enhance knowledge sharing and socialization among
students.
In this way we may say that an online tutorial established in a virtual world might better
suit the mature students, and this might be a way to help them keep in touch with the
teacher and maintain class cohesion.
The results showed that, despite many technological handicaps, students performed better
with respect to best newsroom practices. The results also indicated that online students
learned just as much as in a face-to-face situation, and that students enjoyed the teaching
and learning experience more than the traditional way.
4.2.1 Methodology
In similar studies, the main variable that could influence the results relates to the sampling.
For this investigation we must add the technological skills of each individual, as well as
each student's Internet access off campus.
Having pointed this out as the first major study concern, we address the following research
questions:
RQ1: Is there a significant enhancement of journalism students' practical skills using a
virtual simulation of a real newsroom?
RQ2: To what extent do these virtual environments provide better basic journalism
proficiency, such as news gathering, writing and reporting skills?
RQ3: What are the students' expectations before and after using a new online teaching and
learning methodology?
The main focus of the procedure was to virtually create a newsroom where students and
teacher could gather and decide the agenda together.
The group met periodically to anticipate problems, discussed collaboratively how to solve
them, and together prepared news-gathering strategies in a collective manner.
Those meetings allowed the teacher to closely follow the students' work progress, from
getting in touch with sources, conducting interviews and gathering news to organizing
information and reporting.
In-world meetings were also aimed at supporting students, and in real classes the group
usually discussed problems they had faced and new strategies for successful stories.
As a motivating goal, all the students were told that they had to publish their stories in a
real e-zine.
The teacher acted as editor-in-chief, and in-world communication was mainly by voice.
In short:
Online classes were mostly concerned with reporting teams for the real e-zine
In-world meetings also focused on choosing the best approach for the articles
All the news gathering, source interviews and reports were prepared collaboratively
Course key words: skills, work, preparation, support, feedback
Before the online sessions started, all students familiarized themselves with the Second Life
platform since - except for one - none of them had any experience with this 3D virtual
platform.
Online teaching methods were developed over four in-world 60-minute sessions, each
themed to focus on specific aspects of the journalism curriculum.
Prior to each online session there was a 30-minute real-life class, mainly for technological
preparation and to discuss solutions to the issues students had been facing.
Newsroom meetings took 60 minutes, but students had to return in-world whenever they
needed to in order to get the interviews and the news gathering done, as they depended on
their sources' agendas.
After each of the four in-world 60-minute sessions there was a 90-minute real-life class.
These real-life sessions were used to support the news gathering and to improve writing
and reporting skills. All the writing work was done during these face-to-face sessions, and
after being reviewed by the editor/teacher the articles were published in a real-life e-zine.
For this investigation there was one class per week, lasting 180 minutes. The same
instructor taught all the sessions during October 2009.
4.2.1.1 Survey
A pre/post-test survey design was used to get students' feedback on expectations and
attitudes toward learning experiences in traditional face-to-face and online journalism
courses. Respondents included 53 undergraduates taking the journalism course via
face-to-face and online instruction.
The survey was used as a guide and was taken by the students before and after the online
teaching period.
The survey comprised four sections:
Demographics: questions querying students about gender and age.
Computer literacy: to assess technology skills and confidence in using those skills. At the
same time, students were invited to rate their Internet access off campus.
Attitudes: students had to rate their expectations and feelings towards the course.
Perceived knowledge: students were asked to rate the course content as well as the
teaching procedures.
only a face-to-face class was taken. Nevertheless, students showed much more interest in
the subject matter.
The results also showed that, by encouraging student group work, combined online and
face-to-face instruction can be a very useful way of improving the teaching and learning
experience.
Students' expectations prior to the online sessions: the survey results showed highly
motivated and mainly curious students, in spite of their poor Second Life literacy.
Students' behaviours after the online sessions: the survey results showed that students felt
inspired and found the virtual newsroom useful and very important for better journalism
practices.
Student strategies for the use of Second Life - such as thinking about creative solutions to
problems and planning the use of time, among others - pointed to a better integration of
curricular matters and seemed to be successful. The surveyed group of students also
seemed to be informed about the advantages and disadvantages of the specific skills that
these technologies can bring to their learning outcomes.
This had already been pointed out in previous studies by Munguba, Matos and Silva
(2003), which confirm that electronic media can help structure strategies that improve
awareness, attention and motivation.
The strategy performed in this virtual environment was developed to meet the
expectations of young people in the classroom and in their academic preparation.
Students thus found that discussing strategies under the new methodology and study
time - covering topics such as time optimization, healthy work habits, and educational
skills in identifying and solving problems, promoting analysis and synthesis - offered a
different way of studying, capable of promoting better reflection and the successful
execution of tasks.
These results point to the need for schools to reflect on their teaching practices and on
improving study strategies. Taking advantage of virtual environments such as Second Life
would provide students with metacognitive skills, helping them to self-regulate their
learning.
These are some of the challenges schools will face in the future as they move towards
implementing new technologies, reconciling tradition with participatory curriculum
contexts that allow knowledge to make sense.
actions are taken in the first person and can be simulated and emulated. The environment,
since it is built by its users, can be adapted to the needs of a specific teacher, subject or
group of students.
Real-time collaboration and cooperation, allied to the several connections that can be
established from in-world to Learning Management Systems like Moodle or to a specific
webpage, also open several possibilities for learning contexts. Everything can be built,
modeled, emulated and simulated - all education areas can be covered and any subject can
be delivered with the help of a 3D immersive virtual world like SL and online Web 2.0 tools.
Virtual worlds and Web 2.0 tools can bring a number of benefits for learners (Kreijns,
Kirschner & Jochems, 2003; McLoughlin & Lee, 2007) that can be summarized in five clusters:
Participatory learning - foster participation in the creation/editing of content;
Collaborative learning - collaborative knowledge construction (information shared by
individuals can be recombined to create new forms, concepts, ideas, mash-ups and
services);
Autonomous learning - share, communicate, and discover information in communities;
Communication and interaction capabilities - richer opportunities for networking;
Lifelong learning (joining the wisdom of the crowds) - develop digital competences and
support lifelong development.
Despite the benefits, some challenges should also be considered. For learners to connect and
benefit, they need to be motivated to interact. Social interaction will not automatically
occur just because technology allows it; therefore, learners' capabilities and motivation to
interact should not be taken for granted. The borders of the learning environment become
diffuse, so careful planning and management are mandatory. Virtual worlds and Web 2.0
tools have their own dynamics and are transient environments - moderation becomes a
requirement. Students are free to learn according to their own patterns; however, a teacher
or tutor should be a constant presence to guide and moderate.
Another aspect to consider is the difficulty of designing the new models of teaching and
learning (instructional design). Moreover, higher levels of anxiety are often associated with
computer-mediated communication, which may limit the degree of social interaction. In
order to build group relationships and dynamics, students need to trust each other, feel a
sense of belonging, and feel close to each other before they engage in collaboration and
sharing - a sense of community belonging.
Designing and implementing an extended classroom through the use of online tools and
virtual worlds requires preparation, time and means. We cannot take participation in
computer-supported collaborative learning (CSCL) environments for granted; there is the
need to ignite and maintain it. Students have to be prompted and reminded about their
roles; they should be able to embrace autonomy, but the teacher needs to provide the right
incentives. Interactivity has to be improved (a two-way connection between distributed
students) by organising social interaction, collaboration and shared activities - otherwise it
is unlikely to occur or be meaningful. In an extended classroom, the teacher also has to
foster a sense of community and encourage the development of a social presence. It is very
important not to replicate traditional classrooms in online environments; it is pointless if the
only thing that changes is the place/space (there is no point having students sit in rows
listening to lecturers in a virtual world, for instance). Employ designs that focus on
collaborative, networked
6. References
ATIS (2007). ATIS Telecom Glossary 2007 [Online]. Available:
http://www.atis.org/glossary/definition.aspx?id=248 [Accessed 18 April 2011]
Au, W. J. (2008). The Making of Second Life: Notes from the New World. New York: HarperCollins.
Austin, T. & Boulder, C. (2007). The Horizon Report, 2007. [Online]. Available:
http://www.nmc.org/pdf/2007_Horizon_Report.pdf [Accessed 10 July 2010]
Bartle, R. (2004). Designing Virtual Worlds. New Riders Publishing.
Bell, M. (2008). Definition and Taxonomy of Virtual Worlds. Journal of Virtual Worlds
Research [Online]. 1(1). Available: http://jvwresearch.org/page/volume_1_issue_1
[Accessed 18 April 2011]
Bennett, K., & McGee, P. (2005). Transformative power of the learning objects debate. Open
Learning, 20(1).
Bennett, S. (2005). Using related cases to support authentic project activities. In A. Herrington & J.
Bennett, S., Agostinho, S., & Lockyer, L. (2005). Reusable learning designs in university
education. In T. C. Montgomerie & J. R. Parker (Eds.), Proceedings of the IASTED
International Conference on Education and Technology. Anaheim, CA: ACTA
Press.
Bettencourt, T. & Abade, A. (2008). Mundos Virtuais de Aprendizagem e de Ensino - uma
caracterização inicial. IE Communications, Revista Iberoamericana de Informática
Educativa [Online] Nº 7/8. Available:
http://161.67.140.29/iecom/index.php/IECom/issue/view/41/showToc
[Accessed 15 September 2010]
Bettencourt, T. (2009). Teaching & Learning in SL: Figuring Out Some Variables. [Online].
Available: http://cleobekkers.wordpress.com/2009/01/28/teaching-learning-in-
sl-figuring-out-some-variables/ [Accessed 15 September 2010]
Bettencourt, T. (2010). Second Life - Uma nova forma de expressão de arte. Keynote
conference at the VI Seminário Imagens da Cultura / Cultura das Imagens.
Universidade Aberta e Universidade de Múrcia (Org). Porto: Universidade
Portucalense, 2 July, Portugal.
Chee, Y. (2001). Virtual reality in education: Rooting learning in experience. In Proceedings of
the International Symposium on Virtual Education 2001, Busan, South Korea, pp. 43-54.
Symposium Organizing Committee, Dongseo University [Online]. Available:
http://yamsanchee.home.nie.edu.sg/Publications/2001/ISVE2001Invited.pdf
[Accessed 18 April 2011]
Connolly, T., MacArthur, E., Stansfield, M., & McLellan, E. (2005). A quasi experimental study
of three online learning courses in computing. Computers & Education. Vol. 49 (2).
Cross, J. (2007). Informal Learning: Rediscovering the Natural Pathways That Inspire Innovation
and Performance. San Francisco: Pfeiffer (John Wiley & Sons).
Secondlife.com (2011). Current user metrics for Second Life. Retrieved June 2011 from
http://community.secondlife.com/t5/Featured-News/Q1-2011-Linden-Dollar-
Economy-Metrics-Up-Users-and-Usage/ba-p/856693
Dillenbourg, P. (1999). Collaborative-learning: Cognitive and Computational Approaches, Oxford,
Elsevier.
Downes, S. (2006). Learning Networks and Connective Knowledge [Online]. Available:
http://it.coe.uga.edu/itforum/paper92/paper92.html [Accessed 20 March 2010]
Dudeney, G., & Hockly, V. (2007). How to teach English with Information Technology. Pearson
Education Limited.
Dudeney, G., & Ramsay, H. (2009). Overcoming the entry barriers to Second Life in higher
education. In C. Wankel & J. Kingsley (Eds.), Higher education in virtual worlds:
Teaching and learning in Second Life. Bingley: Emerald Group Publishing Limited.
Hayes, G. (2009a). ROI 101 & Stickiness of Second Life?. Retrieved May 2011 from
http://www.personalizemedia.com/roi-101-stickiness-of-second-life/
Hayes, G. (2009b). Some of my 2008 reflections on Virtual Worlds, reflected elsewhere. Retrieved
May 2011 from http://www.personalizemedia.com/some-of-my-2008-reflections-
on-virtual-worlds-reflected-elsewhere/
Heinze, A. & Procter, C. (2004). Reflections on the use of Blended Learning. [Online]. Available:
http://www.ece.salford.ac.uk/proceedings/papers/ah_04.rtf [Accessed 10 July
2010]
Herrington (Eds.) (2006). Authentic learning environments in higher education. Hershey, PA:
Idea Group Inc.
Kreijns, K., Kirschner, P. & Jochems, W. (2003). Identifying the pitfalls for social interaction in
computer-supported collaborative learning environments: A review of the research.
Computers in Human Behavior, 19(3), 335-353.
Lave, J. & Wenger, E. (1991). Situated Learning: Legitimate Peripheral Participation. Cambridge:
Cambridge University Press.
Lim, K. (2009). The six learnings of Second Life. Journal of Virtual Worlds Research [Online].
2(1). Available: https://journals.tdl.org/jvwr/article/view/424/466 [Accessed 18
April 2011]
Livingstone, D., Kemp, J. & Edgar, E. (2008). From Multi-User Virtual Environment to 3D
Virtual Learning Environment. Research in Learning Technology, 16(3), 139-150. [Online]
Available: http://dx.doi.org/10.1080/09687760802526707 [Accessed 5 April 2011]
http://www.academiccommons.org/commons/showcase/sistine-chapel-in-
second-life [Accessed 20 March 2010]
Thomas, R. (2001). Interactivity and Simulations in e-Learning. Retrieved on May 2011 from
http://www.jelsim.org/resources/whitepaper.pdf
Twigg, C. (2003). Improving quality and reducing costs: Designs for effective learning. Change,
35, 22-29.
Valente, C. & Mattar, J. (2007). Second Life e Web 2.0 na Educação: o potencial revolucionário das
novas tecnologias. São Paulo: Novatec.
Wankel, C. & Kingsley, J. (2009). Higher Education in Virtual Worlds: Teaching and Learning in
Second Life. Emerald Group Publishing Limited.
Wenger, E. (1998). Communities of Practice: Learning, Meaning, and Identity. Cambridge:
Cambridge University Press.
Yasar, O. & Adiguzel, T. (2010). A working successor of learning management systems:
SLOODLE. Procedia - Social and Behavioral Sciences, 2(2), 5682-5685 (Innovation and
Creativity in Education). [Online]. Available:
http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B9853-5016P5K-166&_user=10&_coverDate=12%2F31%2F2010&_rdoc=1&_fmt=high&_orig=gateway&_origin=gateway&_sort=d&_docanchor=&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=8e436213aae9fb052dba019f114bf503&searchtype=a
6
1. Introduction
3D Multi-User Virtual Environments (3D MUVE), commonly known as 3D virtual worlds,
have been expanding their application domains since their introduction. These simulated
environments with 3D support can be seen as useful applications that facilitate our various
needs. Using 3D virtual worlds to support learning has shown promising results,
introducing a new type of learning platform known as 3D Multi-User Learning
Environments (3D MULE). In effect, 3D learning environments based on multi-user virtual
worlds are a growing feature of UK education (Kirriemuir, 2010). Leading universities are
interested in, and have been researching, applications of this novel technology for their
learning needs. 3D MUVE support more intuitive activities for learning complex and
advanced concepts. 3D virtual worlds are particularly appropriate for educational use due
to their alignment with Kolb's concept of experiential learning (Kolb et al., 2001), and with
learning through experimentation as a particular form of exploration (Allison et al., 2008).
Interactive 3D virtual environments demonstrate great educational potential due to their
ability to engage learners in the exploration, construction and manipulation of virtual
objects, structures and metaphorical representations of ideas (Dalgarno et al., 2009).
In this exploratory study, we investigate the level of student engagement with a 3D
MUVE-supported learning environment with respect to system management and
administration. 3D virtual worlds are often considered game environments, with
entertainment and flexibility as intrinsic factors. A known challenge is combining
pedagogical and formal educational practices with 3D MUVE-supported environments
without affecting their rich and dynamic nature. At the same time, students should not be
allowed to become so overwhelmed by the environment's features that they miss the
intended learning outcomes of the learning task (Perera et al., 2011b). To achieve these
crucial goals and facilitate student learning with 3D MUVE, we envisaged a unique
approach through management policy considerations.
Our 3D MULE research activities are based on the two most prominent 3D MUVE at present, Second Life (SL) (Linden Labs, 2003) and Open Simulator (2007). Second Life and Open Simulator provide a similar simulated environment for users, although they have two different server-side implementations. The high degree of compatibility between the two systems has been a major research benefit for our work. Current trends, however, indicate a significant shift of preference towards OpenSim for learning activities (Allison, et al., 2011).
This chapter is arranged into the following sections: Section 2 describes background details along with our experiences of using 3D MUVE for learning; Section 3 explains the research problem, including the hypotheses for this study. Section 4 describes the research methodology and the experiment design, while Section 5 analyses the research findings in detail, validating the hypotheses of the study. Section 6 discusses the contributions of the study, its limitations and the expected future work. Finally, Section 7 concludes the chapter.
The Laconia Acropolis Virtual Archaeology (LAVA) project (Getchell et al., 2010) allows students to engage in a simulated archaeological excavation, and then explore a recreation of the site. Similar to the Wireless Island, another research project, Network Island in OpenSim, developed a learning island to facilitate the teaching of network routing (McCaffery et al., 2011). Network Island interactively simulates the behaviour of several routing protocols; students can create their own topologies and examine the routing behaviour. As part of the evaluation of 3D MUVE for serious learning needs, a network traffic analysis of 3D MUVE was performed (Oliver et al., 2010) to identify challenges. Second Life and OpenSim were used for teaching Human Computer Interaction (HCI) to undergraduate students through creative student projects (Perera et al., 2009). Research on integrating 3D MUVE with e-Learning infrastructure, with the objective of formulating a blended learning environment with 3D support, has been conducted (Perera et al., 2010; Perera et al., 2011a) and its results facilitated this research.
Fig. 1. Selected projects based on 3D MULE: left, the virtual humanitarian disaster training environment; right, the Network Island to facilitate teaching and learning of network routing algorithms
Networks) and the other from the postgraduate (taught MSc program) curriculum (CS5021 Advanced Networks and Distributed Systems). Information about the experiment samples is given in Table 1.
Table 1. Details about the experiment samples and associated course modules
It is important to mention that these two course modules have different learning objectives and tasks, representing different levels in the Scottish Credit and Qualifications Framework (SCQF) (SCQF, 2007). Therefore, the subject content at each level had dissimilar objectives, and the assessment tasks were slightly different, although students used the same learning tool. For this study, however, we focus on student engagement with the environment and the impact on it of two aspects: the students' view of self-regulation, and the system management. In that regard we can conclude that, although students had different levels of the same learning task on the same learning aid, both samples had the same characteristics with respect to our measure. The experiment was therefore not affected by the learning tasks given in the modules, and the two groups can be treated as a single sample of 59 students for the data analysis.
The learning environment, Wireless Island (Sturgeon et al., 2009), is a dedicated region for
facilitating learning and teaching wireless communication. It was developed as a research
project which provides interactive simulations with various configuration settings for
students to explore and learn. It also includes supplementary learning content such as
lecture notes, a lecture media stream, and a museum depicting the history of wireless communication. Figure 2 shows the island layout (left-side image) with different interactive content and places for student learning. The right-side image of Figure 2 shows the interactive simulation for teaching the Exposed Node Problem in wireless communication.
To facilitate small-group learning with a less competitive environment interaction, we decided to have 6 students per region. Therefore, five regions were created in the OpenSim environment and the Wireless Island was loaded on each. This resulted in an identical learning set-up for each student, and students were given their assigned region as their home place from which to start the learning task. Region information is given in Table 2, and the corresponding 3D MUVE map and an island map are shown in Figure 3. The root island (Learn) remained an empty island, accessible to everyone as a sandbox, so that students could try their desired content creation and other activities without affecting the learning environment.
Fig. 2. Wireless Island overview with layout (left side); right side, an interactive learning aid simulating the Exposed Node Problem in wireless communication
In Figure 3, each square block represents an island (a 256 m x 256 m virtual area), and the islands are distributed as shown in the map to minimise adjacency problems and to simulate an isolated-island look and feel. Tiny green dots indicate the student avatar positions when the image was taken.
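For context, such a multi-region deployment is commonly declared in OpenSim's Regions.ini file. The fragment below is an illustrative sketch only: the UUIDs, map coordinates and ports are hypothetical, and only the region names Learn and aelous3 appear in this study.

```ini
; Regions.ini sketch - illustrative only. Spacing the map coordinates
; widely apart reproduces the isolated-island layout described above.
[Learn]
RegionUUID = 5b4a3e27-0000-0000-0000-000000000001
Location = 1000,1000
InternalPort = 9000
ExternalHostName = SYSTEMIP

[aelous3]
RegionUUID = 5b4a3e27-0000-0000-0000-000000000004
Location = 1010,990
InternalPort = 9003
ExternalHostName = SYSTEMIP
```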
The second part of the data gathering was based on a questionnaire with 15 questions divided into two sections: Avatar Behaviour (7 questions) and 3D MULE Management (8 questions). Additionally, five open-ended questions were included to help students express their opinions freely. The 8 questions in the 3D MULE Management section had some relevance to the two factors we were investigating, self-regulation and environment management; however, the questions did not directly represent the variables. We therefore treated them as 8 related questions and decided to confirm the factors through statistical analysis.
3D Multi User Learning Environment Management
An Exploratory Study on Student Engagement with the Learning Environment
Fig. 3. Map view of the experiment setup (a small square represents an island); right-side image, the enlarged map view of the island aelous3
Fig. 5. Observed student engagements through content creation and content alteration
Fig. 6. A terraformed and altered island (right side) compared to its original layout
Figure 6 shows a higher degree of student interaction that caused the learning environment to be significantly altered compared to its original layout and appearance. However, this observation was a one-off incident, as the majority of students refrained from changing land settings, although they often tried content creation. Moreover, compared to the Masters students, the junior honours students showed high interactivity and environment alteration, resulting in a range of user-created objects, altered content and changed land terrain. The undergraduates showed more enthusiasm for exploring potential game-like features and involved their real-world friends in collaborative activities and plays, although such acts were not necessarily related to the Intended Learning Outcomes (ILOs) or Teaching and Learning Activities (TLAs). In contrast, postgraduate students showed a higher tendency to complete the given tasks, which may have, in certain instances, resulted in less motivation to entertain themselves by exploring the 3D MUVE features.
It is important to note that, in all of these cases, students were allowed to follow their preferred behaviour in interacting with the environment, as a means of learning through exploration (Kolb et al., 2001), without any restriction. An assurance was given that their behaviour and environment interactions would not affect their assessments; only the completion of the learning tasks would.
Exploratory Factor Analysis tends to provide reliable factor identification when the sample size is large. There are many criteria for deciding the appropriate sample size; however, many scholars agree that a suitable ratio between the Minimum Sample Size (N) and the Number of Variables (p), i.e. N:p, is a more reliable criterion. Costello and Osborne (2005) concluded, through empirical tests, that a high N:p provides higher accuracy than lower ratios. In our analysis the N:p for variable exploration is 16:1, suggesting high reliability in the solution model and validating the Factor Analysis outcome with our sample. Previous research (Moos and Azevedo, 2007), citing Nunnally (1978), has suggested that 10:1 is adequate, and performed Factor Analysis (N=49, p=3), supporting our sample size and ratio.
Moreover, the Bartlett Test of Sphericity, a strict test of sampling and suitable correlations, was performed to test the appropriateness of the data for Factor Analysis, using the PASW [version 18.0] statistical software. The Bartlett Test of Sphericity gave χ² = 155.257, p < .001, suggesting that the correlation matrix (R-Matrix) of items shows highly significant relationships and that the items can be clustered based on those relationships. Furthermore, the null hypothesis that the R-Matrix is an identity matrix can be rejected with significance, indicating high suitability for Factor Analysis.
Finally, the Kaiser-Meyer-Olkin (KMO) test was performed to examine the adequacy of using the data sample for Factor Analysis. The KMO value obtained was 0.714 (above the 0.7 threshold for goodness) (Field, 2006), validating the data sample for Factor Analysis. With all the preconditions for Factor Analysis met, we performed the Factor Analysis using PASW.
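These two pre-tests can also be reproduced outside PASW. The sketch below is a minimal NumPy implementation of Bartlett's sphericity statistic and the KMO measure, assuming a complete-case response matrix with one column per questionnaire item; the data used in any example would be synthetic, not the study's responses.

```python
import numpy as np

def bartlett_sphericity(data):
    """Bartlett's test: is the item correlation matrix an identity matrix?
    Returns the chi-square statistic and its degrees of freedom; the
    p-value comes from a chi-square distribution with df degrees of freedom
    (e.g. scipy.stats.chi2.sf, if SciPy is available)."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return chi2, df

def kmo(data):
    """Kaiser-Meyer-Olkin measure of sampling adequacy (0..1)."""
    R = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                 # partial correlations
    np.fill_diagonal(partial, 0)       # keep off-diagonal terms only
    np.fill_diagonal(R, 0)
    r2, a2 = (R ** 2).sum(), (partial ** 2).sum()
    return r2 / (r2 + a2)
```

With strongly inter-correlated items, the chi-square statistic grows large (rejecting the identity-matrix null) and KMO approaches 1, mirroring the χ² = 155.257 and KMO = 0.714 reported above.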
As the rotated factor loadings indicated, questions Q9, Q15, Q10 and Q8 were considered as one variable and examined for suitable naming. When the objectives of the questions were analysed, we identified a common parameter governing all four questions: student behaviour with self-regulation. We therefore defined the new variable as Self-Regulation. The second factor represents questions Q11, Q14, Q12 and Q13, for which we observed a strong contextual meaning of system administration and control as the common parameter. These questions strongly concern the student preference for, and the impact of, system control, administration and management of the 3D MUVE. Therefore, we defined this variable as Environment Management to cover all the aspects associated with this factor.
Fig. 7. Rotated Component Matrix and the Component Plot in Rotated Space
The two identified variables explain nearly 72% of the behaviour of the aspect 3D MULE Management. Moreover, no other strong and significant variable was revealed through the factor analysis; therefore, we can conclude that the two extracted variables, Self-Regulation and Environment Management, represent the aspect of 3D MULE Management. Hence, we conclude that the research hypotheses H1 and H2 of our study are substantiated based on the exploratory factor analysis.
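A rough sense of how two factors can account for about 72% of the variance comes from the eigenvalues of the item correlation matrix: the proportion of total variance captured by the leading components is the sum of the largest eigenvalues over their total. The sketch below illustrates this on synthetic data, not the study's responses.

```python
import numpy as np

def variance_explained(data, n_factors=2):
    """Proportion of total item variance captured by the n_factors
    leading components of the correlation matrix (the quantity a
    factor-analysis 'total variance explained' table reports)."""
    R = np.corrcoef(data, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]  # descending
    return eigvals[:n_factors].sum() / eigvals.sum()
```

For responses generated from two underlying latent factors, two components capture the bulk of the variance, analogous to the ~72% figure above.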
Questions Q2, Q3, Q4 and Q6 were based on the remaining avatar actions associated with the possible major areas of avatar engagement. Content creation, land and content alteration, and avatar communication activities showed reasonably similar average responses agreeing with the questions asked, and most of the students confirmed that response (Mode = 4). Question Q6 covered the environment exploration aspect, within the assigned island and through teleportation to other islands. Students indicated a high positive average, with most of them agreeing with the statement.
Q7 played the concluding role for the student behaviour question set. It let the students think about their activities within the 3D MULE and then evaluate their learning engagement as a result of the activities performed. This was a crucial question, as it directly inspects the considered aspect of student engagement as a measure of student activity. Student responses showed a high average, indicating that they agreed with the question (Mode = 4). This strengthened our research hypotheses as a direct measure, while showing a possible relationship between environment interactions and learning engagement, which we recommend for further study.
Self-regulation
The One-Sample Kolmogorov-Smirnov Test for normality indicated that the question means are normally distributed [X ~ N(3.876, 0.378)], with significance (p = 0.516) retaining the hypothesis of normal distribution. This implies that, in general, students showed a positive response, indicating that they regard self-regulatory practice as an important consideration.
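The one-sample test PASW reports is based on the classical Kolmogorov-Smirnov D statistic: the largest gap between the empirical distribution of the data and the reference normal CDF. A minimal stdlib-only sketch, assuming the reference mean and standard deviation are supplied (small D, hence large p, retains the normality hypothesis):

```python
import math

def ks_statistic(xs, mu, sigma):
    """One-sample Kolmogorov-Smirnov D against N(mu, sigma^2)."""
    xs = sorted(xs)
    n = len(xs)
    # Normal CDF via the error function (no external libraries needed)
    cdf = lambda x: 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # Compare F(x) against the empirical CDF just below and above x
        d = max(d, abs(f - i / n), abs((i + 1) / n - f))
    return d
```

The significance level then follows from the Kolmogorov distribution of √n·D; statistical packages such as PASW/SPSS compute this step automatically.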
No    Question                                                                 Mean   Mode   Std. Dev.   Std. Error
Q8    I think my behaviour affected others' learning                           3.31   3      0.592       .104
Q9    The open space and other avatars made me interact as in a
      real-world learning session                                              4.05   4      0.354       .064
Q10   Use of real identities increases the proper behaviour of students        4.02   4      0.309       .056
Q15   Students should responsibly use the learning environment                 4.10   4      0.296       .051
Table 5. Questions on student self-regulation and descriptive statistics
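The descriptive statistics reported in Table 5 (mean, mode, sample standard deviation and standard error of the mean) can be computed directly from the raw Likert responses; a minimal sketch with hypothetical response data:

```python
from collections import Counter
import math

def describe_likert(responses):
    """Mean, mode, sample standard deviation and standard error of the
    mean for a list of Likert-scale responses (e.g. 1..5)."""
    n = len(responses)
    mean = sum(responses) / n
    mode = Counter(responses).most_common(1)[0][0]
    sd = math.sqrt(sum((x - mean) ** 2 for x in responses) / (n - 1))
    se = sd / math.sqrt(n)   # standard error of the mean
    return mean, mode, sd, se
```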
Question Q8 tends to be a self-assessing question, as students had to think about their behaviour reflectively and critically. This was important for meeting the objectives of the question set, as students then answer the rest of the questions with a reflective mind, associating every detail that they experienced or felt during the engagement. In that sense, beyond its literal meaning, Q8 indirectly helped students to give accurate responses to the rest of the questions. However, students may have been unsure to what degree to consider their behaviour and its impact on others' learning; we therefore observe an average of 3.31 (closer to the response 'Neither Agree nor Disagree'), with the majority confirming that (Mode = 3).
Questions Q9, Q10 and Q15 recorded nearly the same averages (~4 = Agree), while the majority confirmed that preference, as the Mode is 4 for each question. Q9 was designed around the privacy concerns of being in an open environment visible to others, with a high probability of simultaneous engagement with the same learning activity or content. The association with the real-world classroom metaphor reinforces the students' comparative observations, resulting in a broader opinion with higher accuracy. Q10 examines the student view on having their real identity (first name and last name) as their avatar username. Avatar anonymity and its impact on student learning has been researched previously in various contexts (Messinger et al., 2008); the majority of students agreed (Mode = 4 and Mean = 4.02) that using their real identities has a positive impact on following appropriate behaviour within the environment. Q15 plays the concluding role for the students' self-regulatory practice, with the sense of being a responsible participant in the learning session. Importantly, we did not relate any indirect variable association as the effect of being a responsible student with proper practice; we let the students self-evaluate the consequences of their practices in their responses. The responses indicate that the majority of students agreed that students must use the environment responsibly, indicating a positive association of self-regulated interaction as an acceptable practice.
Environment management
The One-Sample Kolmogorov-Smirnov Test for normality indicated that the question means are normally distributed [X ~ N(3.95, 0.175)], with significance (p = 0.796) retaining the hypothesis of normal distribution. This implies that, in general, students showed a positive response, indicating a high degree of engagement with the environment.
No    Question                                                                 Mean   Mode   Std. Dev.   Std. Error
Q11   Land and content management controls are important for the
      environment management                                                   3.78   4      0.420       .074
Q12   System control and management practices are important for a
      reliable learning                                                        3.91   4      0.296       .052
Q13   System management settings should not reduce the 3D MUVE usability      4.19   4      0.592       .105
Q14   Appropriate system security and controls ensure a successful
      learning experience                                                      3.89   4      0.390       .070
Table 6. Questions on system environment management and descriptive statistics
These four questions cover the system administration and environment control aspect as perceived by students. Question Q11 asks about the significance of two major 3D MULE system administration function groups as an environment management mechanism. Land and content management functions are the most significant activities an avatar can use to affect the existing learning environment; it is therefore important to examine the student preference on constraining the use of these features, if needed, for environment management. As the results indicate, students agreed with this statement. Question Q12 further examines the student view on having a reliable learning experience through system controls and environment management; this statement concerns whether the learning environment is trustworthy. Student responses indicate that they welcome controlling measures on the 3D MULE to improve trust, knowing the importance of the reliability of a learning activity for formal education. Q13 addresses the associated concern about 3D MULE usability if the management controls are restrictive and unsupportive of attractive learning engagement. Interestingly, the average feedback was 4.19, the highest value for all the questions, showing the students' concern about losing the usability and attractiveness of the learning environment. This delivers a very important message to 3D MULE designers and module administrators: their environment management policies should always prioritise the usability of the environment. Finally, question Q14 brings a conclusive statement for the environment management concerns. It summarises the students' overall opinion on the use of system environment controls and management practices to ensure successful learning. Students on average agreed with the statement, showing their positive attitude towards supportive environment management for their learning facilitation.
Fig. 8. Correlation between the two identified variables self-regulation and environment
management
Finally, a Spearman correlation analysis between the derived variables, Self-Regulation and Environment Management, was performed (shown in Figure 8) to examine their relationship. The Spearman correlation coefficient (rho) was 0.398, indicating a weak positive relationship with significance (p < 0.05). The weakness of the relationship is further indicated by the fact that only about 15% of the variance of one variable can be explained by the other. This is important for proceeding with our analysis, as we have to be certain about the model that we are testing; i.e., the two variables measure sufficiently different parameters, and the inter-relationship between the two is too small to affect the hypothesis test. Therefore, we conclude from all these analyses that the two variables can be treated as distinct predictors.
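Spearman's rho is simply the Pearson correlation of the ranks, and squaring it gives the shared variance cited above (0.398² ≈ 0.158, i.e. about 15%). A minimal sketch, which for brevity ignores tied ranks (real Likert-derived scores would need averaged ranks for ties):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Note: assigns distinct ranks, so ties are not averaged here."""
    def ranks(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)
        return r
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.corrcoef(ranks(x), ranks(y))[0, 1]
```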
Fig. 9. Regression analysis output with model test using ANOVA (PASW)
The PASW (18.0) linear regression analysis model summary and the model fit (ANOVA) are shown in Fig. 9. R² = 0.759 indicates that about 75.9% of the variation in student engagement is determined by the combined effect of environment management and student self-regulation on the learning activities in the 3D MULE. As the model fit test results in the ANOVA show, the regression model significantly explains Student Engagement.
As the relationships of the model's predictor parameters show (Table 6), the model path coefficients are .240 for Self-Regulation, which is significant (p < 0.05), and .657 for Environment Management, with significance (p < .001). The resulting research model outcome is shown in Figure 10.
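Path coefficients of this kind are standardised regression weights: on z-scored variables they can be recovered with ordinary least squares and no intercept term. The sketch below illustrates the computation on synthetic data, not the study's responses:

```python
import numpy as np

def standardised_ols(y, X):
    """OLS on z-scored variables: returns the standardised (path)
    coefficients and the model R^2. No intercept is needed because
    z-scored variables have zero mean."""
    z = lambda a: (a - a.mean(axis=0)) / a.std(axis=0)
    Xz, yz = z(np.asarray(X, float)), z(np.asarray(y, float))
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    r2 = 1 - ((yz - Xz @ beta) ** 2).sum() / (yz ** 2).sum()
    return beta, r2
```

In the study's terms, the two columns of X would hold the Self-Regulation and Environment Management scores and y the Student Engagement measure, yielding the .240 and .657 weights and R² = 0.759 reported above.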
Therefore, the research model substantiates our research hypotheses H3 and H4. With reference to H3, we can say that student Self-Regulation in learning activities in the 3D MULE has a significant positive effect on Student Engagement with the learning environment and learning tasks. We therefore conclude that, for constructive and successful 3D MULE student engagement, we have to consider and promote student self-regulation as a main policy consideration area for 3D MULE management. Thus, we validate our research hypothesis that student self-regulation is a prime factor for successful management of 3D MULE and associated learning activities.
Regarding hypothesis H4, we can say that the Environment Management practices described previously have a significant positive effect on Student Engagement with the 3D MULE and learning tasks. We therefore conclude that, for constructive and successful 3D MULE student engagement, we have to identify and implement Environment Management policies as a main policy consideration area for 3D MULE management. Thus, we validate our research hypothesis that environment management of the 3D MULE is a prime factor for successful management of 3D MULE and associated learning activities.
students worried about certain challenges they faced with the 3D MUVE at the early stages of the learning activity. We also realised the importance of providing necessary user guidance, and consider it for future work.
[There should be availability of private space for some activities with data which are secure]
[Students need to be trained before they use the virtual world for assessment and laboratory
work]
OFQ4 - If the 3D MULE security is enhanced, can it affect the rich features and the usability of the system?
This question helped us observe the student view on implementing various environment management strategies to enhance system security. As 3D MUVE are designed mainly for entertainment and gameplay, mapping those use cases to a formal learning engagement can be a challenge and requires an additional level of security management (Perera et al., 2010; Perera et al., 2011a). However, there is a growing concern over 3D MUVE control, as it can affect the intrinsic characteristics and usability of the environment. Students came up with vibrant answers, both supporting and opposing. We suggest a contextual approach for deciding the required level of security management for the learning needs.
[Yes, because it might hinder the entire idea of collaborative learning in a virtual environment]
[I don't see an issue of security]
[It can be!]
[May be; but should not implement such measures]
OFQ5 - Any other comment/concern/suggestion about using 3D MUVE for learning
As a final thought, all students who expressed their answers indicated the benefits of using 3D MULE as a teaching and learner support tool. Importantly, their positive comments are highly encouraging and supportive of future studies on 3D MULE, while strengthening our prime objective of facilitating student learning with technology support.
[I believe it should be handled on a wider scale and it is a great tool for interactive and
collaborative learning]
[Generally, I like it as an educational methodology]
[It is interesting to use these stuffs]
In general, this open-ended feedback revealed some significant concerns of the students which we could not relate directly to the questionnaire items. As explained for each of these questions, careful analysis of the student answers helped us associate them with the statistically identified variables. We observed that the student expressions were scattered over different functional and system properties of the 3D MUVE when considered in isolation; in the general context, however, we have been able to incorporate them into the identified variables, self-regulation and environment management, with suitable adaptations for the future steps of this research.
6. Discussion
6.1 Study contributions
We have presented the analysis results of this study above, with reasonable explanations. The first contribution we have is the student engagement observations during the laboratory
this challenge. Although we cannot use the student inputs for these questions in the statistical analysis, the comments provided a significant qualitative contribution.
7. Conclusion
This research has presented several important contributions to the development of 3D Multi User Learning Environments as an effective means of providing engaged student learning. The results and analysis confirmed that student self-regulation and system environment management are the two important factors contributing to 3D MULE management. Therefore, our future research towards developing policy considerations for 3D MULE management will be based on these two factors. Additionally, we have seen through the analysis that student self-regulation and system environment management have a positive and significant influence on student engagement with the 3D MUVE. This is an important observation for researchers who are keen on managing their 3D MUVE-supported learning environments. Importantly, this result encourages the policy-based management of 3D MULE, while enabling them to be considered as successful candidates for formal and serious educational needs.
Our contribution, through the research findings and experiment observations, will help other researchers by providing empirical evidence on student engagement with 3D MULE. As empirical evidence from research on 3D MULE is a growing need, this should open a new research dialogue among researchers who partake in the development of technology-supported learning. We are committed to extending this work to enhance the policy-based management of 3D MULE and to evaluate its impact from a broader perspective. With that, we invite those at the frontiers of educational technology to research further and to strategically associate the study outcomes with existing learning infrastructures, in a blended manner.
8. Acknowledgment
This research is supported by the UK Commonwealth Scholarships and the Scottish
Informatics and Computer Science Alliance (SICSA). Second Life region rental was
supported in part by the University of St Andrews Fund for Innovations in Learning,
Teaching and Assessment (FILTA). The Higher Education Academy (HEA) UK supported
part of the work on OpenSim. A special thanks goes to INTECH Open for their generous scholarship support to meet the costs associated with the publication of this article.
9. References
Allison, C., Miller, A., Getchell, K., and Sturgeon, T., (2008): Exploratory Learning for Computer Networking, Advances in Web Based Learning - ICWL, 331-342
Allison, C., Miller A., Sturgeon T., Perera I. and McCaffery J., (2011), The Third Dimension in Open Learning, 41st ASEE/IEEE Frontiers in Education, IEEE Press, T2E-1 - T2E-6
Biggs, J. (1996), Enhancing Teaching through Constructive Alignment, Higher Education,
32(3), 347-364
Bronack, S., Sanders R., Cheney A., Riedl R., Tashner J., and Matzen N. (2008), Presence
Pedagogy: Teaching and Learning in a 3D Virtual Immersive World, International
Journal of Teaching and Learning in Higher Education, 20(1), 59-69
Burkle, M. and Kinshuk (2009), Learning in Virtual Worlds: The Challenges and
Opportunities, International Conference on Cyber Worlds, CW'09, IEEE, pp.320-327
Cohen J. (1988), Statistical Power Analysis for the Behavioural Sciences (2nd Ed.), Academic Press
Costello, A.B. & Osborne, J.W., (2005), Best practices in exploratory factor analysis: four
recommendations for getting the most from your analysis, Practical Assessment,
Research & Evaluation, 10(7):1-9
Dabbagh, N. and Kitsantas A. (2004), Supporting Self-Regulation in Student-Centred Web-
based Learning Environments, International Journal of e-Learning, 2(4), 40-47
Dalgarno, B., Bishop, A., Adlong, W. and Bedgood Jr., D., (2009), Effectiveness of a Virtual
Laboratory as a preparatory resource for Distance Education chemistry students,
Computers & Education, 53(3), 863-865,
de Freitas S., Rebolledo-Mendez G., Liarokapis F., Magoulas G., and Poulovassilis A. (2010),
Learning as immersive experiences: Using the four-dimensional framework for
designing and evaluating immersive learning experiences in a virtual world, British
Journal of Educational Technology, 41(1), 69-85
Field A., (2006), Discovering Statistics using SPSS, 2nd Ed., SAGE, London
Getchell, K., Miller, A., Nicoll, R., Sweetman, R., and Allison, C. (2010), Games
Methodologies and Immersive Environments for Virtual Fieldwork, IEEE
Transactions on Learning Technologies, 3(4), 281-293
3D Multi User Learning Environment Management
An Exploratory Study on Student Engagement with the Learning Environment 133
Keskitalo, T., Pyykkö, E., & Ruokamo, H. (2011), Exploring the Meaningful Learning of Students in Second Life, Educational Technology & Society, 14(1), 16-26
Kirriemuir J., (2010), An autumn 2010 "snapshot" of UK Higher and Further Education
developments in Second Life, Virtual World Watch, Eduserv,
Kolb, D. A., Boyatzis, R.E. and Mainemelis, C., (2001), Experiential Learning Theory:
Previous Research and New Directions, J. Sternberg and L. Zhang, (Eds.): In
Perspectives on Thinking, Learning and Cognitive Styles, Lawrence Erlbaum, 227
Linden Labs (2003), Second Life, http://www.secondlife.com
Lingard, H.C. and Rowlinson, S. (2005) Sample size in factor analysis: why size matters,
pp.16, [accessed in] December 2011, [available at]
http://rec.hku.hk/steve/MSc/factoranalysisnoteforstudentresourcepage.pdf
MacCallum, R.C., Widaman, K.F., Zhang, S. & Hong, S., (1999), Sample size in factor
analysis, Psychological Methods, 4, 84-99
McCaffery J., Miller A., and Allison C., (2011), Extending the use of virtual worlds as an
educational platform - Network Island: An advanced learning environment for
teaching Internet routing algorithms, 3rd CSEDU, INSTICC, 1
Messinger P., Xin G., Stroulia E., Lyons K., Smirnov K., and Bone M. (2008), On the
Relationship between My Avatar and Myself, Journal of Virtual Worlds Research, 1(2),
1-17
Moos, D.C., and Azevedo, R. (2007), Self-regulated learning with hypermedia: The role of
prior domain knowledge, Contemporary Educational Psychology, 33(2):270-298
Nunnally, J.C., (1978), Psychometric Theory (2nd ed.) McGraw Hill, New York
Oliver, I.A., Miller, A.H.D. and Allison, C. (2010), Virtual worlds, real traffic: interaction and
adaptation, 1st ACM SIGMM - MMSys'10, 306-316
Osborne, J.W. and Costello, A.B., (2004), Sample size and subject to item ratio in principal
components analysis, Practical Assessment, Research & Evaluation, 9(11)
Perera, I., Allison C., Nicoll, J. R. and T. Sturgeon (2009), Towards Successful 3D Virtual
Learning A Case Study on Teaching Human Computer Interaction, In Proceedings
of 4th ICITST-2009, IEEE, 159-164
Perera, I., Allison C. and Miller A. (2010), A Use Case Analysis for Learning in 3D MUVE: A
Model based on Key e-Learning Activities, 5th ICVL, Romania, 114-120
Perera, I., Allison, C., McCaffery J., and Miller A. (2011a), Towards Effective Blended
Learning with 3D MUVE - An Analysis of Use Case Implementations for 3D MUVE
Learning, 3rd Computer Supported Education - CSEDU2011, INSTICC, 2, 46-55
Perera, I., Allison, C., and Miller A. (2011b), Policy Considerations for Managing 3D Multi
User Learning Environments Achieving Usability and Trust for Learning, 6th
ICVL, 106-112
Pintrich, P. (2000), The role of goal orientation in self-regulated learning. In M. Boekaerts, P.
Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation, 452502
Pintrich P. (2003), A motivational science perspective on the role of student motivation in
learning and teaching contexts, Educational Psychology, 95, 667686.
Schunk D.H. (2005), Commentary on self-regulation in school contexts, Learning and
Instruction, 15(2), 173-177
SCQF (2007), Scottish Credit and Qualifications Framework, [available at]
http://www.scqf.org.uk/The%20Framework/
134 Applications of Virtual Reality
Stevens, J.P. (1992), Applied Multivariate Statistics for the Social Sciences (2nd Ed.) Hillsdale, NJ:
Erlbaum
Sturgeon, T., Allison, C. and Miller, A. (2009), 802.11 wireless experiments in a virtual
world, SIGCSE Bulletin, ACM, 41(3) 85-89
The Open Simulator Project (2007), Open Simulator, http://www.opensimulator.org/
Wang, M., Peng, J., Cheng, B., Zhou, H., & Liu, J. (2011), Knowledge Visualization for Self-
Regulated Learning. Educational Technology & Society, 14(3), 2842.
Weippl, E.R. (2005), Security in E-Learning, S. Jajodia, (Eds.) Advances in Information
Security, Springer, 16
7

Methodology for the Construction of a Virtual Environment for the Simulation of Critical Processes
1. Introduction
In the so-called real world, any given process occurs simultaneously with a myriad of other
processes. All these processes take place on a continuum of mass, energy and time (the world).
All things in this continuum are interlinked in some way. Defining a process as such is in
itself already an act of abstraction. One way to study a process is to split it into three entities:
the observer, the process itself and the rest of the continuum. The observer usually
simplifies a process in his or her attempt to understand it. The observer considers some
factors and variables as taking part in the process while ignoring or excluding others. All the
variables involved in a given process form a type of microcosm. Using the scientific method,
researchers aim to determine laws that govern the interplay among certain entities. To this
end, different models and techniques may be used.
A virtual environment is a computer environment. It represents a subset of the real world,
where models of real world variables, processes and events are projected onto a three-
dimensional space. The creation of a virtual environment is an important tool for simulating
certain critical processes, especially those in which human beings or things are likely to
suffer irreversible or long-term damage. The methodology combines a geo-referenced
representation of three-dimensional space, agent-based models and an adequate
representation of time in order to construct virtual environments able to simulate such
phenomena or processes. The three-dimensional geo-referenced representation of space can
either be the spatial representation of traditional geographical information systems (GIS), or
the representation adopted by Google MapsTM (Google, 2011). Adding autonomous agents
to these spatial representations allows us to simulate events, measure any variable, obtain a
possible spatial distribution of people and objects, estimate any environmental impacts,
build alternative scenarios and train staff to deal with these critical processes.
Simulations can be used to describe and analyze the behavior of a system, answer questions
about it and help to design a system as it exists in the real world. Both real and conceptual
systems can be modeled through simulation (Banks, 1998).
Choosing a tool that enables an effective visualization of the results of a simulation is the
primary goal in building a virtual environment. Ideally, the architecture of such
environments should allow for the presence of sets of concurrently interacting agents which
can be monitored.
The implementation of models for elaborate and dynamic systems is a highly complex task,
since no single tool permits one to describe, simulate and analyze the system under study
without advanced knowledge of mathematics and programming.
Through the tool of simulation, it is possible to build and explore models that help people
gain a deeper understanding of the behaviour of a specific process or system.
The advantages of using simulation include:
- Informed decision-making: a simulation permits implementing a range of scenarios
without consuming any resources. This is critical because once a decision to implement
a policy is taken in the real world, material and human resources are used, thus
generating costs.
- The compression or expansion of time: a simulation can compress or expand the
duration of a process or phenomenon in order to allow a complete and timely analysis
of it. One can examine a problem in minutes instead of spending hours observing it
while all its events unfold.
- In order to fully understand a phenomenon, it is generally necessary to know why,
where and when it occurs in the real world. Thanks to simulation, it is possible to get
answers by reconstructing a scenario and subsequently carrying out a microscopic
analysis. In the real world this is often very difficult, because it is impossible to observe
or control the system completely.
- Problem diagnosis: dynamic phenomena have a high degree of complexity; one
advantage of simulation is that it allows us to understand the interactions among the
different variables. This helps in the diagnosis of problems and gives researchers a
better understanding of the process.
- Preparation for change: it is a fact that the future brings changes. Understanding in
advance "why" and "how" these changes will take place is very useful in redesigning
existing systems and processes and in predicting their behaviour under different
scenarios.
- Training staff: simulation models are excellent training tools when designed for that
purpose. Used as such, staff and teams can come up with solutions and evaluate their
mistakes by comparing each proposed scenario.
Some disadvantages of simulation are as follows (Banks, 1998):
- The construction of an effective model is a form of art: it requires special training
garnered over time and experience. If two models of the same system are built by two
distinct and competent technicians, they may be similar, but they are unlikely to be
identical.
- A simulation can be difficult to interpret: when most of the simulation outputs are
random variables, the results can be very difficult to analyze.
- Modeling and analysis can be time consuming and expensive.
The goal of the proposed architecture of a virtual environment (VE) is to construct scenarios
to analyze the information related to given critical processes (Figure 1).
The models built to represent different processes, phenomena or systems are based on the
behaviour of objects over time. Any movements of people or objects are taken into account,
both of which are represented by an agent-based system (ABS). The spatial area is also
incorporated into the model using a geographic information system (GIS).
Typical autonomous agents in an agent-based system (ABS) have some state variables,
which determine their situation in the system, and some behaviours, in the same way as
object methods in object-oriented programming. A typical autonomous agent representing
a person in some process has the following state variables: i) a personal identity; ii) a
position given by a set of coordinates (x, y, z); iii) a direction or angle relative to true north;
iv) a speed. Its behaviour could involve: i) moving to some position; ii) speeding up or
slowing down.
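As a minimal sketch of such an agent (the class and attribute names are hypothetical, not taken from the chapter), the state variables and behaviours listed above could be expressed as:

```python
import math

class PersonAgent:
    """Minimal autonomous agent: identity, position, heading and speed as
    state variables, plus simple movement behaviours as methods."""

    def __init__(self, identity, x, y, z=0.0, heading=0.0, speed=0.0):
        self.identity = identity          # i) personal identity
        self.position = (x, y, z)         # ii) coordinates (x, y, z)
        self.heading = heading            # iii) angle from true north, degrees
        self.speed = speed                # iv) speed, metres per cycle

    def accelerate(self, delta):
        """Speed up (positive delta) or slow down, never below zero."""
        self.speed = max(0.0, self.speed + delta)

    def step(self):
        """Move one simulation cycle along the current heading
        (measured clockwise from north: north = +y, east = +x)."""
        x, y, z = self.position
        rad = math.radians(self.heading)
        self.position = (x + self.speed * math.sin(rad),
                         y + self.speed * math.cos(rad),
                         z)
```

In an ABS these methods would be invoked once per simulation cycle, with the agent's perception of the GIS environment deciding whether it accelerates, slows down or changes heading.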
The proposed virtual environment (VE) must include an appropriate representation of the
space or environment in which these autonomous agents receive sensory inputs and
produce outputs in the form of actions. In general, geographic information systems (GIS)
use raster or vector structures to represent space in bi-dimensional models. In some cases, a
third dimension is represented through the digital elevation models (DEM) of a terrain. A
given GIS spatial representation (a shape file, for example) can integrate an agent-based
system (ABS) and the structure of a dynamic spatial model, incorporating the dimension of
time, in order to simulate a dynamic phenomenon. The GIS spatial representation is the
environment in which autonomous agents of these ABS models will operate.
For each specific phenomenon, we are only interested in particular information about the
environment. Considering the geographical space in which the phenomenon develops, it is
therefore necessary to filter out only the aspects of interest to the study. These factors are
perceived by the autonomous agents, thus triggering their actions. The data is organized in
different layers, each representing different elements such as utilities, rivers and lakes,
roads and railways, soil maps, land parcels, etc. Only the elements or layers that are of
interest to the study should be selected; objects that are not of interest must be eliminated.
An environment full of superfluous objects would unnecessarily complicate the modelling
and reduce its effectiveness. So, depending on the problem, it is necessary to considerably
simplify the simulation, as has been done in the proposed architecture (De Almeida Silva &
Farias, 2009; Farias & Santos, 2005).
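This layer-selection step can be illustrated with a small sketch (the layer names and contents below are invented for the example, not taken from the chapter):

```python
def select_layers(all_layers, of_interest):
    """Keep only the thematic layers relevant to the phenomenon under
    study, discarding superfluous objects that would complicate the model."""
    return {name: data for name, data in all_layers.items()
            if name in of_interest}

# Thematic layers of a hypothetical study area.
study_area = {
    "roads":        ["R1", "R2"],
    "rivers_lakes": ["Lake A"],
    "soil_map":     ["clay", "sand"],
    "land_parcels": ["P-001", "P-002"],
}

# For, say, a flood scenario only the hydrology and road layers matter:
flood_layers = select_layers(study_area, {"rivers_lakes", "roads"})
```

The autonomous agents then perceive only the retained layers, keeping the model as simple as the problem allows.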
In dynamic spatial models, in particular cellular automata models (Lim et al., 2002), it is
common to represent space as a cell-based or raster structure (Veldkamp & Fresco, 1996;
Verburg et al., 2002; Soares Filho & Cerqueira, 2002; Lim et al., 2002). Time is characterized
as an incrementing value of a variable t (each increase corresponding to a simulation cycle).
At the start of the simulation (when t=t0), the system will be initialized with all data
describing the phenomenon or process under study as seen in the state t=t0. Following this,
the simulation as such begins (Figure 2).
In the first step, it is possible to analyze, given some inputs and/or a particular
configuration of cells, which cells should have their attributes changed. In the second step,
certain rules are applied to the cellular automata, in order to alter its structure. In the third
step, the t variable is incremented; if this t variable is less than a pre-determined value tf
- which represents the total time of the simulation - the simulation returns to the first step
and the whole cycle repeats itself; otherwise the simulation ends.
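The three-step cycle described above can be sketched as a simple loop (the grid representation and the transition rule passed in are illustrative assumptions, not the chapter's implementation):

```python
def run_simulation(grid, rule, t0=0, tf=10):
    """Dynamic spatial model as a cellular-automaton loop.

    grid: dict mapping (row, col) -> cell state, initialised at t = t0
    rule: function(grid, cell) -> new state for that cell
    Returns the grid as seen at t = tf.
    """
    t = t0
    while t < tf:
        # Step 1: decide which cells should change, reading the
        # current state only (synchronous update).
        changes = {}
        for cell in grid:
            new_state = rule(grid, cell)
            if new_state != grid[cell]:
                changes[cell] = new_state
        # Step 2: apply the transition rules to alter the structure.
        grid.update(changes)
        # Step 3: increment t; the loop condition ends the simulation
        # once t reaches the pre-determined value tf.
        t += 1
    return grid
```

A rule that, for instance, propagates a state (fire, contamination) from one cell to its neighbour per cycle reproduces the iterative sense-and-act behaviour the text describes.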
This structure of dynamic spatial models can be adapted to a GIS to represent the iterative
cycles during which knowledge about the environment is acquired (in other words, sensed
and perceived) by the agents who then act correspondingly.
An autonomous agent can be defined as follows (Wooldridge & Jennings, 1995): a system
situated within a given environment, which senses that environment through its perception
mechanism and acts on that environment and/or on other agents, as time flows, in pursuit
of its own agenda, plans or beliefs. The agent's perception/action mechanism may itself
evolve over time (Figure 3).
Autonomous agents can be classified according to their amplitude of perception, their
capacity to act and the effectiveness of their actions. There are reactive and cognitive
agents. Reactive agents merely react, in an opportune way and according to very simple
behaviour patterns, to changes in the way they perceive their environment. Cognitive
agents are more complex: they not only interact with their environment, but are also
capable of remembering previous experiences, learning from them, communicating with
one another and, finally, following a defined goal, strategy or plan.
Multi-agent systems (MAS) are composed of several agents, which, in addition to having the
above mentioned characteristics, can interact with one another through communicative
mechanisms. In most cases, these autonomous agents can exhibit great variation and display
competitive or collaborative behaviour, depending on the context (Wooldridge & Jennings,
1995).
Simulations using multi-agent systems (MAS) provide a computational platform in which
the dynamics of spatio-temporal systems can be studied. To do this, it is possible to use
reactive (simpler) or cognitive (more complex) autonomous agents (Figure 4) (Benenson &
Torrens, 2004; Wooldridge & Jennings, 1995).
It can also be said that an autonomous agent is a computer system situated in an
environment that is capable of autonomous action within it in order to meet its design
objectives. To be autonomous, the system should be able to act without any direct human
(or other agent) intervention, and should have control over its own actions and internal
state (Longley & Batty, 2003).
i. Social ability: agents should be able to interact with other agents, coordinating with
others in their activities. This requires agents to follow certain communication
mechanisms;
ii. Responsiveness: agents should perceive their environment and respond in a timely
fashion to changes which occur within it;
iii. Proactiveness: agents should not simply act in response to their environment; they
should be able to exhibit opportunistic, goal-oriented behaviour and take the initiative
when appropriate.
In addition to these conditions, a number of other potentially desirable characteristics have
been proposed. These include: adaptability - the ability of an agent to modify its behaviour
over time in response to changing environmental conditions or an increase in knowledge
about its problem solving role; mobility - the ability of an agent to change its physical
location to enhance its problem solving; veracity - the assumption that an agent will not
knowingly communicate false information; and rationality - the assumption that an agent
will act in order to achieve its goals and will not act in such a way as to prevent its goals
being achieved without good cause (Maguire et al., 2005).
Through the computational technology of agents it is possible to model the behaviour of
people in a given environment, as well as other elements that interact with these people,
observing their behaviour and the consequences of these interactions. An environment can
be created in which software agents simulate a critical process. The agents will have
properties such as mobility, reactivity and objectivity. People, objects and other elements
will also be represented by software agents. In this way, it is possible to study, analyse and
manage the critical process through agent-based simulation.
iii. The construction of the spatial model (formalized abstraction without following any
convention);
iv. The construction of the data model (that reflects how the data will be stored on the
computer);
v. The construction of the physical computer model (how the files are organized on the
computer);
vi. The construction of the model defining data manipulation (the rules for data
processing);
vii. The construction of the GUI model (the rules and procedures for displaying spatial
data);
Geographic information systems (GIS), with their survey methods and software, have been
used in many different fields. GIS can be used for any activity that requires geo-referenced
information (location in geographical space). The potential of remote sensing and GIS
facilitates the planning, control and management of large geographical areas.
Compiling geo-referenced data enables temporal and spatial analysis and provides
parameters for implementing management programs in certain areas (Figure 5).
In areas where there is a critical process, a geo-referenced map can reveal the extent of the
process, existing landforms, roads, cities, and even streets and buildings. These data allow for
a detailed analysis of the critical process at hand and its potential impact on an urban area.
GIS works by combining tabular data with spatial data. The structures of the raster (satellite
imagery) and vector (point, line and polygon) data types are used to digitally represent
spatial objects. The vector structures identify vector space objects by points, lines or
polygons (two-dimensional space). This type of identification is combined with the
elaboration of a polygon "background", which complements the coverage of the entire plan
information. The raster structure (or matrix) uses discrete units (cells) which represent
spatial objects in the form of clusters (aggregates of cells). In this way, it is possible to see
layers of spatial data and their link with tabulated data (roads, buildings, population,
plantations, railways, ports, airports, consumers, buildings, water networks, etc.). Therefore,
GIS permits researchers to represent the real world through various aspects of geography,
which are separated into thematic layers.
For the GIS technology to be used effectively, it is necessary to define a methodology which
represents the geographical area under study and to select the main variables to be
analyzed.
This methodology consists of the following steps:
- Select the study area (define its limits);
- Define the sources of information;
- Collect data;
- Analyze the collected data;
- Provide geo-referenced data;
- Manage the use of information.
Information systems are widely used by organizations to manage their critical data.
Through database systems, a large volume of information can be stored across different
business areas. So-called spatial data refer to information in a defined geographical area.
2.3 Merging an agent-based system (ABS) with a geographic information system (GIS)
When a phenomenon is too complex to be studied analytically, it is necessary to recreate it
in an artificial universe in which experiments can be made on a smaller scale and in an
environment that simulates the real world but where the parameters can be controlled with
greater precision (Drogoul & Ferber, 1994).
Computer models can be used to understand more about how real world systems work.
These models allow researchers to predict the behavior of a wide range of real world
processes and understand their structure. Computational models reduce the cost and time
of conducting actual experiments, which are often impossible to execute in the real world.
Simulations can be repeated many times over and are not invasive. The data generated by
computer models are easier to analyze than those of real-world systems (Gimblett, 2002).
A process is deemed critical when it presents complex behaviour over time. Examples of
critical processes are population growth, radioactive accidents, chemical accidents, soil
erosion, movement of rivers, epidemic diseases, pests, forest fires, slums in urban areas,
vehicle traffic in urban areas, climate change and animal migration. These processes
provoke critical effects on the population, the surrounding environment and geographical
areas, which must be analyzed and mitigated in order to minimize any pre-existing
vulnerability (De Almeida Silva & Farias, 2009).
One significant development in the creation of simulation models has been the inclusion of
intelligent agents (autonomous agents) in spatial models. This technique enables the
simulation of human behaviour in the environment within which a critical process is to be
analyzed and facilitates the study of simulation outcomes (Gimblett, 2002).
The use of GIS allows a more accurate spatial representation of the environment within
which autonomous agents will act, thereby representing the elementary processes that exist
in the real world. The challenge is to build a simulation that enables the generation of data
appropriate to an analysis of the phenomenon under study.
The use of object orientation (OO) represents a new perspective for the integration of
geographic information systems (GIS) with agent-based systems (ABS). This technology
enables the creation of objects that can represent entities existing in the real world in a
simulation environment. The properties (attributes and methods) of these object entities in
an OO design, their modularity, the low complexity of the programming code and its
reusability are characteristics that give great flexibility to modeling the elementary
processes that make up complex processes (Gimblett, 2002).
Object orientation is a global trend in programming and systems development. Applied to
databases, the concept of OO enables a more accurate definition of models and data
structures which better represent the real world. This is especially useful for GIS, since the
spatial characteristics of data make them difficult to model using traditional techniques.
The construction of a framework that represents a virtual environment is made possible
thanks to the principles of object orientation (OO). The objects built following these
principles provide a representation of the real world through software simulation. OO
focuses on the object and represents a tool to construct it.
The two fundamental OO concepts are classes and objects. An object is an entity that has
attributes and an identity. A class comprises objects that share common properties.
OO as applied in GIS is characterized by types of objects, divided into classes, which may
have a hierarchical relationship, whereby subclasses inherit the behaviour derived from the
main classes (Figure 6). The geographic objects must be represented in a geographic or
spatial database. A geographic object is an element that can be geo-referenced and
registered into a spatial database. Such databases allow the storing, retrieving and
manipulating of spatial data including descriptive attributes that form the basis for the
geometric representation of a geographic space (Elmasri & Navathe, 1994).
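A minimal sketch of such a class hierarchy (the class names and attributes below are hypothetical, chosen only to illustrate inheritance of geo-referencing by subclasses) might look like:

```python
class GeographicObject:
    """Base class: any geo-referenced element that can be registered
    in a spatial database."""

    def __init__(self, identity, lat, lon):
        self.identity = identity   # distinguishes this object from others
        self.lat = lat             # descriptive geo-referencing attributes
        self.lon = lon

    def describe(self):
        return f"{self.identity} at ({self.lat}, {self.lon})"

class Road(GeographicObject):
    """Subclass inheriting geo-referencing behaviour from the main class
    and adding attributes of its own."""

    def __init__(self, identity, lat, lon, lanes):
        super().__init__(identity, lat, lon)
        self.lanes = lanes

class Building(GeographicObject):
    def __init__(self, identity, lat, lon, floors):
        super().__init__(identity, lat, lon)
        self.floors = floors
```

Every `Road` or `Building` instance can be stored, retrieved and positioned like any other geographic object, while keeping its subclass-specific attributes.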
Objects are programming codes that can represent activities, movements and properties of
real-world entities (people, animals, cells, insects, chemicals, vehicles, etc.). They have three
fundamental characteristics: an identity (which distinguishes one object from another),
attributes and methods. A class provides a template for building a collection of objects of the same
type. Embedded in an environment, objects are able to simulate a phenomenon through
their multiple interactions. The application of a small number of rules and laws to entities
(objects) in a given space gives rise to so-called emerging systems, which are able to
simulate complex global phenomena, such as vehicle traffic, the growth of cities, etc. These
are systems in which the combination of objects and their interaction with the environment
reproduce the phenomenon analyzed. The objects contained in these systems are adaptable
and act in synergy with one another. The framework built from objects and their
combination (abstract automata) lets researchers explore, analyze and predict the critical
processes under study.
A greater degree of realism in simulations based on agents can be achieved by integrating
the technologies of agent-based systems, geographic information systems and data
acquisition via satellite. In other words, the virtual environments in which the agents will
act may be two-dimensional (digital maps) or three-dimensional (digital elevation models),
which depict reality to a high degree of accuracy, as can be observed through tools like
Google MapsTM (Google, 2011; Gimblett, 2002; Longley & Batty, 2003). It is thus possible to
achieve a very accurate model environment where agents with their corresponding
attributes and behaviours are positioned and interact with each other and the environment,
thus simulating dynamic processes or phenomena.
The use of ABS, based on OO and set up through simple rules, should establish standards of
behaviour and the fulfillment of tasks in a geo-referenced environment (GIS). The autonomous
agents act and react in the geo-referenced environment in which they are immersed.
The representation of the area selected (Google MapsTM) (Google, 2011) will be the virtual
environment in which entities representing objects in the real world will be inserted. The
latter are abstractions, called autonomous agents, which represent different types of actors
of the real world ranging from living beings endowed with a high degree of intelligence to
entities exhibiting basic behaviour. The autonomous agents are able to interact with one
another and are modeled as pro-active autonomous objects endowed with a greater or lesser
degree of intelligence and which are able to understand the environment in which they are
immersed (Wooldridge, 2002).
The architecture of the VE has the ability to organize and display information in a graphical
interface whereby the autonomous agents are positioned arbitrarily within a spatial
environment and their distribution appears on a geo-referenced map, to be viewed and
interpreted (Maguire et al., 2005) (Figure 8). These objects and individuals are modeled by
autonomous agents and have the following attributes:
i. object identifier;
ii. the object's position given by its coordinates;
iii. individual identifier;
iv. the individual's position given by its coordinates;
For the construction of this virtual environment, the following tasks were performed (Banks,
1998):
i. Formulation of the problem: define the problem to be studied, including the goal of the
solution;
ii. Goal setting and planning of the project: raise questions that should be answered by the
simulation and the scenarios to be analyzed. The planning of the project will determine
the required time, resources, hardware, software and personnel involved, as well as
controls and outputs to be generated;
iii. Creation of a conceptual model: build an abstraction of the real world based on
mathematical and logical modeling;
iv. Data collection: identify the data required for the simulation;
Fig. 8. A virtual environment displaying autonomous agents, Google MapsTM and RepastTM
v. Building the model: create the model using a conceptual tool for simulation;
vi. Verification: check that the implementation of the model has been done correctly;
vii. Validation: verify that the model is a reasonably accurate conceptual representation of
the real-world system;
viii. Simulation of scenarios: define the various scenarios and their parameters;
ix. Running and analysis of the scenarios: estimate measures of performance for the
scenarios being simulated;
x. Documentation and report generation: provide information about the results of the
modeling carried out;
xi. Recommendation: determine, based on the results of the simulation, the actions to be
taken to solve the problem being analyzed.
For the implementation of this virtual environment, the following steps were performed
(Figure 9):
i. Definition of the critical process: choose a type of critical process to study (e.g.
population growth, radioactive accidents, chemical accidents, soil erosion, movement of
rivers, epidemic diseases, pests, forest fires, slums in urban areas, vehicle traffic in
urban areas, climate change and animal migration);
ii. Definition of a scenario: construct a conceptual model representing a phenomenon in
the real world;
iii. Planning the services of the virtual environment: define the characteristics of the system
necessary to model a phenomenon (procedures, input and output variables of this
system);
iv. Construction of a physical model of the database: create the structure of the files;
v. Construction of individual autonomous agents whose goals, types, attributes and
methods are defined in accordance with the analyzed phenomenon;
vi. Customization of the Google Maps API: define new features for geographic
representation;
vii. Customization of the Repast libraries: define the graphics and outputs.
3. Conclusion
The proposed methodology to construct a virtual environment (VE) for the purpose of
simulating a critical process allows us to create a software simulation capable of analyzing a
phenomenon that occurs in a specific geographical area. This architecture of a VE explores
how systems based on autonomous agents can be effective in simulating complex phenomena.
The implemented model, which incorporates autonomous agents existing within a geo-
referenced environment and capable of receiving stimuli as input and producing actions as
output, aims to simulate the dynamics of the phenomenon selected for study.
This VE architecture enables researchers to do the following:
i. Construct and analyze scenarios;
ii. Design autonomous agents with goals, behaviours and attributes using RepastTM;
iii. Use a powerful and user-friendly GIS: Google EarthTM/MapsTM;
iv. Quantify a specific measure;
v. Obtain a possible spatial distribution of people or objects;
vi. Provide variable inputs and obtain variable outputs;
vii. Store output data;
viii. Estimate the impact of a critical process;
ix. Define prompt responses;
x. Train personnel to handle the analyzed situation.
Following this methodology, it is possible to integrate a geo-referenced environment with
agent-based models in order to simulate a critical process present in the real world.
4. Acknowledgment
We would like to thank our families, the Instituto de Radioproteção e Dosimetria (IRD) and
the Universidade do Estado do Rio de Janeiro (UERJ) for their support throughout our
research.
5. References
Appleton, B. & Stucker, L. (2007). Using PHP/MySQL with Google Maps, from
http://code.google.com/intl/en/apis/maps/articles/phpsqlajax_v3.html
Argonne National Laboratory (2011). Recursive Porous Agent Simulation Toolkit, from
http://repast.sourceforge.net/repast_3/index.html
Banks, J. (1998). Handbook of Simulation: Principles, Methodology, Advances, Applications
and Practice, EMP Books.
Benenson, I. & Torrens, P. (2004). Geosimulation: Automata-Based Modeling of Urban
Phenomena, Wiley.
Burrough, P.A. & McDonnell, R.A. (1998). Principles of Geographical Information Systems,
ISBN 0-19-823366-3, Oxford University Press, New York, USA.
Crooks, A.T. (2007). The Repast Simulation/Modelling System for Geospatial Simulation,
ISSN 1467-1298, Paper 123, Centre for Advanced Spatial Analysis, University
College London, London, England.
Drogoul, A. & Ferber, J. (1994). Multi-agent Simulation as a Tool for Studying Emergent
Processes in Societies. In: Simulating Societies: The Computer Simulation of Social
Phenomena, N. Gilbert and J. Doran (Eds.), 127-142, UCL Press, London.
1. Introduction
In recent years, a large amount of data has been generated and recorded in various fields,
owing to advances in computer simulation, sensor networks and database technologies.
However, it is generally difficult to find valuable information or knowledge in such huge
data sets. Therefore, establishing a methodology to utilize these data effectively is an
important issue. Though data mining is one method of solving this problem, it is difficult
for the user to understand the mining process and to evaluate the result intuitively, because
the data is processed inside the computer (Kantardzie, 2002). On the other hand,
visualization techniques can be used effectively to represent the data so that the user can
understand it intuitively. This study focuses on the method of visual data mining, in which
the user performs data mining by visualizing the data (Wong, 1999; Keim, 2002).
In visual data mining, the user can analyze the data interactively with the computer by visualizing the process and the result of the data mining. It is then expected that new information or knowledge can be found by combining the high-speed calculation ability of the computer with human abilities such as intuition, common sense and creativity. In particular, this study aims at improving the effect of visual data mining by enhancing the data expression ability and the interaction function using an immersive virtual environment based on super high-definition stereo images. In this system, it is expected that the user can perform accurate and intuitive data mining using high-resolution stereo images.
This paper discusses the system architecture and the visualization ability of the super high-definition immersive visual data mining environment developed in this study, and the effectiveness of this method is evaluated by applying the platform to seismic data analysis.
information processes performed by computer and human, a large amount of data could be
analyzed effectively to find new information.
In the research by Wong (1999) and Keim (2002), data mining is performed effectively in a visualization environment. In these cases, visualization technology is used to transmit information from the computer to the human, and interactive interface technology is used to support the collaboration between human and computer. If the expression ability of information visualization and the interaction function between human and computer were increased, the performance of visual data mining would be improved, based on the improvement of information processing by both human and computer.
As for increasing the expression ability of information visualization, super high-definition images would enable the transmission of detailed information from the computer to the human, and three-dimensional stereo images could be used effectively to represent the relationships among several kinds of data in three-dimensional space. In the research by Renambot (2006), high-resolution images are used effectively for data visualization. As for the interaction between human and computer, an immersive virtual environment would enable the user to operate the visualized data directly and to explore the data space as a first-person experience. Ammoura (2001), Wegman (2002) and Ferey (2005) discuss the effectiveness of immersive virtual environments in visual data mining.
In this study, a super high-definition immersive visual data mining environment was constructed to exploit these advanced visualization and interaction technologies. Figure 1 shows the concept of super high-definition immersive visual data mining proposed in this study: the performance of visual data mining can be greatly improved by increasing the expression ability and the interaction function between computer and human.
[Fig. 1: concept diagram showing the database, workstation and user connected over a broad-band network, with improved expression ability and improved interaction function between computer and human]
4,096 x 2,160 pixels, and its image quality is more than four times that of a usual high-definition image. In this system, two stacked projectors (SONY SRX-S110) are used, and the images seen from the right-eye and left-eye positions of the user are projected. The images output from the projectors are rear-projected onto a 180-inch acrylic rear-projection screen (Nippura, Blue Ocean Screen) through polarizing filters. Under this condition, one pixel in the projected image is approximately 0.97 mm wide on the screen. The user can then see the high-resolution passive stereo image, based on binocular parallax, by wearing polarized 3D glasses. In this system, since two 108-inch LCD monitors are placed on both sides of the 4K screen, various kinds of information can be displayed in this multi-display environment. Figure 3 shows the system configuration of the super high-definition immersive visual data mining environment.
[Fig. 3: system configuration. Two graphics workstations (Dell Precision T7400 with NVIDIA Quadro Plex 1000; master and renderer) drive the Sony SRX-S110 4K projectors through polarizing filters onto the Nippura Blue Ocean screen, flanked by two Sharp 108-inch LCD monitors; a video server, a matrix switch, the GUI interface, the physical MIDI controller and polarized 3D glasses complete the environment]
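As a rough cross-check of the quoted 0.97 mm pixel width, the pixel pitch can be computed from the screen diagonal and resolution. The sketch below assumes the full 180-inch diagonal corresponds to the active 4,096 x 2,160 area, which only approximates the real screen geometry:

```python
import math

def pixel_pitch_mm(diagonal_inch, h_pixels, v_pixels):
    """Pixel pitch (mm) for a flat screen of the given diagonal and resolution."""
    diagonal_mm = diagonal_inch * 25.4
    pixels_on_diagonal = math.hypot(h_pixels, v_pixels)
    return diagonal_mm / pixels_on_diagonal

# Close to the ~0.97 mm figure stated in the text.
pitch = pixel_pitch_mm(180, 4096, 2160)
```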
In order to generate and render the 4K stereo image, two high-end graphics workstations (Dell Precision T7400, 2x Quad Core Xeon 3.2 GHz) with graphics engines (NVIDIA Quadro Plex 1000 Model IV) that have a genlock function are used, one for the right-eye image and one for the left-eye image. In this system, interface devices such as a USB game controller and a physical MIDI controller are connected to the graphics workstation. The USB game controller is used to walk through the visualized data space or to change the visualization method, while the physical MIDI controller is used to change the visualization parameters minutely. By using these interface devices, the user can interact accurately and intuitively with the visualized data, and immersive visual data mining using 4K-resolution stereo images can be realized.
[Figure: plug-in mechanism. Each application consists of master and renderer programs that are plugged into the OpenCABIN library, which runs on top of the device drivers]
besides the visualization program for each data set. However, the plug-in mechanism can add a visualization function for the necessary data to an application program that is already being executed. Namely, an integrated visualization environment for several kinds of data can be constructed by executing the visualization programs for each data set simultaneously through the plug-in mechanism.
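The plug-in idea can be illustrated with a minimal registry in which each kind of data contributes its own visualization routine, and routines can be added while the application is running. This is a hypothetical sketch, not the OpenCABIN API:

```python
class VisualizationHost:
    """Runs every plugged-in visualization each frame, in registration order."""
    def __init__(self):
        self.plugins = {}

    def plug_in(self, name, render_fn):
        # Plug-ins may be registered while the host is already running.
        self.plugins[name] = render_fn

    def render_frame(self, scene):
        for name, render in self.plugins.items():
            render(scene)

host = VisualizationHost()
scene = []
host.plug_in("hypocenter", lambda s: s.append("hypocenter layer"))
host.plug_in("terrain", lambda s: s.append("terrain layer"))
host.render_frame(scene)
```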
[Fig. 5 diagram: the subject stands 3 m from the screen and moves within the field of view while parallel lines are displayed]
Fig. 5. Condition of experiment on the perception of spatial resolution.
Figure 6 shows the result of the experiment for five subjects with visual acuity better than 1.0. In this graph, the contour lines are drawn by connecting the average values of the boundary positions for each direction and for each gap of the parallel lines. From this graph, we can see that the subjects could recognize 0.5 mm gaps between lines when the parallel lines were displayed at a distance of about 50 cm, and 10 mm gaps when they were displayed at a distance of about 10 m. This recognition ability is equivalent to a visual acuity of about 0.3, which is somewhat poor compared with the subjects' actual visual acuity (better than 1.0). From this result, we consider that the stereoscopic presentation reduced the effective visual acuity in the visualization environment. However, the resolution of parallel-line recognition at near distance (0.5 mm) was finer than the pixel width (0.97 mm) of the projected image. We consider that this result is also an effect of the high-resolution stereo image.
Therefore, it is understood that the 4K stereo display can represent detailed information with high accuracy, though the recognized spatial resolution of the displayed image depends on the distance and direction from the user. Namely, this means that visual data mining using super high-definition stereo images has the ability to transmit a large amount of information from the computer to the user with high accuracy.
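The quoted acuity can be reproduced from the visual angle of the recognized gaps, since decimal visual acuity is the reciprocal of the resolvable angle in arcminutes. A short check of both reported conditions:

```python
import math

def decimal_acuity(gap_mm, distance_mm):
    """Decimal visual acuity = 1 / (resolvable visual angle in arcminutes)."""
    angle_rad = 2 * math.atan(gap_mm / (2 * distance_mm))
    angle_arcmin = math.degrees(angle_rad) * 60
    return 1 / angle_arcmin

near = decimal_acuity(0.5, 500)    # 0.5 mm gap at about 50 cm
far = decimal_acuity(10, 10000)    # 10 mm gap at about 10 m
# Both conditions correspond to an acuity of roughly 0.3.
```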
[Fig. 6 graph: contours of the recognized line intervals (0.5, 1, 2.5, 5 and 10 mm) plotted over viewing direction (5 to 30 degrees from the front toward the left side) and distance from the subject (0 to 14 m)]
Fig. 6. Result of experiment on perception of spatial resolution.
evaluate the three-dimensional sensation felt from the displayed image using a three-grade system (2: clear three-dimensional sensation, 1: unclear three-dimensional sensation, 0: no three-dimensional sensation).
Figure 8 shows the result of this experiment for nine subjects. Each plotted point represents the average value of the evaluation for each depth condition and each near-distance condition. From the results, there was a significant difference among the near-side distance conditions at the 1% level, though there was no significant difference among the depth-of-volume conditions. From the graph, we can see that the evaluation of the depth sensation felt by the subjects decreased as the number of displayed points increased. This means that the subjects could not recognize the three-dimensional scene from the left-eye and right-eye images when too many points were displayed. Namely, the representation ability of the super high-definition display for three-dimensional point cloud images would decrease when a large amount of point data are
displayed simultaneously.
[Fig. 7: experimental setup. Point cloud data with a depth of 1.0, 1.5, 2.0 or 2.5 m are displayed at a near-side distance of 1, 2 or 3 m from the subject, who stands 3 m from the screen]
[Fig. 8 plots the average evaluation score against the number of point data (x 10,000, up to 50) for each combination of depth (1.0 to 2.5 m) and near-side distance (1.0 to 3.0 m)]
In addition, the ability of representing the three-dimensional point cloud image decreased steeply when the data were displayed near the subjects. We can
consider that this is because the binocular parallax was large when the point image was
displayed close to the subject in front of the screen. Therefore, we can understand that controlling the number of visualized data points is very important when a large amount of point cloud data is visualized.
[Figure: data processing flow in which data in the application fields undergo pre-processing before visualization]
5.2 Interface
In data visualization, it is important that the user can interact intuitively with the visualized data to recognize the phenomenon. In addition, in visual data mining, it is also important that the user can operate the visualized data precisely. Therefore, in this system, a GUI and physical controllers are used to realize intuitive and precise operation.
Fig. 10 shows the screen image of the GUI interface. The visualization parameters X, Y, Z, C and R are defined using the linear equations

X = a11 d1 + a12 d2 + a13 d3 + a14 d4 + a15 d5 + a16 d6 + a17 d7 + a18 d8
Y = a21 d1 + a22 d2 + a23 d3 + a24 d4 + a25 d5 + a26 d6 + a27 d7 + a28 d8
Z = a31 d1 + a32 d2 + a33 d3 + a34 d4 + a35 d5 + a36 d6 + a37 d7 + a38 d8
C = a41 d1 + a42 d2 + a43 d3 + a44 d4 + a45 d5 + a46 d6 + a47 d7 + a48 d8
R = a51 d1 + a52 d2 + a53 d3 + a54 d4 + a55 d5 + a56 d6 + a57 d7 + a58 d8

where d1, ..., d8 are data values in the floating-point number fields of the database table, and a11, ..., a58 are coefficients specified by the user. Each tab on the right side of the GUI screen corresponds to one visualization parameter, and the user can define these visualization parameters by moving the sliders to adjust the coefficients.
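This mapping is a matrix-vector product and can be sketched directly; the coefficient values below are arbitrary examples, not values from the system:

```python
def map_parameters(coeffs, d):
    """coeffs: 5x8 matrix of a_ij; d: 8 data values -> (X, Y, Z, C, R)."""
    assert len(coeffs) == 5 and all(len(row) == 8 for row in coeffs)
    return tuple(sum(a * v for a, v in zip(row, d)) for row in coeffs)

# Example: identity-like coefficients take X from d1, Y from d2,
# Z from d3, C from d4 and R from d5.
coeffs = [[1, 0, 0, 0, 0, 0, 0, 0],
          [0, 1, 0, 0, 0, 0, 0, 0],
          [0, 0, 1, 0, 0, 0, 0, 0],
          [0, 0, 0, 1, 0, 0, 0, 0],
          [0, 0, 0, 0, 1, 0, 0, 0]]
X, Y, Z, C, R = map_parameters(coeffs, [139.7, 35.6, -10.0, 6.1, 2.3, 0, 0, 0])
```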
Point cloud data can be displayed in various colors according to the value of the color parameter C. In this system, the color of the displayed data is defined by specifying the minimum and maximum values of the color parameter using the sliders, and the value of the color parameter is mapped to the displayed color. In this case, when the color value is less than the minimum value, the data is visualized in gray, and when the color value exceeds the maximum value, the data is visualized in white.
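The color mapping described here reduces to a clamp-and-normalize function; the gray, white and ramp RGB values below are illustrative choices, not the system's actual palette:

```python
def map_color(c, c_min, c_max):
    """Map color parameter value c to an RGB tuple, as described in the text."""
    if c < c_min:
        return (0.5, 0.5, 0.5)           # below the minimum -> gray
    if c > c_max:
        return (1.0, 1.0, 1.0)           # above the maximum -> white
    t = (c - c_min) / (c_max - c_min)    # normalize into [0, 1]
    return (t, 0.0, 1.0 - t)             # simple blue-to-red ramp (assumed)
```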
The displayed data can be filtered using the value of the reference parameter R. When a center value and a range are specified using the sliders on the left side of the GUI screen, the visualized data is filtered by comparing the value of the reference parameter with the filtering range.
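The filtering by the reference parameter R then reduces to a range test around the chosen center value; a minimal sketch:

```python
def filter_points(points, ref_values, center, half_range):
    """Keep only points whose reference value lies within center +/- half_range."""
    return [p for p, r in zip(points, ref_values)
            if center - half_range <= r <= center + half_range]

kept = filter_points(["a", "b", "c"], [1.0, 5.0, 9.0],
                     center=5.0, half_range=2.0)
```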
In addition, the viewpoint for the visualization image in three-dimensional space can be controlled by dragging the cursor in the lower-left area of the GUI. This function supports rotation of the viewpoint around an arbitrary axis as well as movement along the screen and perpendicular to it. The user's viewpoint can thus be controlled minutely using the GUI, while the user can also walk through the visualization space using the game controller.
Though the parameter values are controlled using the sliders on the GUI, these sliders are also assigned to MIDI control channels. The parameter values can therefore also be controlled using the physical MIDI controller connected to the interface PC shown in Figure 11. Since the MIDI controller communicates with the interface PC using the USB protocol, the control data are sent to the visual data mining system through the interface PC. Therefore, the user can control multiple parameter values interactively in the immersive virtual environment by operating the physical MIDI controller without looking at the console. Figure 12 shows a user operating the platform of immersive visual data mining using the GUI on the interface PC and the MIDI controller.
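Mapping the physical controller to the sliders amounts to rescaling each 7-bit MIDI control-change value (0 to 127) into a parameter's range; the channel assignment in the example is hypothetical:

```python
def cc_to_parameter(cc_value, lo, hi):
    """Rescale a MIDI control-change value (0..127) into the range [lo, hi]."""
    if not 0 <= cc_value <= 127:
        raise ValueError("MIDI CC values are 7-bit (0..127)")
    return lo + (cc_value / 127.0) * (hi - lo)

# e.g. assume channel 1 drives coefficient a11 over the range [-1, 1]
a11 = cc_to_parameter(127, -1.0, 1.0)
```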
Fig. 12. Operation using GUI interface and physical MIDI controller.
Figure 13 shows the visualization image of the seismic hypocenter data, colored according to the magnitude value. From the high-resolution stereo image, the user can recognize the tendency of the earthquake distribution. Figure 14 shows another example of the visualization image, in which the occurrence time of each earthquake is mapped to the z-axis. From this image, the user can see how long many small aftershocks continued after a big earthquake.
Fig. 14. Visualization of seismic hypocenter data by mapping occurrence time to z-axis
Figure 15 shows the visualization of b-values calculated from the seismic hypocenter data. In this case, it was difficult for the user to recognize the distribution of the visualized b-value data, because too many points were visualized simultaneously. Figure 16 shows the visualization image of the b-value data filtered interactively by the time parameter. With the reduction of the number of displayed points, the user can recognize the features of the spatially distributed data. These visualization images were rendered in real time through the user's interactive operation of the physical MIDI controller. Thus, in this application, the user could effectively examine the features of the earthquake distribution interactively.
In this system, the user can access these databases from the virtual environment and visualize the retrieved data by specifying conditions. Thus, this system enables the user to understand the relationships among the hypocenter, terrain, basement depth, and plate structure data intuitively, and to analyze the features of the earthquake phenomenon, by overlapping these data in the three-dimensional visualization environment.
As for the mechanism for integrating the visualized data in the virtual space, the plug-in function of the OpenCABIN library was used. In this system, each data set is visualized by a different application program, and they are integrated in the three-dimensional space at runtime. Figures 17 and 18 show several visualization data sets integrated in the same space. In these examples, visualization programs for hypocenter data, terrain data, basement depth data and plate structure data are plugged in to represent the relations among the data. When the hypocenter data are visualized using spheres, the size of each sphere indicates the magnitude of the earthquake. The terrain data are created by mapping texture images captured from satellites onto the shape model, and the basement depth and plate structure data are represented using colors that indicate the depth values.
Fig. 17. Visualization of relationship between terrain data and hypocenter data
Fig. 18. Visualization of relationship among hypocenter data, basement depth data and plate
structure data.
In this system, when several visualization programs are plugged in, toggle buttons that show the conditions of each data set are displayed. The user can switch the visibility of each data set by using the toggle buttons in the virtual space. For example, the user can change the visualized data from the combination of hypocenter data and basement depth data to the combination of hypocenter data and plate structure data while the application programs are running in the visualization environment. By using this method, the user could intuitively understand the features of the hypocenter distribution and its relation to the other data. For example, the user could see whether a given earthquake occurred on the plate boundary or within the plate structure. Thus, this system could be used effectively to represent the relationships among several kinds of data in three-dimensional space and to analyze the earthquake phenomenon.
7. Conclusion
In this study, a super high-definition immersive visual data mining environment that uses 4K stereo projectors was constructed. In this system, it is expected that the effect of visual data mining is greatly improved, since the super high-definition stereo image transmits a large amount of information from the computer to the user and the interactive interface enables the user to explore the data space. However, the results of the experiments also suggested limitations in the representation ability of the super high-definition image. Therefore, a platform of immersive visual data mining for point cloud data was developed so that the user can easily visualize the data by changing the visualization method and visualization parameters by trial and error. This platform was applied to seismic data analysis, and several kinds of data such as the map, terrain model, basement depth and plate structure were visualized and overlapped with the hypocenter data. The effectiveness and possibility of intuitive and accurate analysis in the immersive visual data mining environment were then confirmed through interaction with the visualized data.
Future research will include developing more effective visual data mining methods through collaboration with earthquake experts and applying this technology to other application fields.
8. Acknowledgment
This study was partially supported by the Keio University Global COE program (Center for Education and Research of Symbiotic, Safe and Secure System Design). We would also like to thank Takashi Furumura, Shoji Itoh, Kengo Nakajima, Takahiro Katagiri (The University of Tokyo), Hanxiong Chen, Osamu Tatebe, Atsuyuki Morishima and Hiroto Tadano (University of Tsukuba) for their support.
9. References
Ammoura, A., Zaiane, O.R., Ji, Y. (2001). Immersed Visual Data Mining: Walking the Walk,
Lecture Notes in Computer Science, Proc. of BNCOD 2001 (18th British National
Conference on Databases), pp.202-218
Ferey, N., Gros, P.E., Herisson, J., Gherbi, R. (2005). Visual Data Mining of Genomic
Databases by Immersive Graph-Based Exploration, Proc. of 3rd International
Conference on Computer Graphics and Interactive Techniques in Australasia and South
East Asia, pp.143-146
Furumura, T., Kennett, B.L.N. (2005). Subduction Zone Guided Waves and the
Heterogeneity Structure of the Subducted Plate: Intensity anomalies in northern
Japan, Journal of Geophysical Research, Vol.110, B10302.1-B10302.27
Kantardzic, M. (2002). Data Mining: Concepts, Models, Methods, and Algorithms, Wiley-IEEE
Press
Keim, D.A. (2002). Information Visualization and Visual Data Mining, IEEE Transactions on
Visualization and Computer Graphics, Vol.7, No.1, pp.100-107
Nagel, H.R., Granum, E., Bovbjerg, S., Vittrup, M. (2008). Immersive Visual Data Mining:
The 3DVDM Approach, Visual Data Mining: Theory, Techniques and Tools for Visual
Analytics, Springer-Verlag, pp.281-311
Ogi, T., Daigo, H., Sato, S., Tateyama, Y., Nishida, Y. (2008). Super High Definition Stereo
Image Using 4K Projection System, ICAT 2008 (Proceedings of 18th International
Conference on Artificial Reality and Telexistence), pp.365-366
Okada, Y., Kasahara, K., Hori, S., Obara, K., Sekiguchi, S., Fujiwara, H. Yamamoto, A. (2004).
Recent progress of seismic observation networks in Japan: Hi-net, F-net, K-NET
and KiK-net, Earth Planets Space, 56, xv-xxviii
Renambot, L., Jeong, B., Jagodic, R., Johnson, A., Leigh, J., Aguilera, J. (2006). Collaborative
Visualization using High-Resolution Tiled Displays, CHI 06 Workshop on Information
Visualization and Interaction Techniques for Collaboration Across Multiple Displays
Tateyama, Y., Oonuki, S., Ogi, T. (2008). K-Cave Demonstration: Seismic Information
Visualization System Using the OpenCABIN Library, ICAT 2008 (Proceedings of 18th
International Conference on Artificial Reality and Telexistence), pp.363-364
Wegman, E.J., Symanzik, J. (2002). Immersive Projection Technology for Visual Data Mining,
Journal of Computational and Graphical Statistics, Vol.11, pp.163-188
Wong, P.C. (1999). Visual Data Mining, Computer Graphics and Applications, Vol.19, No.5,
pp.20-21
1. Introduction
Multi-User Virtual Environments (MUVEs) have attracted much attention recently due to the increasing number of users and potential applications. Fig. 1 shows the common components that a MUVE system may provide. Generally speaking, a MUVE refers to a virtual world that allows multiple users to log in concurrently and interact with each other through text or graphics provided by the system. On-line games can be considered a special kind of virtual environment with specific characters, episodes, and ways of interaction. Other MUVE systems such as SecondLife provide a general framework for users to design their own 3D contents and interact with other users through their avatars in a more general way. Although the users are allowed to build their own world, the animations that can be displayed are limited to those that have been prepared by the system. In addition, due to the lack of semantic information, it is not feasible to design virtual avatars that are controlled by the computer to interact with other avatars.
Under the concept of Web 2.0, we think the future of virtual environments will also depend on how easily users can share their own designs of procedures for customized animations and high-level behaviours. However, it is a great challenge to design an extensible virtual environment system that allows users to write their own customized procedures that can dynamically acquire information about the virtual environment and other users. In our previous work, we succeeded in extending a MUVE system developed by ourselves, called IMNET (Li et al., 2005), to allow user-defined animation procedures to be specified, downloaded, and executed on the fly (Chu et al., 2008). However, in order to enable these user-defined procedures to create richer animations for interactions, we must be able to describe the semantics of the objects in the world in a standard way accessible to all potential animation/behaviour designers.
In this paper, we aim to make use of ontology to describe the semantics of the objects in the virtual environment such that users can design their own animation procedures based on this information. For example, if we can acquire object information such as 3D geometry, height, and 2D approximation, we can design a motion planning procedure that can generate a collision-free path for the avatar to walk to a given destination. In addition, we have also designed the ontology for information exchange between avatars and added a new information query mechanism to facilitate the communication between avatars. These new functions will be demonstrated through several examples in which user-designed programs acquire application-specific semantics in the standard ontology format after the programs have been deployed dynamically to other clients' machines.
Fig. 1. Common components in a multi-user virtual environment: (a) login; (b) choose an avatar; (c) interact with the virtual world; (d) a scripting interface (optional)
The remainder of the paper is organized as follows. In the next section, we review the research related to our work. In the third section, we describe the example design of the ontology for the objects in the virtual worlds and for the avatars. In Section 4, we describe the improved communication protocol allowing on-demand queries of information among avatars. In the following two sections, we give examples of how to design animation components that can take advantage of the semantic information to generate richer user behaviours. Finally, we conclude the paper with future directions.
2. Related work
In this work, we aim to provide a smart virtual environment that can enable richer contents and interactions. Before the concept of the semantic virtual environment emerged, there had been much research on how to integrate AI technologies into virtual environment systems. R. Aylett et al. (2001) found that this type of virtual environment has several common features. For example, these systems added components for solving problems such
Realizing Semantic Virtual Environments with Ontology and Pluggable Procedures 173
[Fig. 2 diagram: the IMWorld class hasWorldInfo (WorldInfo: file, description) and hasObject* WorldObject; each WorldObject carries the attributes name, tag, baseLevel and height, plus hasGeometry (GeometryInfo: file), hasTransform (Transform: Scale, Translation, Rotation with x, y, z and rot values) and hasApproximation (Approximation2D, which hasPolygon with x, z vertices); Ground is a subclass of WorldObject, and HotPosition is related through isFocusedOn]
Fig. 2. Ontology design for virtual world
Our ontology design for the virtual environment is shown in Fig. 2. The root of the world document is the IMWorld node, which contains the world information (WorldInfo) and all the virtual objects (WorldObject) in the world. In order to retain the semantic information of the virtual objects existing in the original IMNET, we have designed the GeometryInfo and Transform nodes. Each object also has some additional attributes such as name, tag, baseLevel, and height. The tag attribute is designed for the user to denote application-specific properties of virtual objects. For example, in the path-planning example, one can tag certain objects as sidewalk or crosswalk such that these regions can be treated appropriately by the path planner according to their meanings in the world. Each object may also have the attribute Approximation2D, a polygon that can be used to define a 2D approximation of an obstacle in the environment for the path planner. In addition, if the 2D approximation is oversimplified for the application, one can also use the baseLevel and height attributes to define 3D approximation regions where the obstacles are located. If these attributes are not available for some objects, they can still be computed from the given 3D geometry and transformation of the objects. Some objects may also serve as the ground of the world through the Ground node to define the boundary of the world. In addition, some objects can be treated as a HotPosition when they are foci of interest in the application.
by using the hasBehaviour property to connect to the Behaviour class. This Behaviour class defines the procedure (with name, package, and codebase) used to generate the desired animation. In addition, an avatar may contain some basic attributes such as name, geometryInfo, and status. We also use the hasFriend and hasPosition properties to obtain the friendship and current position information of the avatars.
[Figure: avatar ontology. The Avatar class has the attributes name, status, UI and geometryInfo (each with an updateTimestamp), the hasBehavior* property (Behavior: package, codeBase), hasFriend*, and hasPosition (Position: x, y, z with updateTimestamp)]
first loads and parses the OWL file into an object format through the automatically generated Java class and the Protégé API. The geometry file for each object is then retrieved from the ontology and loaded into the system by the VRMLModelFactory module.
Fig. 5. Processing an OWL file and loading multiple VRML files to generate the virtual world
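The loading step can be illustrated with a simplified world description. The sketch below uses plain XML and invented element names instead of OWL, whereas the real system uses generated Java classes and the Protégé API:

```python
import xml.etree.ElementTree as ET

# A toy world description; element and attribute names are illustrative only.
WORLD = """
<IMWorld>
  <WorldObject name="bench" tag="obstacle">
    <GeometryInfo file="bench.wrl"/>
  </WorldObject>
  <WorldObject name="zebra" tag="crosswalk">
    <GeometryInfo file="zebra.wrl"/>
  </WorldObject>
</IMWorld>
"""

def load_world(xml_text):
    """Parse world objects and collect the geometry files to hand to a model loader."""
    root = ET.fromstring(xml_text)
    objects = []
    for obj in root.findall("WorldObject"):
        geom = obj.find("GeometryInfo")
        objects.append({"name": obj.get("name"),
                        "tag": obj.get("tag"),
                        "geometry_file": geom.get("file") if geom is not None else None})
    return objects

objects = load_world(WORLD)
```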
The application protocol in the original IMNET is similar to that of other MUVEs, which only encapsulates predefined message types in the XML format. The underlying animation scripting language, XAML, is an example of such a message type (Li et al., 2004). Another example is the message for textual information used in the chat module. For instance, in Fig. 6, we show an example where User1 wants to send a <Chat> message to User2. However, in the original design there is no way for the clients to query the information of other avatars that may be defined by the avatar designers instead of the system. This function is crucial for the avatars to exchange information for richer interactions in a semantic virtual environment.
[Figure: communication architecture for avatar information queries. On the client, a question-processing component answers queries using the static avatar information held in the ontology, exchanges information updates and answers with the server through the communication module, and offers the user a dialog interface for asking questions]
5. Demonstrative examples
In this section, we will give two examples of using semantic information in the virtual world
to enhance the functions and behaviours of the avatars.
<MoPlan package='imlab.osgi.bundle.interfaces'
codebase='http://imlab.cs.nccu.edu.tw/plan.jar'>
<param name="-s" value="1.1 2.3"/>
<param name="-g" value="5.2 3.8"/>
</MoPlan>
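Before invoking the planner, a receiving client has to extract the start and goal positions from such a message. A sketch of that parsing step, using a standard XML parser rather than IMNET's actual message handler:

```python
import xml.etree.ElementTree as ET

MESSAGE = """<MoPlan package='imlab.osgi.bundle.interfaces'
  codebase='http://imlab.cs.nccu.edu.tw/plan.jar'>
  <param name="-s" value="1.1 2.3"/>
  <param name="-g" value="5.2 3.8"/>
</MoPlan>"""

def parse_moplan(xml_text):
    """Return (codebase, start, goal) from a MoPlan message."""
    root = ET.fromstring(xml_text)
    params = {p.get("name"): tuple(float(x) for x in p.get("value").split())
              for p in root.findall("param")}
    return root.get("codebase"), params["-s"], params["-g"]

codebase, start, goal = parse_moplan(MESSAGE)
```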
[Fig. 12 diagram: the MoPlan service in the planner bundle obtains the OWL model from IMBrowser through the MapLoader, performs path planning, and passes the path data to the avatar-walking animation manager, which produces a XAML script]
Fig. 12. The process of how to generate animations through the motion planning component
Obstacle information can not only be inferred from low-level geometry but can also be given as an approximation by the scene designer. In Section 3, we designed an optional attribute called Approximation2D in the ontology of a virtual object. In Fig. 13, we show an example of a collision-free path generated by the planner using the 2D approximations of the objects in the world. If the planner can find this 2D approximation for an object, it uses it to build the 2D bitmap needed by the planner. If not, it can still build the convex hull of the 3D geometry and project it onto the ground to form a 2D approximation. In other words, semantic information can be designed to facilitate automatic reasoning, but it is not mandatory. The designers of virtual objects are not obligated to define all attributes in an ontology that could become large in collaborative creation. In addition, the user-defined animation procedures do not easily break down in such a loosely coupled distributed environment, since they can take this into account at the design stage.
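The fallback described here, projecting the 3D geometry onto the ground and taking its convex hull, can be sketched with the classic monotone-chain algorithm (the choice of the x-z ground plane is an assumption):

```python
def ground_hull(points_3d):
    """Project (x, y, z) points onto the ground plane and return their 2D convex hull."""
    pts = sorted(set((x, z) for x, _, z in points_3d))  # ground plane assumed x-z
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # hull vertices, no repeated endpoint
```

For a box-shaped object, the eight corners collapse to the four corners of its footprint, which is exactly the 2D bitmap region the planner needs.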
However, some semantic information cannot be inferred directly from geometry. For
example, in the virtual environment, there could be some crosswalk or sidewalk regions
that need to be used whenever possible. One can tell this kind of objects from their
appearance but it would be difficult for the machine to infer their functions through
geometry. In this case, the planner has to acquire this information through the semantics
defined in the ontology of the virtual world. In the example shown in Fig. 14, the planner
knows where the sidewalk and crosswalk are through object tagging in the ontology and
makes the regions occupied by these objects a higher priority when planning the path for
the avatar. The potential values in these regions are lowered to increase the priority
during the search for a feasible path. Consequently, a path passing through these regions
was generated in the example shown in Fig. 14. In addition, according to this semantic
information, appropriate animations, such as looking around before moving onto the
crosswalk region, could be inserted into the motion sequence of walking to the goal, as
shown in Fig. 14(c).
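The effect of lowering potential values in tagged regions can be illustrated with a minimal grid-based sketch; this is not the authors' planner, and the grid, costs, and Dijkstra search below are illustrative assumptions:

```python
import heapq

def plan_path(cost, start, goal):
    """Dijkstra search on a grid of per-cell entry costs. Cells tagged
    as crosswalk/sidewalk simply carry a lower cost (a lowered
    'potential'), so feasible paths are drawn through them."""
    rows, cols = len(cost), len(cost[0])
    dist, prev = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, cell = heapq.heappop(queue)
        if cell == goal:
            break
        if d > dist[cell]:
            continue
        r, c = cell
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < rows and 0 <= nc < cols and cost[nr][nc] is not None:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(queue, (nd, (nr, nc)))
    path, cell = [], goal
    while cell != start:
        path.append(cell)
        cell = prev[cell]
    return [start] + path[::-1]

# Row 2 is a crosswalk: entering its cells is cheap, so the planner
# detours through it rather than walking straight across row 0.
N, X = 3.0, 0.2
grid = [[N]*5, [N]*5, [X]*5, [N]*5, [N]*5]
route = plan_path(grid, (0, 0), (0, 4))
```

With these costs the direct route along row 0 costs 12, while the detour through the crosswalk row costs 10, so the returned path passes through the tagged region.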
Fig. 13. An example path generated by the path planner: (a) avoiding the obstacles described by the 2D approximation in semantics; (b) a snapshot of the scene from the avatar's view.
To facilitate the interaction between the avatars, we have designed a component called
SocialService. There are three steps for initiating an interaction between avatars as shown in
Fig. 15. A user who would like to initiate the interaction first sends a customized XAML
script shown in Fig. 16 to the other avatar (step 1) for it to install this social interaction
component (step 2). Once the component has been installed, interaction queries related to
social activities can be delivered through the communication protocol described in Section 4
and processed by the SocialService component (step 3).
Fig. 15. Initiating an interaction between two clients: 1) request, 2) component install, 3) interaction through IMNET messages
<SocialService package='imlab.osgi.bundle.interfaces'
codebase='http://imlab.cs.nccu.edu.tw/social.jar'/>
Fig. 16. Starting the mechanism of the interaction between avatars through XML script
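On the receiving client, such a script has to be parsed before the referenced bundle can be fetched and installed. A minimal sketch of that first parsing step, reusing the MoPlan script shown earlier (the function name and the returned structure are assumptions, not the actual IMNET API):

```python
import xml.etree.ElementTree as ET

SCRIPT = """<MoPlan package='imlab.osgi.bundle.interfaces'
                    codebase='http://imlab.cs.nccu.edu.tw/plan.jar'>
  <param name="-s" value="1.1 2.3"/>
  <param name="-g" value="5.2 3.8"/>
</MoPlan>"""

def parse_component_script(xml_text):
    """Extract the component name, bundle location, and parameters
    from an install script such as the MoPlan/SocialService examples.
    Downloading the jar and registering the OSGi bundle would follow
    this step."""
    root = ET.fromstring(xml_text)
    return {
        "component": root.tag,
        "package": root.get("package"),
        "codebase": root.get("codebase"),
        "params": {p.get("name"): p.get("value")
                   for p in root.findall("param")},
    }

spec = parse_component_script(SCRIPT)
```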
In the first scenario, both users are real users. First, user1 would like to invite user2 to be his friend (Fig. 17(a)). Therefore, a query message: "User1 added you to his friend list. Do you want to invite user1 to be your friend as well?" appeared in user2's interface (Fig. 17(b)). If user2 chose yes, user1 would be added into her friend list and a confirmation message would be sent back to user1 (Fig. 17(c)). Through the interaction between the two real users, the friend information was updated in the ontology of both avatars.
In the second scenario, user1 arranged a virtual user called doorkeeper to watch the door and provide information to potential guests (Fig. 18(1~2)). When user2 entered a designated region, the doorkeeper would turn to face user2 and ask: "May I help you?" At the first encounter, user2 had just entered this area by accident and therefore chose the answer: "Just look around." The doorkeeper replied: "Have a good day!" (Fig. 18(3~5)) The state of the doorkeeper in this interaction was then set to FINISH. After user2 left the area, the state was restored to IDLE (Fig. 18(6)). Assume that after some period of time, user2 approached the doorkeeper again for a second time. This time user2 chose: "I'm looking for my friend." The doorkeeper replied: "Who's your friend?" Then user2 answered: "Jerry." At this moment, the doorkeeper queried the avatar ontology of user1 (named Jerry) to see if user2 was in his friend list. If so, the doorkeeper would inform user2 of the current position of user1. Otherwise, the doorkeeper would answer: "Sorry, Jerry does not seem to know you." If there were no user called Jerry, the doorkeeper would answer: "Jerry is not in this world." (Fig. 18(7~9))
Fig. 18. An example of interaction between a real user and a virtual user (doorkeeper)
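The doorkeeper's behaviour can be read as a small dialogue state machine. The class below is an illustrative sketch only: a plain dictionary stands in for the avatar ontology, the position string is hypothetical example data, and the method names are not taken from the actual system:

```python
class Doorkeeper:
    """Illustrative dialogue logic for the doorkeeper scenario."""

    def __init__(self, ontology):
        # ontology: {avatar_name: {"friends": [...], "position": ...}}
        self.ontology = ontology
        self.state = "IDLE"

    def on_enter(self, guest):
        # Triggered when a guest enters the designated region.
        return "May I help you?"

    def answer(self, guest, reply, friend=None):
        self.state = "FINISH"  # interaction handled, as in the text
        if reply == "Just look around.":
            return "Have a good day!"
        if reply == "I'm looking for my friend.":
            if friend is None:
                return "Who's your friend?"
            owner = self.ontology.get(friend)
            if owner is None:
                return f"{friend} is not in this world."
            if guest in owner["friends"]:
                # Friendship confirmed via the owner's avatar ontology.
                return f"{friend} is at {owner['position']}."
            return f"Sorry, {friend} does not seem to know you."

    def on_leave(self, guest):
        self.state = "IDLE"    # restored after the guest leaves
```

For instance, with an ontology entry for Jerry listing user2 as a friend, the question from user2 yields Jerry's position, while the same question from a stranger is politely refused.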
software components in IMNET. In this work, we have extended the MUVE system to allow
the semantics of the objects and avatars in the virtual environment to be described in the
form of ontology. This provides a standard way for the software components to acquire
semantic information of the world for further reasoning. We have used two types of
examples: path planning and social interaction, to show how users can design their own
code to facilitate richer or autonomous behaviours for their avatars (possibly virtual). We hope that these examples will shed some light on the further development of object ontology and more sophisticated applications.
7. Acknowledgement
This research was funded in part by the National Science Council of Taiwan, R.O.C., under
contract No. NSC96-2221-E-004-008. The paper is extended from a conference paper published in the International Conference on Virtual Reality Continuum and Its Applications in Industry (VRCAI 2008).
An Overview of Interaction Techniques and 3D Representations for Data Mining
1. Introduction
Since the emergence of databases in the 1960s, the volume of stored information has grown exponentially every year (Keim (2002)). This accumulation of information in databases has motivated the development of a new research field: Knowledge Discovery in Databases (KDD) (Frawley et al. (1992)), which is commonly defined as the extraction of potentially useful knowledge from data. The KDD process is commonly defined in three stages: pre-processing, Data Mining (DM), and post-processing (Figure 1). At the output of the DM process (post-processing), the decision-maker must evaluate the results and select what is interesting. This task can be improved considerably with visual representations by taking advantage of human capabilities for 3D perception and spatial cognition. Visual representations can allow rapid information recognition and show complex ideas with clarity and efficacy (Card et al. (1999)). In everyday life, we interact with various information media which present us with facts and opinions based on knowledge extracted from data. It is common to communicate such facts and opinions in a visual, preferably interactive, form. For example, when watching weather forecast programs on TV, the icons of a landscape with clouds, rain and sun allow us to quickly build a picture of the weather forecast. Such a picture is sufficient when we watch the weather forecast, but professional decision-making is a rather different situation. In professional situations, the decision-maker is overwhelmed by DM algorithm results. Representing these results as static images limits the usefulness of their visualization. This explains why the decision-maker needs to be able to interact with the data representation in order to find relevant knowledge. Visual Data Mining (VDM), presented by Beilken & Spenke (1999) as an interactive visual methodology "to help a user to get a feeling for the data, to detect interesting knowledge, and to gain a deep visual understanding of the data set", can facilitate knowledge discovery in data.
In 2D space, VDM has been studied extensively and a number of visualization taxonomies have been proposed (Herman et al. (2000), Chi (2000)). More recently, hardware progress has led to the development of real-time interactive 3D data representation and immersive Virtual Reality (VR) techniques. The inclusion of aesthetically appealing elements, such as 3D graphics and animation, increases the intuitiveness and memorability of a visualization and eases its perception by the human visual system (Spence (1990), Brath et al. (2005)). Although there is still a debate concerning 2D vs 3D data visualization (Shneiderman (2003)), we believe that
3D and VR techniques have better potential to assist the decision-maker in analytical tasks, and to deeply immerse the user in the data sets. In many cases, the user needs to explore data and/or knowledge from the inside-out and not from the outside-in as in 2D techniques (Nelson et al. (1999)). This is only possible using VR and Virtual Environments (VEs). VEs allow users to navigate continuously to new positions inside the data sets, and thereby obtain more information about the data. Although the benefits offered by VR compared to desktop 2D and 3D still need to be proven, more and more researchers are investigating its use with VDM (Cai et al. (2007)). In this context, we are trying to develop new 3D visual representations to overcome some limitations of 2D representations. VR has already been studied in different areas of VDM such as pre-processing (Nagel et al. (2008), Ogi et al. (2009)), classification (Einsfeld et al. (2006)), and clustering (Ahmed et al. (2006)).
In this context, we review some work that is relevant for researchers seeking or intending to
use 3D representation and VR techniques for KDD. We propose a table that summarizes 14 VDM tools focusing on 3D/VR and interaction techniques along three dimensions:
- visual representations;
- interaction techniques;
- steps in the KDD process.
This paper is organized as follows: firstly, we introduce VDM. Then we define the terms related to this field of research. In Section 3, we explain our motivation for using 3D representation and VR techniques. In Section 4, we provide an overview of the current state of research concerning 3D visual representations. In Section 5, we present our motivation for interaction techniques in the context of KDD. In Section 6, we describe the related work on visualization taxonomies and interaction techniques. In Section 7, we propose a new classification for VDM based on both 3D representations and interaction techniques. In addition, we survey representative works on the use of 3D and VR interaction techniques in the context of KDD. Finally, we present possible directions for future research.
Fig. 2. Scientific visualization and information visualization examples: (a) visualization of the flow field around a space shuttle (Laviola (2000)); (b) GEOMI (Ahmed et al. (2006)), an information visualization framework
Beilken & Spenke (1999) presented the purpose of VDM as a way to "help a user to get a feeling
for the data, to detect interesting knowledge, and to gain a deep visual understanding of the
data set". Niggemann (2001) looked at VDM as a visual representation of the data close to the
mental model. In this paper we focus on the interactive exploration of data and knowledge
that is built on extensive visual computing (Gross (1994)).
As humans understand information by forming a mental model which captures only the main information, in the same way, data visualization, similar to the mental model, can reveal hidden information encoded in the data. In addition to the role of the visual data representation, Ankerst (2001) explored the relation between visualization and the KDD process. He defined VDM as "a step in the KDD process that utilizes visualization as a communication channel between the computer and the user to produce novel and interpretable patterns". He also explored three different approaches to VDM, two of which affect the final or intermediate visualization results. The third approach involves the interactive manipulation of the visual representation of the data rather than the results of the KDD methods. These three definitions recognize that VDM relies heavily on human perception capabilities and the use of interactivity to manipulate data representations. They also emphasize the key importance of the following three aspects of VDM: visual representations, interaction processes, and KDD tasks.
In most existing KDD tools, VDM is only used during two particular steps of the KDD process. In the first step (pre-processing), VDM can play an important role since analysts need tools to view and create hypotheses about complex (i.e. very large and/or high-dimensional) original data sets. VDM tools, with interactive data representation and query resources, allow domain experts to explore the data set quickly (de Oliveira & Levkowitz (2003)). In the last step (post-processing), VDM can be used to view and to validate the final results, which are mostly multiple and complex. Between these two steps, an automatic algorithm is used to perform the DM task. Some new methods have recently appeared which aim at involving the user more significantly in the KDD process; they use visualization and interaction more intensively, with the ultimate goal of gaining insight into the KDD problem described by vast amounts of data or knowledge. In this context, VDM can turn the information overload into an opportunity by coupling the strengths of machines with those of humans. On the one hand, methods from KDD are the driving force of the automatic analysis side, while on the other hand, human capabilities to perceive, relate and draw conclusions turn VDM into a very promising research field. Nowadays, fast computers and sophisticated output devices can create meaningful visualizations and allow us not only to visualize data and concepts, but also to explore and interact with this data in real-time. Our goal is to look at VDM as an interactive process with the visual representation of data allowing KDD tasks to be performed.
The transformation of data/knowledge into a significant visualization is not a trivial task. Very often, there are many different ways to represent data and it is unclear which representations, perceptions and interaction techniques need to be applied. This paper seeks to facilitate this task according to the data and the KDD goal to be achieved by reviewing representation and interaction techniques used in VDM. KDD tasks have different goals, and diverse tasks need to be applied several times to achieve a desired result. Visual feedback has a role to play, since the decision-maker needs to analyze such intermediate results before making a decision. We can distinguish two types of cognitive process within which VDM assists users to make a decision:
- Exploration: the user does not know what he/she is looking for (discovery).
- Analysis: the user knows what he/she is looking for in the data and tries to verify it (visual analysis).
3.1 2D versus 3D
Little research has been dedicated to the comparison of 2D and 3D representations.
Concerning the non-interactive visualization of static graphs, 3D representations have
generally not been advised ever since the publications by Tufte (1983) and Cleveland &
McGill (1984). Nevertheless, the experiments of Spence (1990) and Carswell et al. (1991) show
that there is no significant difference of accuracy between 2D and 3D for the comparison
of numerical values. In particular, Spence (1990) pointed out that it is not the apparent
dimensionality of visual structures that counts but rather the actual number of parameters
that show variability. Under some circumstances, information may be processed even faster
when represented in 3D rather than in 2D. Concerning the perception of global trends in data,
experimental results of Carswell et al. (1991) also show an improvement in answer times using
3D but to the detriment of accuracy. Other works compare 2D and 3D within the framework
of interactive visualization. Ware & Franck (1994) indicated that displaying data in 3D instead
of 2D can make it easier for users to understand the data. Finally, Tavanti & Lind (2001)
pointed out that realistic 3D displays could support cognitive spatial abilities and memory
tasks, namely remembering the place of an object, better than with 2D.
On the other hand, several problems arise, such as intensive computation, more complex implementations than 2D interfaces, and user adaptation and disorientation. The first problem can be addressed by using powerful and specialized hardware. However, one of the main problems of 3D applications is user adaptation. In fact, most users only have experience with classical windows, icons, menus, pointing devices (WIMP) and 2D-desktop metaphors. Therefore, interaction with 3D presentations, and possibly the use of special devices, demands considerable adaptation effort from users of this technology. There is still no commonly-accepted standard for interaction with 3D environments. Some research has shown that it takes users some time to understand what kind of interaction possibilities they actually have (Baumgärtner et al. (2007)). In particular, as a consequence of a richer set of interactions and a higher degree of freedom, users may become disoriented.
the user to process the image data 30 times faster than manually. As a result, they suggested that human interaction may significantly increase overall productivity.
We can therefore conclude that stereoscopy and interaction are the two most important components of VEs and the most useful to users. Therefore, the equipment used should be taken into account from the very beginning of application design, and consequently should be considered part of a taxonomy of VDM techniques.
A technique based on the hyper system (Hendley et al. (1999)) for force-based visualization can be used to create a graph representation. The visualization consists of nodes and links whose properties are given by the parameters of the data. Data elements affect parameters such as node size and color, link strength and elasticity. The dynamic graph algorithm enables the self-organization of nodes in the visualization area through a force system, in order to find a steady state and determine the position of the nodes. For example, Beale (2007) proposed the Haiku system (Figure 3(b)), which provides an abstract 3D perspective of clustering algorithm results based on the hyper system. One of the characteristics of this system is that the user can choose which parameters are used to create the distance metric (the distance between two nodes), and which ones affect the other characteristics of the visualization (node size, link elasticity, etc.). Using the hyper system allows related things (belonging to the same cluster) to be near each other, and unrelated things to be far away.
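A minimal force-directed layout in this spirit might look as follows; the force constants, damping, and clamping are illustrative choices, not those of the hyper or Haiku systems:

```python
import random

def force_layout(nodes, edges, steps=300, seed=1):
    """Force-based layout sketch: linked nodes attract like springs
    (edge weight w acts as spring strength, derived from the data),
    all pairs repel, and the system is iterated toward a steady state."""
    rng = random.Random(seed)
    pos = {n: [rng.uniform(-1, 1), rng.uniform(-1, 1)] for n in nodes}
    for _ in range(steps):
        force = {n: [0.0, 0.0] for n in nodes}
        for a in nodes:                      # all-pairs repulsion
            for b in nodes:
                if a == b:
                    continue
                dx = pos[a][0] - pos[b][0]
                dy = pos[a][1] - pos[b][1]
                d2 = dx * dx + dy * dy + 1e-9
                force[a][0] += 0.05 * dx / d2
                force[a][1] += 0.05 * dy / d2
        for a, b, w in edges:                # springs, strength w
            dx = pos[b][0] - pos[a][0]
            dy = pos[b][1] - pos[a][1]
            force[a][0] += w * dx; force[a][1] += w * dy
            force[b][0] -= w * dx; force[b][1] -= w * dy
        for n in nodes:                      # damped, clamped Euler step
            for k in (0, 1):
                pos[n][k] += 0.05 * max(-1.0, min(1.0, force[n][k]))
    return pos

# Three mutually linked nodes form a cluster; "D" has no links
# and is pushed away by repulsion alone.
layout = force_layout(["A", "B", "C", "D"],
                      [("A", "B", 1.0), ("B", "C", 1.0), ("A", "C", 1.0)])
```

After iteration, related nodes settle close together while the unconnected node drifts to the periphery, which is exactly the clustering effect described above.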
Fig. 3. Examples of graph representations: (a) source code: Ougi (Osawa et al. (2002)); (b) association rules: Haiku (Beale (2007)); (c) DocuWorld (Einsfeld et al. (2006))
2. 3D trees
3D trees (Figure 4) are a visualization technique based on the hierarchical organization of data. A tree can represent many entities and the relationships between them.
Fig. 4. An example of trees representing ontology classification: SUMO (Buntain (2008))
3. Geometric shapes
In this technique, 3D objects with certain attributes are used to represent data and knowledge.
The 3D scatter-plot visualization technique (Nagel et al. (2001)) is one of the most common
Fig. 5. Different 3D scatter plot representations: (a) VRMiner (Azzag et al. (2005)), (b)
3DVDM (Nagel et al. (2008)), (c) DIVE-ON (Ammoura et al. (2001)), (d) Visualization with
augmented reality (Meiguins et al. (2006))
(Figure 6). The virtual worlds (sometimes called cyber-spaces) for VDM are generally based either on the information galaxy (Krohn (1996)) or the information landscape metaphor (Robertson et al. (1998)). The difference between the two metaphors is that in the information landscape, the elevation of objects is not used to represent information (objects are placed on a horizontal floor). The specificity of virtual worlds is that they provide the user with some real-world representations.
Fig. 6. Example of virtual world representation: (a) faults projected onto a car model in (Götzelmann et al. (2007)); (b) document classification in @VISOR (Baumgärtner et al. (2007))
In visual exploration, the user can also manipulate the objects in the scene. In order to do this, interaction techniques provide the means to select and to zoom in and out to change the scale of the representation. Beale (2007) has demonstrated that using a system which supports the free exploration and manipulation of information delivers increased knowledge even from a well-known dataset. Many systems provide a virtual hand or a virtual pointer (Einsfeld et al. (2007)), a typical approach used in VEs, which is considered intuitive as it simulates real-world interaction (Bowman et al. (2001)).
- Select: this technique provides users with the ability to mark interesting data items in order to keep track of them when too many data items are visible, or when the perspective is changed. In these two cases, it is difficult for users to follow interesting items. By making items visually distinctive, users can easily keep track of them even in large data sets and/or with changed perspectives.
- Zoom: by zooming, users can simply change the scale of a representation so that they can see an overview (context) of a larger data set (using zoom-out) or the detailed view (focus) of a smaller data set (using zoom-in). The essential purpose is to allow hidden characteristics of data to be seen. A key point here is that the representation is not fundamentally altered during zooming. Details simply come into focus more clearly or disappear into context.
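Both techniques can be sketched together. The class below is an illustrative model, not taken from any of the surveyed tools: selection marks persist across view changes, and zooming only rescales the data-to-screen mapping without touching the underlying items:

```python
class Viewport:
    """Sketch of 'select' and 'zoom': marks survive view changes,
    and zoom changes only the projection scale, never the data."""

    def __init__(self, items):
        self.items = list(items)           # (x, y) data points, untouched
        self.selected = set()              # indices marked by the user
        self.scale, self.center = 1.0, (0.0, 0.0)

    def select(self, index):
        self.selected.add(index)           # stays marked across zooms

    def zoom(self, factor, center=None):
        self.scale *= factor               # zoom-in: factor > 1
        if center is not None:
            self.center = center

    def project(self, index):
        # Map a data point into screen space under the current view.
        x, y = self.items[index]
        cx, cy = self.center
        return ((x - cx) * self.scale, (y - cy) * self.scale)
```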
Visual exploration (as we can see in Section 7) can be used in the pre-processing step of the KDD process to identify interesting data (Nagel et al. (2008)), and in post-processing to validate DM algorithm results (Azzag et al. (2005)). For example, in VRMiner (Azzag et al. (2005)) and in ArVis (Blanchard et al. (2007)), the user can point to an object to select it and then obtain information about it.
In KDD, the user is essentially faced with a mass of data that he/she is trying to make sense of, looking for something interesting. However, interest is an essentially human construct, a perspective on relationships among data that is influenced by tasks, personal preferences, and past experience. For this reason, the search for knowledge should not only be left to computers; the user has to guide it depending upon what he/she is looking for, and hence which area to focus computing power on. Manipulation techniques provide users with different perspectives of the visualized data by changing the representation. One of these techniques is the capability of changing the attributes presented in the representation. For example, in the system shown by Ogi et al. (2009), the user can change the combination of presented data. Other systems have interaction techniques that allow users to move data items more freely in order to make the arrangement more suitable for their particular mental model (Einsfeld et al. (2006)). Filter interaction techniques enable users to change the set of data items being presented based on some specific conditions. In this type of interaction, the user specifies a range or condition, so that only data meeting those criteria are presented. Data outside the range or not satisfying the conditions are hidden from the display or shown differently; even so, the actual data usually remain unchanged, so that whenever users reset the criteria, the hidden or differently-illustrated data can be recovered. The user is not changing data perspectives, just specifying conditions within which data are shown. ArVis (Blanchard et al. (2007)) allows the user to look for a rule with a particular item in it. To do this, the user can search for it in a menu which lists all the rule items and allows the wanted object to be shown.
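A sketch of this filter behaviour, assuming association rules stored as plain dictionaries (the class and method names are illustrative, not from ArVis):

```python
class FilterView:
    """Filter interaction sketch: the user supplies a condition and
    items failing it are hidden from the display, not deleted, so
    resetting the criteria recovers every item."""

    def __init__(self, items):
        self._items = list(items)      # actual data, never modified
        self.criteria = None

    def set_criteria(self, predicate):
        self.criteria = predicate      # e.g. a range or item condition

    def reset(self):
        self.criteria = None           # hidden items become visible again

    def visible(self):
        if self.criteria is None:
            return list(self._items)
        return [it for it in self._items if self.criteria(it)]
```

For example, filtering rules on a single item narrows the display, and `reset()` restores the full set because the underlying data was never changed.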
Dimension               Modalities
Visual representation   Graphs, 3D trees, geometrical shapes, virtual worlds
Interaction techniques  Visual exploration, visual manipulation, human-centered
KDD tasks               Pre-processing, classification, clustering, association rules
design taxonomies include only a small subset of techniques (e.g., locomotion, Arns (2002)). Currently, visualization tools have to provide not only effective visual representations but also effective interaction metaphors to facilitate exploration and help users achieve insight. Having a good 3D representation without a good interaction technique does not mean having a good tool. This classification looks at some representative tools for performing different KDD tasks, e.g., pre-processing and post-processing (classification, clustering and association rules). Different tables summarize the main characteristics of the reported VDM tools with regard to visual representations and interaction techniques. Other relevant information, such as interaction actions (navigation, selection and manipulation, and system control), input-output devices (CAVE, mouse, hand tracker, etc.), presentation (3D representation or VR representation) and year of creation, is also reported.
7.1 Pre-processing
Pre-processing (in VDM) is the task of data visualization before the DM algorithm is used. It is
generally required as a starting point of KDD projects so that analysts may identify interesting
and previously unknown data by the interactive exploration of graphical representations of
a data set without heavy dependence on preconceived assumptions and models. The basic
visualization technique used for data pre-processing is the 3D scatter-plots method, where
3D objects with attributes are used as markers. The main principle behind the design of
traditional VDM techniques, such as the Grand Tour (Asimov (1985)), parallel coordinates (Inselberg & Dimsdale (1990)), etc., is that they are viewed from the outside-in. In contrast to
this, VR lets users explore the data from inside-out by allowing users to navigate continuously
to new positions inside the VE in order to obtain more information about the data. Nelson
et al. (1999) demonstrated through comparisons between 2D and VR versions of the VDM
tool XGobi that the VR version of XGobi performed better.
In the Ogi et al. (2009) system, the user can see several data set representations integrated in the same space and can switch the visibility of each data set. This system can be used to represent the relationships among several data sets in 3D space, but it does not allow the user to navigate through the data set and interact with it. The user can only change the visual mapping of the data set. However, the main advantage of this system is that the data can be presented with a high degree of accuracy using high-definition stereo images, which can be beneficial especially when visualizing a large amount of data. This system has been applied to the visualization and analysis of earthquake data. Using the third dimension has allowed the visualization of both the overall distribution of the hypocenter data and the individual location of any earthquake, which is not possible with a conventional 2D display. Figure 9 shows hypocenter data recorded over 3 years. The system allows the visualization of several databases at the same time, e.g. map data, terrain data, basement depth, etc., and the user can switch the visibility of each data set in the VE. For example, the user can change the visualization data from the combination of hypocenter data and basement depth
data to the combination of hypocenter data and terrain data. Thus, the system can show the relationships between any two of the data sets.
As a result of using VR, the 3DVDM system (Nagel et al. (2008)) is capable of providing
real-time user response and navigation as well as showing dynamic visualization of large
amounts of data. Nagel et al. (2008) demonstrated that the 3DVDM visualization system
allows faster detection of non-linear relationships and substructures in data than traditional
methods of data analysis. An alternative proposal is available with DIVE-ON (Data mining
in an Immersed Visual Environment Over a Network) system, proposed by Ammoura et al.
(2001). The main idea of DIVE-ON is to visualize and interact with data from distributed data warehouses in an immersive VE. The user can interact with such sources by walking or flying towards them. He/she can also pop up a menu, scroll through it and execute all environment, remote, and local functions. Thereby, DIVE-ON makes intelligent use of the natural human capability of interacting with spatial objects and offers considerable navigation possibilities, e.g. walking, flying, transporting and climbing.
Fig. 9. Visualization of earthquake data using a 4K stereo projection system (Ogi et al. (2009))
Inspired by treemaps, Wang et al. (2006) presented a novel space-filling approach for the tree visualization of file systems (Figure 10). This system provides a good overview of a large hierarchical data set and uses nested circles to make it easier to see groupings and structural relationships. By clicking on an item (a circle), the user can see the associated sub-items represented by nested circles in a new view. The system provides the user with a control panel allowing him/her to filter files by type; by clicking on one file type, the other file types are filtered out. A zoom-in/zoom-out function allows the user to see folder or file characteristics such as name, size, and date. A user-feedback system means that the user interaction techniques are friendly and easy to use.
Fig. 10. Representation of a file system with 3D-nested cylinders and spheres (Wang et al. (2006))
7.2 Post-processing
Post-processing is the final step of the KDD process. Upon receiving the output of the DM algorithm, the decision-maker must evaluate and select the interesting part of the results.
7.2.1 Clustering
Clustering is used for finding groups of items that are similar. Given a set of data items, this
set can be partitioned into a set of classes, so that items with similar characteristics are grouped
together.
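As a concrete illustration of this partitioning step, a minimal k-means pass groups nearby points; this is a generic sketch, not the method of any tool surveyed here, and the deterministic initialisation is a simplification for readability:

```python
def sqdist(a, b):
    """Squared Euclidean distance between two equal-length tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    """Plain k-means: repeatedly assign each item to its nearest centre,
    then move each centre to the mean of its group.  Initialising from
    the first k points keeps the sketch deterministic (a real
    implementation would use random restarts)."""
    centers = list(points[:k])
    groups = []
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: sqdist(p, centers[i]))
            groups[nearest].append(p)
        # recompute each centre; keep the old one if its group went empty
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

pts = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
clusters = kmeans(pts, 2)   # two groups: near the origin, near (5, 5)
```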
The GEOMI system proposed by Ahmed et al. (2006) is a visual analysis tool for the
visualization of clustered graphs or trees. The system implements block-model methods to
associate each group of nodes with its corresponding cluster. Two nodes are in the same
cluster if they have the same neighbor set. This tool allows immersive navigation in the data
using 3D head gestures instead of classical mouse input. The system allows the user only
visual exploration. Users can walk into the network and move closer to nodes or clusters by
simply aiming in their direction. Nodding or tilting the head rotates the entire graph along
the X and Y axes respectively, which provides users with intuitive interaction.
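The same-neighbor-set rule can be sketched directly. The edge-list input format is an assumption for illustration, and this covers only the grouping step, not GEOMI's layout or navigation:

```python
from collections import defaultdict

def structural_clusters(edges):
    """Cluster graph nodes by structural equivalence: two nodes land in
    the same cluster exactly when they have the same neighbor set."""
    neighbors = defaultdict(set)
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    # bucket nodes whose neighbor sets are identical
    by_neighbor_set = defaultdict(list)
    for node in sorted(neighbors):
        by_neighbor_set[frozenset(neighbors[node])].append(node)
    return sorted(by_neighbor_set.values())

# a and b both connect to exactly {x, y}, so they are structurally
# equivalent; likewise x and y both connect to exactly {a, b}.
edges = [("a", "x"), ("a", "y"), ("b", "x"), ("b", "y")]
clusters = structural_clusters(edges)   # [['a', 'b'], ['x', 'y']]
```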
The objective of @VSIOR (Baumgärtner et al. (2007)), a human-centered approach, is to create
a system for interaction with documents, meta-data, and semantic relations. The human
capabilities exploited in this context are spatial memory and the fast visual processing of
attributes and patterns. Artificial intelligence techniques assist the user, e.g. in searching for
documents and calculating document similarities.
Meanwhile, VRMiner (Azzag et al. (2005)) uses stereoscopic display and intuitive navigation,
which allow the user to easily select an interesting viewpoint. VRMiner users have found that
the tool helps them solve three major problems: detecting correlations between data
dimensions, checking the quality of discovered clusters, and presenting the data to a panel of
experts. In this context, the stereoscopic display plays a crucial role, in addition to the
intuitive navigation.
A detailed comparison of these techniques is presented in Table 3.
7.2.2 Classification
Given a set of pre-defined categorical classes, classification determines which of these classes
a specific data item belongs to.
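A minimal illustration of this task is the 1-nearest-neighbour rule, chosen here only for brevity among the many classifiers the surveyed tools could employ:

```python
def sqdist(a, b):
    """Squared Euclidean distance between two equal-length tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(item, examples):
    """1-nearest-neighbour rule: give `item` the pre-defined class label
    of the closest labelled example."""
    _, label = min(examples, key=lambda ex: sqdist(item, ex[0]))
    return label

# labelled examples: (feature vector, pre-defined class)
training = [((0.0, 0.0), "low"), ((1.0, 0.2), "low"),
            ((6.0, 6.0), "high"), ((5.5, 6.2), "high")]
label = classify((0.4, 0.1), training)   # -> "low"
```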
SUMO, a tool for document-class visualization, is proposed in Buntain (2008) (Figure 4). The
class structure and the relations among those classes can be presented to the user in graphic
form to facilitate understanding of the knowledge domain. This view can then be mapped
onto the document space, where shapes, sizes, and locations are governed by the sizes,
overlaps, and other properties of the document classes. This view provides a clear picture of
the relations between the resulting documents. Additionally, the user can manipulate the view
to show only those documents that appear in the list of results from a query. Furthermore,
if the results view includes details about subclasses of results and "near miss" elements in
conjunction with positive results, the user can refine the query to find more appropriate results,
or widen the query to include more results if insufficient information is forthcoming. The third
dimension allows the user a more expressive space, complete with navigation methods such
as rotation and translation. In 3D, overlapping lines or labels can be avoided by rotating the
layout to a better point of view.
DocuWorld (Einsfeld et al. (2006)) is a prototype for a dynamic semantic information
system. This tool allows computed structures as well as documents to be organized by
users. Compared to the Web Forager (Card et al. (1996)), a workspace that organizes
documents with different degrees of interest at different distances to the user, DocuWorld
provides the user with more flexible possibilities to store documents at user-defined locations
and visually indicates cluster-document relations (different semantics of connecting clusters
to each other).
A detailed comparison of these techniques is presented in Table 4.
Fig. 11. ARVis, a tool for association rule visualization (Blanchard et al. (2007))
In order to find relevant knowledge for decision-making, the user needs to rummage through
the rules.
ARVis, proposed by Blanchard et al. (2007), is a human-centred approach. It consists
of letting the user navigate freely inside the large set of rules by focusing on successive
limited subsets via a visual representation of the rules (Figure 11). In other words, the user
gradually drives a series of visual local explorations according to his/her interest in the
rules. This approach is original compared to other rule visualization methods (Couturier et al.
(2007), Gordal & Demiriz (2006), Zhao & Liu (2005)). Moreover, ARVis generates the rules
dynamically during exploration by the user. Thus, the user's guidance during association rule
post-processing is also exploited during association rule mining to reduce the search space
and avoid generating huge amounts of rules.
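The support/confidence pruning that keeps rule sets manageable can be sketched as follows; restricting to single-item rules and the particular thresholds are simplifying assumptions for illustration, not ARVis's actual mining procedure:

```python
from itertools import combinations

def association_rules(transactions, min_support=0.5, min_conf=0.8):
    """Enumerate rules {A} -> {B} over single items and keep only those
    meeting the support and confidence thresholds -- the kind of pruning
    that avoids generating huge amounts of rules up front."""
    n = len(transactions)

    def support(itemset):
        # fraction of transactions containing every item of the itemset
        return sum(itemset <= t for t in transactions) / n

    items = set().union(*transactions)
    rules = []
    for a, b in combinations(sorted(items), 2):
        for lhs, rhs in ((a, b), (b, a)):
            sup = support({lhs, rhs})
            if sup >= min_support and sup / support({lhs}) >= min_conf:
                rules.append((lhs, rhs, sup))
    return rules

baskets = [{"beer", "chips"}, {"beer", "chips", "salsa"},
           {"beer"}, {"chips", "salsa"}]
rules = association_rules(baskets)   # only salsa -> chips survives
```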
Götzelmann et al. (2007) proposed a VDM system to analyze error sources of complex
technical devices. The aim of the proposed approach is to extract association rules from a set
of documents that describe malfunctions and errors of complex technical devices, followed
by a projection of the results onto a corresponding 3D model. Domain experts can evaluate
the results gained by the DM algorithm by exploring the 3D model interactively in order to
find spatial relationships between different components of the product. 3D enables a flexible
spatial mapping of the results of statistical analysis. Visualizing statistical data on their
spatial reference object, by modifying visual properties to encode data (Figure 6(a)), can
reveal a priori unknown facts which were hidden in the database. By interactively exploring
the 3D model, unknown sources and correlations of failures can be discovered that depend on
the spatial configuration of several components and the shape of complex geometric objects.
A detailed comparison of these techniques is presented in Table 5.
The Haiku tool (Figure 3(b)) combines several DM methods: clustering, classification and
association rules (Beale (2007)). In this tool, the use of 3D graphs allows the visualization
of high-dimensional data in a comprehensible and compact representation. The interface
provides a large set of 3D manipulation features for the structure, such as zooming in and out,
moving through the representation (flying), rotating, jumping to a specific location, viewing
data details, and defining an area of interest. The only downside is that control is done
using a mouse. A detailed presentation is shown in Table 6.
8. Conclusion
A new classification of VDM tools, composed of three dimensions (visual representations,
interaction techniques, and DM tasks), has been presented along with a survey of visual
representations and interaction techniques in VDM. We can see that most of the recent VDM
representations and interaction techniques in VDM. We can see that most of the recent VDM
tools still rely on interaction metaphors developed more than a decade ago, and do not take
into account the new interaction metaphors and techniques offered by VR technology. It is
questionable whether these classical visualization/interaction techniques are able to meet
the demands of the ever-increasing mass of information, or whether we are losing ground
because we still lack the possibilities to properly interact with the databases to extract relevant
knowledge. Devising intuitive visual interactive representations for DM, and providing
real-time interaction and mapping techniques that scale to the huge size of many current
databases, are some of the research challenges that need to be addressed. In answer to
this challenge, Mackinlay (1986) proposes two essential criteria for evaluating data mapping
by visual representation: expressiveness and effectiveness. Firstly, the expressiveness
criterion determines whether a visual representation can express the desired information.
Secondly, the effectiveness criterion determines whether a visual representation exploits the
capabilities of the output medium and the human visual system. Although these criteria were
discussed in a 2D-graphic context, they can be extended to 3D and VR visualization. Finally,
VDM is inherently cooperative, requiring many experts to coordinate their activities to make
decisions. Thus, collaborative visualization research may help to improve VDM processes.
For example, current technology provided by 3D collaborative virtual worlds for gaming and
social interaction may support new methods of KDD.
9. References
Ahmed, A., Dwyer, T., Forster, M., Fu, X., Ho, J., Hong, S.-H., Köschützki, D., Murray, C.,
Nikolov, N. S., Taib, R., Tarassov, A. & Xu, K. (2006). GEOMI: Geometry for maximum
insight, Graph Drawing 3843: 468–479.
Aitsiselmi, Y. & Holliman, N. S. (2009). Using mental rotation to evaluate the benefits
of stereoscopic displays, Proceedings of SPIE, the International Society for Optical
Engineering, pp. 1–12.
Ammoura, A., Zaïane, O. R. & Ji, Y. (2001). Immersed visual data mining: Walking the walk,
Proceedings of the 18th British National Conference on Databases, pp. 202–218.
Ankerst, M. (2001). Visual Data Mining, PhD thesis, Institute for Computer Science, Database
and Information Systems, University of Munich.
Arns, L. L. (2002). A new taxonomy for locomotion in virtual environments, PhD thesis, Iowa State
University, USA.
Asimov, D. (1985). The grand tour: a tool for viewing multidimensional data, SIAM Journal on
Scientific and Statistical Computing 6(1): 128–143.
Azzag, H., Picarougne, F., Guinot, C. & Venturini, G. (2005). VRMiner: a tool for multimedia
databases mining with virtual reality, in J. Darmont & O. Boussaid (eds), Processing
and Managing Complex Data for Decision Support, pp. 318–339.
Baumgärtner, S., Ebert, A., Deller, M. & Agne, S. (2007). 2D meets 3D: a human-centered
interface for visual data exploration, Extended Abstracts on Human Factors in Computing
Systems, pp. 2273–2278.
Beale, R. (2007). Supporting serendipity: Using ambient intelligence to augment
user exploration for data mining and web browsing, International Journal of
Human-Computer Studies 65(5): 421–433.
Becker, B. (1997). Volume rendering for relational data, Proceedings of the IEEE Symposium on
Information Visualization, pp. 87–91.
Beilken, C. & Spenke, M. (1999). Interactive data mining with InfoZoom: the medical data set,
Workshop Notes on Discovery Challenge at the 3rd European Conference on Principles and
Practice of Knowledge Discovery in Databases, pp. 49–54.
Blanchard, J., Guillet, F. & Briand, H. (2007). Interactive visual exploration of association rules
with rule-focusing methodology, Knowledge and Information Systems 13(1): 43–75.
Bowman, D. A., Kruijff, E., LaViola, J. J. & Poupyrev, I. (2001). An introduction to 3D user
interface design, Presence: Teleoperators and Virtual Environments 10(1): 96–108.
Brath, R., Peters, M. & Senior, R. (2005). Visualization for communication: The importance
of aesthetic sizzle, Proceedings of the 9th International Conference on Information
Visualisation, pp. 724–729.
Bukauskas, L. & Böhlen, M. (2001). Observer relative data extraction, Proceedings of the
International Workshop on Visual Data Mining, pp. 1–2.
Buntain, C. (2008). 3D ontology visualization in semantic search, Proceedings of the 46th Annual
Southeast Regional Conference, pp. 204–208.
Cai, Y., Stumpf, R., Wynne, T., Tomlinson, M., Chung, D. S. H., Boutonnier, X., Ihmig,
M., Franco, R. & Bauernfeind, N. (2007). Visual transformation for interactive
spatiotemporal data mining, Knowledge and Information Systems 13(2): 119–142.
Card, S. K., Mackinlay, J. D. & Shneiderman, B. (1999). Readings in Information Visualization:
Using Vision to Think, Morgan Kaufmann Publishers.
Card, S. K., Robertson, G. G. & York, W. (1996). The WebBook and the Web Forager: an
information workspace for the World-Wide Web, Proceedings of the SIGCHI Conference
on Human Factors in Computing Systems, pp. 416–417.
Carswell, C. M., Frankenberger, S. & Bernhard, D. (1991). Graphing in depth: Perspectives on
the use of three-dimensional graphs to represent lower-dimensional data, Behaviour
& Information Technology 10(6): 459–474.
Ceglar, A., Roddick, J. F. & Calder, P. (2003). Managing Data Mining Technologies in Organizations,
IGI Publishing, chapter Guiding knowledge discovery through interactive data
mining, pp. 45–87.
Chi, E. (2000). A taxonomy of visualization techniques using the data state reference model,
Proceedings of IEEE Symposium on Information Visualization, pp. 69–75.
Chi, E. H. & Riedl, J. T. (1998). An operator interaction framework for visualization systems,
Proceedings of IEEE Symposium on Information Visualization, pp. 63–70.
Cleveland, W. S. & McGill, R. (1984). Graphical perception: Theory, experimentation,
and application to the development of graphical methods, Journal of the American
Statistical Association 79(387): 531–554.
Couturier, O., Rouillard, J. J. L. & Chevrin, V. (2007). An interactive approach to display
large sets of association rules, Proceedings of the 2007 Conference on Human Interface,
pp. 258–267.
Dachselt, R. & Hinz, M. (2005). Three-dimensional widgets revisited: towards future
standardization, in D. Bowman, B. Froehlich, K. Kiyokawa & W. Stuerzlinger (eds),
New Directions in 3D User Interfaces, pp. 89–92.
de Oliveira, M. C. F. & Levkowitz, H. (2003). From visual data exploration to visual data
mining: A survey, IEEE Transactions on Visualization and Computer Graphics 9(3): 378–394.
Eidenberger, H. (2004). Visual data mining, SPIE Information Technology and Communication
Symposium, pp. 121–132.
Einsfeld, K., Agne, S., Deller, M., Ebert, A., Klein, B. & Reuschling, C. (2006). Dynamic
visualization and navigation of semantic virtual environments, Proceedings of the
Conference on Information Visualization, pp. 569–574.
Einsfeld, K., Ebert, A. & Wolle, J. (2007). HANNAH: A vivid and flexible 3D information
visualization framework, Proceedings of the 11th International Conference on Information
Visualization, pp. 720–725.
Frawley, W. J., Piatetsky-Shapiro, G. & Matheus, C. J. (1992). Knowledge discovery in
databases: An overview, AI Magazine 13(3): 57–70.
Gordal & Demiriz, A. (2006). A framework for visualizing association mining results, Lecture
Notes in Computer Science 4263/2006: 593–602.
Götzelmann, T., Hartmann, K., Nürnberger, A. & Strothotte, T. (2007). 3D spatial data
mining on document sets for the discovery of failure causes in complex technical
devices, Proceedings of the 2nd Int. Conf. on Computer Graphics Theory and Applications,
pp. 137–145.
Gross, M. (1994). Visual Computing: The Integration of Computer Graphics, Visual Perception and
Imaging, Springer-Verlag.
Hendley, R. J., Drew, N. S., Wood, A. M. & Beale, R. (1999). Narcissus: visualising information,
Proceedings of the IEEE Symposium on Information Visualization, pp. 90–96.
Herman, I., Melançon, G. & Marshall, M. S. (2000). Graph visualization and navigation in
information visualization: A survey, IEEE Transactions on Visualization and Computer
Graphics 6(1): 24–43.
Hibbard, W., Levkowitz, H., Haswell, J., Rheingans, P. & Schroeder, F. (1995). Interaction
in perceptually-based visualization, Perceptual Issues in Visualization, IFIP Series on
Computer Graphics: 23–32.
Inselberg, A. & Dimsdale, B. (1990). Parallel coordinates: a tool for visualizing
multi-dimensional geometry, Proceedings of the 1st Conference on Visualization,
pp. 361–378.
Johnson, B. & Shneiderman, B. (1991). Tree-maps: a space-filling approach to the visualization
of hierarchical information structures, Proceedings of the 2nd Conference on Visualization,
pp. 284–291.
Kalawsky, R. & Simpkin, G. (2006). Automating the display of third person/stealth views of
virtual environments, Presence: Teleoperators and Virtual Environments 15(6): 717–739.
Keim, D. A. (2002). Information visualization and visual data mining, IEEE Transactions on
Visualization and Computer Graphics 8(1): 1–8.
Krohn, U. (1996). Vineta: navigation through virtual information spaces, Proceedings of the
Workshop on Advanced Visual Interfaces, pp. 49–58.
LaViola, J. J. (2000). MSVT: A multimodal scientific visualization tool, Proceedings of the 3rd
IASTED International Conference on Computer Graphics and Imaging, pp. 1–17.
Mackinlay, J. (1986). Automating the design of graphical presentations of relational
information, ACM Transactions on Graphics 5(2): 110–141.
Maletic, J. I., Marcus, A., Dunlap, G. & Leigh, J. (2001). Visualizing object-oriented software in
virtual reality, Proceedings of the 9th International Workshop on Program Comprehension
(IWPC'01), pp. 26–38.
Meiguins, B. S., Melo, R., do Carmo, C., Almeida, L., Goncalves, A. S., Pinheiro, S.
C. V. & de Brito Garcia, M. (2006). Multidimensional information visualization
using augmented reality, Proceedings of the ACM International Conference on Virtual
Reality Continuum and its Applications, pp. 391–394.
Nagel, H. R., Granum, E., Bovbjerg, S. & Vittrup, M. (2008). Visual Data Mining,
Springer-Verlag, chapter Immersive Visual Data Mining: The 3DVDM Approach,
pp. 281–311.
Nagel, H. R., Granum, E. & Musaeus, P. (2001). Methods for visual mining of data in virtual
reality, Proceedings of the International Workshop on Visual Data Mining, in conjunction
with the 2nd European Conference on Machine Learning and the 5th European Conference on
Principles and Practice of Knowledge Discovery in Databases, pp. 13–27.
Nelson, L., Cook, D. & Cruz-Neira, C. (1999). XGobi vs the C2: Results of an experiment
comparing data visualization in a 3D immersive virtual reality environment with a
2D workstation display, Computational Statistics 14: 39–51.
Niggemann, O. (2001). Visual Data Mining of Graph-Based Data, PhD thesis, Department of
Mathematics and Computer Science, University of Paderborn, Germany.
Ogi, T., Tateyama, Y. & Sato, S. (2009). Visual data mining in immersive virtual environment
based on 4K stereo images, Proceedings of the 3rd International Conference on Virtual and
Mixed Reality, pp. 472–481.
Osawa, K., Asai, N., Suzuki, M., Sugimoto, Y. & Saito, F. (2002). An immersive programming
system: Ougi, Proceedings of the 12th International Conference on Artificial Reality and
Telexistence, pp. 36–43.
Parker, G., Franck, G. & Ware, C. (1998). Visualization of large nested graphs in 3D: Navigation
and interaction, Journal of Visual Languages & Computing 9(3): 299–317.
Pike, W. A., Stasko, J., Chang, R. & O'Connell, T. A. (2009). The science of interaction,
Information Visualization 8(4): 263–274.
Plaisant, C., Grosjean, J. & Bederson, B. B. (2002). SpaceTree: Supporting exploration in large
node link tree, design evolution and empirical evaluation, Proceedings of the IEEE
Symposium on Information Visualization, pp. 57–64.
Poulet, F. & Do, T. N. (2008). Visual Data Mining, Springer-Verlag, chapter Interactive Decision
Tree Construction for Interval and Taxonomical Data, pp. 123–135.
Pryke, A. & Beale, R. (2005). Interactive comprehensible data mining, Ambient Intelligence for
Scientific Discovery 3345/2005: 48–65.
Robertson, G., Czerwinski, M., Larson, K., Robbins, D. C., Thiel, D. & van Dantzich, M. (1998).
Data mountain: using spatial memory for document management, Proceedings of the
11th Annual ACM Symposium on User Interface Software and Technology, pp. 153–162.
Robertson, G. G., Mackinlay, J. D. & Card, S. K. (1991). Cone trees: animated 3D visualizations
of hierarchical information, Proceedings of the SIGCHI Conference on Human Factors in
Computing Systems: Reaching Through Technology, pp. 189–194.
Shneiderman, B. (2003). Why not make interfaces better than 3D reality?, IEEE Computer
Graphics and Applications 23(6): 12–15.
Spence, I. (1990). Visual psychophysics of simple graphical elements, Journal of Experimental
Psychology: Human Perception and Performance 16(4): 683–692.
Tavanti, M. & Lind, M. (2001). 2D vs 3D, implications on spatial memory, Proceedings of the
IEEE Symposium on Information Visualization, p. 139.
Teyseyre, A. R. & Campo, M. R. (2009). An overview of 3D software visualization, IEEE
Transactions on Visualization and Computer Graphics 15(1): 87–105.
Tory, M. & Möller, T. (2004). Rethinking visualization: A high-level taxonomy, Proceedings of
IEEE Symposium on Information Visualization, pp. 151–158.
Tufte, E. R. (1983). The Visual Display of Quantitative Information, Graphics Press.
van Ham, F. & van Wijk, J. (2002). Beamtrees: compact visualization of large hierarchies,
Proceedings of IEEE Symposium on Information Visualization, pp. 93–100.
Wang, W., Wang, H., Dai, G. & Wang, H. (2006). Visualization of large hierarchical data by
circle packing, Proceedings of the SIGCHI Conference on Human Factors in Computing
Systems, pp. 517–520.
Ware, C. & Franck, G. (1994). Viewing a graph in a virtual reality display is three times as
good as a 2D diagram, Proceedings of IEEE Visual Languages, pp. 182–183.
Ware, C. & Franck, G. (1996). Evaluating stereo and motion cues for visualizing information
nets in three dimensions, ACM Transactions on Graphics 15(2): 121–140.
Ware, C. & Mitchell, P. (2008). Visualizing graphs in three dimensions, ACM Transactions on
Applied Perception 5(1): 1–15.
Zhao, K. & Liu, B. (2005). Opportunity Map: A Visualization Framework for Fast Identification of
Actionable Knowledge, PhD thesis, University of Illinois at Chicago.