
BMG 108

INTRODUCTION TO MULTIMEDIA AND ITS APPLICATION

The draft instructional material is being made
available "as received" from the authors. Editing
and various other quality checks are in progress.
Users are advised to consult their Study Centres for
any missing content, updates and instructions
according to the syllabus of the course.
UNIT 1 INTRODUCTION TO
MULTIMEDIA SYSTEM
Program Name: BSc (MGA)
Written by: Mrs. Shailaja M. Pimputkar, Srajan
Structure:
1.0 Introduction
1.1 Unit Objectives
1.2 What is Multimedia System
1.3 History of multimedia System
1.4 Features of Multimedia System
1.4.1 Challenges of Multimedia System
1.4.2 Features of Multimedia System
1.4.3 Components of Multimedia System
1.5 Trends in multimedia
1.6 Advantages and Disadvantages of Multimedia
1.6.1 Advantages of multimedia
1.6.2 Disadvantages of Multimedia
1.7 Summary
1.8 Key Terms
1.9 End Questions
1.10 Further Reading

1.0 INTRODUCTION
As the first unit of the course, we are going to learn what multimedia actually is.
In this lesson we will learn the preliminary concepts of multimedia and discuss the
various benefits and applications of multimedia. After going through this chapter, you
will be able to understand why multimedia is important in today's world.
The term Multimedia is a combination of two words: multiple and media. Multimedia
is a combination of different media elements like text, video, audio and graphics. Nowadays
multimedia is used in every field: education, advertising, medicine, business, etc. These days,
the rapid evolution of multimedia is the result of the emergence and convergence of all these
technologies.
In this unit you will come to know about the various trends in multimedia systems,
the challenges faced by multimedia systems, the features of multimedia systems and the
various components of a multimedia system.

1.1 UNIT OBJECTIVES:


After studying this unit you will be able to
Describe the history of multimedia
Explain the different challenges of multimedia
Explain the components of multimedia and their application
Describe the basic features of multimedia

1.2 WHAT IS MULTIMEDIA SYSTEM

Multimedia
Multimedia is an application which uses a collection of multiple media sources, e.g.
text, graphics, images, sound/audio, and video. Described in simple language, multimedia is
"more than one medium." In other words, multimedia is an integration of text, images,
sounds, and movement.
Multimedia System
A multimedia system is a computer system that has the ability to store, compress, decompress,
capture, digitize, and present information. The main purpose of a multimedia system is to
provide a creative and effective way of producing, storing and communicating information.
Marketing, training, education and entertainment are the areas where multimedia is used.

1.3 HISTORY OF MULTIMEDIA SYSTEM


As we all know, nowadays there are many mediums available for communication, like
radio, television, the Internet, etc. But the newspaper was probably the first mass communication
medium to use multimedia as a tool: newspapers used mostly text, graphics, and images.
In 1895, an Italian scientist named Guglielmo Marconi sent his first wireless radio
transmission at Pontecchio, Italy. In 1901, he detected radio waves beamed across the
Atlantic. Wireless radio was initially developed for telegraphy. Today radio is a major medium
for audio broadcasting.
After its invention, television became the new medium of the 20th century. It incorporates
video and audio and has since changed the world of mass communications.

Some of the important events in relation to Multimedia in Computing include:

• 1945 – Vannevar Bush wrote about the Memex
• 1967 – Nicholas Negroponte formed the Architecture Machine Group at MIT
• 1969 – Ted Nelson & Andries van Dam developed the hypertext editor at
Brown University
• Birth of the Internet
• 1971 – Electronic mail (email)
• 1976 – Architecture Machine Group proposal to the Defense Advanced Research
Projects Agency (DARPA): Multiple Media
• 1980 – Andrew Lippman & Bob Mohl developed the hypermedia system
called the Aspen Movie Map
• 1983 – David S. Backer presented a prototype electronic book
• 1985 – Nicholas Negroponte and Jerome B. Wiesner opened the MIT Media Lab
• 1989 – Tim Berners-Lee proposed the World Wide Web to CERN (European
Council for Nuclear Research)
• 1990 – K. Hooper Woolsey led the Apple Multimedia Lab (about 100 people,
focused on education)
• 1991 – Apple Multimedia Lab: Visual Almanac, Classroom MM Kiosk
• 1992 – the first MBone audio multicast on the Net
• 1993 – The University of Illinois National Center for Supercomputing
Applications released NCSA Mosaic
• 1994 – Jim Clark and Marc Andreessen created the Netscape browser
• 1995 – Java was unveiled for platform-independent application
development; Duke was the first applet
• 1996 – Microsoft developed Internet Explorer

Check your progress-1


What is multimedia?
Define the term Multimedia System.
Who invented the first radio?

1.4 FEATURES OF MULTIMEDIA


Multimedia is an integration of media and content that uses a combination of different
content forms. One can use the term as a noun or as an adjective, describing a medium as
having a combination of content forms.

1.4.1 Challenges of Multimedia System

Supporting multimedia applications over a computer network makes them distributed
applications, which require many special computing techniques.

Multimedia systems may have to render a variety of media at the same time. There is a
temporal relationship between many forms of media (e.g. video and audio). This may
create problems such as the following:

• Sequencing within the media – playing frames in the correct order/time frame in
video
• Synchronization – inter-media scheduling (e.g. video and audio). Lip
synchronization is very important for humans watching playback of video and
audio, and even animation and audio.

The main issues that multimedia systems need to deal with are as follows:
How to represent and store temporal information.
How to strictly maintain the temporal relationships on playback/retrieval.
What processes are involved.

To be able to digitize the many initial sources of data, one should translate the data from an
analog source to a digital representation. This will involve sampling (audio/video), although
digital cameras now exist for direct scene-to-digital capture of images and video, and
scanning (graphics, still images).

The data is large - easily several megabytes for audio and video - therefore storage, transfer
(bandwidth) and processing overheads are high. Data compression techniques are very
commonly employed to compress these large volumes of data.
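To see why compression is essential, consider a rough back-of-the-envelope calculation.
The sketch below uses standard CD-audio and standard-definition video parameters (these
figures are common textbook values, not taken from this unit):

    # Rough data-rate estimates that show why multimedia needs compression.

    # CD-quality stereo audio: 44,100 samples/s, 16 bits/sample, 2 channels.
    audio_rate = 44_100 * 16 * 2 / 8          # bytes per second
    print(f"Uncompressed audio: {audio_rate / 1e6:.2f} MB/s")   # ~0.18 MB/s

    # Standard-definition video: 640 x 480 pixels, 24-bit color, 25 frames/s.
    video_rate = 640 * 480 * 3 * 25           # bytes per second
    print(f"Uncompressed video: {video_rate / 1e6:.2f} MB/s")   # ~23 MB/s

    # One minute of raw video alone:
    print(f"One minute of raw video: {video_rate * 60 / 1e9:.2f} GB")  # ~1.38 GB

Numbers like these explain why storage, bandwidth and processing overheads dominate
multimedia system design.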

1.4.2 Features of Multimedia System

Following are some features of a multimedia system:

Very High Processing Power
Very high processing power is needed to deal with large volumes of data
and real-time delivery of media. Special hardware is required.
Multimedia Capable File System
A multimedia-capable file system is needed to deliver real-time media,
e.g. video/audio streaming. Special hardware/software is needed for this,
e.g. RAID technology.
Data Representations/File Formats that Support Multimedia
Data representations/file formats should be easy to handle yet allow for
compression/decompression in real time.
Efficient and High I/O
Input and output to the file subsystem needs to be efficient and fast. This
feature needs to allow for real-time recording as well as playback of
data, e.g. direct-to-disk recording systems.
Special Operating System
A special operating system is needed to allow access to the file system and
to process data efficiently and quickly. It needs to support direct transfers to disk,
real-time scheduling, fast interrupt processing, I/O streaming, etc.
Storage and Memory
Large storage units and large memory are needed. Large caches are also
required, frequently in a Level 2 and Level 3 hierarchy, for efficient management.
Network Support
Client-server systems are common, as multimedia systems are typically distributed.
Software Tools
User-friendly tools are needed to handle media, design and develop applications,
and deliver media.

1.4.3 Components of Multimedia System


Let us consider the Components (Hardware and Software) that are very essential for a
multimedia system:

Capture Devices
The various capture devices required for multimedia are: video
cameras, video recorders, audio microphones, keyboards, mice, graphics
tablets, 3D input devices, tactile sensors, VR devices, and
digitizing/sampling hardware.
Storage Devices
The storage devices required are: hard disks, CD-ROMs, Jaz/Zip drives,
DVDs, etc.
Communication Networks
Ethernet, Token Ring, FDDI, ATM, intranets, the Internet.
Computer Systems
Multimedia desktop machines, workstations, MPEG/video/DSP hardware.
Display Devices
The display devices are: CD-quality speakers, HDTV, SVGA, high-resolution
monitors, color printers, etc.

Check your progress-2


Why is lip synchronization important in multimedia?
Why did television become one of the dominant multimedia formats in the 20th
century?

1.5 TRENDS IN MULTIMEDIA


Multimedia applications basically involve storing and delivering large volumes of data.
Researchers collaborate on multimedia systems to discuss new trends in multimedia
applications and multimedia server technologies. Some of these topics are:

• New architectures for multimedia storage and delivery systems,
• Network support for distributed multimedia systems,
• Network-attached storage devices (NASD),
• Active storage devices (intelligent disks),
• Distributed multimedia databases,
• Multimedia object retrieval by content,
• Multimedia database indexing,
• Distributed real-time file systems,
• OS support for distributed multimedia systems,
• Modeling and analysis of distributed multimedia systems,
• Multimedia digital libraries,
• Multimedia human-computer interaction.
The currently popular application areas in multimedia include the following:
a) World Wide Web

Hypermedia systems: the Web incorporates nearly all multimedia technologies and
application areas and enjoys ever-increasing popularity and wide use.
b) MBone

Multicast Backbone: the equivalent of conventional TV and radio on the
Internet.
c) Enabling Technologies

These are developing very fast to support the ever-increasing needs of multimedia.
Switching, protocol, carrier, application, coding/compression, database, processing,
and system integration technologies are at the forefront of this development.

1.6 ADVANTAGES AND DISADVANTAGES OF


MULTIMEDIA
1.6.1 Advantages of Multimedia

It can be used to help students and teachers to teach as well as learn given topics
easily.
It can be used by different kinds of people. We can spread knowledge easily all over
the world, and easily from one person to a whole group.
It is very user-friendly. You can multitask using multimedia.
It is easy to take multimedia files from one place to another, as they can be stored on
cheap and light storage devices like CD-ROMs.
It is integrated and interactive. It can be used for any subject and for anyone.
It can be used in the television and film industries and for personal entertainment.
It is also used on the Internet to make up interactive web-page content.
We can give a lasting impression to the intended audience on a specific topic
through the use of multimedia.
Colored pictures, motion pictures and other graphics can be shown on monitors and
other big screens so that many people can view them and form an impression
about them.
Multimedia systems are generally very interactive, so they are interesting to use.

1.6.2 Disadvantages of Multimedia

It is expensive to set up multimedia systems. They make use of a wide range of
resources, which can increase the cost.
It needs well-trained manpower to create and use it.
Multimedia files are very large, so they are time-consuming to transfer across the
Internet or an intranet.
It is time-consuming: creating multimedia requires considerable time.
Multimedia requires electricity, which adds to the cost of its use.
Because it is so easy to create content containing large amounts of information,
users may become overloaded with information.

1.7 SUMMARY
The newspaper was the first mass communication medium to use multimedia as a
tool.
Multimedia defines the electronic media devices employed to store and experience
multimedia content.
Supporting multimedia applications over a computer network makes them
distributed applications.

1.8 KEY TERMS


Multimedia: Multimedia is a technology which stores data as text, photos, pictures,
music, sounds, graphics, film and animation and gives the methods to collect and
modify the data as desired.
MPEG: A motion picture compression standard.
Graphics: A generic term for image-based symbols and signs. Technically, text in a
document is also a graphic symbol. Graphics refers to icons, pictures and similar images.

1.9 END QUESTIONS


1. What is multimedia?
2. What is multimedia system?
3. Who invented radio?
4. What are the advantages of multimedia?
5. Explain the features of multimedia.
6. What are the disadvantages of multimedia?
7. Explain the components of multimedia.
8. Discuss the latest trends in multimedia.
9. Explain the challenges for multimedia.

Answer to check your progress questions


Check your progress -1:
Multimedia is an application which uses a collection of multiple media
sources, e.g. text, graphics, images, sound/audio, and video.
A multimedia system is a computer system that has the ability to store,
compress, decompress, capture, digitize, and present information.
Guglielmo Marconi
Check your progress -2:
Lip synchronization is very important for humans watching playback of
video and audio, and even animation and audio.
Television incorporates video and audio and has since changed the world
of mass communications.

BIBLIOGRAPHY
1. Bush, Vannevar. 1970. Pieces of the Action. New York: William Morrow &
Company.
2. Bush, Vannevar. 1967. Science Is Not Enough. New York: William Morrow &
Company.
3. Hackbarth, Steven. 1996. The Educational Technology Handbook: A
Comprehensive Guide. New Jersey: Educational Technology Publications.
4. Information Age Tour. 1997. National Museum of American History. Smithsonian
Photographs.
5. PBS. 1992. [Video] The Machine That Changed the World (v-14). Boston, MA:
WGBH Educational Foundation.

UNIT 2 MULTIMEDIA SYSTEM AND


APPLICATIONS
Program Name: BSc (MGA)
Written by: Mrs. Shailaja M. Pimputkar, Srajan
Structure:
2.0 Introduction
2.1 Unit Objectives
2.2 Categorization of multimedia
2.2.1 Linear
2.2.2 Non-Linear
2.3 History of Term ‘Multimedia’
2.4 Characteristics of Multimedia
2.5 Applications of Multimedia (Usage)
2.5.1 Creative Industries
2.5.1.1 Mass Media
2.5.1.2 Commercial
2.5.1.3 Entertainment and fine arts
2.5.2 Education
2.5.3 Other Fields
2.6 Structuring Information in a multimedia form
2.7 Summary
2.8 Key Terms
2.9 End Questions

2.0 INTRODUCTION
In this unit we are going to learn what a multimedia system actually is and how
multimedia is used in different fields. Multimedia is mainly divided into two categories:
(i) interactive and (ii) non-interactive. After going through this unit you will come to know
how interactivity is important in multimedia, and you will understand the different
interactive mediums.
In today's world multimedia is very important; we often use it in our daily work.
In this unit we will come to know about the different sectors where multimedia is used.

2.1 UNIT OBJECTIVES:


After studying this unit you will be able to
Describe the different applications of multimedia
Discuss the categorizations of multimedia
Identify the structuring of information in multimedia system

2.2 CATEGORIZATION OF MULTIMEDIA

Multimedia broadly divides into two categories:


• Linear
• Non Linear
2.2.1 Linear:
A linear multimedia presentation is not interactive. This means the user can only watch
the media as it plays from beginning to end, and has no control over the content
that is being shown. Examples include movies, demo shows, corporate presentations,
newspapers, radio, etc.
2.2.2 Non-Linear:
A non-linear multimedia presentation is interactive: users can control the sequence and
timing of media elements. For example, on a music CD named 'Learn Music Instruments',
the user can select whichever music instrument he or she wants.
To understand non-linear, we have to know the meaning of interactivity.
Interactivity:
The word interaction comes from the Latin inter, meaning between. Any "action between"
two objects is called an interaction, for example the interaction between a teacher and a
student. An interaction is a kind of action that occurs when two or more objects have an
effect upon one another. In computers, interactivity is the dialog that occurs between a
human being and a computer program.
Human to human communication:
Human communication is the best example of interactive communication. It
involves two different processes: human-to-human interactivity and human-to-
computer interactivity. Communication between people is called human-to-
human interactivity. On the other hand, human-to-computer communication is the
way that people communicate with new digital media.
Human to artifact communication
As we have seen, the purpose of human-to-human communication is knowledge
sharing, whereas the purpose of human-to-artifact communication is task sharing.
Before going further we should understand what artifacts are. In simple words,
artifacts are objects or things developed by humans, for example weapons,
tools, phones, or ornaments of historical interest.

Fig: 2.1 Communication between human and artifact

Human-to-artifact communication is one-way communication, also called mono-
directional communication. When a human has to perform some task, he or she has
the intention to convey a message to the artifact through communication.
Communication can take place through two modes:
• Aware communication: this includes verbal behavior, sign language or
gestures
• Unaware communication: this means non-verbal behavior.
Let's take the example of an iPod. One should note that the interactivity of an
iPod does not lie in its physical shape - its design is not what matters for
communication - but in its storage capacity and its ability to play music. This is the
behavior and nature of its user interface as experienced by its user. It includes
the way the iPod allows you to select a tune in the playlist, the way you move your
finger on its wheel and the way you control the volume.
Computer science
In computer science, interactivity refers to software that accepts and responds to
input from humans, such as commands or data. Interactive software includes the most
widely used programs, such as spreadsheet applications and word processors. By
comparison, non-interactive programs operate without human intervention;
examples include batch processing applications and compilers. If the
response is very complex, it is said that the system is carrying out social
interaction, and some systems try to accomplish this via the implementation of social
interfaces. There is also the concept of types of user interaction, like the rich user
interface.
Interactivity in new media
New media is an expanding term that combines old and new technology;
it integrates all the new technology we have today. In interactive new
media, users play a more active role. Old media could only provide a sit-back type of
interaction, but in new media users are more responsible for the interaction.
Technologies such as digital TV and DVDs are well-known examples of
interactive media devices. With the help of this technology, users can always
control what they watch and when they watch it. Nowadays, the Internet has become
one of the most popular mediums of interaction. Users can be totally involved,
commenting on material, watching the content and then actively participating in it.
S. J. McMillan stated in 2005 that interactivity can take place at various degrees and
levels of engagement and that it is essential to differentiate between these
levels and degrees. The Internet is a medium of user-to-user interaction.
Lev Manovich, in 2001, also gave a definition of what interactivity implies for
the general user in his book "The Language of New Media". He attributes
'open interactivity' to actions such as developing media systems and computer
programming. On the other side, 'closed interactivity' is where the elements
of access are solely determined by the user.
Interactivity is also associated with new media art technologies, where animals
and humans are able to interact with and change the path of an artwork.
Creating interactivity
Many authoring tools are available for creating different kinds of interactivity.
Some of the most common and widely used platforms for generating interactivity
include Adobe Flash and the later-released Microsoft Silverlight. Articulate's Engage
and Harbinger's Raptivity are among the most commonly used authoring tools for
creating interactivities. The interaction model is a concept used in e-learning: by
employing an interaction model, a person can produce interactivities in a short
period of time.
The interaction models that come with authoring tools fall under
various categories, such as puzzles, games, presentation tools, simulation tools,
etc., and can be completely customized.
Following are some non-linear, i.e. interactive, multimedia presentations:
A. Hypermedia:
Hypermedia is an example of non-linear multimedia. A hypermedia
presentation can be live or recorded. In hypermedia, video, audio,
graphics, plain text and hyperlinks blend to generate an interactive medium of
information. The WWW is a great example of hypermedia. On the World Wide Web,
you not only interact with the browser but also interact with the pages that the
browser brings to you. The hypertext links that take you to other pages
provide the most common form of interactivity when using the Web.
B. E-Learning:
E-learning, or electronic learning, covers different forms of technology-
enhanced learning (TEL), or a very specific type of TEL such as web-based or
online learning, where learning is self-directed by the user. It creates an environment
that stimulates learners through self-directed training. E-learning makes effective use
of animation authoring tools to develop interactive multimedia.
According to Bassoppo-Moyo, the "e-learning industry refers to the effective
integration of a range of technologies across all areas of learning."
Different tools and technologies exist in e-learning to help the user
learn. The term generally refers to strategies to provide training courses to employees.
In e-learning, the learner has the opportunity to manipulate experiences. In
many universities, e-learning is a method of attending a course or study
programme where the students never meet face to face, because their study is
totally Internet based. With more practice and more learning, learners can
handle tasks and questions with a high rate of success.

C. Presentations:
Presentation is the practice of explaining the content of a topic to a learner or
an audience. A presentation program, such as Microsoft PowerPoint, is commonly
used to create the presentation content.
Useful examples of presentations:
• BCG diagram: this diagram (the Boston Consulting Group growth-share
matrix) is extremely helpful in marketing, although most students only learn
the fundamentals of the diagram at the undergraduate level.
• SWOT analysis: this is useful in business and states a problem effectively
and precisely.

D. Computer Games:
A computer game, also known as a PC game, is a game played on a personal
computer rather than on an arcade machine or video game console. PC games are
developed by one or more game developers, often in collaboration with other
specialists. These games may then be distributed on physical media, such as CDs or
DVDs, or through online delivery services, possibly as freely redistributable software.
PC games often require specialized hardware in the user's computer in order to play
advanced games, such as a specific generation of graphics processing unit, or an
Internet connection for online play.
E. Movie Theatre:
A movie theater, cinema theatre or picture theatre is a place, normally
a building, for viewing motion pictures, called films or movies. Movie theaters are
commercial operations serving the general public for their entertainment.
With the help of a movie projector, the movie is projected onto a large projection
screen at the front of the auditorium, while the dialogue, sounds and music are
played through a number of wall-mounted speakers. Some movie theatres are now
equipped for high-end digital cinema projection.
F. Spatial navigation:
Spatial navigation is part of our daily life: we navigate through real objects such
as buildings, trees, etc. In the field of computing, spatial navigation is the ability to
navigate between focusable elements, such as form controls and hyperlinks, within a
user interface. Spatial navigation is widely used in multimedia applications such as
web pages, interactive 2D maps and computer games.
Previously, web browsers used tabbing navigation to change the focus
within an interface: pressing the Tab key on a computer keyboard focuses
on the next element (or Shift + Tab to focus on the previous one). The order is based
on that of the source document. With unstyled HTML, this method normally
works, as the spatial location of each element follows the order of the source
document. However, with the introduction of styling through presentational attributes
or style sheets such as CSS, this type of navigation is used less often. Spatial
navigation uses the arrow keys (with one or more modifier keys held) to navigate on
the "2D plane" of the interface. For example, pressing the "up" arrow key will focus
on the closest focusable element above the current element. In many
cases, a lot of key presses can be saved.
This feature is available in the Opera web browser. It allows the user a faster way
to jump to different areas in long articles or web pages without manually scanning and
scrolling with their eyes. Moreover, when scanning long text pages, Opera also
permits the user to jump directly to sub-headers by using the S key, and to move up by
using the W key. Different applications offer different levels of spatial navigation.
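To make the "closest focusable element above" idea concrete, here is a minimal sketch
of the selection step (the Element class, names and positions are hypothetical and for
illustration only; this is not Opera's actual implementation):

    # Minimal sketch of spatial navigation: given focusable elements with
    # (x, y) screen positions, find the closest one above the current element.
    from dataclasses import dataclass
    import math

    @dataclass
    class Element:
        name: str
        x: float  # horizontal center, in pixels
        y: float  # vertical center, in pixels (smaller y = higher on screen)

    def navigate_up(current: Element, elements: list[Element]) -> Element | None:
        """Return the nearest focusable element strictly above `current`."""
        above = [e for e in elements if e.y < current.y and e is not current]
        if not above:
            return None
        return min(above, key=lambda e: math.hypot(e.x - current.x, e.y - current.y))

    link = Element("link", 100, 300)
    button = Element("button", 120, 150)
    field = Element("field", 400, 100)
    print(navigate_up(link, [link, button, field]).name)  # "button" - closest above

Real browsers weight direction and alignment more carefully, but the core idea is this
nearest-element search on the 2D plane.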

2.3 HISTORY OF TERM ‘MULTIMEDIA’


In July 1966, Bobb Goldstein (Bob Goldstein), an American showman, songwriter and
artist, coined the term multimedia to promote the opening of his "LightWorks at L'Oursin"
show at Southampton, Long Island, New York. After that, on August 10, 1966, Richard
Albarino used the terminology, reporting: 'Brainchild of songscribe-comic Bob
('Washington Square') Goldstein, the 'Lightworks' is the latest multi-media music-cum-
visuals to debut as discotheque fare.' Two years later, in 1968, the term "multimedia" was
re-appropriated to describe the work of a political consultant named David Sawyer, the
husband of Iris Sawyer, one of Goldstein's producers at L'Oursin. The evolving concept of
multimedia involves combinations of text, still images, video, animation, sound, and
interactivity. Thus, technically an illustrated book could be considered a multimedia object
with its combination of text and images; however, multimedia primarily implies
combinations of electronic media.
In the forty years since 1966, the word has changed and taken on different meanings. In the
late 1970s, the term was used to describe presentations consisting of multi-projector slide
shows timed to an audio track. In the 1990s, 'multimedia' took on its current meaning.
In the 1993 first edition of McGraw-Hill's Multimedia: Making It Work, Tay
Vaughan declared: "Multimedia is any combination of text, graphic art, sound, animation,
and video that is delivered by computer." Much of the content on the web today falls
within this definition. The multimedia computers of that era had CD-ROM
drives, which allowed for the delivery of several hundred megabytes of picture, audio and
video data.

Check your progress-1


Define interactivity.
Define the term 'presentation' in multimedia.
What is e-learning?
What is spatial navigation?
What is a BCG diagram?

2.4 CHARACTERISTICS OF MULTIMEDIA


A multimedia system is a system capable of processing multimedia data and applications.
A multimedia system is characterized by the processing, storage, generation, manipulation
and rendition of multimedia information.

A multimedia system has four basic characteristics:

• Multimedia systems must be computer controlled.
• Multimedia systems are integrated.
• The information they handle must be represented digitally.
• The interface to the final user may permit interactivity.

2.5 APPLICATIONS OF MULTIMEDIA (USAGE)


Multimedia finds application in various areas including education,
advertising, art, engineering, medicine, entertainment, business, mathematics, scientific
research and spatio-temporal applications.
Using multimedia, students can understand a subject or concept more
easily. Multimedia increases the understanding capability of students and is very useful
for getting across new ideas and concepts.
2.5.1 Creative industry:
Creative industries use multimedia for a variety of purposes, ranging from fine arts to
entertainment, commercial art, journalism, and media and software services provided for
any of the industries listed below. An individual multimedia designer may cover the
spectrum throughout their career, with skills ranging from the technical to the creative to
the analytical.
a. Mass Media

Multimedia is used in the field of mass media, i.e. journalism: many
magazines and newspapers are published periodically, and publishing houses also use
multimedia for newspaper design and other work. Nowadays it is not only text
that we see in the newspaper but also photographs; this shows
how important and valuable multimedia is today.

b. Commercial
Much of the electronic old and new media used by commercial artists is multimedia.
In the field of advertising, multimedia plays a great and vital role. Exciting presentations
are used to grab and keep attention in advertising. Multimedia is a very powerful tool for
business communication and helps to increase its quality. Commercial
multimedia developers may also be hired to design for nonprofit services and government
service applications as well.
c. Entertainment and fine arts:
Multimedia is heavily used in the entertainment industry, especially to develop special
effects in movies; for example, movies like Jurassic Park, Ice Age and Avatar are
remembered for their special effects and animation. Multimedia games, basically software
programs available on CD or online, are very popular among children, and some
video games are also based on multimedia features. We can see the use of multimedia in
art galleries displaying different pictures. Though the multimedia display material may be
volatile, the preservation of the content can be as strong as that of traditional media.
2.5.2 Education

Multimedia is important in the area of education as well. Speaking particularly of
schools, multimedia education is very important for children. In education,
multimedia is used to produce computer-based training courses (popularly called CBTs)
and reference works like encyclopedias and almanacs. A CBT lets the user go through a
series of presentations, text about a particular topic, and associated illustrations in various
information formats. Edutainment is a term used for the mix of entertainment with
education, especially multimedia entertainment.

In the past decade, learning methods have improved a lot because of the introduction of
multimedia. Various fields of research have evolved (e.g. multimedia learning, cognitive
load, and the list goes on). The possibilities for learning and instruction are nearly endless.

2.5.3 Use of Multimedia in Other fields


a. Medicine: In medicine, doctors can be trained by watching virtual surgeries or
operations. They can simulate how the human body is affected by diseases
spread by bacteria and viruses and then develop new techniques for prevention.
b. Mathematical and scientific research: In mathematical and scientific research,
multimedia is mainly employed for simulation and modeling. For example, a
molecular biologist can look at a molecular model of a particular substance, for
instance a protein, and manipulate it to arrive at a new substance. Many research
articles can be found in journals such as the Journal of Multimedia.
c. Engineering: Software engineers may employ multimedia in computer
simulations for anything from training, such as industrial or military training, to
entertainment. Mechanical engineers can learn more about engines.
d. Industry: A salesperson can learn about product details using multimedia. It is
also helpful for advertising products through web-based technology. In the
industrial sector, multimedia helps to present information to supervisors,
shareholders and co-workers.
e. Public places: Multimedia is useful in public places as well. For example, in
railway stations, shopping malls and grocery shops, we can see terminals or
kiosks providing information and help. Multimedia is used to provide all this
information.

2.6 STRUCTURING INFORMATION IN MULTIMEDIA


FORM
Multimedia represents the convergence of pictures, video, text, and sound into a single
form. The strength of multimedia and the Internet lies in the way in which information is
interconnected.
Multimedia and the Internet require a completely new approach to writing. The style
of writing that is suitable for the 'on-line world' is highly optimized and designed to be
quickly scanned by readers.
To create a good site, the purpose of the site should be clear and specific. A site with
good interactivity and new technology can also be useful for attracting visitors. The site
should be easy to navigate, its design must be innovative and attractive, and it should be
easy to update frequently and fast to download. When users view a site, they can only view
one page at a time. As a result, multimedia users must create a 'mental model of the
information structure'.
Patrick Lynch, the author of the Yale University Web Style Manual, states that users
need predictability and structure, with clear functional and graphical continuity between
the various components and subsections of the multimedia production. Following this
approach, the home page of any multimedia production should always be
accessible from anywhere within the production.

Check your progress-2


What is the use of multimedia in engineering?
Why is multimedia important in advertising?
How is multimedia useful in the medical field?
How is a multimedia system characterized?

2.7 SUMMARY
Multimedia finds application in various areas including education,
advertising, art, engineering, medicine, entertainment, business, mathematics,
scientific research and spatio-temporal applications.
The e-learning, or electronic learning, field creates a dynamic environment that
stimulates learners through self-directed training. E-learning accomplishes self-
directed learning by utilizing animation authoring tools to develop interactive
multimedia.
PC games are developed by one or more game developers, often in collaboration
with other specialists. These games may then be distributed on physical media,
such as CDs or DVDs, or through online delivery services, possibly as freely
redistributable software.
The WWW is the best example of hypermedia. A non-interactive cinema
presentation is a classic example of standard multimedia, because it lacks hyperlinks.
Multimedia and the Internet require a completely new approach to writing.
The style of writing that is suitable for the 'on-line world' is highly optimized and
designed to be quickly scanned by readers.

2.8 KEY TERMS


Multimedia: Multimedia is a technology which stores data as text, photos, pictures,
music, sounds, graphics, film and animation and gives the methods to collect and
modify the data as desired.
Interactivity: This implies that a message is connected to a number of previous
messages and also to the relationship between those messages.
Non-interactive: These are programs that operate without human intervention.
Examples include batch processing applications and compilers.
Hypermedia: A logical extension of the term hypertext, in which audio,
video, graphics, plain text and hyperlinks intermix to create a generally non-linear
medium of information.
Spatial navigation: The ability to navigate between focusable elements, such as form
controls and hyperlinks, within a user interface or structured document according to
their spatial location. This method is widely employed in application software like
computer games.
Computer game: A game played on a personal computer, rather than on an arcade
machine or video game console.
Scene: A segment of an entire movie. In a software program like Flash, a scene
consists of a number of events that the developer considers as constituting a sub-
theme of the entire movie.

2.9 END QUESTIONS


10. Explain Human-to-human communication.
11. How are interactivity and media related?
12. Explain Linear and non-linear multimedia.
13. What are the characteristics of multimedia?
14. Write a note on E-Learning.
15. What is interactive multimedia?
16. How are interactivity and computer system related?
17. What is spatial navigation? Explain in detail.
18. How is multimedia useful in education?
19. What is the use of multimedia in medicine and engineering?

Answer to check your progress questions


Check your progress -1:
Interactivity is the dialog that occurs between a human being and a
computer program.
A presentation is the practice of explaining the content of a topic to a learner
or an audience.
E-learning creates a dynamic environment that stimulates learners through
self-directed training.
Spatial navigation is the ability to navigate between focusable
elements, such as form controls and hyperlinks, within a user interface.
The BCG diagram is extremely helpful in marketing, although
most students only learn the fundamentals of the diagram at the
undergraduate level.
Check your progress -2:
In engineering, multimedia is used in computer simulations for anything from
training, such as industrial or military training, to entertainment.
In advertising, multimedia is used for presentations and business communications.
In medicine, doctors can be trained by watching virtual surgeries or operations.
A multimedia system is characterized by the processing, storage, generation,
manipulation and rendition of multimedia information.

BIBLIOGRAPHY
Albarino, Richard. 'Goldstein's LightWorks at Southampton.' Variety, Vol. 213, No. 12.
Variety, 1-7 January, 1996: 3.
Stewart, C. and A. Kowaitzke. 1997. Media: New Ways and Meanings, 2nd Edition.
Sydney: Jacaranda Press.
Sears, Andrew and Julie A. Jacko (Eds). 2007. Handbook of Human Computer
Interaction, 2nd Edition. Boca Raton: CRC Press.

UNIT 3 COMPUTER GRAPHICS


Program Name: BSc (MGA)
Written by: Mrs. Shailaja M. Pimputkar, Srajan
Structure:
3.0 Introduction
3.1 Unit Objectives
3.2 Computer Graphics
3.2.1 History of Computer Graphics
3.2.2 Applications of Computer Graphics
3.3 Concept and Principles of Computer Graphics
3.3.1 Image
3.3.2 Pixel
3.3.3 Graphics
3.3.4 Rendering
3.3.5 3D projection
3.3.6 Ray tracing
3.3.7 Shading
3.3.8 Texture mapping
3.3.9 Volume Rendering
3.3.10 3D Modeling
3.4 2D Computer Graphics
3.4.1 Raster Graphics (Bitmap Image)
3.4.1.1 Advantages and Disadvantages of Raster Graphics
3.4.2 Vector Graphics
3.4.2.1 Editing Vector Graphics
3.4.2.2 Viewing of Images
3.4.2.3 Application
3.4.2.4 Advantages and Disadvantages of Vector Graphics
3.5 3D computer Graphics
3.5.1 Computer Animation
3.6 Pioneers in graphic Design
3.7 History of Digital Images
3.7.1 Pixel Storage
3.7.2 Other Bitmap File Formats
3.8 Bitmap files Formats
3.8.1 Bitmap File Format
3.8.2 Bitmap File Structure
3.8.2.1 Bitmap File Header
3.8.2.2 Bitmap Information (DIB Header)
3.8.2.3 Color Palette
3.8.2.4 Bitmap Data
3.8.3 Usage of BMP Format and Related Formats
3.9 File Formats
3.10 Vector Editors versus Raster Graphics Editors
3.10.1 Graphics File Format
3.10.1.1 Raster Format
3.10.1.2 Vector Format
3.10.1.3 3D Format
3.11 Summary
3.12 Key Terms
3.13 End Questions

3.0 INTRODUCTION
In the last two units we learned about multimedia systems and their applications. In
this unit we are going to learn about computer graphics. Computer graphics has
become a bridge between computers and humans. Today almost everything has become
digital, and the old ways of doing things are almost gone. In the 1950s, computer graphics
was used on a small scale due to limitations in technology, but in 1951 the Massachusetts
Institute of Technology (MIT) developed a mainframe computer called Whirlwind that
became the origin of computer graphics. Computer graphics has grown enormously over
the past 40-45 years and has become an important medium of communication in computer
applications.
In our day-to-day lives, computer graphics play an important role. Computer
imagery is found everywhere: in newspapers, photography, television, weather reports and
medical applications such as surgical procedures. Computer graphics can be a single image
or a series of images called video. The main purpose of computer graphics is to visualize
real objects with the help of a computer. Computer graphics are divided into three types:
two-dimensional (2D), three-dimensional (3D) and animated graphics.

3.1 UNIT OBJECTIVES:


After studying this unit you will be able to
Describe the basics of computer graphics
Describe the concepts of bitmap and vector graphics
Explain 3D computer graphics
Identify the difference between raster and vector graphics
Discuss the features of graphics file formats

3.2 COMPUTER GRAPHICS

Computer graphics are images generated using computers; the term generally covers the
representation and manipulation of pictorial data by a computer. Computer graphics are
used in every field. Computer graphics can be a single image or a series of images called
video, and can be categorized into two-dimensional (2D), three-dimensional (3D) and
animated graphics. A specialized subfield of computer science has emerged that studies
methods for manipulating and digitally synthesizing visual content.
Instead of drawing on paper, many artists and designers use computer graphics.
Using computer graphics, it becomes easier to scale, save and edit an image. We can easily
swap colors, print the image and upload it to the web.
3.2.1 History of Computer Graphics
In 1960, Verne Hudson and William Fetter, graphic designers working at Boeing,
created computer graphics as a visualization tool for engineers and scientists; this is
also called computer-generated imagery, or CGI. In 1951, Jay
Forrester and Robert Everett of the Massachusetts Institute of Technology (MIT)
developed a mainframe computer called Whirlwind. The Whirlwind was the first computer
with a real-time video display.
In 1955, the light pen was introduced; a light pen is able to work with any CRT-based
display. In 1961, Ivan Sutherland developed the Sketchpad software, also called the Robot
Draftsman. The Sketchpad application allowed the user to draw sketches on a computer
using a light pen. Ivan Sutherland is considered the 'grandfather' of the graphical user
interface (GUI) and interactive computer graphics because of his development of
Sketchpad. Later, virtual reality equipment and flight simulators were also developed by
Sutherland.

Fig 3.1 Light Pen

Soon after, IBM released the 2250 terminal,
considered the first commercially available graphics
computer. In the mid-1960s, several computer graphics
companies were founded, including Lockheed-
Georgia, TRW, Sperry Rand and General Electric. In 1966, VICAR (Video Image
Communication and Retrieval), an image-processing program, was developed by NASA's
Jet Propulsion Laboratory (JPL). It helped to process images of the moon captured by
spacecraft.

In 1969, the Special Interest Group in Graphics (SIGGRAPH) was initiated by the
Association for Computing Machinery (ACM); it organizes graphics standards,
conferences and publications within the field of computer graphics. In 1973, the first
annual SIGGRAPH conference took place. As the area of computer graphics has expanded
over time, the size and importance of SIGGRAPH have grown. In 1970, a new tool called
Bézier curves was developed; Bézier curves later became an important tool in vector
graphics.

In 1977, graphic designers and artists began to understand the importance of the personal
computer. The Apple II was the first graphics personal computer. In 1981, Quantel, a UK-
based company, developed software called Paintbox, a revolutionary computer graphics
program. With the help of this application, filmmakers and TV producers could edit and
manipulate video images digitally.
Fig 3.2 Apple-II Personal Computer

In the late 1980s, Silicon Graphics, Inc. (SGI) computers were employed to design
some of the first fully computer-generated short films at Pixar. In 1982, TRON was the
first movie in which computer graphics were used extensively; in Tron, computer graphic
imagery mixes with live action.

The first version of Photoshop, one of the most popular graphics programs, was
developed in 1990. Another graphics program, Paintshop, launched the same year. In
the 1990s, 3D graphics became popular in multimedia, gaming and animation. 1996 saw
the release of fully 3D games; Quake was one of those games and became very popular.
In 1995, Pixar Animation Studios produced the movie Toy Story, which used CGI
graphics in an impressive way. Due to the advancement of 3D modeling software and
powerful graphics hardware, computer graphics has become more interactive and true to
life.

3.2.2 Applications of computer graphics


The applications of computer graphics include:
• Computational biology
• Computational physics
• Computer-aided design (CAD)
• Digital art
• Computer simulation
• Graphic design
• Infographics
• Information visualization
• Rational drug design
• Scientific visualization
• Education
• Video games
• Virtual reality
• Web design

3.3 CONCEPT AND PRINCIPLES OF


COMPUTER GRAPHICS
Computer graphics are images created by a computer with the help of specialized
software and hardware. Images created on a computer are also called computer-generated
imagery (CGI). Following are some basic concepts and principles in computer graphics.
3.3.1 Image
A picture or image is an artifact that resembles some object or person. It
is either two-dimensional, like a photograph, or three-dimensional, like a statue.
Images can be captured by optical devices like mirrors, lenses and cameras. A digital
image represents a two-dimensional image in binary form, as zeros and ones. There are
two types of digital images: raster and vector.
3.3.2 Pixel

Fig 3.3: Image of individual pixels as square


In simple words, a pixel is a packet of color. A pixel is a single point in a raster image,
arranged in a 2-dimensional grid. Usually these pixels are represented as dots or
squares. Each pixel contains a specific color. In color systems, a pixel will usually have
three or four components (RGB - Red, Green, Blue - or CMYK - Cyan, Magenta, Yellow,
Key (black)).
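As a small, purely illustrative sketch (in Python), a raster image can be held in memory as
a 2D grid of RGB pixels:

    # A 2 x 3 raster image as a grid of RGB pixels, one (r, g, b) tuple per pixel.
    # Each component is 0-255, so every pixel occupies 3 bytes (24-bit color).
    RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

    image = [
        [RED,   GREEN, BLUE],   # row 0
        [BLUE,  RED,   GREEN],  # row 1
    ]

    # Pixels are addressed by (row, column) in the 2-dimensional grid:
    r, g, b = image[1][2]
    print(r, g, b)  # 0 255 0 -> the pixel at row 1, column 2 is green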
3.3.3 Graphics
Graphics is a combination of text, color and illustration. Graphics refers to the visual
presentation of an object on a surface such as a wall, canvas, computer screen, map,
drawing, photograph, etc. Some more examples of graphics are line art, graphs, numbers,
geometric designs, typography and engineering drawings. The main objective of graphics
is to create effective communication, combined with other cultural elements to create a
unique style.
3.3.4 Rendering
Rendering is the process of getting output from the computer: the process of
generating a 2D image from a 3D model. The model is a description of a three-
dimensional object in a strictly defined language or data structure. It contains geometry,
viewpoint, texture, lighting and shading information. The image formed may be a raster
graphics image. In video editing, rendering describes the process of calculating the effects
applied to a file to produce the final output of a video.
3.3.5 3D projection
Fig 3.4: Image of 3D Projection
Three-dimensional projection is a method of mapping 3-dimensional points onto a 2-
dimensional plane. The method for displaying graphical data is based on planar 2-
dimensional media. This type of projection has wide application in computer graphics,
engineering and drafting drawings.
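A minimal sketch of one common 3D-to-2D mapping, perspective projection (the
focal-length parameter d below is an illustrative assumption, not a value from this unit):

    # Simple perspective projection: a 3D point (x, y, z) maps to the 2D image
    # plane at distance d from the viewer by scaling x and y with d / z.
    def project(x: float, y: float, z: float, d: float = 1.0) -> tuple[float, float]:
        if z <= 0:
            raise ValueError("point must be in front of the viewer (z > 0)")
        return (d * x / z, d * y / z)

    # The farther away a point is (larger z), the closer to the center it lands:
    print(project(2.0, 1.0, 4.0))   # (0.5, 0.25)
    print(project(2.0, 1.0, 8.0))   # (0.25, 0.125)

This division by depth is what makes distant objects appear smaller on the 2D plane.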
3.3.6 Ray Tracing

Fig 3.5: Process of Raytracing


Ray tracing is a technique for generating an image by tracing the path of light through
the pixels in an image plane. This technique is able to produce a very
high degree of photorealism, generally higher than that of typical scanline rendering
methods, but at a greater computational cost.
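The core of a ray tracer is an intersection test between a ray and the scene geometry. The
sketch below shows the standard ray-sphere test; it is a simplified illustration of that one
building block, not a full renderer:

    # Ray-sphere intersection, the basic building block of a ray tracer.
    # A ray is origin O + t * direction D; a sphere has center C and radius r.
    # Substituting the ray into |P - C|^2 = r^2 gives a quadratic in t.
    import math

    def intersect_sphere(origin, direction, center, radius):
        """Return the nearest positive t where the ray hits the sphere, or None."""
        ox, oy, oz = (origin[i] - center[i] for i in range(3))
        dx, dy, dz = direction
        a = dx*dx + dy*dy + dz*dz
        b = 2 * (ox*dx + oy*dy + oz*dz)
        c = ox*ox + oy*oy + oz*oz - radius*radius
        disc = b*b - 4*a*c
        if disc < 0:
            return None                      # ray misses the sphere
        t = (-b - math.sqrt(disc)) / (2*a)   # nearer of the two roots
        return t if t > 0 else None

    # A ray fired along +z from the origin hits a sphere centered at (0, 0, 5):
    print(intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0

A full ray tracer repeats this test for every pixel's ray against every object, then shades
the nearest hit.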
3.3.7 Shading
Fig 3.6: Types of Shading
Shading depicts depth in 3D models or illustrations by varying the levels of
darkness. In drawing, it is applied on paper by laying the medium down less
densely (a lighter shade for lighter areas) or more densely (a darker shade for darker
areas).
There are various techniques of shading, such as cross-hatching. In cross-
hatching, perpendicular lines of varying closeness are drawn in a grid pattern to shade an
area. The closer together the lines are drawn, the darker the area appears; similarly, the
farther apart the lines are drawn, the lighter the area appears. Nowadays, the term shading
has been generalized to mean that different types of shaders are applied to graphics to give
them a 3D effect.
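As a small illustration of what such a shader computes, the sketch below applies the
classic Lambertian (diffuse) shading formula, where brightness falls off with the angle
between the surface normal and the light direction (a standard textbook formula, used
here as an assumption, not taken from this unit):

    # Lambertian (diffuse) shading: brightness = max(0, N . L), where N is the
    # unit surface normal and L is the unit direction toward the light source.
    import math

    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def lambert(normal, to_light):
        n, l = normalize(normal), normalize(to_light)
        return max(0.0, sum(a * b for a, b in zip(n, l)))

    print(lambert((0, 0, 1), (0, 0, 1)))   # 1.0   - light head-on: fully lit
    print(lambert((0, 0, 1), (1, 0, 1)))   # ~0.71 - light at 45 degrees: dimmer
    print(lambert((0, 0, 1), (0, 0, -1)))  # 0.0   - light behind: in shadow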

3.3.8 Texture Mapping

Fig 3.7: Example of texture Mapping


Texture mapping is a method for adding 2D texture to a 3D object. Through texture
mapping, one can add colors to a 3D or computer-generated model. In 1974, Dr Edwin
Catmull pioneered its application to 3D graphics. Texture maps are applied to the surface
of a shape or polygon, and multi-texturing can place several textures on the same polygon
or surface. Bitmap textures and procedural textures are commonly used methods of
texture mapping, but for detailed texturing and precise placement of a texture onto an
object's surface, a different technique called UV mapping is used.

Fig 3.8: Example of UV Mapping
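A minimal sketch of the idea behind UV mapping: each surface point carries (u, v)
coordinates in the range [0, 1] that are used to look up a color in the 2D texture. The
nearest-pixel sampling below is an illustrative simplification (real renderers usually
filter between texels):

    # UV mapping in miniature: (u, v) coordinates in [0, 1] select a texel from
    # a 2D texture using nearest-pixel sampling.
    def sample_texture(texture, u, v):
        height, width = len(texture), len(texture[0])
        col = min(int(u * width), width - 1)    # u -> column
        row = min(int(v * height), height - 1)  # v -> row
        return texture[row][col]

    # A 2 x 2 checkerboard texture ("W" = white texel, "B" = black texel):
    texture = [["W", "B"],
               ["B", "W"]]

    print(sample_texture(texture, 0.1, 0.1))  # "W" - top-left quadrant
    print(sample_texture(texture, 0.9, 0.1))  # "B" - top-right quadrant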


3.3.9 Volume Rendering
The volume rendering technique is used to display a 2D projection of a specific part of a
3D sampled data set. A typical 3D data set is a group of 2D slice images obtained from an
MRI or CT scanner. The following figure shows a volume-rendered CT scan of a forearm,
deploying various color schemes for different groups of tissue, such as fat, muscle, blood
and bone.

Fig 3.9: Volume Render CT Scan of a forearm using different color


scheme

Generally, the slices are acquired in a regular pattern and thus form a regular
volumetric grid, with each voxel (volumetric element) being
represented by a single value that is obtained by sampling the close area surrounding
the voxel.
3.3.10 3D Modeling
The process of developing a mathematical, wireframe representation of any three-
dimensional object using specialized software is called 3D modeling. Models are created
automatically or manually. The manual modeling process of preparing geometric data for
3D computer graphics is similar to that of the plastic arts, such as sculpting. 3D models are
created using multiple approaches, such as:
i. NURBS curves for generating accurate and smooth surface patches.
ii. Polygonal mesh modeling for manipulation of faceted geometry.
iii. Polygonal mesh subdivision, an advanced tessellation of polygons that results in
the generation of smooth surfaces like those of NURBS models.
A 3D model can be displayed as a two-dimensional image by the process of 3D rendering,
used in a computer simulation of physical phenomena, or animated directly for
other purposes. For physically creating the model, a 3D printing device is required.

Check your progress-1

What is a pixel?
What is ray tracing?
Define shading.
What is texture mapping?
What is rendering?
3.4 2D GRAPHICS
2D computer graphics are computer-based generations of digital images, mostly produced
from two-dimensional models, like 2D geometric models and digital images, and by
techniques specific to them.
2D graphics are used in many applications built on printing and drawing
technologies, like cartography, typography and advertising. Two-dimensional models are
preferred by many professionals because controlling the image is easier than in
three-dimensional computer graphics.
In this section you will learn about two types of 2D computer graphics.
3.4.1 Raster Graphics (Pixel Art)
In computer graphics, raster images are made up of a grid of pixels, where a pixel is a
packet of color. Together, these pixels form boxes of color that create the overall finished
image. A raster image is also called a bitmap image because it contains information that is
directly mapped to the display grid. Most pictures imported from a digital camera are
raster images.
Raster images have a finite set of pixels, called picture elements. This type of digital
image contains a fixed number of rows and columns. Each pixel is assigned a specific
value that determines its color. The raster image system uses the red, green, blue (RGB)
color system. In general, the pixels form a two-dimensional array, so the image has a
width and a height; this width-to-height ratio is called the pixel aspect ratio.

Fig 3.10: Example of raster image


When a raster image is viewed normally, we cannot see the individual pixels; we see
only a smooth image. By blowing up the image, we can see the pixels. This
depends on the resolution of the image. Resolution is the number of dots per inch (DPI)
or pixels per inch (PPI) in a particular image. When the resolution increases, the number of
pixels also increases. Images with smaller resolution have smaller digital image files; as
the number of pixels grows, the number of individual points of data stored also increases.
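A quick calculation shows how fast raster file sizes grow with resolution (the sketch
assumes standard 24-bit RGB color; the dimensions are chosen only for illustration):

    # Uncompressed size of a raster image: width x height pixels, 3 bytes per
    # pixel for 24-bit RGB color.
    def raw_size_mb(width_px: int, height_px: int, bytes_per_pixel: int = 3) -> float:
        return width_px * height_px * bytes_per_pixel / 1e6

    print(raw_size_mb(640, 480))    # ~0.92 MB - screen-resolution image
    print(raw_size_mb(3000, 2000))  # ~18.0 MB - 6-megapixel photo
    # Doubling both dimensions quadruples the pixel count and the file size.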
Thus, it is very important that people working with computer graphics find an
appropriate balance between image size and resolution. Professional photography in
particular requires a high DPI for better resolution. Enlarging a bitmap image beyond a
certain extent makes it look pixelated. For this reason, vector graphics are
often used to create company logos and banners.
3.4.1.1 Advantages and disadvantages of raster graphics
Advantages:
1) Raster graphics are easy to use.
2) They can be edited using very common editing programs.
Disadvantages:
1) If the resolution is high, the file size becomes large and needs more disk space.
2) Another disadvantage of raster graphics is that when the image is enlarged, it
gets pixelated.
3.4.2 Vector Graphics (Vector Art)
Vector graphics are different from raster graphics: they are not made of
pixels. Vector graphics are made of geometric primitives like points, lines, curves and
shapes, all based on mathematical equations, to represent images in computer
graphics. These primitives are also called paths. A path has a defined start and end point
and can be a line, a square or a curved shape.
Because vector images are not made of any specific number of lines or dots, they can
be stretched or scaled to a larger size without losing image quality. For this reason, vector
graphics are used to create logos, business cards, banners and hoardings.
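A small sketch of why vectors scale losslessly: because a shape is stored as parameters of
an equation rather than as pixels, scaling just multiplies the parameters, and the shape can
be re-rendered crisply at any size (the Circle class below is a hypothetical, minimal shape
description, not any particular file format):

    # A vector shape stored as parameters, not pixels. Scaling multiplies the
    # parameters; the shape can then be re-rendered perfectly at any size.
    from dataclasses import dataclass

    @dataclass
    class Circle:
        cx: float      # center x
        cy: float      # center y
        radius: float

        def scaled(self, factor: float) -> "Circle":
            return Circle(self.cx * factor, self.cy * factor, self.radius * factor)

    logo = Circle(10.0, 10.0, 5.0)
    billboard = logo.scaled(100.0)   # 100x larger, with no loss of quality
    print(billboard)                 # Circle(cx=1000.0, cy=1000.0, radius=500.0)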

3.4.2.1 Editing Vector Graphics


A vector graphics editor is a computer program used for creating as well as editing
vector graphics. Digital devices like scanners and cameras generate raster graphics that
are not directly viable for conversion into vectors. In such cases, the editor works on the
pixels rather than on drawing objects defined by mathematical formulas. Some parts of an
image can come from the camera and the rest from the vector tools.
Fig 3.11: Example of Vector image
3.4.2.2 Viewing of Images
The user can utilize various programs to view an image. GIF, JPEG and PNG images can be seen using a web browser, as they are standard Internet image formats. The Scalable Vector Graphics (SVG) format is widely used on the Web. SVG files are essentially printable text that describes both straight and curved paths. SVG is also a format for animated graphics and for mobile phones.
3.4.2.3 Applications
These days, the term 'vector graphics' is commonly used in the context of two-dimensional computer graphics. It is one of the ways for an artist to produce an image on a raster display; other ways include text, 3D rendering and multimedia. Almost all modern 3D rendering is achieved through extensions of 2D vector graphics techniques.
3.4.2.4 Advantages and disadvantages of vector graphics
Advantages:
1) The main advantage of vector graphics is scalability. Users can scale or stretch an image to a larger size without losing quality.
2) Vector graphics files are small in size, so they save disk space on your PC.
Disadvantages:
1) Vector graphics cannot reproduce 'continuous tone' photographic images the way bitmaps can.
2) Vector graphics can only be created or edited with specific design software, so using them requires expertise.
3) Vector software is costly compared to raster graphics software.
Check your progress-2
What is pixel aspect ratio?
What is resolution?
What is bitmap image?
What is vector graphics?
3.5 3D COMPUTER GRAPHICS
3D computer graphics utilize a three-dimensional representation of geometric data that is stored in the computer for the purpose of performing calculations and presenting 2D images. Such images may be later used for display or for real-time viewing.
Fig 3.12: Example of 3D Graphics
In two-dimensional images, a pixel has width and length; in 3D it also has depth. 3D computer graphics depend on many of the same algorithms as 2D computer vector graphics (in the wireframe model) and 2D computer raster graphics (in the resultant rendered display).
A three dimensional graphics system can be divided into three major components:
• Modeling
• Scene Specification (layout)
• Rendering
3D computer graphics are commonly referred to as 3D models. Apart from the rendered graphic, the model resides within the graphical data file. A 3D model is created by the mathematical representation of any three-dimensional object. Technically, a model cannot be considered a graphic until we get the output; rendering is the process of getting that output. Through the process of rendering, a model can be displayed as a two-dimensional image.
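A very reduced sketch of what rendering does, in Python (a bare perspective projection of one model vertex; real renderers add transformations, lighting and shading on top of this):

# Projecting a 3D vertex onto a 2D image plane (perspective divide).
def project(vertex, focal_length=1.0):
    x, y, z = vertex
    # Points farther away (larger z) land closer to the centre,
    # which creates the impression of depth.
    return (focal_length * x / z, focal_length * y / z)

corner = (1.0, 1.0, 4.0)   # a vertex of some 3D model
print(project(corner))      # -> (0.25, 0.25) on the 2D plane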
3.5.1 Computer Animation
Fig 3.13: Computer animation using Motion Capture
In computer graphics, animation means bringing non-living things or objects to life. Through animation we can give motion and emotion to any non-living object. The basic concept behind animation is to play single images in sequence so that they appear as continuous motion. Most often, animation is done through 3D as well as 2D computer graphics. Today animation is a booming industry and can be used in many sectors like Education, Advertisement, Commercials, Entertainment, Training, E-commerce etc.
3.6 PIONEERS IN GRAPHIC DESIGN
Some pioneers in the field of graphic design are as follows:
1. Charles Csuri
Charles Csuri created the first computer art in 1964. He is a pioneer of computer animation and digital fine art, and was recognized by the Smithsonian as the 'father of digital art and computer animation'.
2. Donald P. Greenberg
Donald P. Greenberg is one of the innovators of computer graphics. He has been a mentor to several well-known computer graphics artists, researchers and animators, including Robert L. Cook and Wayne Lytle, and is a prolific writer on the subject. Greenberg was the founding director of the NSF Center for Computer Graphics and Scientific Visualization. Several of his students have won the SIGGRAPH Achievement Award and the Academy Award for Technical Achievement.
3. A. Michael Noll
A. Michael Noll was one of the first researchers to use a digital computer to create artistic patterns and to formalize the use of random processes in the creation of visual arts. He began creating digital computer art in 1962. In 1965, Noll and two other scientists, Frieder Nake and Georg Nees, became the first to exhibit their computer art publicly. In April 1965, the Howard Wise Gallery exhibited Noll's computer art along with random-dot patterns created by Bela Julesz.
Other pioneers in the fields are:
• Benoit B. Mandelbrot
• Bui Tuong Phong
• Henri Gouraud
• Paul De Casteljau
• Pierre Bezier
• Daniel J. Sandin
• Ivan Sutherland
• Alvy Ray Smith
• Steve Russell
3.7 HISTORY OF DIGITAL IMAGES
In 1920, Harry G. Bartholomew and Maynard D. McFarlane produced the first digital image, using a method they developed called the Bartlane cable picture transmission system. In the 1960s, Frederick G. Weighart and James F. McNulty co-invented the first apparatus to generate a digital image in real time. Bell Laboratories, the Jet Propulsion Laboratory, MIT and the University of Maryland, among others, used digital images to advance satellite imagery, medical imaging, wire photo standards conversion, character recognition, videophone technology and photo enhancement. In the early 1970s, rapid advances in digital imaging started with the introduction of microprocessors, along with progress in display technologies and related storage.
The invention of Computerized Axial Tomography (CAT scanning), which uses X-rays to produce a digital image of a 'slice' through a three-dimensional object, was of great importance to medical diagnostics. The arrival of digital images and the digitization of analog images also allowed the restoration and enhancement of archaeological artifacts, and was applied in fields such as astronomy, nuclear medicine, law enforcement, defense and industry.
Towards the end of the 20th century, advances in microprocessor technology facilitated the development and marketing of charge-coupled devices (CCDs) for use in image-capturing devices, which gradually displaced analog tape and film in videography and photography. The same computing power required to process captured digital images also made it possible for computer-generated digital images to achieve a level of refinement close to photorealism.
In computer graphics, a bitmap is a type of memory organization or image file format
used to store digital images. The term bitmap comes from computer programming
terminology, which means just a map of bits or a spatially mapped array of bits.
3.7.1 Pixel Storage
Image pixels in typical uncompressed bitmaps are generally stored with a color depth of 1, 4, 8, 16, 24, 32, 48 or 64 bits per pixel. Pixels of 8 bits or fewer represent either indexed or grayscale color. An alpha channel may be stored in a separate bitmap, where it is similar to a grayscale bitmap, or in a fourth channel.
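Color depth translates directly into raw file size, as the short Python sketch below shows (the dimensions are illustrative):

# How color depth affects raw, uncompressed bitmap size.
width, height = 800, 600

for bits_per_pixel in (1, 8, 24, 32):  # 32 bpp = RGB plus an alpha channel
    size_bytes = width * height * bits_per_pixel // 8
    print(bits_per_pixel, "bpp:", size_bytes // 1024, "KB")

Going from a 1-bit black-and-white image to 32-bit RGBA multiplies the raw size by 32, which is one reason compressed formats matter.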
3.7.2 Other Bitmap File Formats
The X Window System uses the XBM format for black-and-white images and XPM (pixelmap) for color images. Many other uncompressed bitmap file formats are in use, though most are not widely used. The common ones are the standardized compressed bitmap files, such as GIF, PNG, TIFF and JPEG. TIFF and JPEG have a wide variety of options; JPEG usually applies lossy compression.
3.8 BITMAP FILE FORMATS
In this section we are going to learn about the bitmap file, one of the graphics file types used by Microsoft operating systems, and the structure of the bitmap file.
3.8.1 Bitmap File Format:
Microsoft has defined a particular representation of color bitmaps of different color depths to help exchange bitmaps between devices and applications with a variety of internal representations. These are called device-independent bitmaps (DIB); device-independent means the bitmap specifies pixel color in a way that does not depend on the method any particular device uses to represent color. This file format is called the DIB file format or BMP (Bit Map Picture) file format. The default extension of a DIB file is .BMP.
3.8.2 Bitmap File Structure:
A BMP file loaded in memory as a DIB data structure is an important component of the Windows GDI API. The DIB data structure is the same as the BMP file format but without the 14-byte BMP header.
A typical BMP file usually contains the following blocks of data:
• BMP File Header: stores general information about the BMP file
• Bitmap Information (DIB header): stores detailed information about the bitmap image
• Color Palette: stores the definition of the colors being used for indexed color bitmaps
• Bitmap Data: stores the actual image, pixel by pixel
3.8.2.1 BMP File Header
This block of bytes is at the beginning of the file and is used for its identification. A typical application reads this block first to ensure that the file is actually a BMP file and that it is not damaged. It contains information about the type, size and layout of the device-independent bitmap file. The header is written as the BITMAPFILEHEADER structure bmfh.
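The 14-byte header is easy to parse directly; a minimal Python sketch (the file name example.bmp is a placeholder):

import struct

# Read the 14-byte BMP file header (BITMAPFILEHEADER).
with open("example.bmp", "rb") as f:
    header = f.read(14)

# '<' = little-endian; 2s = 2 magic bytes, I = uint32, H = uint16.
magic, file_size, res1, res2, pixel_offset = struct.unpack("<2sIHHI", header)

assert magic == b"BM", "not a BMP file"
print("file size:", file_size, "bytes; pixel data starts at offset", pixel_offset)

The magic bytes 'BM' are exactly the identification check described above, and the final field gives the offset at which the bitmap data begins.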
3.8.2.2 Bitmap Information (DIB Header)
The DIB header is a block of bytes that gives the application detailed information about the image: its color format, compression type and dimensions. The bitmap information header is defined as the BITMAPINFOHEADER structure bmih.
3.8.2.3 Color Palette
The color palette is an array containing the colors used in the bitmap. The palette occurs in the BMP file directly after the BMP and DIB headers, so its offset is the size of the BMP header plus the size of the DIB header. In a palettized bitmap, each pixel holds a single color index represented by a small number of bits (1, 4 or 8). The color table informs the application of the real color corresponding to each indexed value.
3.8.2.4 Bitmap Data
Bitmap data is an array of bytes that comes after the color table. The array is organized into consecutive rows called 'scan lines'. Each scan line contains the bytes that represent the pixels of one row, and the scan order within a row always goes from left to right. The number of bytes in a scan line depends on the pixel width and color format. Scan lines are stored starting in the lower-left corner, going left to right and then row by row from the bottom to the top: the first byte represents the pixel in the lower-left corner and the last byte represents the pixel in the upper-right corner.
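One detail worth noting (a property of the standard BMP format, though not stated above) is that each scan line is padded so that it starts on a 4-byte boundary. A small Python sketch:

# BMP scan lines are padded to a multiple of 4 bytes.
def row_size_bytes(width_px, bits_per_pixel):
    return ((bits_per_pixel * width_px + 31) // 32) * 4

# A 101-pixel-wide, 24-bit row holds 303 bytes of pixel data,
# padded up to 304 bytes per scan line.
print(row_size_bytes(101, 24))  # -> 304

Multiplying this row size by the image height gives the total size of the bitmap data block.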
3.8.3 Usage of BMP Format and Related Formats
The simplicity of the BMP file format is the main reason for its widespread usage in Windows and other operating systems. The fact that the format is relatively well documented and free of patents makes it a convenient format for image processing programs.
The X Window System uses the XBM format for black-and-white images and XPM (pixelmap) for color images. The Portable Pixmap (PPM) and Truevision TGA formats also exist but are used less often or only for special purposes; TGA, for example, can also contain transparency information. There are also a variety of 'raw' formats, which help in saving raw data with no other information.
3.9 FILE FORMATS
A. Bitmap
A bitmap or pixmap is a pixel data storage structure employed by the majority of
raster graphics file formats, such as PNG.
B. .ico
The ico file format is an image file format for icons in Microsoft Windows. .ico file
contain one or more small images at multiple sizes and color depth.
C. OpenRaster
OpenRaster is a file format being developed under the auspices of the Create Project to give free software graphics editors a common raster-graphics interchange format that can preserve as much of the working information as the applications use.
A raster graphics editor is a computer program that allows users to paint and edit pictures interactively on the computer screen and save them in one of the many popular 'bitmap' or 'raster' formats (e.g. PNG, JPEG, GIF and TIFF).
Check your progress-3
What is 3D object?
What is animation?
What is color palette?
3.10 VECTOR GRAPHIC EDITORS V/S RASTER GRAPHIC EDITORS
Vector editors are often compared with raster graphics editors. Vector editors are
better for page layout, graphic design, typography, sharp-edged artistic illustrations, logos,
(such as clip art, cartoons, and complex geometric patterns), technical illustrations, making
diagrams and flow chart.
Raster editors are more suitable for retouching, photo realistic illustrations, photo
processing and collage. Several contemporary illustrators use Corel Photo-Paint and
Photoshop software to create all kinds of illustrations.
3.10.1 Graphic File Format
Together with the proprietary types, there are hundreds of image file types. A few of them, such as PNG, JPEG and GIF, are most often used to display images on the internet.
3.10.1.1 Raster Formats
These formats store images as bitmap.
I. JPEG
Joint Photographic Experts Group (JPEG) is mostly used for images created by digital photography. JPEG is a compression method, and JPEG-compressed images are usually stored in the JFIF (JPEG File Interchange Format) file format. JPEG is a very common format for storing and transmitting photographic images on the Web. JPEG compression is a lossy compression. The JPEG filename extension is .jpg on DOS-based operating systems; other operating systems may use the .jpeg extension.
II. Exif
Exchangeable image file format (Exif) is a file standard similar to JFIF, with .tiff extensions. It is integrated into the JPEG-writing software used in most cameras. The metadata are recorded for individual images and include things like the name of the camera, camera settings, time and date, shutter speed, exposure, compression, image size, color information etc.
III. TIFF
TIFF is one of the most common graphic image formats. Aldus Corporation developed the TIFF format in 1986; it is now part of Adobe's software portfolio. TIFF files are mainly used in desktop publishing. TIFF is a flexible format that normally saves 8 bits or 16 bits per color. The extension for a TIFF file is .tiff or .tif. Optical Character Recognition (OCR) software packages generally generate some form of TIFF image for scanned text pages.
IV. RAW
RAW refers to a family of raw image formats available as options on some digital cameras. These formats usually use lossless or nearly lossless compression. Even though there is a standard raw image format (ISO 12234-2, TIFF/EP), the raw formats used by most cameras are not standardized or documented, and differ among camera manufacturers.
V. PNG
The Portable Network Graphics (PNG) file format was created as the free and open-source successor to the GIF. The PNG format supports true color (16 million colors), while the GIF supports only 256 colors.
VI. GIF
Graphics Interchange Format (GIF) is limited to an 8-bit palette, or 256 colors. This makes the GIF format suitable for storing graphics with relatively few colors, such as simple diagrams, shapes, logos and cartoon-style images.
VII. BMP
The BMP file format handles graphics files within the Microsoft Windows OS. In
general, BMP files are uncompressed and large. The advantage is their simplicity and
wide acceptance in Windows programs.
VIII. PPM, PBM and PGM
Netpbm is a family of formats that includes the Portable PixMap (PPM), Portable BitMap (PBM) and Portable GrayMap (PGM) file formats.
Others
Other image file formats for raster type are as follows:
• Inter Leaved Bit Map (ILBM)
• TARGA
• Personal Computer eXchange (PCX)
• Chasys Draw Image (CD5)
• Enhanced Compression Wavelet (ECW)
• Flexible Image Transport System (FITS)
3.10.1.2 Vector Formats
I. CGM
Computer Graphics Metafile (CGM) is a file format for 2D vector graphics, raster
graphics and text. It is defined by ISO/IEC 8632.
II. SVG
SVG is an open standard created and developed by the World Wide Web Consortium (W3C) to address the requirement for a scriptable, versatile, multi-purpose vector format for the Web.
III. Others
Other image file formats of vector type include:
• Open Document Graphics (ODG)
• Portable Document Format (PDF)
• Encapsulated PostScript (EPS)
• Shockwave Flash (SWF)
• XML Paper Specification (XPS)
• Windows Metafile/Enhanced Metafile (WMF/EMF)

3.10.1.3 3D Formats
Some of the 3D formats are as follows:
I. PNS
The PNG Stereo (.pns) format consists of a side-by side image based on Portable
Network Graphics (PNG).
II. JPS
The JPEG Stereo (.jps) format consists of a side-by-side image format based on
JPEG.
III. MPO
Also known as Multi Picture Object, the MPO file format was first used in the FinePix REAL 3D W1 camera made by FujiFilm. The format has been proposed as an open standard, CIPA DC-007-2009, by the Camera and Imaging Products Association (CIPA).

3.11 SUMMARY
Computer graphics are generated using computers and generally involve the representation and manipulation of pictorial data by computer.
Graphics is a combination of text, color and illustration. Graphics refers to visual
presentation of any object on the surface such as wall, canvas, computer screen, maps,
drawings, photograph etc. Some more examples of graphics are line art, graphs,
numbers, geometric designs, typography, engineering drawings etc. The main
objective of the graphics is to create effective communication with the help of other
cultural elements to create a unique style.
A pixel in digital imaging is a single point in a raster image.
Rendering is the process to get output from the computer. Rendering is the process of
generating a 2d image from a 3D model.
Shading is depicting the depth in 3D models or illustrations by varying the levels of
darkness.
2D computer graphics are computer based conceptions of digital images mostly
produced by two-dimensional models, like 2D geometric models, digital images etc.,
and by techniques relevant to them.
A raster image is also called a bitmap image because it contains information that is directly mapped to the display grid. Most pictures imported from a digital camera are raster images.
Vector graphics are made of geometrical primitives such as lines, points, curves and shapes, all based on mathematical equations, to represent images in computer graphics.
3D computer graphics utilize a three –dimensional representation of geometric data
that is stored in the computer for the intent of performing calculations and presenting
2D images. Such images may be later used for display or for real-time viewing.
Vector editors are often compared with raster graphics editors. Vector editors are
better for page layout, graphic design, typography, sharp-edged artistic illustrations,
logos, technical illustrations, making diagrams and flow chart.
Raster editors are more suitable for retouching, photo realistic illustrations, photo
processing and collage. Several contemporary illustrators use Corel Photo-Paint and
Photoshop software to create all kinds of illustrations.
3.12 KEY TERMS
• Bitmap: A data structure generally representing rectangular points of color, grid of
pixels, viewable via monitor, paper or other display medium.
• Digital Image: A representation of a two-dimensional image using binary digits, ones
and zeros. Depending on the image resolution, whether it is fixed or not, it should be
raster or vector.
• Exif: A file standard similar to the JFIF format with .tiff extensions. It is integrated in
the JPEG-writing software used in most cameras.
• GIF: A suitable file format for storing graphics with relatively few colors, such as
simple diagrams, shapes, logos, and cartoon style images.
• JPEG/JFIF: A compression method for images that are usually stored in the JFIF file format. In most cases, JPEG compression is a lossy compression.
• Pixel: A pixel in digital imaging is a single point in a raster image. Pixels are usually
arranged in a 2-dimensional grid. Each pixel is a sample of an original image.
• PNG: This file format was created as the free and open source successor to the GIF.
The PNG file format supports true colors (16 million colors), while the GIF supports
only 256 colors.
• Raster image: A method of representing digital images that is known as raster or
bitmap. Raster images are made by pixels. The raster image takes a wide variety of
formats like .jpg, .gif etc.
• TIFF: A flexible format that normally saves 8 bits or 16 bits per color (red, green,
blue) for 24-bit and 48 bit totals, respectively, usually using either the .tiff or .tif
filename extension.
3.13 END QUESTIONS
20. Write a note on raster graphics.
21. What are the advantages and disadvantages of vector graphics?
22. What is the difference between raster and vector graphics?
23. Discuss the concept and principles of computer graphics.
24. What are the applications of computer graphics?
25. Explain any two raster formats.
26. What is texture mapping?
27. How are vector images created?
28. Write a note on 2D computer graphics.
29. What do you understand by 3D computer graphics?
Answers to check your progress questions
Check your progress-1:
A pixel is a packet of color.
Ray tracing is a technique for generating an image by tracing the path of light through the pixels in an image plane.
Shading is depicting the depth in 3D models or illustrations by varying the levels of darkness.
Texture mapping is a method for adding 2D texture to a 3D object.
Rendering is the process of getting output from the computer.
Check your progress-2:
The width-to-height ratio of a pixel grid is called the pixel aspect ratio.
Resolution is the number of dots per inch (DPI) or pixels per inch (PPI) in a particular image.
Bitmap images are made of pixels and are also called raster images.
Vector graphics are made of geometrical primitives such as lines, points, curves and shapes, all based on mathematical equations, to represent images in computer graphics.
Check your progress-3:
A 3D model is created by the mathematical representation of any three-dimensional object.
Animation means bringing non-living things or objects to life in computer graphics.
The color palette is an array containing the colors used in the bitmap.
BIBLIOGRAPHY
Foley, James D., John F. Hughes and Andries Van Dam. 1995. Computer Graphics: Principles and Practice. Montreal: Addison-Wesley Professional.
Hearn, Donald and M. Pauline Baker. 1994. Computer Graphics. Upper Saddle River, NJ: Prentice-Hall.
Hill, Francis S. 2001. Computer Graphics. Upper Saddle River, NJ: Prentice-Hall.
Lewell, John. 1985. Computer Graphics: A Survey of Current Techniques and Applications. New York: Van Nostrand Reinhold.
McConnell, Jeffrey J. 2006. Computer Graphics: Theory Into Practice. MA: Jones & Bartlett Publishers.
Slater, M., A. Steed and Y. Chrysanthou. 2002. Computer Graphics and Virtual Environments: From Realism to Real-Time. Montreal: Addison-Wesley.
UNIT 4 COMPUTER ANIMATION
Program Name: B.Sc. MGA
Written by: Srajan Institute of Gaming, Multimedia and Animation.
Structure:
4.0 Introduction
4.1 Unit Objectives
4.2 Early Animation Techniques
4.3 Innovation by animators at Disney
4.4 Types of Animation
4.5 Software for animation
4.5.1 Renderers
4.6 Difference between Traditional Animation and Computer Animation
4.7 Pixar and Disney Studios
4.8 Summary
4.9 Key terms
4.10 Questions and Exercises
4.11 Further reading
4.0 INTRODUCTION
Animation is one of the most universal and all-permeating forms of visual communication today, seen everywhere from TV channels dedicated exclusively to cartoons to the title sequences of our favorite movies to the reactive graphic interfaces of our smartphones. Animation is a fast-growing and exciting area.
Animation is the method of creating the illusion of motion and change by means of the rapid display of a sequence of static images that differ slightly from each other. The illusion of movement in animation is created by a physiological phenomenon called persistence of vision: when many images pass before the eye in quick succession, the eye briefly retains each one and the brain interprets them as continuous motion.
Computer animation is the use of computers to create animations, and there are a few different ways to do so. An early step in the history of computer animation was the sequel to the 1973 film Westworld, a science-fiction film about a society in which robots live and work among humans.
Developments in CGI technologies are reported each year at SIGGRAPH, an annual
conference on computer graphics and interactive techniques that is attended by thousands of
computer professionals each year.
In this unit, you will learn about computer animation along with some early animation
techniques and their differences. You will learn various types of animation and software used
for it. You will also see the brief history of leading animation studios.
4.1 UNIT OBJECTIVES:
After studying this unit you will be able to
Understand earlier techniques of animation
Describe the innovations of animators at Disney
Explain different types of animation
Know about various software for animation
Discuss the difference between traditional animation and computer animation
4.2 EARLY ANIMATION TECHNIQUES
Animation's origins can be traced to Paleolithic paintings of animals with multiple legs in superimposed positions, which clearly depict the perception of motion. Cave paintings are an early example of capturing the phenomenon of motion. Another early example is a more than 5,000-year-old pottery bowl discovered in Shahr-e Sukhteh, Iran. The bowl has five pictures painted around it that show phases of a goat leaping up to grab a tree, as shown in Figure 4.1.
Figure 4.1 Ancient drawings on pottery bowl
Another example is an Egyptian mural from the tomb of Khnumhotep at the Beni Hassan cemetery, which features a very long sequence of images superficially depicting the series of events in a wrestling match, as shown in Figure 4.2. It is approximately 4,000 years old.
Figure 4.2 Egyptian mural
Apart from paintings and murals, several devices that successfully displayed animated images were introduced well before the beginning of the motion picture. These devices were used for entertainment, and sometimes even to scare people. Generally these devices were unable to project their images and accordingly could only be viewed by a single person at any one time. Because of this drawback, they were considered toys rather than devices for a large-scale entertainment industry like later animation. Many of these devices are still built by and for film students learning the basic principles of animation. Let us look at a few of them.
The magic lantern (1650)
Today's projector systems can be said to be descendants of the magic lantern, an early example of a projector. It consisted of a transparent oil painting, a simple lens and a candle or oil lamp. In a darkened room, the image would appear projected onto an adjacent flat surface. It was often used to project demonic, frightening images to convince people that they were witnessing the supernatural. A few of the lantern slides contained moving parts, which makes the magic lantern the earliest known example of projected animation.
Thaumatrope (1824)
One of the popular toys in the 19th century was a thaumatrope. The thaumatrope is a circular
card attached between two strings with an image on each side of the card such as a bird and a
cage. When you twist the strings around many times and then pull on them, the card spins
round quickly and the pictures appear to combine into a single image. This demonstrates the
persistence of vision. Figure 4.3 shows the thaumatrope.
Figure 4.3 Thaumatrope
Phenakistoscope (1831)
The phenakistoscope, invented in 1831, was one of the early animation devices. It consists of a disk with a series of images drawn on radii evenly spaced around the center of the disk, as shown in Figure 4.4. Slots are cut out of the disk on the same radii as the drawings, but at a different distance from the center. The device would be placed in front of a mirror and spun. As the phenakistoscope rotates, a viewer looks through the slots at the reflection of the drawings, each of which is momentarily visible when a slot passes the viewer's eye. This created the illusion of animation.
Figure 4.4 A phenakistoscope disc by Eadweard Muybridge
Zoetrope (1834)
The zoetrope was a device popularized in the Victorian era for entertainment. It shows how
animation is a sequence of images viewed over time. It operates on the same principle as the
phenakistoscope. A circle of paper with drawings on was used, each drawing slightly
different from the previous one. This paper was placed in the zoetrope, a cylinder like device
as shown in Figure 4.5, which had small slits around the outside. When the zoetrope was
spun round, the observer looks through vertical slits to view the moving images on the
opposite side as the cylinder spins. As it spins, the material between the viewing slits moves
in the opposite direction of the images on the other side and in doing so serves as a primary
shutter. The pictures would pass each of the slits at speed, creating the illusion of movement.
The advantages of zoetrope over the basic phenakistoscope were it did not involve the use of
a mirror to view the illusion, and because of its cylindrical shape it could be viewed by
several people at once.
Figure 4.5 Zoetrope
Flip book (1868)
A flip book (also known as a flick book) is a small book with relatively flexible pages, each having one image in a series of animation images located near its unbound edge. When the book is flicked rapidly, the series of images appears in fluid motion, playing out a scene as shown in Figure 4.6. Early film animators cited flip books as their inspiration more often than the earlier devices, which did not reach as wide an audience. Though flip book animation is one of the oldest techniques, it is still a fascinating type of animation.
John Barnes Linnett patented the first flip book in 1868 as the kineograph.
Figure 4.6 Flipbook animation
Praxinoscope (1877)
The first known animated projection on a screen was created in France by Charles-Émile Reynaud, a French science teacher, as shown in Figure 4.7. On 28 October 1892, he projected the first animation in public, Pauvre Pierrot, at the Musée Grévin in Paris. This film is also notable as the first known instance of film perforations being used. His films were not photographed but drawn directly onto the transparent strip. By 1900, more than 500,000 people had attended these screenings.
Figure 4.7 The First Known Animated Projection
The first film recorded on standard picture film that included animated sequences was The Enchanted Drawing (1900). It was followed by the first entirely frame-by-frame animated film, Humorous Phases of Funny Faces, drawn on a chalkboard by J. Stuart Blackton in 1906, who is for this reason considered the father of American animation. Refer to Figure 4.8.
Figure 4.8 Humorous Phases of Funny Faces by J. Stuart Blackton
In Europe, the French artist, Émile Cohl, created the first animated film
Fantasmagorie in 1908 as shown in Figure 4.9. It is the first animated film which used the
traditional animation creation methods.
Figure 4.9 A Scene from Fantasmagorie
The film largely consisted of a stick figure moving about and encountering all manner of morphing objects, such as a wine bottle that transforms into a flower. There were also sections of live action where the animator's hands would enter the scene. The film was created by drawing each frame on paper and then shooting each frame onto negative film, which gave the picture a blackboard look.
In 1914, more detailed hand-drawn animations, with detailed backgrounds and characters, were directed by Winsor McCay, a successful newspaper cartoonist also known as the father of character animation. His famous work includes Little Nemo and Gertie the Dinosaur. Gertie the Dinosaur is an early example of character development in drawn animation, as shown in Figure 4.10. The film was made for McCay's entertainment act, and as it played McCay would speak to Gertie, who would respond with a series of gestures. In a scene at the end of the film, McCay walked behind the projection screen and a view of him appeared on screen, showing him getting on the cartoon dinosaur's back and riding out of frame. This scene made Gertie the Dinosaur the first film to combine live-action footage with hand-drawn animation. McCay hand-drew almost every one of the 10,000 drawings he used for the film.
Figure 4.10 Gertie the Dinosaur
During the 1910s, the production of animated short films, typically referred to as 'cartoons', became an industry of its own, and cartoon shorts were produced for showing in movie theaters. John Randolph Bray, known as the most successful producer of that time, along with animator Earl Hurd, patented the cel animation process in 1914, which dominated the animation industry for the rest of the decade.
Figure 4.11 Cel Animation
This involved animating moving objects on transparent celluloid sheets. Animators photographed the sheets over a stationary background image to generate the sequence of
images. The cel is an important innovation to traditional animation, as it allows some parts of
each frame to be repeated from frame to frame, thus saving labor. Figure 4.11 shows how two
transparent cels, each with a different character drawn on them, and an opaque background
are photographed together to form the composite image.
In 1915, Max and Dave Fleischer invented rotoscoping, the process of using film as a
reference point for animation and their studios went on to later release such animated classics
as Ko-Ko the Clown as shown in Figure 4.12, Popeye the Sailor Man, and Superman.
Figure 4.12 Ko-ko the Clown
The first known animated feature film was El Apóstol, made in 1917 by Quirino Cristiani of Argentina, as shown in Figure 4.13. He also directed two other animated feature films, including 1931's Peludópolis, the first feature-length animation to use synchronized sound. None of these films, however, survives.
Figure 4.13 El Apóstol
Shooting on twos
Moving characters are often shot "on twos", that is to say, one drawing is shown for
every two frames of film (which usually runs at 24 frames per second), meaning there are
only 12 drawings per second. Even though the image update rate is low, the fluidity is
satisfactory for most subjects. However, when a character is required to perform a quick
movement, it is usually necessary to revert to animating "on ones", as "twos" are too slow to
convey the motion adequately. A blend of the two techniques keeps the eye fooled without
unnecessary production cost.
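The arithmetic behind shooting on ones versus twos is straightforward, as this tiny Python sketch shows:

# Drawings needed for a 5-second shot at 24 frames per second.
frames = 5 * 24  # 120 frames of film

print("on ones:", frames)        # 120 drawings, one per frame
print("on twos:", frames // 2)   # 60 drawings, each held for 2 frames

Halving the drawing count is exactly the production saving that makes shooting on twos attractive.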
Academy Award-nominated animator Bill Plympton is noted for his style of animation
that uses very few inbetweens and sequences that are done on threes or on fours, holding each
drawing on the screen from an eighth to a sixth of a second.
Animation loops
A horse animated by rotoscoping from Eadweard Muybridge's 19th-century photos is shown in Figure 4.14. The animation consists of 8 drawings which are "looped", i.e. repeated over and over. This example is also "shot on twos", i.e. shown at 12 drawings per second.
Figure 4.14 A Frame of an animated Loop
Creating animation loops or animation cycles is a labor-saving technique for animating
repetitive motions, such as a character walking or a breeze blowing through the trees. In the
case of walking, the character is animated taking a step with his right foot, then a step with
his left foot. The loop is created so that, when the sequence repeats, the motion is seamless.
However, since an animation loop essentially uses the same bit of animation over and over
again, it is easily detected and can in fact become distracting to an audience. In general, they
are used only sparingly by productions with moderate or high budgets.
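In code, a loop is just the same handful of drawings indexed over and over; a minimal Python sketch:

# A walk cycle as a loop: 8 drawings reused repeatedly,
# "shot on twos" so each drawing is held for 2 frames.
drawings = 8
frames_per_drawing = 2

for frame in range(24):  # one second of film at 24 fps
    current = (frame // frames_per_drawing) % drawings
    print("frame", frame, "-> drawing", current)

The modulo operator is what makes the sequence repeat seamlessly once the last drawing has been shown.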
Multiplane camera
In 2D animated movies, the multiplane camera is used as a tool to enhance depth to scenes,
called the multiplane effect or the parallax process. In this technique the art is placed on
different layers of glass plates as shown in Figure 4.15, and as the camera moves vertically
towards or away from the artwork levels, the camera's viewpoint appears to move through the
various layers of artwork in 3D space. Pinocchio is a good example of the panorama views
by the effects a multiplane camera can achieve. Over the time different versions of the
camera have been made, but the most well-known is the one developed by the Walt Disney
studio beginning with their 1937 short The Old Mill.
Figure 4.15 Multiplane camera
Xerography
Xerography was applied to animation by Ub Iwerks at the Walt Disney studio during the late 1950s. This electrostatic copying technique allowed the drawings to be copied directly onto the cels, eliminating much of the "inking" portion of the ink-and-paint process. This saved time and money, and it also made it possible to add more detail and to control the size of the xeroxed objects and characters. At first it resulted in a sketchier look, but the technique was improved upon over time.
The APT process
Invented by Dave Spencer for the 1985 Disney film The Black Cauldron, the APT
(Animation Photo Transfer) process was a technique for transferring the animators' art onto
cels. Basically, the process was a modification of a repro-photographic process; the artists'
works were photographed on high-contrast "litho" film, and the image on the resulting
negative was then transferred to a cel covered with a layer of light sensitive dye. The cel was
exposed through the negative. Chemicals were then used to remove the unexposed portion.
Small and delicate details were still inked by hand if needed. Spencer received an Academy
Award for Technical Achievement for developing this process.
Check Your Progress 1
What is an animation?
How the illusion of movement in animation created?
What is the advantage of zoetrope over the basic phenakistoscope?
To whom early film animators mentioned their inspiration?
What is the first known instance of film perforations being used?
Name the first animated film which used the traditional animation creation methods.
Who is Father of character animation?
4.3 INNOVATIONS BY ANIMATORS AT DISNEY
The multiplane camera used in the traditional animation process is a special motion picture camera that moves a number of pieces of artwork at various speeds and distances from one another. This creates a 3D effect, although not an actual stereoscopic one.
A horizontal camera was invented in 1933 by Ub Iwerks, former Walt Disney Studios
animator/director, using parts from an old Chevrolet automobile. His multiplane camera was
used to create a number of the Iwerks Studio's cartoons in the mid-1930s.
Xerography or electrophotography is a dry photocopying technique invented by Chester Carlson in 1938, for which he was later awarded a US patent in 1942. Carlson initially called his invention electrophotography. It was later renamed xerography, from the Greek xeros meaning dry and graphos meaning writing, to emphasize that, unlike reproduction techniques then in use such as cyanotype, this process used no liquid chemicals.
Computer Animation Production System (CAPS) is a collection of software
programs, scanning camera systems, servers, networked computer workstations and custom
desks that were all developed by the Walt Disney Company in collaboration with Pixar in the
late 1980s. Its main objective was to automate the ink and paint as well as the post-
production processes. The previously animated feature films from the Walt Disney
Animation Studios and Pixar Animation Studios stables deployed this method extensively.
CAPS was the first to use digital ink and paint system in animated films. This
replaced the process of transporting animated drawings to cels through the use of expensive
India ink or xerographic technology, and subsequently painting the cels' reverse sides with
powder paint. Coloring the enclosed areas and lines using an unlimited palette through the
use of CAPS system, in the digital environment, was easy. Complicated techniques like
blending color were now possible.
Evolution of the system
The CAPS process was used for the first time for Mickey standing on Epcot's Spaceship Earth in The Magical World of Disney titles. The system's first use in a feature film was in the production of The Little Mermaid in 1989; however, its use was limited to the rainbow sequence at the end of the film. Later, CAPS' 2D and 3D integration technique proved beneficial to The Lion King, Aladdin and a few more films.
For the special edition IMAX and DVD versions of Beauty and the Beast, Aladdin and The Lion King, new renderings were done and recorded to new master formats.
The CAPS team won an Academy of Motion Picture Arts and Sciences Scientific and Engineering Award.
Check Your Progress 2
Who invented a horizontal camera?
What is Xerography or electrophotography?
What does CAPS stand for?
Who used digital ink and paint system in animated films for the first time?
When CAPS process was used for the first time?
4.4 TYPES OF ANIMATION
1. Traditional animation
Traditional animation also called cel animation or hand-drawn animation was the process
used for most animated films of the 20th century. In this the individual frames are
photographs of drawings, first drawn on paper. To produce the illusion of movement,
each drawing differs a little from the one before it. The animators' drawings are traced or
photocopied onto transparent acetate sheets called cels, which are filled in with paints in
assigned colors or tones on the side opposite the line drawings. The completed character
cels are photographed one-by-one against a painted background by a rostrum camera onto
motion picture film.
Examples of traditionally animated feature films include Pinocchio (1940).
Traditionally animated films which were produced with the aid of computer technology
include The Lion King (1994) and The Prince of Egypt (1998).
• Full animation: It refers to the process of creating high-quality traditionally animated
films that regularly use complete drawings and possible movement, having a smooth
animation. Fully animated films are animated at 24 frames per second, with a
combination of animation on ones and twos, meaning that drawings can be held for
one frame out of 24 or two frames out of 24. Fully animated films can be made in a
variety of styles, from more realistically animated works those produced by the Walt
Disney studio (The Little Mermaid, Beauty and the Beast) to the more 'cartoon' styles
of the Warner Bros. animation studio.
• Limited animation: It contains the use of less detailed or more stylized drawings and
techniques of movement usually a choppy or "skippy" movement animation. Limited
animation uses less drawings per second, thereby limiting the flexibility of the
animation. This is a more economic technique. Its primary use, however, has been in
producing cost-effective animated content for media for television.
• Rotoscoping: It is a method where animators trace live-action movement, frame by
frame. The source film can be directly copied from actors' outlines into animated
drawings. A classic example is The Lord of the Rings (1978).
• Live-action/animation: It is a procedure merging hand-drawn characters into live
action shots or live action actors into animated shots as shown in Figure 4.16. One of
the earlier uses was in Koko the Clown when Koko was drawn over live action
footage.
Figure 4.16 Live-action/animation
2. Stop-motion animation
It is used to describe animation created by physically manipulating real-world objects and
photographing them one frame of film at a time to create the illusion of movement. There
are many different types of stop-motion animation, typically named after the medium
used to create the animation. Computer software is commonly available to create this type
of animation; however, traditional stop motion animation is usually less expensive and
time-consuming to produce than current computer animation.
• Puppet animation: It usually involves stop-motion puppet figures interacting in an
assembled environment, in contrast to real-world interaction in model animation. The
puppets normally have a skeleton inside of them to keep them still and steady to
constrain their motion to particular joints.
• Clay animation: It is also known as Plasticine or Claymation, uses figures made of
clay or a similar soft material to create stop-motion animation as shown in Figure
4.17. The figures may have a skeleton or wire frame inside, similar to the related
puppet animation that can be manipulated to pose the figures. Otherwise, the figures
may be made completely of clay, where clay creatures morph into a variety of
different shapes.
Figure 4.17 Clay animation
• Cutout animation: It is a type of stop-motion animation produced by moving two-dimensional pieces of material such as paper or cloth. Cutout animation is probably one of the oldest forms of stop-motion animation in the history of animation. Figure 4.18 shows one example of it.
Figure 4.18 Cut out Animation
o Silhouette animation: It is an alternative of cutout animation in which the
characters are backlit and only visible as silhouettes. The Adventures of Prince
Achmed is a good example of it.
• Model animation: It refers to stop-motion animation created to work together with as
a part of a live-action world. Intercutting, matte effects, and split screens are often
employed to blend stop-motion characters or objects with live actors and settings.
• Object animation: It states to the use of regular inanimate objects in stop-motion
animation, as opposed to specially created items.
o Graphic animation: It uses non-drawn flat visual graphic material such as
photographs, newspaper clippings, magazines, etc. which are sometimes
manipulated frame-by-frame to create movement. At other times, the graphics
remain stationary, while the stop-motion camera is moved to create on-screen
action.
Figure 4.19 Graphic animation
o Brickfilm: It is a sub-genre of object animation that uses Lego or other similar brick toys to make an animation.
• Pixilation: It contains the use of live humans as stop motion characters. This allows
for a number of unreal effects, including disappearances and reappearances, allowing
people to appear to slide across the ground, and other effects. Examples of pixilation
include The Secret Adventures of Tom Thumb and Angry Kid shorts.
3. Computer animation
Computer animation involves a variety of techniques, the unifying factor being that the animation is created digitally on a computer. 2D animation techniques tend to focus on image manipulation, while 3D techniques construct virtual worlds in which characters and objects move and interact, creating images that can seem real to the viewer.
• 2D animation: In 2D animation figures are created or edited on the computer using
2D bitmap graphics or 2D vector graphics. This consists of automated computerized
versions of traditional animation techniques, such as interpolated morphing, onion
skinning and interpolated rotoscoping.
2D animation has several applications, including analog computer animation,
Flash animation and PowerPoint animation.
• 3D animation: It is the manipulation of three-dimensional objects and virtual environments with the use of a computer program (see the data-structure sketch after this list). The objects are digitally modeled and manipulated by an animator. The animator usually starts by creating a 3D polygon mesh to manipulate. A mesh typically includes many vertices that are connected by edges and faces, which give the visual appearance of form to a 3D object or 3D environment. Sometimes the mesh is given an internal digital skeletal structure called an armature, which can be used to control the mesh by weighting the vertices. This process is called rigging and can be used in combination with keyframes to create movement.
Other techniques under the 3D dynamics category can also be applied, such as mathematical functions (e.g., gravity, particle simulations), simulated fur or hair, and fire and water simulation effects.
4. Mechanical animation
• Animatronics: It is the use of mechatronics to create machines which seem animate
rather than robotic.
• Chuckimation: It is a type of animation produced by the creators of the television
series Action League Now! in which characters/props are thrown, or chucked from off
camera or wiggled around to simulate talking by unseen hands.
• Puppetry: It is a form of theatre or performance animation that involves the
manipulation of puppets. It is very ancient, and is believed to have originated 3000
years BC. Puppetry takes many methods; they all share the process of animating
inanimate performing objects. Most puppetry consists of storytelling.
Figure 4.20 Puppetry
• Zoetrope: It is a device that produces the illusion of motion from a rapid succession
of static pictures. A circle of paper with drawings on was used, each drawing slightly
different from the previous one. This paper was placed in the zoetrope, which had
small slits around the outside. When the zoetrope was spun round, the pictures would
pass each of the slits at speed, creating the illusion of movement.
Other animation approaches
• Hydrotechnics: It is a technique that contains lights, water, fire, fog, and lasers, with
high-definition projections on mist screens.
• Drawn on film animation: This technique involves scratching, etching directly on an
exposed film reel.
• Paint-on-glass animation: It is a technique for making animated films by
manipulating slow drying oil paints on sheets of glass.
• Pinscreen animation: It makes use of a screen filled with movable pins that can be moved in or out by pressing an object onto the screen. The screen is lit from the side so that the pins cast shadows. The technique has been used to create animated films with a range of textural effects that are difficult to achieve with traditional cel animation.
• Sand animation: In this kind of method, sand is moved around on a back- or front-
lighted piece of glass to create each frame for an animated film. This creates an
interesting effect when animated because of the light contrast.
Figure 4.21 Sand animation
• Flip book: It is a book with a series of pictures that vary gradually from one page to the next, so that when the pages are turned rapidly, the pictures appear to animate by simulating motion or some other change. Flip books are often illustrated books for children, but they may also be geared towards adults and employ a series of photographs rather than drawings.
• Typography Animation: It is a combination of text in motion. Typography
animation is widely used during the titles part of a movie.
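The 3D-animation sketch referred to in the list above: a minimal Python representation of a polygon mesh and a keyframed translation (all names and values here are illustrative, not taken from any particular animation package):

# A polygon mesh as lists of vertices, edges and faces (a unit square).
vertices = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]  # (x, y, z)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # pairs of vertex indices
faces = [(0, 1, 2, 3)]                     # one quad face

# Keyframed movement: translate every vertex along x over time t.
def translate_x(verts, t):
    return [(x + t, y, z) for (x, y, z) in verts]

print(translate_x(vertices, 0.5)[0])  # vertex 0 at time t = 0.5

A real animation package interpolates such transformations between keyframes and applies them through the rig rather than to raw vertices.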
Check Your Progress 3
What is rotoscoping?
How to name various stop-motion animation?
Which traditional animation techniques does 2D animation have?
What is 3D animation?
What is Typography Animation?
4.5 SOFTWARE FOR ANIMATION
Animation is a type of art that has captured the dreams of humans for quite a long time. It has
given humans the choice to represent new possibilities and new worlds in a creative and more
engaging way. In our modern life we find animation in all aspects, from simple interactive
web pages to games, videos and movies.
Developers continue turning out new animation software as animation technology progresses. Though some of these packages may be suited to experts, most are user friendly for beginners as well. In the following section you will learn about some 2D and 3D animation software for both experts and beginners.
2D animation software
1. Anime Studio: The champion of 2D animation software is Anime Studio. It has all the features needed for creating 2D animations. What sets it apart from other software is its capability to produce quality animations, its remarkable speed, and the bunch of free plug-ins it comes with. You can add shadows and shading to characters in a couple of easy mouse clicks.
Anime Studio also has a Pro version, which is even more capable and comes with a host of features for the professional animator. For example, you can save and reuse animations you have previously created for your characters, and change the hair, skin and colors rapidly to suit your characters. It works with all versions of Windows as well as Mac OS.
2. Adobe Animate: Adobe Animate, formerly known as Adobe Flash Professional, is a multimedia authoring and computer animation program developed by Adobe Systems. The first version under the new name was released on February 8, 2016.
Adobe's animation tool is still one of the best 2D animation packages on the market even after being around for such a long time. It has managed to beat off competition from some of the newer software because it is easy to use, and most animators find it very flexible, especially when it comes to web development. It is generally used to design vector graphics and animation, and to publish the same for websites, web applications, television programs, online video, rich internet applications and video games. The program also offers support for raster graphics, rich text, audio and video embedding, and ActionScript scripting. Animations can be published to various formats such as HTML5 and WebGL.
However, it has some drawbacks: it lacks some key tool sets that would really help animators, especially in this era when animators have to create something fascinating. Though it provides a great interactive experience, it is not perfect for making cartoons.
3. Toon Boom: Toon Boom is the most commonly used 2D animation software in the professional industry, as it is more user friendly. If you are searching for software that will help you make professional animations, then Toon Boom is the best solution.
Toon Boom has a lot of features and can produce amazing graphics. Its drawing tools are outstanding and it produces excellent final products. However, getting used to its fluid interface takes a little time. When it comes to price it is also quite costly, but it is still a great tool for creating 2D animations.
3D Animation Software
1. Maya: Maya is owned by Autodesk. It is currently regarded as the industry standard for 3D animated movies, games, television and the computer-generated 3D effects used in live entertainment. Maya is always up to date and fully featured, and is the ideal program for those who are motivated to become proficient animators. It allows a huge array of shading and lighting effects, because of which Maya is fast becoming the popular choice of software for many film makers. Another of its best features is easy customization, meaning you can easily integrate third-party software.
Features of Maya
• Fluid Effects: A realistic fluid simulator based on simplified, incompressible Navier-Stokes equations for simulating non-elastic fluids was added in Maya 4.5. This is effective for creating effects such as smoke, fire, clouds and explosions, as well as many thick fluid effects such as water, lava or mud.
• Bifröst: Bifröst is a computational fluid dynamics framework based on fluid-implicit
particle simulation. It is available in Maya 2015 and later, following the acquisition of
Naiad fluid simulation technology from Exotic Matter. Bifröst allows liquids to be
modelled realistically, including details such as foam, waves and droplets.
• Fur: Fur simulation designed for large area coverage of short hairs and hair-like
materials. It can be used to simulate short fur-like objects, such as grass, carpet, etc. In
contrast to Maya Hair, the Fur module makes no attempt to prevent hair-to-hair
collisions. Hairs are also incapable of reacting dynamically to physical forces on a per
hair basis.
• nHair: Hair simulator is capable of simulating dynamic forces acting on long hair and
per-hair collisions. Often it is used to simulate computationally complex human hair
styles including pony tails, perms and braids. The simulation utilizes NURBS curves
as a base which are then used as strokes for Paint Effects brushes thereby giving the
curves a render time surface-like representation that can interact with light and
shadow.
• nCloth: Added in version 8.5, nCloth is the first implementation of Maya Nucleus, Autodesk's simulation framework. nCloth offers artists detailed control of cloth and material simulations. Compared to its predecessor, Maya Cloth, nCloth is a faster, more flexible and more robust simulation framework.
• nParticle: Added in version 2009, nParticle is an addition to the Maya Nucleus toolset. nParticle is for simulating a wide range of complex 3D effects, including liquids, clouds, smoke, spray and dust. nParticles also interact with the rest of the Nucleus simulation framework without the need for costly workarounds and custom scripting.
• MatchMover: Added to Maya 2010, this supports compositing of CGI elements with
motion data from video and film sequences, a process known as Match moving or
camera tracking. This is an external program but is shipped with Maya.
• Composite: Added to Maya 2010, this was earlier sold as Autodesk Toxik. This is an
external program but is shipped with Maya.
• Camera Sequencer: Added in Autodesk Maya 2011, Camera Sequencer is used to
layout multiple camera-shots and manage them in one animation sequence.
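Maya's features are scriptable: the package embeds both MEL and a full Python interpreter (the maya.cmds module). As a minimal illustration, the following sketch, which must be run inside Maya's Script Editor, creates a sphere and keyframes a simple two-second bounce; the object name 'bouncingBall' is just an illustrative choice.

    import maya.cmds as cmds  # only available inside Maya's bundled Python

    # Create a polygon sphere; polySphere returns [transformName, shapeName].
    ball = cmds.polySphere(name='bouncingBall')[0]

    # Keyframe translateY at frames 1, 24 and 48 (two seconds at 24 fps).
    cmds.setKeyframe(ball, attribute='translateY', time=1, value=0)
    cmds.setKeyframe(ball, attribute='translateY', time=24, value=10)
    cmds.setKeyframe(ball, attribute='translateY', time=48, value=0)

Playing back the timeline then shows the sphere rising and falling, exactly the kind of keyframed motion the features above build upon.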

2. 3D Studio Max: This is another 3D-animation software from Autodesk. It is a
professional 3D computer graphics program for making 3D animations, models, games
and images. It is more popular with industrial designers and architects, though the film
and video game industries have also begun to adopt it. It is well suited to the
development of 3D games. However, it is one of the most expensive 3D animation
packages, and typically only big studios can afford it. There is almost nothing other
software on the market can accomplish that Max cannot.
Features of Max
• MAXScript: MAXScript is a built-in scripting language that can be used to automate
repetitive tasks, combine existing functionality in new ways, develop new tools
and user interfaces, and much more. Plugin modules can be created entirely within
MAXScript (a Python-bridge scripting sketch follows this feature list).
• Character Studio: Character Studio was a plugin that, since version 4 of Max, has been
integrated into 3D Studio Max, helping users to animate virtual characters. The
system works using a character rig or "Biped" skeleton which has stock settings that
can be modified and customized to fit character meshes and animation needs.
• Scene Explorer: Scene Explorer, a tool that provides a hierarchical view of scene
data and analysis, helps working with more complex scenes. Scene Explorer has the
ability to sort, filter, and search a scene by any object type or property (including
metadata).
• DWG import: 3ds Max supports both import and linking of DWG files. Improved
memory management in 3ds Max 2008 enables larger scenes to be imported with
multiple objects.
• Texture assignment/editing: 3ds Max offers operations for creative texture and
planar mapping, including tiling, mirroring, labels, angle, rotate, blur, UV stretching,
and relaxation; Remove Distortion; Preserve UV; and UV template image export. The
texture workflow includes the ability to combine an unlimited number of textures, a
material/map browser with support for drag-and-drop assignment, and hierarchies
with thumbnails.
• General keyframing: It has two keying modes, set key and auto key, which offer
support for different keyframing workflows. Fast and intuitive controls for
keyframing including cut, copy, and paste enable the user to create animations with
ease. Animation trajectories may be viewed and edited directly in the viewport.
• Constrained animation: Objects can be animated along given curves with controls
for alignment, banking, velocity, smoothness and looping, and along surfaces with
controls for alignment. Path-controlled animation can be weighted between multiple
curves, and the weight itself can be animated. Objects can also be constrained to
animate with other objects in many ways, including look-at, orientation in different
coordinate spaces, and linking at different points in time. All resulting constrained
animation can be collapsed into standard keyframes for further editing.
• Skinning: Skin or Physique modifier can be used to achieve precise control of
skeletal deformation, so the character deforms smoothly as joints are moved. Skin
deformation can be controlled using direct vertex weights, volumes of vertices
defined by envelopes, or both.
The rigid bind skinning option is useful for animating low-polygon models or
as a diagnostic tool for regular skeleton animation. Additional modifiers, such as Skin
Wrap and Skin Morph, can be used to drive meshes with other meshes and make
targeted weighting adjustments in tricky areas.
• Integrated Cloth solver: In addition to reactor’s cloth modifier, 3ds Max software
has an integrated cloth-simulation engine that allows the user to turn almost any 3D
object into clothing, or build garments from scratch. Collision solving is fast and
accurate even in complex simulations.
Local simulation enables user to drape cloth in real time to set up an initial
clothing state before setting animation keys.
• Integration with Autodesk Vault: Autodesk Vault plug-in, which ships with 3ds
Max, combines users’ 3ds Max assets in a single location, enabling them to
automatically track files and manage work in progress. Users can easily and safely
share, find, and reuse 3ds Max (and design) assets in a large-scale production or
visualization environment.
• Max Creation Graph: Introduced with Max 2016, Max Creation Graph (MCG)
enables user to create modifiers, geometry, and utility plug-ins using a visual node-
based workflow.
With MCG you can create a new plug-in for 3ds Max in minutes by simply
wiring together parameter nodes, computation nodes, and output nodes.
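Besides MAXScript itself, recent versions of 3ds Max also expose the MAXScript runtime to Python through the pymxs bridge. The following is only a minimal sketch, assuming a version of Max that ships pymxs, run from Max's scripting listener in Python mode; it creates a sphere node and sets one of its properties.

    from pymxs import runtime as rt  # Max's Python-to-MAXScript bridge

    # Create a sphere node and reposition it using MAXScript value types.
    ball = rt.Sphere(radius=10.0)
    ball.position = rt.Point3(0, 0, 25)
    print(ball.name)  # Max assigns a default node name such as 'Sphere001'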
3. Lightwave 3D: LightWave is highly professional 3D animation software. It is mostly used
to create movies and special effects. The software comes with many features and requires
a lot of learning before you master the art of using it. The best part is that it
comes with a 30-day free trial and is compatible with all the latest operating systems. The
quality of the animations is of a high standard, and the speed with which you can create
animations is excellent.
Features of Lightwave
• Dynamics: LightWave offers dynamics physics systems supporting hard and soft
body motion, deformation, constraint, motorization, environments, and particles. It
interacts with 3D object models, bones, and hair (FiberFX).
• Hypervoxels: Hypervoxels are a tool to render different particle animation effects.
Different modes of operation can generate appearances that mimic water or mercury
(including reflection and refraction surface settings), volume shading for simulating
cloud- or fog-type effects, and sprites able to reproduce effects such as fire or
flocking birds.
• Material shaders: LightWave comes with a nodal texture editor that comes with a
collection of special-purpose material shaders. Some of the types of surface for which
these shaders have been optimized include:
o General-purpose subsurface scattering materials for materials like wax or
plastics
o Realistic skin, including subsurface scattering and multiple skin layers
o Metallic, reflective, materials using energy conservation algorithms
o Transparent, refractive materials including accurate total internal reflection
algorithms
• Nodes: NewTek extended LightWave's parameter setting capabilities with a node
graph architecture (Node Editor) for LightWave 9. This Editor empowered broad
hierarchical parameter setting on top of its fixed and stack-based parameter setting
support. Example node types include mathematical, script, gradient, sample, instance,
group, and shader. Nodes are usable within the Surface Editor, Mesh Displacement,
and Virtual Studio features.
• Scripting: LScript is one of LightWave's scripting languages. It provides a wide-
ranging set of prebuilt functions you can use when scripting how LightWave behaves.
It also has Python support as an option for custom scripting.
4. Blender: If you are working on a budget or simply making 3D animations for your own
projects, then Blender is the best option for you. It is perfect not only for creating simple
cartoons but also for other small 3D animation projects. It is free software; though it does
not equal Maya or LightWave in quality and features, it is good software in its own
right.
Features of Blender
• It supports a wide range of geometric primitives, including polygon meshes, fast
subdivision surface modeling, Bézier curves, NURBS surfaces, metaballs, icospheres,
multi-res digital sculpting (including dynamic topology, maps baking, remeshing,
resymmetrize, decimation), outline fonts, and a new n-gon modeling system called B-
mesh.
• Internal render engine with scanline rendering, indirect lighting, and ambient
occlusion that can export in a wide variety of formats.
• A pathtracer render engine called Cycles, which can take advantage of the GPU for
rendering. Cycles supports the Open Shading Language since Blender 2.65.
• Integration with a number of external render engines through plugins.
• Keyframed animation tools including inverse kinematics, armature (skeletal), hook,
curve and lattice-based deformations, shape animations, non-linear animation,
constraints, and vertex weighting.
• Simulation tools for Soft body dynamics including mesh collision detection, LBM
fluid dynamics, smoke simulation, Bullet rigid body dynamics, ocean generator with
waves.
• The Blender Game Engine, a sub-project, offers interactivity features such as collision
detection, dynamics engine, and programmable logic. It also allows the creation of
stand-alone, real-time applications ranging from architectural visualization to video
game construction.
• It has Real-time control during physics simulation and rendering.
• It uses Python scripting for tool creation and prototyping, game logic, importing
and/or exporting from other formats, task automation and custom tools (a short
scripting sketch follows this list).
• It supports Basic non-linear video/audio editing.
• A fully integrated node-based compositor within the rendering pipeline accelerated
with OpenCL.
• Procedural and node-based textures, with texture painting, projective painting,
vertex painting, weight painting and dynamic painting.
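Blender's Python API (the bpy module) is the basis for the scripting features listed above. A minimal sketch, to be run from Blender's built-in Python console or Text Editor: it adds a cube and keyframes its location over one second at 24 fps.

    import bpy  # Blender's built-in Python API; run inside Blender

    # Add a cube at the origin and grab a reference to it.
    bpy.ops.mesh.primitive_cube_add(location=(0.0, 0.0, 0.0))
    cube = bpy.context.active_object

    # Keyframe the location on frame 1, move the cube, keyframe again on frame 24.
    cube.keyframe_insert(data_path="location", frame=1)
    cube.location = (0.0, 0.0, 5.0)
    cube.keyframe_insert(data_path="location", frame=24)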

5. SketchUp: SketchUp, previously known as Google SketchUp, is a 3D modeling computer
program currently owned by Trimble Navigation, a mapping, surveying and navigation
equipment company. It is used for a wide range of drawing applications such as civil and
mechanical engineering, architectural and interior design, film, and video game design. It is
available in two versions: SketchUp Make, a freeware version, and SketchUp Pro, a paid
version with additional functionality. There is an online open source library of free model
assemblies (e.g. windows, doors, automobiles), 3D Warehouse, to which users may
contribute models. The program includes drawing layout functionality, allows surface
rendering in variable "styles", supports third-party "plug-in" programs hosted on a site
called Extension Warehouse to provide other capabilities (e.g. near photo-realistic
rendering), and enables placement of its models within Google Earth. It allows designers
to play with their designs in a way which is not possible with traditional design software.
Features of SketchUp

• SketchUp 3D Warehouse: It is an open source library in which SketchUp users may
upload and download 3D models to share. The models can be downloaded right into
the program without anything having to be saved onto your computer's storage. File
sizes of the models can be up to 50 MB.

The main advantage is that anyone can make, modify and re-upload content to and
from the 3D Warehouse free of charge. All the models in 3D Warehouse are free, so
anyone can download files for use in SketchUp or even other software such as
AutoCAD, Revit and ArchiCAD, all of which have apps allowing the retrieval of models
from 3D Warehouse.
In 2014 Trimble launched a new version of 3D Warehouse where
companies may have an official page with their own 3D catalog of products. Trimble
is currently investing in creating 3D developer partners in order to have more
professionally modeled products available in 3D Warehouse. According to
Trimble, 3D Warehouse is the most popular 3D content site on the web.
6. ZBrush: Unlike other traditional 3D animation software, ZBrush is a digital sculpting
tool. It combines 3D/2.5D modeling, texturing and painting. It uses a proprietary "pixol"
technology which stores lighting, color, material, and depth information for all objects on
the screen.
ZBrush is used for creating high-resolution models (able to reach 40+ million
polygons) for use in movies, games, and animations, by various companies. ZBrush uses
dynamic levels of resolution to allow sculptors to make global or local changes to their
models.
ZBrush is most known for being able to sculpt medium to high frequency details that
were traditionally painted in bump maps. The resulting mesh details can then be exported
as normal maps to be used on a low poly version of that same model. They can also be
exported as a displacement map, although in that case the lower poly version generally
requires more resolution. Or, once completed, the 3D model can be projected to the
background, becoming a 2.5D image (upon which further effects can be applied). Work
can then begin on another 3D model which can be used in the same scene. This feature
lets users work with complicated scenes without heavy processor overhead.
Features of ZBrush
• Pixol: Like a pixel, each pixol contains information on X and Y position and color
values. Additionally, it contains information on depth (or Z position), orientation and
material. ZBrush-related files store pixol information, but when these maps are
exported (e.g., to JPEG or PNG formats) they are flattened and the pixol data is lost
(an illustrative data-structure sketch follows this feature list).
• 3D Brushes: ZBrush comes with many features to aid in the sculpting of models and
polygon meshes. The initial ZBrush download comes with 30 default 3D sculpting
brushes with more available for download. Each brush offers unique attributes as well
as allowing general control over hardness, intensity, and size.
• Polypaint: It allows users to paint on an object's surface without the need to first
assign a texture map by adding color directly to the polygons.
• Transpose: ZBrush also has a feature that is similar to skeletal animation in other 3D
programs. The transpose feature allows a user to isolate a part of the model and pose
it without the need of skeletal rigging.
• ZSpheres: A user can create a base mesh with uniform topology and then convert it
into a sculptable model by starting out with a simple sphere and extracting more
"ZSpheres" until the basic shape of the desired model is created.
• GoZ: Introduced in ZBrush 3.2 OSX, GoZ automates setting up shading networks for
normal, displacement, and texture maps of the 3D models in GoZ-enabled
applications. Upon sending the mesh back to ZBrush, GoZ will automatically remap
the existing high-resolution details to the incoming mesh. GoZ will take care of
operations such as correcting points & polygons order. The updated mesh is
immediately ready for further detailing, map extractions, and transferring to any other
GoZ-enabled application.
• Best Preview Render: It also includes a full render suite known as Best Preview
Render, which allows use of full 360° environment maps to light scenes using HDRI
images. BPR includes a new light manipulation system called LightCaps. With it, one
can not only adjust how the lights in the scene are placed around the model, but also
generate environments based on it for HDRI rendering later on. It also allows for
material adjustments in real time.
• DynaMesh: It allows ZBrush to quickly generate a new model with uniform polygon
distribution, to improve the topology of models and eliminate polygon stretching.
• Fibermesh: It is a feature that lets users grow polygon fibers out of their models or
make various botanical items. It is also a way to edit and manipulate large amounts of
polygons at once with Groom brushes.
• ZRemesher: An automatic retopology system, previously called QRemesher, that
creates new topology based on the original mesh. The new topology is generally
cleaner and more uniform. This process can also be guided by the user to make the new
topology follow curves in the model and retain more detail in specified areas.
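The pixol described above can be pictured as an ordinary pixel extended with extra per-point channels. ZBrush's actual storage format is proprietary, so the following Python dataclass is purely an illustrative sketch of the kind of information each pixol carries:

    from dataclasses import dataclass

    @dataclass
    class Pixol:
        """Illustration only: a 2D pixel extended with ZBrush-style channels."""
        x: int                # screen position
        y: int
        color: tuple          # (r, g, b) color values
        depth: float          # z position; lost when exporting to JPEG/PNG
        orientation: tuple    # surface orientation, e.g. a normal (nx, ny, nz)
        material_id: int      # index into a table of materials

    p = Pixol(x=10, y=20, color=(255, 128, 0), depth=3.5,
              orientation=(0.0, 0.0, 1.0), material_id=2)

Flattening to an ordinary image format keeps only x, y and color, which is exactly why exported maps lose the pixol data.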
7. Houdini: It is 3D animation application software developed by Side Effects Software
based in Toronto. Side Effects adapted Houdini from the PRISMS suite of procedural
generation software tools. Its exclusive attention to procedural generation distinguishes it
from other 3D computer graphics software.
Features of Houdini
• Modeling: All standard geometry entities including Polygons, (Hierarchical)
NURBs/Bézier Curves/Patches & Trims, Metaballs
• Animation: Keyframed animation and raw channel manipulation (CHOPs), motion
capture support
• Dynamics: Rigid Body Dynamics, Fluid Dynamics, Wire Dynamics, Cloth
Simulation, Crowd simulation.
• Lighting: Node-based shader authoring, lighting and re-lighting in an IPR viewer
• Rendering: Houdini ships with its native and powerful rendering engine Mantra, but
the Houdini Indie license (Houdini version for indie developers) supports other 3rd
party rendering engines such as: Renderman, Octane, Arnold.
• Volumetrics: With its native CloudFx and PyroFx toolsets, Houdini can create
clouds, smoke and fire simulations.
• Compositing: Full compositor of floating-point deep (layered) images.
• Houdini is an open environment and supports a variety of scripting APIs. Python is
increasingly the scripting language of choice for the package and is intended to
replace its original CShell-like scripting language, Hscript. However, any major
scripting language that supports socket communication can interface with Houdini.
4.5.1 Renderers
Rendering is the final process of creating the actual 2D image or animation from the
prepared scene. In other words, 3D rendering is the 3D computer graphics process of
automatically converting 3D wireframe models into 2D images, with 3D photorealistic
effects or non-photorealistic rendering, on a computer.
Rendering can be compared to taking a photo or filming the scene after the setup is
finished in real life. Several diverse, and often dedicated, rendering methods have been
developed. These range from the distinctly non-realistic wireframe rendering through
polygon-based rendering to more advanced techniques such as scanline rendering, ray
tracing and radiosity. The time required for rendering may vary from fractions of a second to
days for a single image/frame. In general, different methods are better suited for either photo-
realistic rendering or real-time rendering.
Rendering is used in films, architecture, video games, simulators, visual effects and
design visualization, each employing a different balance of features and techniques. As a
product, a wide variety of renderers are available. Some are integrated into larger modeling
and animation packages, some are stand-alone, and some are free open-source projects.
Real-time rendering:
Rendering for interactive media, such as games and simulations, is calculated and
displayed in real time, at rates of approximately 20 to 120 frames per second. In real-time
rendering, the aim is to show as much information as the eye can process in a
fraction of a second. The primary goal is to achieve the highest possible degree of
photorealism at an acceptable minimum rendering speed.
Rendering software may simulate such visual effects as lens flares, depth of field or
motion blur. The rapid increase in computer processing power has allowed a progressively
higher degree of realism even for real-time rendering, including techniques such as HDR
rendering.
Non-real-time rendering:
Animations for non-interactive media, such as feature films and video, are rendered much
more slowly. Non-real-time rendering enables the leveraging of limited processing power in
order to obtain higher image quality. Rendering times for individual frames may vary from a
few seconds to several days for complex scenes. Rendered frames are stored on a hard disk
and can then be transferred to other media such as motion picture film or optical disk. These
frames are then displayed sequentially at high frame rates, typically 24, 25, or 30 frames per
second, to achieve the illusion of movement. When the goal is photo-realism, techniques such
as ray tracing or radiosity are employed. This is the basic method employed in digital media
and artistic works.
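These frame rates translate directly into render budgets. A quick back-of-the-envelope calculation in plain Python (the 30-minutes-per-frame figure is only an assumed average for a complex scene):

    fps = 24                  # cinema playback rate
    duration_s = 90 * 60      # a 90-minute feature, in seconds
    minutes_per_frame = 30    # assumed average non-real-time render cost

    frames = fps * duration_s
    render_hours = frames * minutes_per_frame / 60
    print(frames)             # 129600 frames to render
    print(render_hours)       # 64800.0 machine-hours for one pass

Numbers like these are why studios render on large farms rather than on single workstations.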
Let us now look at a few of these renderers.
• Mental Ray:
It is a production-quality rendering application developed by Mental Images (Berlin,
Germany). As the name suggests, it supports ray tracing to produce images. Mental
Images was bought in December 2007 by NVIDIA.
The key feature of Mental Ray is the achievement of high performance through
parallelism on both multiprocessor machines and across render farms. The software uses
acceleration techniques such as scanline for primary visible surface determination and
binary space partitioning for secondary rays. It also supports caustics and physically
correct simulation of global illumination employing photon maps. Any combination of
diffuse, glossy (soft or scattered), and specular reflection and transmission can be
simulated.
It was designed to be integrated into a third-party application using an API or be used
as a standalone program using the .mi scene file format for batch-mode rendering.
Presently there are many programs integrating it such as Autodesk Maya, 3D Studio Max,
AutoCAD, Cinema 4D etc. Since 2010 Mental Ray also includes the iray rendering
engine, which added GPU acceleration to the product. In 2013, the ambient occlusion
pass was also accelerated by CUDA, and since 2015 the GI Next engine can be used to
compute all indirect/global illumination on GPUs.
To date, Mental Ray has been used in many feature films, including Hulk, The
Matrix Reloaded & Revolutions, Star Wars: Episode II – Attack of the Clones, The Day
After Tomorrow and many more. In 2003, Mental Images was awarded an Academy
Award for their contributions to the Mental Ray rendering software for motion pictures.

• RenderMan:
It is proprietary photorealistic 3D rendering software produced by Pixar Animation
Studios. RenderMan is used by Pixar to render all of their in-house 3D animated movie
productions and is also available as a commercial product licensed to third parties.
Previously it was known as PhotoRealistic RenderMan. In May 2014, Pixar announced it
would offer a free non-commercial version of RenderMan, and since March 2015
RenderMan has been available for non-commercial use.
RenderMan defines cameras, geometry, materials, and lights using the RenderMan
Interface Specification. This specification facilitates communication between 3D
modeling and animation applications and the render engine that generates high quality
images. Additionally RenderMan supports Open Shading Language to define textural
patterns.
RenderMan has been used to create digital visual effects for Hollywood
blockbuster movies such as Beauty and the Beast, The Lion King, Terminator 2:
Judgment Day, Toy Story, Jurassic Park, Avatar etc.
• V-Ray:
It is a commercial rendering plug-in for 3D computer graphics software applications. It is
developed by Chaos Group, a Bulgarian company based in Sofia, established in 1997. V-
Ray is used in media, entertainment, and design industries such as film and video game
production, industrial design, product design and architecture.
V-Ray is a rendering engine that uses advanced techniques, for example global
illumination algorithms such as path tracing, photon mapping, and directly computed
global illumination. The use of these techniques often makes it preferable to conventional
renderers which are provided standard with 3D software, and generally renders using
these techniques can appear more photo-realistic, as actual lighting effects are more
realistically emulated. It supports many 3D applications such as 3ds Max, Cinema 4D,
Maya, Modo etc.

• Octane Render:
It is a real-time 3D unbiased rendering application that was started by the New Zealand-
based company Refractive Software, Ltd and OTOY took over on March 2012. It is the
first commercially available unbiased renderer to work entirely on the GPU (Graphics
Processing Unit) and to be released to the public. It has the facility to work in real time.
This lets users to modify materials, lighting and render settings “on the fly” because the
rendering viewport updates immediately whenever a change is made. It uses the graphics
card to calculate all measures of light, reflection and refraction.
Check Your Progress 4
What is Adobe Animate’s previous name?
Which is the most commonly used 2D animation software?
What is the use of Maya fur?
What is the built-in scripting language of Max?
What is pixol?
What is Rendering?

4.6 DIFFERENCE BETWEEN TRADITIONAL ANIMATION AND COMPUTER ANIMATION
Traditional animation is a very hands-on process. It is all about hand-drawing hundreds and
thousands of individual frames that are later transferred onto clear plastic cels, hand-painted
and then filmed in sequence over a painted background image. This requires a team of
cleanup artists, artists, painters, directors, background artists and film/camera crews, along
with the storyboard artists and script writers, to work in tandem on the original concepts for
large-scale projects. The amount of effort, time and equipment involved can be
overwhelming.
There is a clear difference: traditional '3D' animation was less true 3D and more the still
lifes of claymation, done using stop-motion filming techniques. The concept of 3D
animation did not become a reality until the use of computers in animation became more
practical and achievable. Computer animation removed the clutter of many extra tools
required to create an animation.
The methods mentioned here describe the techniques of an animation process that
originally depended on cels in its initial stages, though painted cels are rare today as the
computer has moved into the animation studio, and the outline hand drawings are generally
scanned into the computer formats and filled with digital paint instead of transferring to cels
and then coloring it by hand. The drawings are created in a computer program on many
transparent layers, much the same way as they were used with cels and made into a sequence
of images that may then be transferred onto film or converted to a digital video format.
It is also possible for animators to draw directly using a graphics tablet in a computer
or a similar device, where the outline drawings are created in a similar manner as they would
appear on a paper.
Traditional animations are now commonly done with computers, though it is
important to distinguish computer-assisted traditional animation from a 3D computer
animation. Traditional animation and 3D computer animation are often used together, such as
in Don Bluth's Titan A.E. and Disney's Tarzan and Treasure Planet. Most anime in Japan still
use traditional animation today. DreamWorks executive Jeffrey Katzenberg coined the term
'tradigital animation' to describe films produced by his studio which incorporate elements
of traditional and computer animation with equal weight, such as Spirit: Stallion of the
Cimarron and Sinbad: Legend of the Seven Seas.
Modern video games, such as Viewtiful Joe, The Legend of Zelda: The Wind Waker and
others, still use 'cel-shading' animation filters to make their full 3D animation appear as if
it were drawn in a traditional cel style. This method was also used in the famous animated
movie Appleseed, and was again integrated with cel animation in the Fox animated series
Futurama.

4.7 PIXAR AND DISNEY STUDIOS INTRODUCTION


Pixar
Pixar Animation Studios (Pixar) is an American computer animation film studio based in
Emeryville, California. Pixar is a subsidiary of The Walt Disney Company. Luxo Jr., a
character from the short film of the same name (shown in Figure 4.22), is the studio's
mascot.

Figure 4.22 Pixar Studio and Luxo Jr.


Pixar began in 1979 as the Graphics Group, part of the Lucasfilm computer division,
before its spin-out as a corporation in 1986 with financial support from Apple Inc. co-founder
Steve Jobs, who became the majority shareholder. Disney purchased Pixar in 2006 at a
valuation of $7.4 billion, a transaction that made Jobs Disney's largest single
shareholder at the time.
Pixar is well-known for CGI-animated feature films created with RenderMan, Pixar's
own implementation of the industry-standard RenderMan image-rendering application
programming interface used to generate high-quality images.
Pixar has produced 17 feature films, beginning with Toy Story (1995), which was the
first-ever computer-animated feature film, and its most recent being Finding Dory (2016).
The studio has also produced several short films. As of June 2016, its feature films have
made over $10 billion worldwide, with an average worldwide gross of $622 million per film.
Finding Dory, along with its predecessor Finding Nemo (2003), and two other films Toy
Story 3 (2010) and Inside Out (2015), are among the 50 highest-grossing films of all time,
with Toy Story 3 being the third highest-grossing animated film of all time with a gross of $1.063 billion,
behind Walt Disney Animation Studios' Frozen (2013) and Illumination Entertainment's
Minions (2015), which grossed $1.276 billion and $1.159 billion respectively in their initial
releases as of 2016. Fourteen of Pixar's films are also among the 50 highest-grossing
animated films of all time.

Figure 4.23 Toy Story


To date, the studio has earned sixteen Academy Awards, seven Golden Globe
Awards and eleven Grammy Awards, among many other awards and acknowledgments.
Most of Pixar's films have been nominated for the Academy Award for Best Animated
Feature since its inauguration in 2001, with eight winning; these include Finding Nemo, Toy
Story 3 and Inside Out, along with The Incredibles (2004), Ratatouille (2007), WALL-E
(2008), Up (2009) and Brave (2012). Monsters, Inc. (2001) and Cars (2006) are
the only two films that were nominated for the award without winning it, while Cars 2
(2011), Monsters University (2013) and The Good Dinosaur (2015) are the only three not to
be nominated. Up and Toy Story 3 were also the second and third animated films to be
nominated for the Academy Award for Best Picture, the first being Disney's Beauty and the
Beast (1991).
Walt Disney
Disney was established on October 16, 1923 by animator Walt Disney and Roy O.
Disney as the Disney Brothers Cartoon Studio, and established itself as a leader in the
American animation industry before diversifying into live-action film production, television
and theme parks. Walt Disney and the Walt Disney Studios have played a pivotal role in the
progression of animation.

Figure 4.24 Walt Disney Studio and Micky Mouse


Walt Disney created a short film entitled Alice's Wonderland, which featured child
actress Virginia Davis interacting with animated characters. Later, Disney developed an all-
cartoon series starring his first original character, Oswald the Lucky Rabbit, which was
distributed by Winkler Pictures through Universal Pictures. The distributor owned Oswald, so
Disney only made a few hundred dollars. In 1928, to recover from the loss of Oswald
the Lucky Rabbit, Disney came up with the idea of a mouse character named Mortimer,
drawing up a few simple sketches. The mouse was later renamed Mickey Mouse, the star of
animated shorts such as Steamboat Willie from 1928, and still a popular character and
cultural icon today.
In 1932, Disney signed an exclusive contract with Technicolor to produce cartoons in
color, beginning with Flowers and Trees. The popularity of the Mickey Mouse series
encouraged Disney to plan his first feature-length animation, Snow White and the Seven
Dwarfs. This animated film became the highest-grossing film of its time by 1939.

Figure 4.25 Snow White and the Seven Dwarfs


The studio continued releasing animated shorts and features, such as Pinocchio
(1940), Fantasia (1940), Dumbo (1941) and Bambi (1942). Later films, such as Sleeping
Beauty (1959) and One Hundred and One Dalmatians (1961), introduced a new
xerography process to transfer the drawings to animation cels. In the 1960s, Disney's most
successful film was a live-action/animated musical adaptation of Mary Poppins, which was
one of the all-time highest-grossing movies and received five Academy Awards.
The Walt Disney Studios joined with Pixar in 2006, and the combined studios
continue to produce films including WALL-E (2008) and Up (2009). Walt Disney was a
visionary who not only changed the face of animation but also created Disneyland, a theme
park in California based on the creations of his studios. On July 18, 1955, Walt Disney
opened Disneyland to the general public. The Disney theme parks have since been replicated
around the world, including Disneyland Paris and Hong Kong Disneyland.
Check your progress 5
What members are required for a 2D Animation team?
Who uses traditional animation till date?
Pixar is well-known for?
Which is the first-ever computer-animated feature film?
What was the Walt Disney’s first original character?

4.8 SUMMARY OF THE UNIT


• Computer animation or CGI animation is the art of creating moving images with
computers. It is a subfield of computer graphics and animation.
• Increasingly, it is being created by means of 3D computer graphics, though 2D computer
graphics are still widely used for stylistic, low-bandwidth and faster real-time rendering.
• Sometimes, the target of the animation is the computer itself but at times the target is
another medium, such as film. It is also referred to as CGI or computer-generated imagery
or computer-generated imaging, particularly when used in films.
• Computer animation is essentially a digital successor to the art of stop-motion animation
of 3D models and frame-by-frame animation of 2D illustrations. For 3D animations,
objects or models are built on the computer monitor, and the 3D figures are rigged
with a virtual skeleton.
• The frames may also be rendered in real-time as they are presented to the end-user or
audience.

4.9 KEY TERMS


• Python: It is a scripting language.
• Traditional animation: Also known as cel animation or hand-drawn animation, this was
the process used for most animated films of the 20th century.
• Full animation: The process of creating high-quality traditionally animated films that
regularly use detailed drawings and plausible movement.
• Limited animation: The use of less detailed and/or more stylized drawings and different
methods of movement.
• HDRI: High Dynamic Range Image
• 2.5D: It refers to a surface which is a projection of a plane into the third dimension.
• Radiosity: It is a method of rendering based on detailed analysis of light reflections
off diffuse surfaces.
• Rotoscoping: A technique patented by Max Fleischer in 1917 which uses animators to
trace live-action movement, frame-by-frame.

4.10 END QUESTIONS:


1. Discuss some of the earlier animation techniques.
2. Discuss the use of xerography in animation.
3. Write a note on Multiplane camera and Cel animation.
4. What are the different types of animation?
5. Explain difference between full and limited animation.
6. Write a note on stop motion animation.
7. What is computer animation? Explain its types.
8. Give an overview of Maya. What are its key features?
9. Write a note on 3D Studio Max.
10. What is rendering? Give two examples of renderers.
11. Explain the difference between traditional animation and computer animation.
Answer to check your progress questions
Check your progress -1:
Animation is the method of making the illusion of motion and change by means of the
rapid display of a sequence of static images that slightly differ from each other.
The illusion of movement in animation is created by a physiological phenomenon
called persistence of vision.
The advantages of the zoetrope over the basic phenakistoscope were that it did not
involve the use of a mirror to view the illusion and, because of its cylindrical shape,
it could be viewed by several people at once.
Early film animators mentioned flip books as their inspiration more often than the
earlier devices
Pauvre Pierrot, at the Musée Grévin in Paris. This film is notable as the first known
instance of film perforations being used.
Fantasmagorie is the first animated film which used the traditional animation creation
methods.
Father of character animation is Winsor McCay, a successful newspaper cartoonist.
Check your progress -2:
A horizontal camera was invented in 1933 by Ub Iwerks, former Walt Disney Studios
animator/director.
Xerography or electrophotography is a dry photocopying technique invented by
Chester Carlson.
CAPS stands for Computer Animation Production System
CAPS was the first to use digital ink and paint system in animated films.
CAPS process was used for the first time for Mickey standing on Epcot's Spaceship
Earth for The Magical World of Disney titles.
Check your progress -3:
Rotoscoping is a method where animators trace live-action movement, frame by
frame.
Stop-motion animation is typically named after the medium used to create the
animation.
Techniques such as interpolated morphing, onion skinning and interpolated
rotoscoping.
3D animation is the manipulation of three dimensional objects and virtual
environments with the use of a computer program.
Typography Animation is a combination of text in motion.
Check your progress -4:
Adobe Animate was formerly known as Adobe Flash Professional.
Toon Boom is the most commonly used 2D animation software in the professional
industry as it is more user friendly.
Maya fur can be used to simulate short fur-like objects, such as grass, carpet.
MAXScript is a built-in scripting language of Max.
The best part about Lightwave 3D is that it comes with a 30 day free trial and is
compatible with all the latest operating systems.
Each pixol contains information on X and Y position and color values and depth.
Rendering is the final process of creating the actual 2D image or animation from the
prepared scene
Check your progress -5:
2D Animation team consists of cleanup artists, artists, painters, directors, background
artists and film/camera crews along with the storyboard artists and script writers.
Most anime in Japan still use traditional animation today.
Pixar is well-known for CGI-animated feature films created with RenderMan.
Toy Story was the first-ever computer-animated feature film.
Walt Disney’s first original character was Oswald the Lucky Rabbit.

4.11 FURTHER READING


www.wikipedia.org
History of Computer Graphics and Animation- by www.utdallas.edu
www.brainpickings.org
www.webneel.com

UNIT 5 INTERACTIVE MEDIA


Program Name: BSc(MGA)
Written by: Srajan
Structure:
5.0 Introduction
5.1 Unit Objectives
5.2 Introduction to Interactive Media
5.3 World Wide Web
5.3.1 Evolution of WWW
5.3.2 Functions of WWW
5.4 Internet Forums
5.5 Computer Games
5.5.1 History of Computer Games
5.6 Mobile Telephony
5.7 Interactive Television
5.8 Hypermedia
5.8.1 Hypermedia Development Tools
5.9 Summary
5.10 Key Terms
5.11 Questions and Exercises
5.12 Further Reading

5.0 INTRODUCTION
In this unit, you will learn about interactive media. It is a type of collaborative media that
allows dynamic involvement by the recipient. You will also learn about the development of
the World Wide Web, which is a network of interconnected hypertext documents contained
on the Internet, the safety and security of the Web, and Internet forums. An Internet forum, or
message board, is an online discussion website. From a technological point of view, forums
or boards are web applications dealing with user-generated content.
In addition, you will learn about computer or PC games. Computer games have
developed from the simple graphics and gameplay of early titles such as Spacewar to a
wide range of more visually advanced titles. This unit also explains the concept of mobile
telephony. Since the debut of mobile phones, concerns (both scientific and public) have been
raised about the potential health impacts of regular use. In this unit, you will also be
learning about interactive television. This form of television ranges from low interactivity
(e.g., volume, TV on/off, channel changing) to moderate interactivity (e.g., movies on
demand without player controls) and high interactivity where, for example, an audience
member can influence the program that is being telecast. Finally, you will learn about
hypermedia. The term is used as a logical extension of the term 'hypertext'. In hypermedia,
graphics, audio, hyperlinks, plain text and video interlace to create a generally non-linear
medium of information.

5.1 UNIT OBJECTIVES:


After going through this unit, you will be able to:
• Understand the concept of interactive media
• Describe the terminology used in interactive media
• Explain the World Wide Web
• Know about Internet Forums
• Discuss the different types of computer games
• Understand mobile telephony
• Appreciate interactive television.
• Explain the concept of hypermedia

5.2 INTRODUCTION TO INTERACTIVE MEDIA


Interactive media is a type of combined media that permits dynamic involvement by the
recipient. Conventional information theory describes interactive media as media that
establish two-way communication. The field of human-computer interaction is concerned
with aspects of interactivity and design in digital media. Other areas that involve interactive
media are interactive advertising, video game production and new media art.
Although a good deal of traditional analogue electronic media and print media can be
characterized as interactive media, the term is sometimes misinterpreted as exclusive to
digital media. The substantial increase in the possibilities for interactivity (particularly across
great distances) contributed by the Internet encouraged the spread of digital interactive media.
Strictly speaking, though, even spoken language in two-way communication would formally
belong to the class of interactive media.
Any form of user interface between the end user/audience and the medium may be
regarded as interactive. Interactive media is not confined to digital or electronic media. Pop-
up books, constellation wheels, game books, flip books and board games are all instances of
printed interactive media. Books with a simple table of contents or index may be viewed as
interactive because of the non-linear control mechanics in the medium, but are usually
considered non-interactive as the bulk of the user experience is non-interactive sequential
reading.

5.3 WORLD WIDE WEB


The system of interconnected hypertext documents contained on the Internet is known as the
World Wide Web (WWW). With a web browser, you can look at web pages containing
images, videos, other multimedia and text, and navigate between them using hyperlinks. In
March 1989, English physicist Sir Tim Berners-Lee wrote a proposal to reuse a database and
software project he had established in 1980 and described a more detailed information
management system. Afterwards, he linked up with Belgian computer scientist Robert
Cailliau when both were working at CERN in Geneva, Switzerland; in 1990, they proposed
a 'hypertext project' known as the 'World Wide Web', a 'web' of hypertext documents to be
viewed by browsers using a client-server computer architecture, and published the proposal
in December.
The WWW was planned as a pool of human knowledge, to allow people in
different places to contribute their ideas and share all aspects of a common project.
5.3.1 Evolution of WWW
By Christmas 1990, Berners-Lee had built all the tools necessary for a working
Web: the first web browser (which was a web editor as well), the first web server and the
first web pages, which described the project itself. In August 1991, he placed a short summary
of the WWW project on the alt.hypertext newsgroup.
Berners-Lee's purpose was to join hypertext to the Internet, a combination that
appealed to members of both technical communities.
The WWW had many differences from former hypertext systems. The Web required
only one-way links rather than two-way ones, which made it possible for anyone to link to
any other resource. It also substantially reduced the difficulty of implementing web servers
and browsers, but it introduced the continuing problem of link rot.
The launch of the Mosaic web browser in 1993 was a landmark for the WWW.
Tim Berners-Lee left the European Organization for Nuclear Research
(CERN) in 1994 and founded the World Wide Web Consortium (W3C). It was established at
the MIT Laboratory for Computer Science (LCS) with help from the Defense Advanced
Research Projects Agency (DARPA) and the European Commission.
Helped by the Internet, other websites were developed around the world, thus
contributing international standards for domain names and HTML. Since then, Berners-
Lee has played a dynamic part in guiding the growth of web standards (such as the mark-up
languages in which web pages are written), and in recent years has promoted his vision of a
Semantic Web.
5.3.2 Functions of WWW
The terms Internet and WWW are often used interchangeably in everyday language. It
should, however, be noted that the Internet and the WWW are not one and the same. While
the Internet is a global arrangement of interconnected computer networks, the Web is one of
the services that runs on the Internet. It is an assemblage of interconnected documents and
other resources, connected by hyperlinks and uniform resource locators (URLs). In short, the
Web is an application running on the Internet.
First, the server-name portion of a URL is resolved into an IP address using
the global distributed Internet database called the domain name system (DNS). This IP
address is needed to reach the web server. The browser then requests the resource by sending
a hypertext transfer protocol (HTTP) request to the web server at that address.
While receiving these files from the web server, browsers progressively render
the page onto the screen as specified by its HTML, CSS and other web languages. Any
images and other resources are integrated to produce the on-screen web page that the user sees.
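The two steps just described, DNS resolution followed by an HTTP request, can be reproduced with Python's standard library alone. A minimal sketch (example.com is a reserved demonstration domain):

    import socket
    import urllib.request

    # Step 1: resolve the server name to an IP address via DNS.
    host = "example.com"
    print(socket.gethostbyname(host))  # prints the resolved IP address

    # Step 2: send an HTTP request for the resource and read the response.
    with urllib.request.urlopen("http://example.com/") as response:
        html = response.read()
    print(html[:80])  # first bytes of the page a browser would render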
The following are the main characteristics of the Web:
• A network protocol (HTTP) used by native WWW servers, giving performance and
features not otherwise available.
• A hypertext markup language (HTML) that every WWW client is required to
understand, used for the transmission of basic things like text, menus and
simple online help information across the Net.
• The address system, the URL, implemented by the project to make this possible in
spite of many different protocols.
• A boundless information world in which all items have a reference by which they can
be retrieved.
Fig. 5.1 Graphic Representation of the WWW.
In the course of time, many web resources pointed to by hyperlinks vanish, move or are
replaced with different content. This process is referred to in some circles as 'link rot', and
the hyperlinks affected by it are often called 'dead links'. The transient nature of the Web has
motivated many efforts to archive websites. The Internet Archive is one of the best-known
attempts; it has been active since 1996 (Figure 5.1).
WWW prefix
Due to the long-standing practice of naming Internet hosts (servers) according to the services
they provide, many web addresses begin with www. Thus, for a web server, the host name is
usually www, just as it is ftp for an FTP server and nntp or news for a USENET news server.
These host names then appear as DNS subdomain names, as in
'www.abcd1234.com'. When a word is typed into the address bar and the return key pressed,
some web browsers automatically try adding 'www' to the beginning of the word, and
possibly '.com', '.net' or '.org' at the end. For instance, typing 'orange' may
automatically resolve to http://www.orange.com.
The 'http://' or 'https://' part of a web address refers to HTTP or HTTP Secure,
respectively, and specifies the communication protocol that will be used to request and
obtain the page and all its images and other resources.
Safety of the Web: Privacy and security
Worldwide, more than half a billion people use social network services, which has raised the
question of privacy. The Web has become criminals' preferred channel for spreading
malicious software. Cybercrime carried out on the Web can include fraud, identity theft,
espionage and intelligence gathering. Web-based vulnerabilities now outnumber traditional
computer security concerns, and as measured by Google, about one in ten web pages may
hold malicious code. Most web-based attacks take place on legitimate websites, and most, as
evaluated by Sophos, are hosted in China, Russia and the United States. The most common
of all malicious software threats are SQL injection attacks against websites.
Availability of the Web: Standards and accessibility
Many formal standards and other technical specifications define the workings of
different aspects of the WWW, the Internet and computer information exchange. Many of
these documents are the work of the W3C, guided by Berners-Lee, but some are produced by
the Internet Engineering Task Force (IETF) and other organizations.
Usually, when web standards are discussed, the following publications are seen as
foundational:
• Recommendations for the Document Object Model, from the W3C.
• Recommendations for style sheets, especially CSS, from the W3C.
• Recommendations for mark-up languages, especially HTML and XHTML, from the
W3C. These define the structure and interpretation of hypertext documents.
• Standards for ECMAScript (usually in the form of JavaScript), from Ecma
International.

Additional publications provide definitions of other essential technologies for the WWW,
including, but not restricted to, the following:
• HyperText Transfer Protocol (HTTP)
• HTTP Authentication
• Uniform Resource Identifier (URI)

The Web is used for obtaining information as well as providing
information and interacting with society. It is therefore essential that the Web is accessible
and provides equal access and equal opportunity to people with impairments. According to
Tim Berners-Lee, 'The power of the Web is in its universality. Access by everyone regardless
of disability is an essential aspect.' Many countries regulate web accessibility as a requirement
for websites. International work in the W3C Web Accessibility Initiative has resulted in simple
guidelines that web content authors as well as software developers can use to make the
Web accessible to persons who may or may not be using assistive technology.
Internationalization and statistics
The W3C Internationalization Activity ensures that web technology works in all
languages, scripts and cultures. Beginning in 2005, Unicode gained ground, and in
December 2007 it surpassed both ASCII and Western European as the Web's most
frequently used character encoding. Originally, RFC 3986 permitted resources to be
identified by URL in a subset of US-ASCII. RFC 3987 permits more characters (any
character in the Universal Character Set), and now a resource can be identified by an IRI
in any language.
According to a 2001 study, there were 550 billion text documents on the Web, mostly in the
invisible Web. A 2002 survey of 2,024 million web pages found that by far the most
web content was in English (56.4 per cent), followed by pages in German (7.7 per cent),
French (5.6 per cent) and Japanese (4.9 per cent). A more recent study, which used web
searches in 75 different languages to sample the Web, found that there were more than
11.5 billion web pages in the publicly indexable Web.
Technology of the WWW: Speed issues and caching
Frustration over congestion issues in the Internet infrastructure and the high latency that
results in slow browsing has led people to call it the World Wide Wait. Speeding up the
Internet is an ongoing discussion over the use of peering and quality of service (QoS)
technologies. Other answers to minimize the wait can be found at the W3C. The standard
guidelines for ideal web response times are as follows:
• 0.1 second (one tenth of a second): Ideal response time. The user does not sense any
interruption.
• 1 second: Highest acceptable response time. Download times above 1 second
interrupt the user experience.
• 10 seconds: Unacceptable response time. The user experience is interrupted and the
user is likely to leave the site or system.

If a user returns to a web page after a short interval, the page data usually will not
need to be re-obtained from the main web server. Web browsers cache recently
received data, typically on the local hard drive. HTTP requests sent by a browser
generally ask only for data that has changed since the last download; current locally
cached data are usually reused. Caching helps reduce web traffic on the Internet. The
decision to expire a downloaded file is made independently, whether it be an image,
JavaScript, a style sheet, HTML, etc. Thus, even sites with extremely dynamic content need
not frequently refresh their basic resources. Website architects tend to collate resources, like
JavaScript and CSS data, into a few site-wide files so that they can be cached efficiently. This
lowers demands on the web server and helps minimize page download times.
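The "only ask for data that has changed" behaviour is implemented with conditional HTTP requests. A minimal sketch using Python's standard library: the If-Modified-Since date below is an arbitrary illustrative value, and the server answers 304 Not Modified when the cached copy is still valid (urllib surfaces 304 as an HTTPError).

    import urllib.request
    import urllib.error

    req = urllib.request.Request(
        "http://example.com/",
        headers={"If-Modified-Since": "Mon, 01 Jan 2024 00:00:00 GMT"},
    )
    try:
        with urllib.request.urlopen(req) as response:
            body = response.read()          # 200 OK: content changed, re-download
            print("fetched", len(body), "bytes")
    except urllib.error.HTTPError as err:
        if err.code == 304:                 # 304 Not Modified: reuse cached copy
            print("cached copy is still current")
        else:
            raise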

5.4 INTERNET FORUMS


An Internet forum, or message board, is an online discussion website. From a technological
point of view, forums or boards are web applications dealing with user-generated content.
People participating in an Internet forum may cultivate social bonds.
Early Internet forums could be described as a web version of a newsgroup or electronic
mailing list (many of which were commonly part of Usenet), allowing people to post
messages and comment on other messages. Later developments imitated the different
newsgroups or individual lists, providing more than one forum, each dedicated to a
particular topic.
Forums perform a function similar to the dial-up bulletin board systems and Usenet networks
that were common from the late 1970s to the 1990s. Early web-based forums date back to
1996. A feeling of community often grows around forums that have regular users.
Forum software packages are widely available on the Internet and are written in a
number of programming languages, such as PHP, Perl, Java and ASP. The configuration and
records of posts can be stored in text files or in a database. Each package offers different
features, from the most basic, rendering text-only postings, to more advanced packages
offering multimedia support and formatting code (usually known as BBCode).
Registration or anonymity
In some parts of Europe as well as the United States, Internet forums require registration
in order to post. Only registered members are allowed to send electronic messages via
the web application. By simply filling in the application form, a person may be able to open
an account. However, the default status label 'inactive' stays until the registered user
confirms that the e-mail address given during registration is his/hers. Until then, the
registered user can log on to the account without having the facility of using the forum for
communication (e.g., threads, posts, private messages, etc.) (see Figure 5.2).
Fig. 5.2 Internet Forums are Used Frequently in Conjunction with Multiplayer Online
Game Sites
In Japan and China, registration is often optional and anonymity is sometimes even
encouraged. On these forums, a tripcode system may be used to permit verification of an
identity without the need for formal registration.
Rules and policies on forums
Forums are typically governed by a group of individuals collectively known as staff. The
staff includes administrators and moderators responsible for technical maintenance,
policies and enforcement on the forum. Most forums have rules stating the objectives,
wishes and guidelines of the creators.
Rules on forums typically apply to the entire user body and tend to have preset
exceptions, the most common being to designate a specific section as an exception: for
instance, in an IT forum, discussions concerning anything but computer programming
languages may be against the rules, even though general chat may be allowed there.
Forum rules are maintained and enforced by the moderation team, even though users
can help by way of a 'report system'. Most American forum software has such a facility. It
consists of a small function that is applicable to each post.
When rules are breached, several steps are commonly taken. First, a warning is
usually given; this commonly takes the form of a private message, but recent developments
have made it possible for warnings to be integrated into the software.
Troll
A troll is a user who repeatedly and deliberately breaches netiquette, often posting
disparaging or otherwise inflammatory messages about sensitive topics in an established
online community to lure users into responding, often starting arguments.
Sock Puppet
The term sock puppet refers to someone who is simultaneously registered under different
pseudonyms on a particular message board or forum. The image suggested by a sock puppet is of a
puppeteer holding up both hands and supplying dialogue to both puppets at the same time.
A sock puppet operator creates multiple accounts over a period of time, using each account to
debate with or agree with the others on a forum. Sock puppets are usually discovered when an IP check
is performed on the accounts in a forum.
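As a loose illustration of that last point, the sketch below groups posting accounts by source IP address; any address used by more than one account is a candidate for closer inspection. This is a minimal sketch in Python, and the post records and field names are hypothetical, not taken from any real forum package.

```python
from collections import defaultdict

# Hypothetical post log: each entry records the poster and source IP.
posts = [
    {"user": "alice", "ip": "203.0.113.7"},
    {"user": "bob",   "ip": "203.0.113.7"},
    {"user": "carol", "ip": "198.51.100.2"},
]

accounts_by_ip = defaultdict(set)
for post in posts:
    accounts_by_ip[post["ip"]].add(post["user"])

# Any IP shared by more than one account is worth a closer look.
suspects = {ip: users for ip, users in accounts_by_ip.items() if len(users) > 1}
print(suspects)  # e.g. {'203.0.113.7': {'alice', 'bob'}}
```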
Spamming
Forum spamming is a breach of netiquette in which users repeat the same word or phrase
over and over; it differs from multiple posting in that spamming is usually a wilful act
which sometimes has malevolent intent. It is a common trolling technique. It can also
be conventional spam: unsolicited advertisements that are in breach of the forum's rules.
Spammers employ a number of illicit techniques to post their spam, including the use of
botnets.
Double posting
One common solecism on Internet forums is to post the same message twice. Users
sometimes post versions of a message that are only slightly different, especially in forums
where they are not allowed to edit their earlier posts. Multiple posting instead of editing prior
posts can artificially inflate a user's post count. Multiple posting can also be unintentional: a user's
browser might show an error message even though the post has been transmitted, or a user of a
slow forum might become impatient and repeatedly hit the submit button.
Word censor
A word-censoring system is commonly included in a forum software package. The system
will catch words in the body of the post or in some other user-editable forum element (like user
titles), and if they partially match a certain keyword (commonly with no case sensitivity) they will be
censored.
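A minimal sketch of such a filter follows, assuming a simple keyword list and asterisk replacement; both are illustrative choices rather than the behaviour of any particular forum package.

```python
import re

BANNED = ["badword", "spamlink"]  # hypothetical keyword list

def censor(text: str) -> str:
    for word in BANNED:
        # re.IGNORECASE gives the 'no case sensitivity' behaviour;
        # matching inside longer strings gives the partial match.
        text = re.sub(re.escape(word), "*" * len(word), text, flags=re.IGNORECASE)
    return text

print(censor("This BadWord should vanish."))  # This ******* should vanish.
```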
Forum structure
A forum consists of a tree-like directory structure that holds at the lowest end topics (also
known as threads) and inside the threads are posts. Forums normally tend to have a finite set
of generic topics with one primary topic. Forums are driven and modified by ‘members’ and
presided over by ‘moderators’.
User groups
Western-style forums tend to divide logged-in members and visitors into user groups. Rights
and privileges are accorded based on these groups. A forum user can be elevated to a
more privileged group based on criteria set by the administration.
Moderator
The moderators (or 'mods'; 'mod' in the singular) are users or employees of the forum who have access to the posts
and threads of all members and have the responsibility of keeping the forum clean
(neutralizing spam, spambots, etc.). Since they have access to all content, it is not uncommon
for someone close to the site owner to be chosen as moderator for such a task.
Administrator
The administrators look after the technical aspects of the forum. As such, they promote or
demote members, see that rules are being followed, and perform database operations
(database backup, etc.). Administrators are often also moderators.
Post
A post is a user-submitted message enclosed in a block that holds the user's details
as well as the time of its submission. Members are usually given the ability to edit and delete their own
posts. Posts appear in boxes lined one after another and are contained in threads. The first
post (also called the original post) starts the thread.
Thread
A thread (or a topic) is a collection of posts that are displayed from oldest to newest. The
option of a threaded view (a tree-like view which applies a coherent reply structure rather than
sequential order) is available. A thread has a title, an optional description that summarizes the
intended discussion, and an opening post (OP) that starts whatever dialogue or makes whatever announcement the poster
wishes (see Figure 5.3).
Fig. 5.3 Thread (viewing as Moderator) and Forum (Viewing as Moderator)
Discussion
Forums encourage free and open discussion and often develop their own informal standards. Common
topics include questions, comparisons, opinion polls and debates. Unsociable behaviour is
commonplace due to a forum's open nature, and people tend to lose their temper, especially when it
comes to debates. Since responses are directed at someone else's viewpoint, discussions tend
to go off on a tangent as validity, sources, etc. are questioned.
Flame wars
In unstable threads, irrepressible spam in the form of complaints and abuse of the report
system is common. When discussions go off track and people start complaining more and
stop accepting each other's differences in standpoint, the result is known as a flame war; flaming
someone is to attack the individual rather than his or her notion.
Common features
By default, to be an Internet forum, the web application needs the ability to submit threads and
replies. Forum software may additionally permit categories or subforums. The chronological
older-to-newer view is mostly associated with forums (the newer-to-older view being more
akin to blogs).
Tripcodes and capcodes
In a tripcode system, a secret password is appended to the user's name following a
separator character (often a number sign). This password, or tripcode, is hashed into a special
key, or trip, distinguishable from the name by HTML styles. Tripcodes cannot be faked outright, but on some
types of forum software they are insecure and can be guessed.
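The sketch below illustrates the general idea: a short public 'trip' derived from the secret half of a 'name#password' signature. Real imageboard tripcodes use a DES-based crypt(3) scheme; the SHA-256 truncation here is only a simplified stand-in.

```python
import hashlib

def tripcode(signature: str) -> str:
    # Split "name#password"; everything after the separator stays secret.
    name, sep, password = signature.partition("#")
    if not sep:
        return name  # no separator given, so no trip is produced
    trip = hashlib.sha256(password.encode("utf-8")).hexdigest()[:10]
    return f"{name} !{trip}"

print(tripcode("anon#hunter2"))  # e.g. "anon !f52fbd32b2" (value illustrative)
```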
Private message
A private message (PM) is a message sent privately from a member to one or more other
members. The ability to send so-called carbon copies is sometimes available. When a
carbon copy (cc) is sent, the users to whom the message is sent directly will not be aware of the
recipients of the cc, or even that one was sent in the first place.
Attachment
A file attachment can be of any type. When a file is attached to a post, the file is
uploaded to the forum's server. Forums usually have very strict limits on what can be attached
and what cannot (among which the size of the files is also a consideration).
BBCode and HTML
Hypertext Markup Language (HTML) is sometimes permitted, but usually its use is
discouraged, or, when allowed, it is extensively filtered. When HTML is disabled, Bulletin
Board Code (BBCode) is the most commonly favoured alternative.
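To make the idea concrete, here is a minimal sketch of translating a few BBCode tags into HTML. Real packages handle nesting, escaping and many more tags; the three rules below are an illustrative subset.

```python
import re

# Each rule maps a BBCode pattern to its HTML replacement.
RULES = [
    (r"\[b\](.*?)\[/b\]", r"<strong>\1</strong>"),
    (r"\[i\](.*?)\[/i\]", r"<em>\1</em>"),
    (r"\[url=(.*?)\](.*?)\[/url\]", r'<a href="\1">\2</a>'),
]

def bbcode_to_html(text: str) -> str:
    for pattern, replacement in RULES:
        text = re.sub(pattern, replacement, text, flags=re.DOTALL)
    return text

print(bbcode_to_html("[b]Hello[/b] [url=http://example.com]link[/url]"))
# -> <strong>Hello</strong> <a href="http://example.com">link</a>
```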
Emoticon
An emoticon or smiley is a symbol or combination of symbols used to convey emotional
content in written form. Forums implement a system through which some of the
text representations of an emoticon (e.g., xD, :p) are rendered as a small image.
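A sketch of that substitution step follows; the token-to-image mapping and image paths are hypothetical.

```python
# Map emoticon tokens to image files (paths are illustrative).
EMOTICONS = {":p": "tongue.gif", "xD": "laugh.gif"}

def render_emoticons(text: str) -> str:
    for token, image in EMOTICONS.items():
        text = text.replace(token, f'<img src="/smilies/{image}" alt="{token}">')
    return text

print(render_emoticons("That was funny xD"))
# -> That was funny <img src="/smilies/laugh.gif" alt="xD">
```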
Poll
Most forums implement an opinion poll system for threads. Most implementations allow
single-choice or multiple-choice voting (sometimes limited to a certain number of options),
as well as private or public display of voters.
RSS and ATOM
RSS and Atom feeds permit a minimalistic way of subscribing to the forum. Common
implementations only provide RSS feeds listing the last few threads updated for the forum
index and the last posts in a thread.
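Consuming such a feed takes only the standard library, as in the sketch below; the feed URL is a placeholder, and the item/title/link layout is the ordinary RSS 2.0 structure.

```python
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://forum.example.com/index.rss"  # hypothetical feed address

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# RSS 2.0 wraps each updated thread in an <item> with <title> and <link>.
for item in tree.iterfind(".//item"):
    print(item.findtext("title"), "-", item.findtext("link"))
```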
Other forum features
An ignore list permits members to hide the posts of other members that they do not want to
see or have trouble with. In most implementations it is referred to as a foe list or ignore list.
Normally, the posts are not hidden entirely, but minimized, with only a small bar indicating that a post
from a user on the ignore list is there.
Comparison with other web applications
One major difference between forums and electronic mailing lists is that the latter
automatically deliver new messages to the subscriber; forums, on the other hand, require
the member to visit the website and look for new posts. Members sometimes tend to miss
threads they are interested in, so forums have an 'e-mail notification' feature through
which members are notified of new posts in a thread. The main difference between
newsgroups and forums is that extra software, called a newsreader, is required to take part in
newsgroups. Visiting and taking part in forums generally requires no additional software
beyond the web browser.
5.5 COMPUTER GAMES
A game played on a personal computer, rather than on a video game console or arcade
machine, is known as a personal computer game (also known as a computer game or PC
game). Computer games have evolved from the simple graphics and gameplay of early
titles like Spacewar! to a wide range of more visually advanced titles.
One or more game developers, often in conjunction with other specialists (such as
game artists), create PC games and either publish them independently or through a third-party
publisher. Games may then be distributed on physical media, such as CDs and DVDs, as
Internet-downloadable (possibly freely redistributable) software, or through online delivery
services such as Direct2Drive and Steam.
5.5.1 History of Computer Games
Spacewar!, developed for the PDP-1 in 1961, is often credited as being the first ever
computer game. The game consisted of two player-controlled spaceships manoeuvring
around a central star, each trying to destroy the other.
The first generation of PC games consisted largely of text adventures or interactive fiction, in
which the player communicated with the computer by typing commands on a
keyboard. The first text adventure, Adventure, was developed for the PDP-10 by Will
Crowther in 1976 and expanded by Don Woods in 1977.
By the mid-1970s, games were being developed and disseminated through hobbyist groups and
gaming magazines, such as Creative Computing and, later, Computer Gaming World. One of the
first games for microcomputers, Microchess, first went on sale in 1977 and eventually sold
over 50,000 copies on cassette tape.
Industry crash
The video game market became inundated with poor-quality cartridge games produced
by numerous companies trying to enter the market; overproduced high-profile
releases, such as the Atari 2600 adaptations of E.T. and Pac-Man (refer to Figure C10), grossly
underachieved; and the popularity of personal computers for education rose dramatically.
New genres
Growing acceptance of the computer mouse, driven partially by the success of games
such as the highly successful King's Quest series, and of high-resolution displays
permitted the industry to include progressively richer graphical interfaces in new releases.
Further improvements to game audio were made possible with the
introduction of the first sound cards, such as AdLib's Music Synthesizer Card, in 1987. These
cards allowed IBM PC-compatible computers to produce complex sounds using FM
synthesis, where they had formerly been limited to simple tones and beeps.
Contemporary gaming
By 1996, the rise of Microsoft Windows and the success of 3D console titles such as Super
Mario 64 sparked great interest in hardware-accelerated 3D graphics on IBM PC
compatibles, and soon resulted in attempts to produce low-cost solutions such as the ATI Rage,
Matrox Mystique and Silicon Graphics ViRGE. Tomb Raider, which was brought out in 1996,
was one of the first third-person shooter games and was praised for its revolutionary graphics.
As graphics libraries such as DirectX and OpenGL matured and pushed proprietary interfaces out of the
market, these platforms gained greater acceptance, particularly with their
demonstrated benefits in games such as Unreal.
PC game development
Game development, as with console games, is generally undertaken by one or more game
developers using either standardized or proprietary tools. While games could formerly be
developed by very small groups of people, as in the early example of Wolfenstein 3D,
many popular computer games today demand large development teams and budgets running into
millions of dollars. PC games are usually built around a central piece of software,
known as a game engine, which simplifies the development process and enables developers to easily
port their projects between platforms. Unlike most consoles, which generally only run major
engines such as Unreal Engine 3 and RenderWare due to restrictions on homebrew software,
personal computers may run games developed using a much larger range of software.
User-created modifications
The multipurpose nature of personal computers often permits users to alter the content of
installed games with relative ease. Since console games are difficult to change without a
proprietary software development kit, and are often protected by legal and physical roadblocks
against tampering and homebrew software, it is generally easier to modify the personal
computer version of a game using common, easy-to-obtain software.
The inclusion of map editors, such as UnrealEd, with the retail versions of many
games, and of others that have been made available online, such as GtkRadiant, allows users
to make changes to games easily, using tools that are maintained by the games' actual
developers.
Physical distribution
Computer games are typically sold on standard storage media, such as compact discs,
floppy disks and DVDs. These were earlier supplied to customers through mail-order
services, although retail distribution has replaced mail order as the main distribution channel for
video games because of higher sales.
Shareware
Shareware marketing, whereby a limited or demonstration version of the full game is issued
to potential buyers without charge, has been used as a method of distributing computer
games since the early years of the gaming industry and was seen in the early days of Tanarus
as well as many other titles.
Online delivery
With the Internet’s growing popularity, online distribution of game content is now
commonplace. Retail services like Direct2Drive and Download.com offer users the
opportumity to download large which is otherwise only possible to be distributed on physical
media, e.g., DVDs and furnish cheap distribution of shareware and ‘demo’ games.
Companies like Real Networks furnish a service that allows other websites to use their game
catalogue and ecommerce services to bring out their own game download distribution sites.
PC game genres
The real-time strategy genre accounts for more than twenty-five per cent of all PC games
sold, yet has had scant success on video game consoles, with products like StarCraft 64
failing in the marketplace. Strategy games tend to suffer from the design of
console controllers, which do not facilitate fast, precise movement.
PC gaming technology
Fig. 5.4 PC Gaming
An exploded view of a modern personal computer:
1. Display
2. Motherboard
3. CPU (Microprocessor)
4. Primary storage (RAM)
5. Expansion cards (graphics cards, etc.)
6. Power supply
7. Optical disc drive
8. Secondary storage (Hard disk)
9. Keyboard
10. Mouse
Hardware
Modern computer games place great demands on the computer's hardware, often
requiring a fast central processing unit (CPU) to run properly. CPU makers
historically relied mainly on increasing clock rates to improve the performance of their
processors, but had started to move steadily towards multi-core CPUs by 2005. These
processors permit the computer to execute multiple tasks, called threads, simultaneously,
allowing the use of more complex graphics, artificial intelligence and in-game physics.
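The sketch below shows the shape of this idea: two independent per-frame tasks (physics and AI) handed to a small thread pool so they can run concurrently. It is a minimal illustration, not real engine code; the function names and world structure are invented, and production engines use native threads rather than Python.

```python
from concurrent.futures import ThreadPoolExecutor

def update_physics(world):
    # Placeholder: advance the simulation by one 60 Hz frame.
    return {**world, "time": world["time"] + 1.0 / 60.0}

def update_ai(world):
    # Placeholder: recompute non-player character decisions.
    return "ai-plan"

world = {"time": 0.0}
with ThreadPoolExecutor(max_workers=2) as pool:
    # Physics and AI do not depend on each other within a frame,
    # so they can be scheduled onto separate cores.
    physics_job = pool.submit(update_physics, world)
    ai_job = pool.submit(update_ai, world)
    world, plan = physics_job.result(), ai_job.result()
print(world, plan)
```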
Sound cards are also available to render better audio in computer games.
These cards supply improved 3D audio and audio enhancement that is mostly not
available with integrated alternatives, at the cost of marginally lower overall performance.
Physics processing units (PPUs), such as the Nvidia PhysX (formerly AGEIA
PhysX) card, are also available to accelerate physics simulations in modern computer
games. PPUs allow the computer to process more complex interactions among objects
than is achievable using only the CPU, potentially allowing players a much greater
degree of control over the world in games designed to use the card.
Software
Computer games also depend on third-party software such as device drivers, an
operating system (OS) and libraries to run. These days, the vast majority of computer
games are designed to run on the Microsoft Windows OS. Whereas earlier games written
for MS-DOS would include code to communicate directly with hardware, today application
programming interfaces (APIs) provide an interface between the game and the OS,
simplifying game design.
Multiplayer
Local area network gaming
Multiplayer gaming was largely restricted to local area networks (LANs) before cost-
effective broadband Internet access became available, due to their typically higher
bandwidth and lower latency than the dial-up services of the time. These advantages
permitted more players to join any given computer game, and they have kept LAN play
popular even today, given the higher latency of most Internet connections and the costs
associated with broadband Internet.
Online games
Online multiplayer games have achieved popularity largely as a result of growing
broadband adoption among consumers. Affordable high-bandwidth Internet connections
permit large numbers of players to play together, and have thus found particular use in
massively multiplayer online RPGs such as Tanarus and persistent online games such as World War II
Online.
Although it is possible to take part in online computer games using dial-up
modems, broadband Internet connections are generally considered essential to minimize
the latency between players (commonly known as 'lag'). Such links require a broadband-
compatible modem connected to the personal computer through a network interface card
(generally incorporated into the computer's motherboard), optionally separated by a
router.
Emulation
Emulation software, used to run software without the original hardware, is popular for
its ability to play legacy video games without the consoles or operating systems for
which they were designed. Console emulators such as NESticle and MAME are
relatively commonplace, although the complexity of advanced consoles such as the Xbox
or PlayStation makes them far more difficult to emulate, even for the original
manufacturers.
Controversy
PC games have long been a source of controversy, particularly related to the violence that
has become commonly associated with video gaming in general. The debate centres on
the influence of objectionable content on the social development of minors,
with organizations such as the American Psychological Association concluding that
video game violence increases children's aggression, a concern that prompted a further
investigation by the Centers for Disease Control.
Video game addiction is another cultural aspect of gaming that has drawn criticism,
as it can have a negative influence on health and on social relations. The problem of
addiction and its health risks seems to have grown with the rise of Massively
Multiplayer Online Role-Playing Games (MMORPGs). Besides the social and health
problems associated with computer game addiction, similar worries have arisen about the
effect of computer games on education.
CHECK YOUR PROGRESS
1. What is the difference between the Internet and WWW?
2. What is caching?
3. What is spamming?
4. What is a private message?
5. Why is online gaming popular?
5.6 MOBILE TELEPHONY
Most current mobile phones, except satellite phones, link to a cellular network of base
stations (cell sites), which is in turn interconnected with the public switched telephone network
(PSTN).
There are more than 4.1 billion mobile cellular subscriptions in the world.
Evolution
In 1947, Bell Labs was the first to propose a cellular network. The primary innovation was
the development of a network of small overlapping cell sites supported by a call-switching
infrastructure that tracks users as they move through the network and passes their calls from one
site to another without dropping the connection. Bell Labs installed the first commercial
cellular network in Chicago in the 1970s.
Cellular systems
Fig. 5.5 Graph Showing Increasing Mobile Phone Subscribers
Mobile phones send and receive radio signals with any number of cell site base
stations fitted with microwave antennas. These sites are usually mounted on a tower, pole or
building, located throughout populated areas, and connected to a cabled communication
network and switching system.
When a mobile phone or data device is turned on, it registers with the mobile
telephone exchange, or switch, using its unique identifiers, and can then be alerted by the
mobile switch when there is an incoming telephone call.
Cell sites have relatively low-power (often only one or two watts) radio transmitters
broadcasting their presence and relay communications between the mobile handsets and the
switch.
The exchange between a handset and a cell site comprises a stream of digital data that
includes digitized audio (except for first-generation analogue networks). The technology
that achieves this depends on the system that the mobile phone operator has deployed.
The nature of cellular technology makes many phones susceptible to 'cloning':
anytime a cell phone moves out of coverage (for example, in a road tunnel), when the signal
is re-established, the phone sends out a 're-connect' signal to the nearest cell tower,
identifying itself and signalling that it is again ready to transmit.
In 2001, third-generation (3G) networks commenced in Japan. They are all-digital and
offer high-speed data access in addition to voice services; they include WCDMA (also known
as UMTS) and CDMA2000 EV-DO. China is launching a third-generation technology
on the TD-SCDMA standard. Operators use a mix of predetermined frequency bands fixed
by the network requirements and local regulations.
Tariff models
When cellular telecom services were first set up, phones and calls were very costly. Early
mobile operators (carriers) decided to charge for all air time consumed by the mobile
phone user. This resulted in the practice of charging callers for both outbound and incoming
calls. As mobile phone call charges decreased and phone adoption rates skyrocketed,
operators resolved not to charge for incoming calls.
The European market adopted a 'Calling Party Pays' model throughout the GSM environment, and soon
various other GSM markets started to follow this model. As Receiving Party Pays systems
have the unwanted effect of phone owners keeping their phones switched off to avoid
receiving unwanted calls, the total voice usage rates (and profits) in Calling Party Pays
countries surpass those in Receiving Party Pays countries.
Impacts on human health and behavior
Mobile phone use has evoked concerns about possible health impacts from constant use. By
2008, American mobile phones received more text messages than phone calls. Though
studies have not indicated any overt health issue arising from mobile phone use, the effect of
mobile phone usage is still an area of public concern.
Upon customer request, Verizon introduced usage controls capable of monitoring service
and switching phones off, so that people (especially children) get some sleep. Attempts have
been made to restrict use by people operating automobiles or moving trains, by coaches when
writing to players on their teams, by movie theatre audiences, etc.
According to Reuters, the British Association of Dermatologists reported a certain
type of skin rash occurring on people's cheeks and ears that was caused by an allergic
reaction to the nickel surface usually found on mobile device exteriors. Another unproven
hypothesis is that the allergy could occur on the fingers if someone spends a lot of time text
messaging.
Safety concerns
Airlines have been experimenting with base station and antenna systems installed in
the aeroplane, allowing low-power, short-range connections so that any phones aboard remain
connected to the aircraft's base station. Thus, the phones would not attempt to link to the ground
base stations during take-off and landing. At the same time, airlines are looking at offering
phone services to their passengers, though initially only as text messaging and similar
services.
On 20 March 2008, an Emirates flight allowed in-flight voice calls on a commercial
airline flight for the first time in aviation history. This was after the European Aviation
Safety Agency (EASA) and the United Arab Emirates-based General Civil Aviation
Authority (GCAA) gave their approval for the AeroMobile system to be used on
Emirates. Passengers were accorded full phone facilities, making and receiving calls as well
as text messaging.
However, there exist discrepancies between the practices allowed by different
airlines, and sometimes even on the same airline in different countries; for instance,
Northwest Airlines may allow mobile phone use immediately after landing on a domestic
flight within the country, whereas they may say 'not until the doors are open' on an
international flight arriving in some other country.
Studies have also been carried out to determine the link between cell phones and brain
cancer. According to reviews of these studies, cell phone use of a decade or more shows a
consistent pattern of increased risk of glioma and acoustic neuroma. Strikingly, these tumours
are detected on the side of the head with which the phone is in most contact. However, these studies
are not yet conclusive.
Etiquette
Most schools in the US, Canada, Europe and India have forbidden mobile phones in the
classroom or in school due to the large number of class interruptions that result from their use
and the possibility of cheating via text messaging. In the United Kingdom, possessing a mobile
phone in an examination can result in immediate disqualification from that subject or from all
of that student's subjects.
A working group made up of Finnish telephone companies, public transport operators
and communications authorities has set up a campaign to remind mobile phone users of
courtesy, especially when using mass transit: what to talk about on the phone, and how to. In
particular, the campaign wishes to discourage loud mobile phone usage as well as calls regarding
sensitive matters.
Use by drivers
The habit of using mobile phones while driving has become increasingly
common, either as part of a job, as in the case of delivery drivers calling a client, or
by commuters chatting with friends or peers. While many drivers have embraced the
convenience of using their cell phone while driving, some jurisdictions have made the practice against the
law, such as Australia; the Canadian provinces of Quebec, Ontario, Nova Scotia and
Newfoundland and Labrador; and the United Kingdom, with a zero-tolerance
system operated in Scotland and a warning system operated in England, Wales and
Northern Ireland. In India, too, people are being warned against using a mobile phone while
driving. Several studies have cautioned people about driving and talking on mobile phones. One
study concluded that cell-phone drivers showed greater impairment than intoxicated drivers.
Environmental impacts
Fig. 5.6 Cellular Antenna Disguised to Look Like a Tree
Like all tall structures, cellular antenna masts pose a hazard to low-flying aircraft.
Towers over a certain height, or towers that are close to airports or aerodromes, are usually required
to have warning lights. There have been reports that warning lights on cellular masts,
TV towers and other tall structures can attract and confuse birds. US authorities estimate that
millions of birds are killed near communication towers in the country each year. Some
cellular antenna towers have been disguised to make them less obvious on the horizon and
make them look more like a tree (see Figure 5.6).
An example of the way mobile phones and mobile networks have sometimes been
perceived as a threat is the widely reported and later discredited claim that mobile phone
masts are associated with the Colony Collapse Disorder (CCD) that has cut down bee hive
numbers by up to 75 per cent in many areas, especially near cities in the US.
Usage
Fig. 5.7 Some Amtrak Trains in North America Use Cellular Technology
An increasing number of countries, especially in Europe, now have more mobile phones
than people residing there. According to estimates from Eurostat, the
European Union's in-house statistical office, Luxembourg had the highest mobile phone
penetration rate, at 158 mobile subscriptions per 100 people, closely followed by Lithuania and
Italy.
There are more than five hundred million active mobile phone accounts in
China, although the total penetration rate there stands below 50 per cent. Over the last decade,
mobile phone usage has risen in developing countries with less than adequate landline
infrastructure. This development is often cited as an example of the leapfrog effect.
Many remote regions in these countries went from having no telecommunications
infrastructure to possessing satellite-based communications systems.
India is easily the biggest market in this regard, adding over 6 million mobile phones
every month. As per a recent survey counting 256.55 million total landline and
mobile phones, market penetration was still quite low at 22.52 per cent. It is predicted that the
country will reach 500 million subscribers by the end of 2010.
Conversely, landline phone ownership is going down gradually and accounts for
about 40 million connections. All in all, there are three main technical standards for the present
generation of mobile phones and networks, and two for the next-generation 3G phones and
networks. All European and African countries and many Asian countries have adopted one
system, GSM, which is thus far the only technology available in most
countries; it covers 74 per cent of all mobile subscribers. In countries like Australia,
Brazil, Canada, Costa Rica, India, South Korea, the US and Vietnam, GSM co-exists with other
internationally adopted standards, like CDMA and TDMA, as well as national standards,
like PDC in Japan and iDEN in the US. The last five years have witnessed several dozen
mobile operators (carriers) deserting networks based on TDMA and CDMA technologies and
switching allegiance to GSM.
With third-generation (3G) networks (IMT-2000 networks), three out of four
networks are on the W-CDMA (UMTS) standard, generally seen as the natural evolution
path for TDMA and GSM networks. One in four 3G networks is based on the
CDMA2000 1x EV-DO technology.
Culture and customs
Fig. 5.8 Using Cellular Phones in Leisure
Cellular phones allow people to communicate from almost anywhere at their leisure (see
Figure 5.8).
Between the 1980s and the 2000s, the mobile phone went from being a costly
item used by business people to a pervasive, personal communication tool for the general
population. In most countries, mobile phones now outnumber landline phones, with fixed
landlines numbering 1.3 billion but mobile subscriptions 3.3 billion at the end of 2007.
In many markets, from Japan and South Korea to Scandinavia, Malaysia,
Singapore, Taiwan and Hong Kong, most children aged eight to nine have mobile
phones, and new accounts are now opened for customers aged six and seven. In Japan,
where parents mostly tend to give hand-me-down used phones to their youngest children,
new camera phones whose target age group is under 10 years of age are already on the
market, introduced by KDDI in February 2007. The US falls behind on this measure, as
about half of all children there have mobile phones. In many young adults' households, the mobile has
replaced the land-line phone. Mobile phone usage is prohibited in some countries, such as
North Korea, and restricted in some others, such as Burma.
The mobile phone can be a fashion totem, custom-decorated to showcase the owner's
personality. This aspect of the mobile telephony business is, in itself, an industry;
ring tone sales, for example, amounted to $3.5 billion in 2005.
Fig. 5.9 Use of Mobile Phone is Prohibited in Some Train Company Carriages
Mobile phone use can be a significant matter of social offence: phones ringing during
funerals or weddings, or in toilets, cinemas and theatres. Some libraries, book shops, bathrooms,
cinemas, places of worship and doctors' offices forbid their use, so that other patrons are not
disturbed by conversations. Some facilities install signal-jamming equipment to prevent their
use, although in many countries, including the US, such equipment is illegal.
Trains, particularly those involving long-distance services, often offer a 'quiet
carriage' where phone use is banned, much like the designated non-smoking carriage of the
past (see Figure 5.9). In the United Kingdom, however, many users tend to ignore this, as it is
rarely enforced, especially if the other carriages are crowded and they have no choice but to go
into the 'quiet carriage'.
Mobile phone use on aircraft is starting to be permitted, with several airlines already
offering the ability to use phones during flights. Mobile phone use during flights used to be
banned, and many airlines still claim in their in-plane announcements that this prohibition is
due to possible interference with aircraft radio communications.
Law enforcement has used mobile phone evidence in a number of different ways. Evidence
about the physical location of an individual at a given time can be obtained by triangulating the
individual's cellphone among several cellphone towers. This triangulation technique can be
used to show that an individual's cellphone was at a certain location at a certain time. The
concerns over terrorism and terrorist use of technology prompted an inquiry by the British
House of Commons Home Affairs Select Committee into the use of evidence from mobile
phone devices, prompting leading mobile telephone forensic specialists to identify the forensic
techniques available in this area.
In the United Kingdom in 2000, it was claimed that recordings of mobile phone
conversations made on the day of the Omagh bombing were important to the police investigation. In
particular, calls made on two mobile phones, which were tracked from south of the Irish border to
Omagh and back on the day of the bombing, were considered of vital significance. A further
example of criminal investigations using mobile phones is the initial location and eventual
identification of the terrorists behind the 2004 Madrid train bombings. In the attacks, mobile
phones had been used to set off the bombs.
Disaster response
The Finnish government determined in 2005 that the quickest way to warn citizens of
disasters was the mobile phone network. In Japan, mobile phone companies provide quick
notification of earthquakes and other natural disasters to their customers free of charge. In the
event of an emergency, disaster crews can locate trapped or injured people using the
signals from their mobile phones. An interactive menu accessible through the phone's
Internet browser notifies the company whether the user is safe or in distress.
However, most mobile phone networks operate close to capacity during normal
times, and spikes in call volumes caused by widespread emergencies often overload the
system just when it is needed the most. Examples reported in the media where this has taken
place include the 11 September 2001 attacks, the 2003 Northeast blackout, the 2005 London
Tube bombings, the 2007 Minnesota bridge collapse, the 2006 Hawaii earthquake and
Hurricane Katrina.
5.7 INTERACTIVE TELEVISION
Interactive television (iTV) describes a number of techniques that allow viewers to interact
with television content as they view it.
Interactive television spans a continuum from low interactivity (TV on/off, volume, changing channels)
to moderate interactivity (simple movies on demand without player controls) and high
interactivity in which, for example, an audience member affects the programme being
telecast.
To be truly interactive, the viewer must be capable of altering the viewing
experience, e.g., choosing which angle from which to watch a football match, or of giving information to the
broadcaster. This may be done through SMS (text messages), telephone, radio, cable or
asymmetrical digital subscriber lines (ADSL).
Cable TV viewers receive their programmes through a cable, and in integrated
cable return-path-enabled platforms they use the same cable as a return path.
Satellite viewers (mostly) return data to the broadcaster through their regular telephone
lines and are charged for this service on their regular telephone bill. An Internet connection
via ADSL, or other data communications technology, is also being increasingly
employed.
Interactive TV can also be linked through a terrestrial antenna (Digital Terrestrial TV
such as ‘Freeview’ in the United Kingdom). In this case, there is often no ‘return path’ – so
data cannot be remitted to the broadcaster (so you could not, for instance, vote on a TV show,
or order a product sample). Progressively, the return path is becoming a broadband IP
connection, and some hybrid receivers are now capable of exhibiting video from either the IP
connection or from traditional tuners.
Types of interaction
The term ‘interactive television’ is applied to denote a variety of rather different types of
interactivity – both as to usage and as to technology . At least three very different levels are
important.
1. Interactivity with a TV set
Interactivity with a TV set is already very common, beginning with the use of the remote
control for channel browsing and developing to include video-on-demand,
video cassette recorder (VCR)-like pause, rewind and fast forward, and digital video recorders'
(DVRs) commercial skipping and the like. It does not alter any content or its inherent
linearity, only how users control the viewing of that content. DVRs permit users to time-
shift content in a way that is impractical with VHS. Though this form of iTV is common,
many people claim that it does not truly make a television interactive. In the not-too-
distant future, however, the question of what counts as real interaction with the TV will be harder to answer.
Panasonic has already implemented face-identification technology in its prototype Panasonic Life
Wall. The Life Wall is literally a wall in the house that doubles as a screen.
2. Interactivity with TV programme content
Interactivity with TV programme content is essentially what 'interactive TV' means, but
it is also the most challenging to develop (see Figures 5.10 and 5.11).
Fig. 5.10 Open TV Program Browser Set-Top Box Resident Software
Fig. 5.11 Open TV E-mail Browser Set Top Box Resident Software
The premise is that the programme might alter on the basis of viewer input. Advanced
forms, still not mainstream, include dramas where viewers have the power to influence plot
details and endings. Simpler, and more successful, forms include programmes that directly
incorporate comments, polls, questions and other forms of (virtual) audience response
back into the show. How popular and effective this kind of truly interactive TV can be is
debatable. Though it seems likely that some forms of the idea will be popular, the viewing of
predefined content will remain a key component of the TV experience for a long time to
come.
Commercial broadcasters and other content suppliers serving the US market are
held back from pursuing advanced interactive technologies because they depend on
the penetration of interactive technology into viewers' homes, must earn a level of return on
investment for their investors, and must serve the desires of their customers, subject
to many factors, such as the following:
• Consumer acceptance of the pricing structure for new TV-delivered services. Over-the-
air (broadcast) TV is free in the US, free of taxes or usage fees.
• The competition from Internet-based content and service providers for the consumers’
attention and budget.
• Requirements for backward compatibility of TV content formats, form factors and
Customer Premise Equipment (CPE)
• The ‘cable monopoly’ laws that are in force in many communities served by cable TV
operators.
• Proprietary coding of set top boxes by cable operators and box manufacturers
• Technical and business road blocks.
• The ability to implement ‘return path’ interaction in rural areas that have low, or no
technology infrastructure.
3. Interactivity with TV-related content
The least realized form, interactivity with TV-related content, may hold the most promise to alter how
we watch TV over the next decade. Examples include getting more information about what is
on the TV, be it movies, news, sports or the like.
Similar to this is getting more information about what is being advertised, and the
ability to buy it; this is called television commerce (t-commerce). This kind of multitasking is
already happening on a large scale, but there is currently little or no automated support for
relating that secondary interaction to what is on the TV, compared to other forms of
interactive TV. Partial steps in this area are already a widespread phenomenon, as websites and
mobile phone services align with TV programmes. Others argue that this is more 'web-
enhanced' television viewing than interactive TV.
Many think of interactive TV mainly in terms of 'one-screen' forms that require
interaction on the TV screen using the remote control, but there is another important form of
interactive TV that makes use of two-screen solutions, such as NanoGaming. In this case,
the second screen is typically a PC linked to a website application.
Notable two-screen solutions have been deployed for specific popular programmes by
many US broadcast TV networks. Today, two-screen interactive TV is named either '2-screen'
(for short) or 'Synchronized TV' and is widely spread around the US by national broadcasters
with the help of technology offerings from certain companies.
One-screen interactive TV generally demands special support in the set-top box, but
two-screen, synchronized interactive TV applications mostly do not, relying instead
on Internet or mobile phone servers to synchronize with the TV, and are most often free to the user.
Interactive TV services
Famous interactive TV services are listed as follows:
• BBC Red Button
• T-commerce – Sales transactions through the television.
• Ensequence – Provides solutions that enable programmers, advertisers and
distributors to produce and distribute interactive TV experiences that increase
programming ratings, advertising response and audience reach. The company
mitigates the technical complexities of interactive TV implementation and enables
its customers to rapidly and affordably deliver a high volume of robust interactive TV
experiences across cable, satellite and IPTV.
• TiVo
• Philips Net TV – Solution to view Internet content designed for TV; directly
integrated inside the TV set. No extra subscription costs or hardware costs involved.
• ATVEF – The 'Advanced Television Enhancement Forum' is a group of companies
established to create HTML-based TV products and services. ATVEF's work has
resulted in an Enhanced Content Specification, making it possible for developers to
create their content once and have it display properly on any compliant receiver.
• MSN TV – MSN TV supplies computerless Internet access. It requires a set-top box
that sells for $100 to $200, with a monthly access fee.
User Interaction
Interactive TV is often depicted by marketing gurus as a 'lean-back' interaction, as
users are typically relaxing in the living-room environment with a remote control in one hand.
This is a very simple characterization of interactive television that is less and less descriptive of
interactive television services in various stages of market introduction. It contrasts
with the similarly slick, marketing-invented descriptor of the personal-computer-oriented
'lean-forward' experience of a keyboard, mouse and monitor. The depiction is becoming
more misleading than useful: video game users, for example, do not lean forward while they
are playing video games on their television sets, a precursor to interactive TV. A more useful
mechanism for categorizing the differences between PC- and TV-based user interaction is
measuring the distance the user sits from the device. Typically a TV viewer is 'leaning back' in
the sofa, using only a remote control as a means of interaction, while a PC user sits 2 or 3
feet from his high-resolution screen using a mouse and keyboard.
In the case of two-screen-solution interactive TV, the distinctions between 'lean-back' and
'lean-forward' interaction become less and less clear-cut. There has been an increasing tendency
towards media multitasking, in which multiple media devices are used at the same time (especially
among younger viewers). This has renewed interest in two-screen services and is creating a
new level of multitasking in interactive TV.
For one-screen services, interactivity is provided through the API of the
particular software installed on a set-top box, referred to as 'middleware' due to its intermediary
place in the operating environment. Software programs are delivered to the set-top box
in a 'carousel'.
Interactive TV sites have the ability to deliver interactivity directly from Internet
servers, and therefore need the set-top box's middleware to support some sort of TV
browser, content-translation system or content-rendering system. Middleware examples like
Liberate are based on a version of HTML/JavaScript and have rendering capabilities
built in, whereas others, such as OpenTV and DVB-MHP, can load microbrowsers and
applications to present content from TV sites.
Typically, the distribution system for standard-definition digital TV is based on
the MPEG-2 specification, whereas high-definition distribution is likely to be based on
MPEG-4, meaning that the delivery of HD often requires a new device or set-top box, which
typically is then also able to decode Internet video through broadband return paths.
Interactive television projects
Some interactive television projects are consumer electronics boxes providing set-top
interactivity, whereas other projects are furnished by the cable television
companies (or multiple system operators, MSOs) as a system-wide solution. Still other,
newer approaches integrate the interactive functionality into the TV itself, thus eliminating the need for a
separate box. Some examples of interactive television include the following:
• Hospitality and healthcare solutions.
• MSOs
o Cox Communications (US)
o Time Warner (US)
o Comcast (US)
o Cablevision (US)
• Two-screen Solutions or ‘enhanced TV’ solutions
o Enhanced TV
• Previous MSO trials or demos
o TELE TV from Bell Atlantic, Nynex, Pac Tel, CAA (US) – no longer in
operation.
o Full Service Network from Time Warner (US) – no longer in operation.
o Smartbox from TV Cabo, Novabase and Microsoft (PT) – also no longer
in operation, although some of the equipment is still used for the digital TV
service. This was the pioneer project.
o LodgeNet
• Consumer electronics solutions
o TiVo
o Replay
o UltimateTV
o Miniweb
o Microsoft Windows XP Media Center
o Philips Net TV
Mobile phone interaction with the STB and the TV
• ITEA WellCom Project using Bluetooth and NFC pairing, authentication and
seamless service access.
Interactive video and data services
IVDS is a wireless implementation of interactive TV, utilizing part of the VHF TV frequency
spectrum.
5.8 HYPERMEDIA
Hypermedia is used as a logical extension of the term 'hypertext', in which graphics,
audio, hyperlinks, plain text and video interlace to create a generally non-linear medium of
information. This contrasts with the broader term multimedia, which may be used to describe
non-interactive linear presentations as well as hypermedia. Hypermedia is also linked to the field of
electronic literature; the term 'hypermedia' was first used in a 1965 article by Ted Nelson.
The WWW is a classic instance of hypermedia whereas a non-interactive cinema
presentation is an example of standard multimedia because of the absence of hyperlinks.
The first hypermedia work was, arguably, the Aspen Movie Map. Atkinson's
HyperCard popularized hypermedia writing. Most modern hypermedia is delivered
via electronic pages from a variety of systems, including web browsers, media players
and stand-alone applications. Audio hypermedia is emerging with voice-command devices and
voice browsing.
5.8.1 Hypermedia Development Tools
Hypermedia may be developed in a number of ways. Any programming tool can be
used to write programs that link data from internal variables and nodes for external data
files. Multimedia development software, such as Adobe Director, Adobe Flash, Macromedia
Authorware and MatchWare Mediator, may be used to create complete hypermedia
applications, with an emphasis on entertainment content. Some database software, such as Visual
FoxPro and FileMaker Developer, may be used to develop stand-alone hypermedia
applications, with an emphasis on educational and business content management.
Hypermedia applications may be developed on embedded devices for the mobile and
digital signage industries using the scalable vector graphics (SVG) specification from W3C.
Software applications such as Ikivo Animator and Inkscape facilitate the development of
hypermedia content based on SVG. Embedded devices, such as the iPhone (refer to Figure
C14), natively support SVG specifications and may be used to create mobile and distributed
hypermedia applications.
Hyperlinks may also be added to data files using most business software
through the limited scripting and hyperlinking features built in. Documentation software,
such as the Microsoft Office Suite, allows hypertext links to other content within the same
file, to other external files, and URL links to files on external file servers. For more emphasis on
graphics and page layout, hyperlinks may be added using most modern desktop-publishing
tools. These include presentation programs, such as Microsoft PowerPoint; add-ons to print
layout programs, such as Quark Immedia; and tools for including hyperlinks in PDF documents,
such as Adobe InDesign for creation and Adobe Acrobat for editing. HyperPublish is a tool
specifically designed and optimized for hypermedia and hypertext management. Any HTML
editor may be used to build HTML files, accessible by any web browser. CD/DVD
authoring tools, such as DVD Studio Pro, may be used to hyperlink the content of DVDs for
DVD players, or to web links when the disc is played on a personal computer connected to the
Internet.
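At its simplest, hypermedia authoring needs nothing more than a text file of HTML. The sketch below writes a minimal page weaving text, an image and hyperlinks into a non-linear document; the file names and link targets are invented for illustration.

```python
# Minimal hypermedia page as an HTML string; targets are placeholders.
PAGE = """<!DOCTYPE html>
<html>
  <body>
    <h1>A Tiny Hypermedia Page</h1>
    <p>Plain text with a <a href="chapter2.html">link to more text</a>,
       an inline image <img src="diagram.png" alt="diagram">, and
       <a href="clip.mp4">a link to a video clip</a>.</p>
  </body>
</html>
"""

with open("index.html", "w", encoding="utf-8") as f:
    f.write(PAGE)  # open the result in any web browser
```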
CHECK YOUR PROGRESS
6. When did mobile telephony start?
7. Define interactive TV.
8. What is ATVEF?
9. What is hypermedia?
10. Name some hypermedia development tools.
5.9 SUMMARY
• A system of interconnected hypertext documents contained on the Internet is known
as the World Wide Web.
• An Internet forum, or message board, is an online discussion website. It arose as the
modern equivalent of the traditional bulletin board and a technological evolution of the
dial-up bulletin board system.
• Most current mobile phones link to a cellular network of base stations (cell sites),
which is in turn interconnected with the public switched telephone network (PSTN);
the exception is satellite phones.
• In the case of computer games, there is acoustic, visual and haptic communication
between the user (player) and the game.
• In mobile telephony, communication occurs between two people and is, at first
glance, strictly acoustic.
• Interactive television constitutes a continuum from low interactivity (TV on/off, volume,
changing channels) to moderate interactivity (simple movies on demand without
player controls) and high interactivity in which, for example, an audience member
affects the programme being telecast.
• Hypermedia is used as a logical extension of the term 'hypertext', in which graphics,
audio, hyperlinks, plain text and video interlace to create a generally non-linear
medium of information.
5.10 KEY TERMS
• Interactive media: A form of collaborative media that denotes the media which
permits dynamic involvement by the recipient.
• World Wide Web: A system of interconnected hypertext documents contained on the
Internet.
• Internet forum: Also called a message board, this is an online discussion website. It
arose as the modern equivalent of the traditional bulletin board and a technological
evolution of the dial-up bulletin board system.
• Personal computer game: A game played on a personal computer, rather than on a
video game console or arcade machine.
• Interactive television (iTV): Techniques that allow viewers to interact with
television content as they view it.
5.11 END QUESTIONS
Short-Answer Questions
1. What do you understand by interactive media?
2. What is WWW?
3. What are Internet forums? What are their rules and policies?
4. Define computer games.
5. How does a mobile telephone work?
6. How important is user interaction in interactive TV?
7. What is hypermedia? What are its functions?
Long-Answer Questions
1. Explain the origin and functions of the WWW.
2. What is the structure of an Internet forum?
3. Trace the development of PC games.
4. What is mobile telephony? What is its usage?
5. Explain interactive television. What is the difference between interactivity with a TV set
and interactivity with TV programme content?
5.12 BIBLIOGRAPHY
'Tim Berners Lee: Time 100 People of the Century', Time Magazine,
http://www.time.com/time/time/100/scients/profile/bernerslee.html.
Wardrip-Fruin, Noah and Nick Montfort (eds). 2003. The New Media Reader.
Cambridge, MA: The MIT Press.
Kunert, Tibor. 2009. User-Centered Interaction Design Patterns for Interactive Digital
Television Applications. London: Springer.
UNIT 6 MULTIMEDIA HARDWARE
Program Name: BSc(MGA)
Written by: Srajan
Structure:
6.0 Introduction
6.1 Unit Objectives
6.2 Multimedia Computers
6.2.1 Design and Technical Information
6.2.2 Origin of Tramel Technology
6.2.3 Operating System
6.3 Debut of the ST
6.3.1 Port Connections
6.3.2 Mega and Later Models
6.3.3 Applications
6.3.4 Other Models
6.4 Input Devices
6.4.1 Video Material
6.4.2 Voice Input
6.4.3 Digital Camera
6.4.4 Helping People with special Needs
6.4.5 Data Accuracy
6.5 Output Devices
6.5.1 Screen Size
6.5.2 Imaging Technologies
6.5.3 Performance Measurements
6.5.4 Display Interface
6.5.5 Modern Technology
6.5.6 Flexible Display Monitors
6.5.7 Configuration and Usage
6.5.8 Additional Features
6.5.9 Touchscreen Technologies
6.5.10 Construction of a Touchscreen
6.5.11 Development of Touchscreens
6.5.12 Ergonomics and Usage
6.5.13 Tablet Screens
6.5.14 Speakers
6.6 End User Hardware Issues
6.6.1 Description of a Bus
6.6.2 Bus Topology
6.7 Summary
6.8 Key Terms
6.9 Questions and Exercises
6.10 Further Reading
6.0 INTRODUCTION
In this unit, you will learn about multimedia computers. A computer which is optimized
for high multimedia performance is known as a multimedia computer. The Amiga 1000 was
the first multimedia computer from Commodore International.
In addition, you will learn about input and output devices. In order to capture sound,
images or any graphical data, special input devices are required. For personal computers,
such input devices consist primarily of electronics on a separate card, such as a sound card or
a video card, which is normally installed in the computer. A monitor or display, sometimes
also called a visual display unit, is an electronic device that displays images generated
by other devices, such as computers, without producing a permanent record. The components
of a monitor include the display device, circuitry and an enclosure.
This unit will also introduce you to end-user hardware issues. In earlier times,
computer buses were literally parallel electrical buses with many connections; these days,
the term is used for any physical arrangement that provides the same logical
functionality as a parallel electrical bus.

6.1 UNIT OBJECTIVES:


After going through this unit, you will be able to:
• Understand multimedia computers
• Describe the versions of the Amiga 1000
• Explain the features and technical specifications of the Atari ST
• Discuss input and output devices
• Know about the configuration and usage of input and output devices
• Explain end-user hardware issues
6.2 MULTIMEDIA COMPUTERS
Earlier desktop computers lacked the power and storage necessary for multimedia.
Games and demo scenes on these systems were nonetheless able to achieve high sophistication and
technical polish using only simple, blocky graphics and digitally generated sound. The
Amiga 1000 (A1000) was the first multimedia computer from Commodore International.
The animation, graphics and sound technologies of this computer enabled multimedia content to
develop.
Until the advent of Windows 3.0 and the MPC standards in the early 1990s, most
IBM PCs were not developed with multimedia capabilities. The earlier PCs were
developed purely as business machines, so colourful graphics and powerful sound abilities were not a
priority in those days. Thus, there were only a few games available, and they made do with slow video
hardware, low-quality PC speaker sound and a limited colour palette when compared to modern
versions.
Nowadays, PCs have acquired many good multimedia features. PCs now have single-
or dual-core CPUs with a processor speed of 3.0 GHz or faster, 1 GB of RAM, a 128 MB or
higher video card and a TV tuner card.
6.2.1 Design and Technical Information
The A1000 was different from the later Amigas. It was the only model that featured the short-
lived Amiga checkmark logo on its case. Also, there was a 'keyboard garage' for stowing the
keyboard when not in use; thus, the case was slightly elevated. The inside of the case was
engraved with the signatures of the Amiga designers.
The A1000's 68000 CPU ran at a frequency of 7.15909 MHz. However, for the
phase alternating line (PAL) machines the frequency was 7.09 MHz. The NTSC figure is just double
the National Television System Committee (NTSC) colour carrier frequency, which is 3.58 MHz,
as needed by the Amiga chipset when outputting NTSC video. All frequencies in the
A1000 are derived from this frequency, as this simplified glue logic and allowed the A1000 to
be manufactured with a single mass-produced crystal, which was also cost-effective. The
chipset was designed to synchronize all operations, so the hardware always ran in 100 per
cent real time without any wait-state delays.
The A1000 had two versions. The first version had an NTSC display; however, it lacked the
EHB video mode that all other models of the Amiga had. The second version had a PAL
display and the built-in EHB video mode.
6.2.2 Origins of Tramel Technology
Jack Tramiel, the founder of Commodore International, resigned and started a holding
company, Tramel Technology Limited. He visited different computer companies, including
Mindset, run by Roger Badersher, the former head of Atari's Computer Division, and
Amiga, as he wanted to buy a company. Tramiel's chief engineer, Shiraz Shivji, was given
the task of developing a new low-cost but high-end computer system. The original design
was made using the NS32032; however, National Semiconductor could not supply the chip
in the numbers or at the price that the project required.
Before the introduction of the ST computers, Atari's computer division developed home
computers based on the 6502 CPU. These used custom VLSI processors, such as ANTIC
(DMA), CTIA/GTIA (graphics), POKEY (audio) and PIA (I/O), and were sold from 1979
till 1982 as the Atari 400 (16K) and Atari 800 (48K). Atari introduced the 1200XL in 1982,
which was later replaced by the 600XL/800XL series.
The two home computer rivals, Atari and Commodore, effectively swapped 16/32-bit
platforms, with the ST designed by ex-Commodore engineers and the Amiga by ex-Atarians.
Atari had developed several prototypes of computers claimed to be superior to the Amiga
and ST. This led to the development of a rivalry between Atari and Amiga owners.
6.2.3 Operating System
After completing the hardware design, the team looked for solutions regarding the
operating system. Following the buyout, Microsoft approached Tramiel and offered to port
Windows to the platform; however, the delivery date was two years away, too far off for
Atari's needs. The Digital Research team was then approached. The latter was working on a
new GUI-based system then known as Crystal, which later became known as GEM.
As Digital Research was fully committed to the Intel platform, a team from Atari was
sent to the Digital Research headquarters to work along with the 'Monterey Team', which
included people from both Atari and Digital Research. The resulting operating system,
CP/M-68K, was essentially a direct port of the original CP/M.

Fig. 6.1 Mature Operating System


By 1985, this operating system was becoming increasingly outdated in comparison with
MS-DOS 2.0 (see Figure 6.1). For instance, CP/M did not support sub-directories or a
hierarchical file system.

6.3 DEBUT OF THE ST


The Atari ST was part of the 16/32-bit generation of home computers. It was based on the
Motorola 68000 CPU, with 512 KB of RAM or more, and used 3½-inch double-sided floppy
disks as the storage unit (nominally 720 KB). In this it was similar to other machines using
the Motorola 68000, such as the Commodore Amiga and the Apple Macintosh. The latter
was the first widely available computer with a graphical user interface (GUI); however, its
interface was limited to a monochromatic display on a small built-in monitor. Released
before the Amiga's commercial debut, the Atari ST was the first computer with a fully bit-
mapped colour GUI, running a version of Digital Research's GEM. It was also the first
home computer with integrated MIDI support.
In 1985, the Atari 520ST was officially launched in Las Vegas (see Figure 6.2). Because
of its similarity to the original Apple Macintosh, and its association with Jack Tramiel, it
was quickly nicknamed the 'Jackintosh'. By mid-1985 it was ready for general retail sale.

Fig. 6.2 The Atari 520ST


The ST could support either a monochrome or a colour monitor. The monochrome
monitor was less expensive and had a higher resolution (640 x 400). The hardware could
support two different colour resolutions: 320 x 200 with 16 colours, or 640 x 200 with four
colours. The monochrome monitor was better suited to business applications, while a colour
monitor was required by many games.
The 520ST was an all-in-one unit, similar to earlier home computers such as the
Commodore 64, but by the time it reached the market consumers demanded a keyboard with
cursor keys and a numeric keypad. The resulting layout made the 520ST a rather large and
awkward computer console.
The design also required a large number of cables to connect peripherals. To address
this problem, follow-on models included a built-in floppy disk drive. However, drawbacks
remained, such as the awkward placement of the mouse and joystick ports, which were
cramped below the keyboard.
Ira Valenski, Atari's chief industrial designer, created the case design, whose aesthetic
sensibility gave the machine an attractive look. This too was a factor which helped sales.

Fig. 6.3 Atari 520ST Ports


6.3.1 Port Connections
The rear of the ST featured a large number of mounted ports (see Figure 6.3).
Due to its bi-directional design, the Centronics printer port could be used for joystick
input, and several games made use of adaptors that plugged into the printer socket,
providing two additional 9-pin joystick ports.

Fig.6.4 Atari 1040STF


Atari upgraded the basic design in 1986 with the 1040STF, also written STF (see Figure
6.4). The machine was broadly similar to the earlier 520ST, but moved the power supply
and a double-sided floppy drive into the rear of the computer's housing. This added to the
size of the machine but reduced cable clutter at the back.
6.3.2 Mega and Later Models
Initially, the sales of these computers were strong, especially in Europe, where Atari sold
almost 75 per cent of its computers. Most of Atari's computers were sold in Germany, which
became the strongest market for this type of computer; most small business users bought
them for desktop publishing and CAD. Later on, Atari focused on manufacturing and
distribution problems, so no major design change was undertaken for about four years.
6.3.2.1 ST enhanced
In 1989, Atari released the STE (also written ST E), a version of the ST with improvements
to the multimedia hardware and operating system. The STE featured a colour palette
increased to 4,096 colours from the ST's 512, though the maximum displayable palette
without programming tricks was still limited to 16 colours in the lowest 320 x 200
resolution, and even fewer in higher resolutions.
At the beginning, the STE model had software and hardware conflicts that left some
applications and games originally written for the ST line, in the worst cases, completely
non-functional; this was chiefly caused by programs making hardware calls that bypassed
the operating system. Expanding the RAM could at times resolve the incompatibility.
6.3.2.2 The 68030 machines
In 1990, Atari released the high-end, workstation-oriented TT030, based on a 68030 CPU
running at 32 MHz. The name continued the bit-based nomenclature: the 68030 was a full
32-bit chip with 32-bit internal and external registers, hence 'TT' (thirty-two/thirty-two).
The TT included improved graphics and more powerful support chips, and the new design
integrated a hard drive enclosure into the case.
Aftermath
In 1993, in order to focus on the Jaguar games console, Atari cancelled development of
the ST computers.
Despite the lack of a hardware supplier, a small active community remained dedicated
to keeping the ST platform alive. There have been many advancements in the operating
system, as well as software and hardware emulators for Windows, Mac and Linux.
Accelerator cards have also appeared, such as the CT60 and CT63, 68060-based accelerator
cards for the Falcon, and the Atari Coldfire Project aims at developing an Atari clone based
on the Coldfire processor. Milan Computer of Germany also made 68060-based Atari clones
that could run either Atari TOS 4.5 or Milan Computer's MultiOS operating system.
Software: Music/sound. The ST was the first home computer with inbuilt MIDI ports, and
plenty of MIDI-related software was available, used professionally in music studios and by
amateur enthusiasts. The popular Windows and Macintosh applications known as Cubase
and Logic Pro originated on the Atari ST. KCS was another popular and powerful ST music
sequencer application. It had a multi-program environment allowing ST users to run other
applications, such as the synthesizer patch-editing software XoR, from within the
sequencer. The Atari ST is still used today by some musicians, such as Fatboy Slim, for
composing music.
Music tracker software, such as the TCB Tracker, was also popular on the ST, helping
in the production of quality music from the Yamaha synthesizer chip (chiptunes).
Musicians mainly used the ST because it was cheaper, had built-in MIDI ports and was
very fast, with a low-latency response.
6.3.3 Applications
The ST was also used with professional desktop publishing software, such as PageStream
and Calamus; word processors, such as WordPerfect, Microsoft Write, AtariWorks, and so
on; spreadsheets, such as LDW Power, LDW Power 2, PowerLedger ST, VIP Professional,
etc.; turnkey programs, such as Mail-Pro, Sales-Pro 6, Video-Pro, and so on; database
programs, such as Data Manager, Data Manager Professional, DBMan V, Base Two,
Informer II, DB Master One, SBT Database Accounting Library, and so on; and various
other CAD and CAM tools from amateur hobbyist to professional grade. All were largely
targeted at, or even limited to, owners of high-resolution monochrome monitors.
6.3.3.1 Software development
The Atari ST had a wide variety of languages and tools for development, such as 68000
assemblers, Pascal, Modula-2, C compilers, Prolog, Logo and many others.
The initial development kit from Atari consisted of a computer and manuals. At about
$5,000, it attracted few developers to the ST. Later, the Atari Developer's Kit included the
software and manuals, but not the hardware, for $300. The kit included a resource kit, a C
compiler, a debugger and a 68000 assembler, along with a non-disclosure agreement.
6.3.3.2 Games
The ST was a success in gaming because of its low cost, fast performance and colourful
graphics.
People who contributed their efforts to developing games on the ST included Peter
Molyneux, Doug Bell, Jeff Minter, Jez San and David Braben. Dungeon Master, the first
real-time 3D role-playing computer game, was first developed and released on the ST, and
was also the best-selling software ever produced for the platform. Simulation games like
Falcon and Flight Simulator II used the enhanced graphics found in the ST machines, as did
many arcade ports. One game, MIDI Maze, used the MIDI ports to connect machines for
interactive network play, anticipating the LAN games which became popular in the early
1990s. Games released simultaneously on the Amiga often had identical graphics and
sound, and were often criticized by computer game magazines as being simple ST ports.
6.3.3.3 Utilities/misc
Utility software, such as the Video Digitiser, was available to drive hardware add-ons.
Office productivity and graphics software was also available for the ST, which featured
HyperPaint II by Dimitri Koveos, HyperDraw by David Farmborough, the 3D-Calc
spreadsheet by Frank Schoonjans, and several other titles commissioned by Bob Katz, later
of Electronic Arts.
Long before public Internet access, there was a steady output of public domain and
shareware software, distributed by public domain software libraries and advertised in
magazines and on popular dial-up bulletin board systems.
6.3.4 Other Models
The Apple Macintosh II was the first computer model in the Macintosh II series. It was
not similar to, and must never be confused with, the Apple II family of non-Macintosh
computers.
The Macintosh II was the first modular Macintosh model and retailed for a US$3,898
base price (for the CPU unit only). It was notable for its horizontal desktop case, like many
PCs prevalent at the time; all previous Macintosh computers had an all-in-one design with a
built-in black-and-white CRT.
The Macintosh II also had space for an internal hard disk of about 20 or 40 MB, as
well as an optional second floppy disk drive. Along with the Macintosh SE, it was the
brand's first computer to use the Apple Desktop Bus (ADB), introduced with the Apple
IIGS, as the mouse and keyboard interface.
The Macintosh II was designed by hardware engineers Michael Dhuey, who developed
the computer, and Brian Berkeley, who developed the monitor. A basic system with a 20
MB drive and monitor cost about $5,200; a complete colour-capable system with colour
monitor, video card, hard disk, keyboard and RAM could cost as much as $10,000.
The Mac II was introduced in 1987. It featured a Motorola 68020 processor operating at
16 MHz, teamed with a Motorola 68881 floating point unit. The machine contained a socket
for an MMU, but the Apple HMMU chip (VLSI VI475) that was installed did not implement
virtual memory; software support for virtual memory was not released until 1990. Standard
memory was 1 megabyte, expandable up to 68 MB; this, however, required the special
FDHD upgrade kit, without which the maximum memory was 20 MB.
The Macintosh II and the Macintosh SE were the first Apple computers since the
Apple I to be sold without a keyboard. The customer was offered a choice of the new ADB
Apple Keyboard or the Apple Extended Keyboard.
Fig. 6.5 Macintosh II Motherboard

A series of related models came after the Macintosh II (see Figure 6.5): the Macintosh IIx
and Macintosh IIfx, both of which used the Motorola 68030 processor. A Macintosh II could
be upgraded to a Macintosh IIx or IIfx by swapping the motherboard. The Macintosh II was
the first system to sound the 'Chimes of Death' and show the 'Sad Mac' logo whenever a
serious hardware error occurred.
Defects in units produced until about November 1987 were corrected through
upgrades. The original ROMs in the Macintosh II contained a bug which prevented the
system from recognizing more than one megabyte of memory address space on a NuBus
card; for example, if a video card with four megabytes of video RAM was installed, only
one megabyte of video RAM would be recognized by the system. New extensions
introduced for the Macintosh II in this period were A/ROSE and the Sound Manager.

CHECK YOUR PROGRESS


1. Which computer is known as the first multimedia computer?
2. What were the differences between the A 1000 and later Amigas?
3. How many versions of the A 1000 are there?
4. What is the 520ST?
5. Which was the first computer with inbuilt MIDI ports?
6. Why was the ST successful in gaming?

6.4 INPUT DEVICES


The combination of sound and images with text and graphics is known as multimedia. In
order to capture sound, images or any other graphical data, special input devices are
required. In PCs, such input devices consist primarily of electronics on a separate card,
such as a sound card or a video card, installed in the computer.
6.4.1 Video Material
Earlier, video material was recorded into the computer using a video camera or a video
recorder. Since video data requires large storage space, video segments in personal
computer applications were often limited to only a few seconds. With improvements in
video electronics and software, and larger-capacity storage devices, the space required for
movies and other video data could be provided, and longer segments eventually became
available.

6.4.1.1 Input devices

The input devices are summarized in Table 6.1. While using any of these devices to
input data, it is important to ensure that the data is entered accurately. Various procedures
are used to help ensure data accuracy.
Table 6.1 Summary of Input Devices

DEVICE               DESCRIPTION

Keyboard             Most commonly used input device; special keys may include a
                     numeric keypad, cursor control keys and function keys.

Mouse or trackball   Used to move the pointer and select options.

Joystick             Stem device often used as an input device for games.

Pen input            Uses a pen to input and edit data and select processing options.

Touch screen         User interacts with the computer by touching the screen with a
                     finger.

Light pen            Used to select options or draw on screen.

Digitizer            Used to enter or edit drawings.

Graphics tablet      Digitizer with special processing options built into the tablet.

Image scanner        Converts text, graphics or photos into digital input.

Optical recognition  Uses a light source to read codes, marks and characters.

MICR                 Used in banking to read magnetic-ink characters on cheques.

Data collection      Used in factories and warehouses to input data at the source.

Sound input          Converts sound into digital data.

Voice input          Converts speech into digital data.

Digital camera       Captures a digital image of a subject or object.

Video input          Converts video into digital data.

6.4.2 Voice Input


Voice input is also referred to as speech or voice recognition. It allows the user to enter
data and issue commands to the computer with spoken words. Some experts consider
that voice input may gradually become the most common and easiest way to operate a
computer. They point out that people can speak much faster than they can type: a person
can speak approximately 200 words per minute but can type only about 40 words per
minute. They also feel that speaking is a more natural way of communicating than using
a keyboard, since a keyboard takes time to learn and use efficiently.
6.4.2.1 Four areas for voice input

The areas where voice input is used are data entry, command and control, speaker
recognition and speech-to-text. Voice data entry suits tasks where questions can be answered
verbally with a limited number of acceptable responses; for example, product inspection in
manufacturing companies uses voice data entry systems. Rather than manually recording
data or using a keyboard, the inspector dictates the completed product information into a
microphone, remaining focused on the item being inspected. Command and control
applications use a limited vocabulary of words that cause the computer to perform a specific
action, such as saving or printing a document. Command and control applications can also
be used to operate certain types of industrial machinery.
6.4.2.2 Voice input system
Various hardware and software systems are used to convert spoken words into data for
entry into the computer system. The process of converting voice input is as follows (a
small sketch of the matching step appears after the list):
(i) The user's voice consists of sound waves that are converted into digital form by
digital signal processing (DSP) circuits, usually on a separate board added to
the computer.
(ii) The digitized voice input is compared with the patterns stored in the voice
system's database.
(iii) Word conflicts are resolved using grammar rules. Based on how a word is used,
the computer can usually identify the correct word among words that sound
similar, such as to, too and two.
(iv) Words that are not recognized by the computer are left for the user to identify,
especially in lower-cost systems with limited vocabularies. In many voice input
systems, the user has to train the system to recognize his or her voice.
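The matching in step (ii) can be pictured with a minimal Python sketch. This is only an
illustration of the idea of comparing a digitized utterance against stored patterns: real
recognizers use far richer acoustic features and statistical models, and the feature vectors,
vocabulary and threshold below are purely hypothetical.

    # Hypothetical template matching: compare an utterance's feature
    # vector against stored voice patterns (step ii) and fall back to
    # 'unrecognized' when nothing is close enough (step iv).
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def recognize(features, pattern_database, threshold=1.0):
        best_word, best_dist = None, float("inf")
        for word, pattern in pattern_database.items():
            d = distance(features, pattern)
            if d < best_dist:
                best_word, best_dist = word, d
        return best_word if best_dist <= threshold else None

    # Assumed training data: one stored pattern per vocabulary word.
    database = {"save": [0.2, 0.9, 0.4], "print": [0.8, 0.1, 0.5]}
    print(recognize([0.25, 0.85, 0.45], database))  # -> save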
6.4.2.3 Natural Language voice interface

A natural language voice interface goes a step beyond continuous voice recognition: the
user is allowed to ask a question, and the computer not only converts the question into
understandable words but also interprets the question and gives an appropriate response.
For example, think how easy it would be to use a system if you could simply ask, 'How
soon can we ship 200 red stainless steel widgets to Delhi?' Imagine how many different
pieces of information the computer would need to generate a correct response. Natural
language voice recognition systems of this type are not commercially available these days;
however, they are under development using powerful computers and sophisticated software.
6.4.3 Digital Camera
Digital cameras record photographs in the form of digital data that can be stored on a
computer. Digital cameras have now almost replaced chemical-based film. Some digital
cameras are portable and look similar to traditional film cameras; others are stationary and
are connected directly to a computer. Many companies now use digital cameras to record
images of their products for computer-based catalogues, and to record photos of their
employees for personnel records.
6.4.4 Helping People with Special Needs
Standard computers can be difficult or impossible for physically challenged and disabled
individuals to use. Keeping this in mind, developers have created special software and
hardware, called adaptive or assistive technology, to enable many of these individuals to
use computers productively. A wide range of adaptive hardware and software products help
users make the computer meet their special needs. For people with motor disabilities who
cannot use a standard keyboard, a number of alternative input devices have been developed.
Most of these devices involve a switch controlled by any reliable movement the user can
make; one type of switch is even activated by breathing into a tube.
6.4.4.1 For blind individuals
For blind individuals, voice recognition programs allow verbal input of data. Software has
also been developed to convert data into Braille: it converts text to Braille and sends it to
Braille printers. Both blind and non-verbal people use speech synthesis equipment to
convert text documents into spoken words. For people who have low vision, i.e., who
cannot see properly, several programs are available which magnify the information on the
screen.
6.4.5 Data Accuracy
Data must be entered into the system accurately. The procedures developed for controlling
the input of data are important because accurate data must be entered into a computer to
ensure data integrity. The computer jargon term GIGO – garbage in, garbage out – states
that inaccurate information caused by inaccurate data is often worse than no information at
all. Procedures and documentation must be clear, since users often interact directly with
the computer during the input process. Computer programs and procedures must be
designed to check for accurate data and to specify the steps to take if the data is not valid.
Each application has specific criteria for validating input data, and several tests are
performed before the data is processed by the computer. Some of these tests are as follows
(a short sketch in code follows the list):
(i) Tests for data type and format: If data must be of a particular type, such as
alphabetic or numeric, a test is carried out for that type. A data type test is often
combined with a data format test. For example, in India the PIN – the postal
index number – is six digits, whereas in the US the ZIP postal code is either five
digits, or five digits followed by a dash and four more digits. Data that does not
fit the expected format is rejected.
(ii) Tests for data reasonableness: A reasonableness check makes sure that the data
entered is within normal or accepted boundaries. If, in a company, an employee
can work for at most 80 hours per week, then a value greater than 80 in the
hours-worked field would be flagged as a probable error.
(iii) Tests for data consistency: In some cases, the data entered cannot, by itself, be
found to be invalid; if, however, the data is examined in relation to the other
data entered for the same transaction, discrepancies can be found. For example,
in a hotel reservation system both the check-in and check-out dates are entered,
and a check-out date earlier than the check-in date indicates an error.
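The three tests above can be sketched in a few lines of Python. The field names, formats
and limits are illustrative assumptions (a six-digit PIN, an 80-hour working week, hotel
dates), not part of any standard validation library.

    # Illustrative input-validation checks corresponding to tests (i)-(iii).
    import re
    from datetime import date

    def valid_format(pin):
        # (i) Data type and format: an Indian PIN must be exactly six digits.
        return bool(re.fullmatch(r"\d{6}", pin))

    def reasonable_hours(hours, limit=80):
        # (ii) Reasonableness: hours worked must lie within accepted bounds.
        return 0 <= hours <= limit

    def consistent_dates(check_in, check_out):
        # (iii) Consistency: check-out cannot precede check-in.
        return check_out >= check_in

    print(valid_format("411001"))                                 # True
    print(reasonable_hours(92))                                   # False
    print(consistent_dates(date(2024, 5, 1), date(2024, 4, 30)))  # False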

6.5 OUTPUT DEVICES


A monitor or display, sometimes also called a visual display unit, is a piece of electrical
equipment that displays images generated by other devices, such as computers, without
producing a permanent record. The parts of a monitor are the display device, the circuitry
and an enclosure. Earlier monitors were built around a cathode ray tube (CRT), while these
days the display device in monitors is typically a thin film transistor liquid crystal display
(TFT-LCD).
6.5.1 Screen Size
For any rectangular section of a round tube, the diagonal measurement is also the diameter
of the tube. The screen size is the distance between two opposite screen corners (see Figure
6.6). However, this method does not distinguish between the aspect ratios of monitors with
identical diagonal sizes, despite the fact that, for a given diagonal, the screen area shrinks as
the shape becomes less square. For example, a 4:3 21-inch (53.3 cm) monitor has an area of
about 211 square inches (1,361 cm²), whereas a 16:9 21-inch widescreen has an area of only
about 188 square inches (1,213 cm²).

Fig. 6.6 Distance between Two Opposite Screen Corners
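The arithmetic behind these area figures is simple Pythagoras, and a short Python sketch
can verify it. The function below is only a worked example of the diagonal and aspect-ratio
relationship described above.

    # Width, height and area of a screen from its diagonal and aspect ratio:
    # diagonal^2 = width^2 + height^2, with width:height = w:h.
    import math

    def screen_area(diagonal, w, h):
        unit = diagonal / math.hypot(w, h)
        return (w * unit) * (h * unit)

    print(screen_area(21, 4, 3))   # ~211.7 sq in for a 4:3 21-inch screen
    print(screen_area(21, 16, 9))  # ~188.4 sq in for a 16:9 21-inch screen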


This method of measurement came from the first CRT televisions, which used round
picture tubes. Since the tubes were circular, only their diameter was needed to describe the
tube size. When circular tubes were used to display rectangular images, the diagonal
measurement of the rectangle was equivalent to the circular tube's diameter. The method
continued even when CRT tubes were manufactured as rounded rectangles.
6.5.2 Imaging Technologies

Fig. 6.7 A 19" (48.3 cm Tube, 45.9 cm Viewable) ViewSonic CRT Computer Monitor
Many hardware technologies are used for displaying computer-generated output. They
are as follows:
• TFT liquid crystal display (LCD) is the most popular display device for
computers.
o Passive LCDs are known for poor contrast and slow response. They were
generally used in laptops until the mid-1990s.
o All modern LCD monitors are thin film transistor (TFT) displays.
• Cathode ray tube (CRT) (see Figure 6.7)
o Raster-scan CRTs were the most popular display for older computers; they
produce images using pixels.
o Vector displays were used on the Vectrex, in scientific and radar
applications and in several other early machines, such as the arcade game
Asteroids; they drive the CRT's deflection system directly, although a
raster-based display may also be used.
o Television sets were used as displays by personal and home computers,
connected through a composite video signal using a modulator. The image
quality and resolution were limited by the capabilities of the television.
o The Penetron was used in military aircraft displays.
• Plasma display
• Video projectors use CRT, DLP, LCoS and various other technologies to emit
light onto a projection screen. There are two kinds: front projectors, which use
the screen as a reflector, sending light back to the viewer, and rear projectors,
which use the screen as a diffuser, refracting light forward. Rear projectors are
often built into the same case as their screen.
• Surface-conduction electron-emitter display (SED) and field emission display
(FED)
• Organic light-emitting diode (OLED) display

Fig. 6.8 Comparison of a 21" CRT TV Monitor with a 17" CRT PC Monitor
The CRT is generally the picture tube of a monitor. The end of the tube holds a
negatively charged element called the cathode. An electron gun shoots electrons down the
tube and onto a charged screen. The screen is coated with a pattern of phosphor dots, which
glow when struck by the electron stream. Each cluster of three dots of different colours
forms one pixel (see Figure 6.8).
The image that you see on the monitor is usually made up of at least tens of thousands
of such tiny glowing dots. The smaller the distance between the pixels, the sharper the
image on screen. The distance between two pixels on a computer monitor screen is known
as the dot pitch and is measured in millimetres (mm). Most monitors have a dot pitch of
0.28 mm (equivalent to about 0.011 inches) or less.
6.5.3 Performance Measurements
The parameters by which the performance of a monitor can be measured are as follows
(a short calculation sketch follows the list):
• Luminance, measured in candelas per square metre.
• Viewable image size, measured diagonally. For CRTs, the viewable size is typically
1 inch (25 mm) smaller than the tube itself.
• Aspect ratio, the ratio of the horizontal length to the vertical length. The standard
aspect ratio is 4:3; for example, a screen with a width of 1024 pixels will have a
height of 768 pixels. On a 16:9 widescreen display, a width of 1024 pixels
corresponds to a height of 576 pixels.
• Display resolution, the number of distinct pixels in each dimension that can be
displayed. Maximum resolution is limited by dot pitch.
• Dot pitch, the distance in millimetres between two pixels. Generally, the smaller the
dot pitch, the sharper the image.
• Refresh rate, the number of times in a second the display is illuminated. The
maximum refresh rate is limited by the response time.
• Response time, the time a ray takes to hit the screen of a monitor and return to its
initial position, measured in milliseconds. A lower number means faster transitions
and therefore fewer visible image artefacts.
• Contrast ratio, the ratio of the luminosity of the brightest colour (white) to that of
the darkest colour (black) that the monitor is capable of producing.
• Power consumption, measured in watts.
• Viewing angle, measured in degrees horizontally and vertically: the maximum angle
at which images on the monitor can be viewed without excessive degradation of the
images.
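Two of these parameters lend themselves to quick calculation. The sketch below is a simple
illustration rather than any measurement standard: it derives a display height from a width
and aspect ratio, and computes a contrast ratio from two luminance readings (the luminance
figures are hypothetical).

    # Worked examples for aspect ratio and contrast ratio.
    def height_for_width(width_px, aspect_w, aspect_h):
        return width_px * aspect_h // aspect_w

    def contrast_ratio(white_cd_m2, black_cd_m2):
        return white_cd_m2 / black_cd_m2

    print(height_for_width(1024, 4, 3))   # 768  (standard 4:3)
    print(height_for_width(1024, 16, 9))  # 576  (16:9 widescreen)
    print(contrast_ratio(300, 0.3))       # 1000.0, i.e. a 1000:1 ratio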
6.5.3.1 Comparison

CRT monitors
The advantages and disadvantages of CRT monitors are as follows:
Advantages
• High contrast ratio of about 20,000:1 or greater, much higher than many modern
LCDs and plasma displays
• High-speed response
• Excellent additive colour, a wide range of scales and a low black level
• Can display in almost any resolution and refresh rate
• Nearly zero colour, saturation, contrast or brightness distortion, and an excellent
viewing angle
• No input lag
• A reliable and proven display technology

Disadvantages
• Large size (up to about 40 inches) and weight (over 200 lbs)
• Geometric distortion in non-flat CRTs
• Older CRTs are prone to screen burn-in
• Warm-up time is required before peak luminance and proper colour rendering are
reached
• More power consumed than a similarly sized LCD display
• Effective only at the highest resolutions
• Intolerant of damp conditions, with dangerous wet-failure characteristics
• Small risk of implosion, due to the internal vacuum, if the picture tube is broken in
ageing sets
• Noticeable flicker at lower refresh rates
• The high voltages involved can be dangerous, even fatal
• The flyback transformer produces a very high-pitched noise, audible at close range
• Models at HDTV resolutions are increasingly difficult to obtain
• Spare parts are not readily available; standard electronic component suppliers
rarely carry stock of these parts
• Maximum brightness is lower than that of an LCD
• Lower contrast ratio under bright conditions, due to the grey phosphor and limited
brightness

LCD Monitors
The advantages and disadvantages of LCD monitors are as follows:
Advantages
• Very compact and light to carry
• Low power consumption
• No geometric distortion
• Very little or no flicker, depending on the backlight

Disadvantages
• Contrast ratio is low in older LCDs
• The viewing angle is limited, causing colour, saturation, contrast and brightness to
vary with the viewer's posture, even within the intended viewing angle
• Uneven backlighting in some monitors, causing distortion in brightness, especially
towards the edges
• Slower response times, which cause smearing and ghosting artefacts; however,
many modern LCDs have response times of 8 ms or even less
• Only one native resolution; displaying other resolutions requires a video scaler,
which degrades image quality at lower resolutions
• Many cheaper LCDs are incapable of true colour and have a fixed bit depth
• Input lag
• Dead pixels may occur during manufacture

Plasma
The advantages and disadvantages of plasma monitors are as follows:
Advantages
• Compact in size and light in weight
• High contrast ratio, i.e., 10,000:1 or greater
• High-speed response
• Excellent colour, a wide range of scales and a low black level
• Excellent viewing angle, with nearly zero colour, saturation, contrast or brightness
distortion
• No geometric distortion
• Highly scalable, with little weight gain per increase in size (from less than 30
inches wide up to the world's largest models)
• Inputs include DVI, VGA, HDMI and even S-Video

Disadvantages
• Larger pixel pitch, meaning either low resolution or a large screen
• Flicker can be noticed when viewed at close range
• Higher operating temperature
• Costlier than LCDs
• Comparatively higher power consumption
• Only one native resolution; displaying other resolutions requires a video scaler,
which degrades image quality at lower resolutions
• Fixed bit depth
• Input lag
• Older PDPs are prone to burn-in
• Dead pixels may occur during manufacture

Penetron monitors
The advantages and disadvantages of penetron monitors are as follows:
Advantages
• Self-lit yet transparent (unlike LCDs, which are transparent but not self-lighting),
giving the see-through effect needed for transparent HUDs
• Very high contrast ratios
• An extremely sharp image

Disadvantages
• Limited colour display, with only about four tints
• An order of magnitude more expensive than other display technologies

6.5.3.2 Miscellaneous problems with display devices

Dead pixels
Some LCD monitors are produced with dead pixels. Manufacturers tend to sell monitors
with dead pixels owing to the need for affordable monitors, and generally do not take
responsibility for them: warranty clauses claim that monitors with fewer than a set number
of dead pixels are not broken and will not be replaced. Dead pixels typically appear as
stuck green, red and/or blue sub-pixels.
Stuck pixels
LCD monitors lack phosphor screens and are thus resistant to phosphor burn-in. They do,
however, have a condition called image persistence, in which the monitor's pixels can
'remember' a specific colour, become stuck and be incapable of changing. Unlike phosphor
burn-in, image persistence can be reversed partially or, at times, even completely. This is
done by rapidly displaying varying colours in order to wake up the stuck pixels.
Phosphor burn-in
This condition is a localized ageing of the phosphor layer of a CRT screen that has
displayed a still, bright image for years. The consequence is a faint permanent image on the
screen, visible even after the monitor has been turned off. In extreme cases, it is even
possible to read some of the text, though this happens only when the displayed text
remained the same for years.
Burn-in was common in single-purpose business computers and is still a concern with
CRT displays, although modern computers are no longer operated in this fashion, so the
problem is less significant now. Only systems which displayed the same image for years
suffered from this defect. The burn-in was often not an obvious effect when the display was
in use, since it corresponded with the displayed image completely. It was seen in three
situations:
1. When some heavily used monitors were reused at home.
2. When monitors were reused for display purposes.
3. In some high-security applications, where the displayed high-security data did
not change for years at a time.

Screen savers were developed in order to avoid burn-in; they are largely pointless today,
notwithstanding their popularity.
Plasma burn-in
Plasma burn-in was an issue with early plasma displays, which were noticeably more
susceptible than CRTs. Screen savers with moving images can be used to minimize
localized burn, and periodically changing the colour scheme in use also helps reduce
burn-in.
Glare
Glare occurs through the relationship between lighting and the screen, or through the use
of monitors in bright sunlight. Matte-finish LCDs and flat-screen CRTs are less likely to
reflect glare than traditional curved CRTs; CRTs curved in just one axis are also more
glare-resistant than CRTs curved on both axes.
Colour misregistration
Apart from correctly aligned video projectors and stacked LEDs, display technologies,
especially LCD, have an intrinsic misregistration of the colour channels: the centres of the
red, green and blue dots are not aligned perfectly. Sub-pixel rendering takes advantage of
this misalignment, with performance depending on the technology. Technologies making
use of it include the Apple II from 1976 and, more recently, Microsoft's ClearType (1998)
and XFree86's X Rendering Extension.
Incomplete spectrum
An RGB display cannot show all visible colours; it produces only part of the visible
spectrum. This can cause a problem where good colour matching to a non-RGB image is
needed. The problem is common to all monitor technologies with three colour channels.
6.5.4 Display Interfaces
Computer terminals
Early CRT-based visual display units (VDUs), such as the DEC VT05, lacked any graphics
capabilities and gained the label 'glass teletypes' because of their functional similarity to
their electromechanical predecessors. Some historic computers had no screen display at all,
using a teletype, a modified electric typewriter or a printer instead.
Composite signal
Early home computers such as the Apple II and the Commodore 64 used a composite signal
output to drive a CRT monitor or TV. This resulted in low resolution, due to compromises
in the broadcast TV standards. The method is still used with video game consoles. The
S-Video input of the Commodore monitor improved the attainable resolution.
Digital monitors
Early digital monitors are now sometimes called TTL monitors, because the voltages on the
red, green and blue inputs are compatible with TTL logic chips. Later digital monitors
support the LVDS or TMDS protocols.
TTL monitors
Fig. 6.9 IBM PC with Green Monochrome Display
The monitors used with the MDA, Hercules and CGA graphics adapters of the early
IBM PC and its clones were controlled via TTL logic (see Figure 6.9). Such monitors are
usually identified by a male DB9 connector on the video cable. The disadvantage of TTL
monitors was the limited number of colours available, due to the low number of digital
bits used for video signalling.
Modern monochrome monitors use the same 15-pin connector as a standard colour
monitor. Interfacing with modern computers, they are capable of displaying 32-bit
greyscale at 1024 x 768 resolution.
The TTL monochrome monitor used only five of the nine pins: one pin served as
ground and two were used for horizontal/vertical synchronization. The electron gun was
controlled by two separate digital signals, a video bit and an intensity bit, which together
controlled the brightness of the drawn pixels. Four shades were possible: black, dim,
medium or bright.
In a signalling method known as red, green and blue plus intensity (RGBI), CGA
monitors used four digital signals to control the three electron guns used in colour CRTs.
Each of the three RGB colours can be switched on or off independently as required. The
intensity bit brightens all guns that are switched on; if no colours are switched on, the
intensity bit alone produces a dark grey. A CGA monitor is only capable of displaying 16
colours, and this signalling was not used exclusively by PC-based hardware: many CGA
monitors could also display composite video via a separate jack, and the Commodore 128
could likewise utilize CGA monitors.
Single colour screens
In the 1980s, display colours other than white were popular on monochrome monitors.
These colours were more comfortable on the eye, as they caused less strain and stress. This
was a significant issue at the time, owing to the lower refresh rates, which caused
flickering, and to the less comfortable colour schemes used compared with most of today's
software.
Green and amber were the most popular display colours. Paper white was also in use,
and was known as a warm white.
6.5.5 Modern Technology
Analogue Monitors
Most modern computer displays can show the various colours of the RGB colour space by
changing red, green and blue analogue video signals in continuously variable intensities.
These scans have been exclusively progressive since the mid-1980s. Many earlier plasma
and liquid crystal displays had exclusively analogue connections, even though all signals in
such monitors passed through a completely digital section prior to display.
IBM PCs and compatible systems were standardized on the VGA connector, while
similar connectors, such as 13W3 and BNC, were used on other platforms.
Digital and analogue combination
The first popular external digital monitor connectors, such as DVI-I and the various
breakout connectors based on it, included both analogue signals compatible with VGA and
digital signals compatible with new flat screens, in the same connector.
Digital monitors
Newer connectors carry only digital video signals. Many of these, like HDMI and
DisplayPort, also feature integrated audio and data connections. One of the less popular
features most of these connectors share is DRM-encrypted signals. The HDCP technology
responsible for implementing this protection was developed to meet cost constraints and is
primarily a barrier aimed at dissuading average consumers from creating exact duplicates
without a noticeable loss in image quality.
6.5.6 Flexible Display Monitors
Flexible display monitors can be bent, are much thinner and consume less power. The
Flexible Display Centre (FDC) at Arizona State University and Universal Display
Corporation together carried out research and introduced the first a-Si:H active-matrix
flexible organic light-emitting diode (OLED) display. This flexible display is manufactured
directly on DuPont Teijin's polyethylene naphthalate (PEN) substrate. Using Universal
Display Corporation's phosphorescent OLED (PHOLED) technology and materials, and the
FDC's bond-debond manufacturing technology, the 4.1-inch monochrome quarter video
graphics array (QVGA) display represents a significant milestone, enabling a
manufacturability solution for flexible OLEDs.
6.5.7 Configuration and Usage
Multiple monitors
One or more monitors can be attached to the same device. Each display can function and
operate in one of the following basic configurations:
• The simplest is mirroring or cloning, in which two or more displays show the
same image. This is generally used for presentations. On hardware with only one
video output it can be achieved with an external splitter device, commonly built
into many video projectors as a pass-through connection.
• Extension allows each monitor to display a different image, so as to form a
contiguous desktop area of arbitrary shape. This is the most sophisticated mode;
it requires software support and extra hardware, and may be crippled or absent
in low-end products.
• Primitive software is not capable of recognizing multiple displays, so spanning
must be used, in which one very large virtual display is created and then split
into multiple video outputs for the separate monitors. Hardware with only a
single video output can be made to do this with an expensive external splitter
device. Spanning is most often used for very large composite displays made
from many smaller monitors placed edge to edge.

Multiple video sources


A video switch is the device required to connect multiple video sources to one display. In
the case of computers, it usually takes the form of a keyboard-video-mouse (KVM) switch,
which is designed to switch all of the user interface devices of a single workstation between
different computers at once.

Fig. 6.10 Virtual Displays


Workspaces are Apple's implementation of virtual displays (see Figure 6.10). Much
software and video hardware supports the ability to create additional, virtual pieces of
desktop, commonly known as workspaces.
6.5.8 Additional Features
Power saving
Most modern monitors switch to a power-saving mode if no video input signal is received.
This allows modern operating systems to turn off a monitor after a specified period of
inactivity, which also extends the monitor's service life. Some monitors switch themselves
off after a time in standby mode.
Most modern laptops provide a method of dimming the screen after a period of
inactivity or when the battery is in use. This extends battery life and also reduces wear and
tear.
Integrated accessories
Many monitors have other accessories or various other connections integrated. Although
integrated accessories are often of substandard quality, this arrangement places standard
ports within easy reach and eliminates the need for a separate hub, camera, microphone or
set of speakers.
Glossy screen: Some displays, especially newer LCD monitors, have replaced the traditional
anti-glare matte finish with a glossy one. This increases saturation and sharpness, but
reflections from lights and windows become very visible.
Directional screen: Narrow-viewing-angle screens are used in some security-conscious
applications.
Autostereoscopic screen: A directional screen which generates 3D images without
headgear, distortion or eyestrain.
Touch screen: In touch-screen monitors, touching the screen serves as an input method.
The required items or icons on the screen can be selected or moved with a finger, and
finger gestures may also be used to convey commands. In this type of monitor, the screen
needs frequent cleaning, due to image degradation from fingerprints.
A touchscreen device is a display that can detect the presence and location of a touch
within the display area. The term generally refers to touch or contact with the display of
the device by a finger or hand; touchscreens can also sense other passive objects, such as a
stylus. A display driven by a light pen, however, is not considered a touchscreen. The
ability to interact directly with what is displayed typically indicates the presence of a
touchscreen.
There are two attributes of a touchscreen. First, it enables one to interact directly with
what is displayed on the screen, rather than indirectly with a mouse or touchpad. Second, it
enables one to do so without any intermediate device, such as a stylus that needs to be held
in the hand. Such displays can be attached to computers or, as terminals, to networks.
6.5.9 Touchscreen Technologies
There are various types of touchscreen technology.
Resistive
A resistive touchscreen panel is composed of several layers, the most important of which
are two thin, metallic, electrically conductive layers separated by a narrow gap. When an
object such as a finger presses down on a point on the panel's outer surface, the two
metallic layers become connected at that point, and the panel then behaves as a pair of
voltage dividers with connected outputs. The resulting change in the electrical current is
registered as a touch event and sent to the controller for processing.
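How a controller might turn those voltage-divider readings into coordinates can be
sketched as follows. The 10-bit ADC and the linear mapping are assumptions made for
illustration; actual resistive controllers add calibration and filtering.

    # Map raw ADC readings from the two voltage dividers to pixel
    # coordinates; each reading is roughly proportional to the touch
    # position along its axis.
    ADC_MAX = 1023  # assumed 10-bit analogue-to-digital converter

    def touch_position(adc_x, adc_y, width_px, height_px):
        x = adc_x / ADC_MAX * width_px
        y = adc_y / ADC_MAX * height_px
        return round(x), round(y)

    # A touch half-way across and a quarter of the way down a 640x480 panel:
    print(touch_position(512, 256, 640, 480))  # -> (320, 120)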
Surface acoustic wave
Surface acoustic wave (SAW) technology uses ultrasonic waves that pass over the
touchscreen panel. When the panel is touched, a portion of the wave is absorbed; this
change in the ultrasonic waves registers the position of the touch event, and the information
is sent to the controller for processing. Surface-wave touchscreen panels can be damaged by
outside elements, and contaminants on the surface can also interfere with the functionality
of the touchscreen.
Capacitive
A capacitive touchscreen panel consists of an insulator, such as glass, coated with a
transparent conductor such as indium tin oxide (ITO). As the human body is also an
electrical conductor, touching the surface of the screen results in a distortion of the local
electrostatic field, which is measurable as a change in capacitance. Various technologies
may be used to determine the location of the touch, which is then sent to a computer
running a software application that calculates how the user's touch relates to the computer
software.
Surface capacitance
In this basic technology, only one side of the insulator is coated with a conductive layer.
When a small voltage is applied to the layer, a uniform electrostatic field results. When a
conductor, such as a human finger, touches the uncoated surface, a capacitor is dynamically
formed. Surface capacitance is prone to false signals from parasitic capacitive coupling and
needs calibration during manufacture; it is therefore most often used in simple applications
such as industrial controls and small shops or stores.
Projected capacitance
Projected capacitive touch (PCT) technology is a capacitive technology which permits
more accurate and flexible operation by etching the conductive layer. An XY array is
formed either by etching a single layer into a grid pattern of electrodes, or by etching two
separate, perpendicular layers of conductive material with parallel lines or tracks; this grid
is comparable to the pixel grid found in many LCD displays.
Applying voltage to the array creates a grid of capacitors. Bringing a finger or a
conductive stylus close to the surface of the sensor changes the local electrostatic field, and
the capacitance change at every individual point on the grid can be measured to determine
the touch location (a small sketch of this scan follows).
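A toy version of that grid scan is shown below. The capacitance-change values and the
noise threshold are invented for the example; a real PCT controller scans the electrodes
electrically and interpolates between nodes.

    # Find the grid node with the strongest capacitance change.
    def locate_touch(delta_grid, threshold=5):
        best, best_pos = 0, None
        for r, row in enumerate(delta_grid):
            for c, delta in enumerate(row):
                if delta > best:
                    best, best_pos = delta, (r, c)
        return best_pos if best >= threshold else None

    grid = [[0, 1, 0, 0],
            [1, 9, 2, 0],   # strong change near node (1, 1)
            [0, 2, 1, 0]]
    print(locate_touch(grid))  # -> (1, 1)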
PCT is used in a wide range of applications, such as point-of-sale systems, smartphones
and public information kiosks. One example of a kiosk PCT product is Visual Planet's ViP
Interactive Foil, where a gloved hand can register a touch on a sensor surface through a
glass window.
Infrared
Conventional optical-touch systems use an array of infrared (IR) light-emitting diodes
(LEDs) on two adjacent bezel edges of a display, with photosensors placed on the two
opposite bezel edges to analyse the beams and detect a touch event. When an object such as
a finger or pen touches the screen, it interrupts the light beams, causing a measured
decrease in light at the corresponding photosensors. The LED and photosensor pairs thus
create a grid of light beams across the display, and the measured photosensor outputs can
be used to locate the coordinates of a touch point.
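The beam-grid idea can be illustrated with a small sketch: each axis carries a row of beams,
and the touch lies where the blocked horizontal and vertical beams cross. The sensor
readings here are hypothetical.

    # Locate a touch from interrupted IR beams (True = beam blocked).
    def locate_ir_touch(x_beams, y_beams):
        x_hits = [i for i, blocked in enumerate(x_beams) if blocked]
        y_hits = [i for i, blocked in enumerate(y_beams) if blocked]
        if not x_hits or not y_hits:
            return None
        # Report the centre of the blocked beams on each axis.
        return sum(x_hits) // len(x_hits), sum(y_hits) // len(y_hits)

    x_beams = [False, False, True, True, False]   # beams 2 and 3 blocked
    y_beams = [False, True, False, False, False]  # beam 1 blocked
    print(locate_ir_touch(x_beams, y_beams))      # -> (2, 1)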
The adoption of infrared touchscreens has been hampered by two factors: first, the
relatively high cost of the technology compared with competing touch technologies, and
second, poor performance in bright ambient light. The latter problem results from
background light raising the noise floor at the optical sensor, sometimes to such a degree
that the touchscreen's LED light cannot be detected at all, causing a temporary failure of
the touchscreen. This occurs mostly in direct sunlight, where the sun's output has a very
high energy distribution in the infrared region.
Strain gauge
In the strain gauge configuration, also known as force panel technology, the screen is
spring-mounted on its four corners, and strain gauges are used to determine the deflection
when the screen is touched. This technology has been known since the 1960s, but new
advances by Vissuma and F-Origin have made the solution commercially viable. It can also
measure the Z-axis, i.e., the force of an individual's touch. Such screens are mostly used in
exposed public systems, such as ticket machines, because of their resistance to vandalism.
Optical imaging
A relatively modern development in touchscreen technology places two or more image
sensors around the edges (mostly at the corners) of the screen, with infrared backlights in
the cameras' field of view on the opposite sides of the screen. A touch shows up as a
shadow, and each pair of cameras can then triangulate the position of the touch, or even
measure the size of the touching object.
Dispersive signal technology
A system that uses sensors to detect the mechanical energy in the glass occurring due to a
touch was introduced by 3M in 2002. Complex algorithms interpret this information to
provide the exact location of the touch.
Acoustic pulse recognition
The acoustic pulse recognition system was introduced by Tyco International's Elo division
in 2006. It uses more than two piezoelectric transducers, located at various positions around
the screen, to turn the mechanical energy of a touch (vibration) into an electronic signal.
The screen hardware then uses an algorithm to determine the location of the touch from the
transducer signals, a process similar to the triangulation used in GPS. The touchscreen
itself is made of ordinary glass, giving good durability and optical clarity.
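The triangulation-style idea can be pictured with a brute-force sketch: try every candidate
point and keep the one whose predicted arrival-time differences best match the
measurements. The wave speed, transducer layout and timings are invented for the example;
production controllers use far more refined algorithms.

    # Locate a touch from transducer arrival times by grid search.
    import math

    SPEED = 3000.0  # assumed wave speed in the glass, units per second

    def locate(transducers, times, width, height):
        best, best_err = None, float("inf")
        for x in range(width + 1):
            for y in range(height + 1):
                d = [math.hypot(x - tx, y - ty) for tx, ty in transducers]
                # Compare measured against predicted time differences,
                # relative to the first transducer.
                err = sum(((times[i] - times[0]) - (d[i] - d[0]) / SPEED) ** 2
                          for i in range(1, len(transducers)))
                if err < best_err:
                    best, best_err = (x, y), err
        return best

    corners = [(0, 0), (100, 0), (0, 80), (100, 80)]
    touch = (40, 30)
    times = [math.hypot(touch[0] - tx, touch[1] - ty) / SPEED
             for tx, ty in corners]
    print(locate(corners, times, 100, 80))  # -> (40, 30)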
6.5.10 Construction of a Touchscreen
Touchscreens can be built in several principal ways. The key goal is recognize one or more
fingers touching a display and to interpret the command that this represents. It is also
required to communicate the command to the appropriate application.
There are four layers in the most popular techniques of the capacitive or resistive
approach:
1. The top polyester layer coated with a transparent metallic conductive coating on the
bottom is the first layer.
2. The second layer is the adhesive spacer.
3. The third layer is the glass layer coated with a transparent metallic conductive coating
on the top.
4. The fourth layer is the adhesive layer on the backside of the glass for mounting. When
a user touches the surface, the system records the change in the electrical current
that flows through the display.
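As an illustration of that last step, the sketch below converts raw readings from a touch
controller into screen coordinates. The raw value range and the calibration constants are
assumptions; real controllers perform an equivalent linear mapping after a calibration
routine.

    # Minimal sketch: mapping raw touch-controller readings to pixels.
    # Assumptions: the controller reports 12-bit raw values (0-4095) per
    # axis, and a calibration step has recorded the raw readings at two
    # opposite screen corners.

    RAW_MIN_X, RAW_MAX_X = 180, 3900   # assumed calibration values
    RAW_MIN_Y, RAW_MAX_Y = 220, 3850
    SCREEN_W, SCREEN_H = 800, 480      # display resolution in pixels

    def raw_to_pixels(raw_x, raw_y):
        """Linearly interpolate raw readings into pixel coordinates,
        clamping to the screen edges."""
        px = (raw_x - RAW_MIN_X) * (SCREEN_W - 1) / (RAW_MAX_X - RAW_MIN_X)
        py = (raw_y - RAW_MIN_Y) * (SCREEN_H - 1) / (RAW_MAX_Y - RAW_MIN_Y)
        px = min(max(px, 0), SCREEN_W - 1)
        py = min(max(py, 0), SCREEN_H - 1)
        return (round(px), round(py))

    print(raw_to_pixels(2040, 2035))  # roughly the centre of the screen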
6.5.11 Development of Touchscreens
All the significant touchscreen technology patents were filed in the 1970s and 1980s and
have now expired. Touchscreen component manufacturing and product design are no longer
hampered by royalties or legalities with regard to patents, and the manufacturing of
touchscreen-enabled displays on all kinds of devices has become widespread.
The development of multipoint touchscreens facilitated the tracking of more than one
finger on the screen; thus, operations that require more than one finger became possible,
and these devices also allow multiple users to interact with the touchscreen
simultaneously.
Touchscreen technology, both hardware and software, has sufficiently matured and been
perfected over more than three decades to the point where its reliability is no longer
questioned. Touchscreen displays are found these days in airplanes, automobiles, gaming
consoles, machine control systems, appliances and handheld display devices of every type.
With the multi-touch enabled iPhone and the Nintendo DS, the touchscreen market for mobile
devices was estimated to produce US$5 billion in 2009, and with the emerging graphics
tablet/screen hybrids, the ability to point accurately on the screen itself is taking yet
another step.
6.5.12 Ergonomics and Usage
Finger stress
An important problem with touchscreens is the stress on human fingers when they are used
for more than a few minutes at a time, since significant pressure can be required for
certain types of touchscreen. This can be reduced to some extent for some users by
employing a pen or another device that adds leverage and more accurate pointing.
Fig. 6.11 Pointed Nail for easier Typing
In 1950, the concept of using a fingernail trimmed to form a point, specifically as a
stylus on a writing tablet for communication (Figure 6.11), was proposed. If the user's
fingernails are either short or sufficiently long, the ergonomic issues of direct touch
may be bypassed by using a different technique: instead of pressing the screen with the
soft skin of an outstretched fingertip, the finger is curled over, so that the tip of the
forward edge of a fingernail can be used instead. The thumb may optionally be used to
support the finger, or a long fingernail, from underneath.
Fingerprints
Touchscreens suffer from the problem of fingerprints on the display. This can be
mitigated by using materials with optical coatings designed to reduce the visible effects
of fingerprint oils, such as the oleophobic coating used in the iPhone 3GS, or by reducing
skin contact through the use of a fingernail or stylus.
Combined with haptics
Due to latency and other factors, the user experience with touchscreens that lack tactile
feedback or haptics can be difficult. Research by Brewster, Chohan and Brown in 2007 at
the University of Glasgow, Scotland, demonstrated that sample users reduced input errors
by about 20 per cent, increased input speed by about 20 per cent and lowered their
cognitive load by about 40 per cent when touchscreens were combined with haptics or
tactile feedback.
Gorilla arm
Despite a promising start in the early 1980s, 'gorilla arm' was a side-effect that
destroyed vertically-oriented touchscreens as a mainstream input technology. Designers of
these systems failed to notice that humans are not built to hold their arms out at waist
or head height while making small, precise motions. After a short period of time, arm
movement becomes painful and clumsy, and cramping may set in.

Table 6.2 Comparison of Touchscreen Technologies

Technology               4-wire            SAW               5-wire            Infrared     Capacitive
Durability               5 years           5 years           3 years           3 years      2 years
Stability                High              Higher            High              High         Ok
Transparency             Ok                Good              Good              Good         Ok
Installation             Built-in/On wall  Built-in/On wall  Built-in/On wall  On wall      Built-in
Touch                    Anything          Finger/Pen        Anything          Sharp        Conductive
Intense light-resistant  Good              Good              Good              Bad          Bad
Response time            <10 ms            10 ms             <15 ms            <20 ms       <15 ms
Following speed          Good              Low               Good              Good         Good
Excursion                No                Small             Big               Big          Big
Monitor option           CRT               CRT               CRT or LED        CRT or LED   CRT or LED
Waterproof               Good              Ok                Good              Ok           Good

6.5.13 Tablet Screens


A tablet screen is the combination of a monitor and a graphics tablet. Such devices do not
respond to touch but to one or more special tools, and can offer capabilities such as
pressure and tilt sensitivity, controls on the tool, use of the tool's opposite end and
support for multiple tools.

Major Manufacturers
Some of the manufacturers of tablet screens are as follows:
• Acer
• Apple Inc.
• BenQ
• Dell
• Eizo
• Gateway
• Hewlett-Packard
• HannStar Display Corporation
• Iiyama Corporation
• LG
• NEC
• Samsung
• Sony
• Tyco Electronics
6.5.14 Speakers
Computer speakers, or multimedia speakers, are external to a computer (Figure 6.12) and
are provided with a low-power internal amplifier. The standard audio connection is a
3.5 mm (1/8 inch) stereo jack plug, colour-coded lime green (following the PC 99 standard)
for computer sound cards. Sometimes an RCA connector is used for input instead: a plug and
socket for a two-wire (signal and ground) coaxial cable that is widely used to connect
analog audio and video components. Also known as a phono connector, rows of RCA sockets
are found on the backs of stereo amplifiers and numerous A/V products; the prong is
1/8 inch thick and 5/16 inch long. USB speakers are powered from the 5 volts at
200 milliamperes provided by the USB port (that is, at most 5 V x 0.2 A = 1 W drawn from
the port), which allows about half a watt of output power.
Common features
Features vary from one manufacturer to the other; however these may include the following:
• An LED power indicator
• A 3.5-mm (1/8-inch) headphone jack
• Controls for volume, and sometimes bass and treble
• A remote volume control

Cost cutting measures and technical compatibility


In order to reduce cost, computer speakers (unless designed for premium sound performance)
are not provided with an AM/FM tuner or other built-in sources of audio. However, they can
be made to work with stereo cassette players, turntables, etc. by jury-rigging the male
1/8-inch plug with a female 1/8-inch to female stereo RCA adapter.
Major computer speaker companies
Fig. 6.12 The Base of a Harman Kardon Speaker

Some of the major computer speaker companies are as follows:


• Altec Lansing
• Bose Corporation
• Creative Labs
• Cyber Acoustics
• Dell
• Edifier
• General Electric
• Harman Kardon
• Hewlett-Packard
• JBL
• Klipsch
• Logitech

CHECK YOUR PROGRESS


2. What is multimedia?
3. What is a natural language voice interface?
4. State some advantages and disadvantages of CRT monitors.
5. What is a touchscreen?

6.6 END USER HARDWARE ISSUES


In computer architecture, a bus is a subsystem that transfers data between different
computer components inside a computer, or between two or more computers. Early computer
buses were literally parallel electrical buses with multiple connections, but the term is
now used for any physical arrangement that provides the same logical functionality as a
parallel electrical bus. In modern computers, buses can use both parallel and bit-serial
connections, and can be wired in either a multidrop (electrically parallel) or daisy chain
topology, or connected by switched hubs, as in the case of USB.
6.6.1 Description of a Bus
Initially, 'bus' meant an electrically parallel system, with electrical conductors similar
or identical to the pins on the CPU; modern systems, however, are blurring the lines
between buses and networks.
Parallel buses carry data words in parallel on multiple wires, whereas serial buses carry
data in bit-serial form. Most serial buses have more conductors than the minimum of one
used in the 1-Wire and UNI/O serial buses, owing to the addition of extra power and
control connections, differential drivers and data connections in each direction.
Most computers have both internal and external buses. An internal bus connects the
internal components of a computer to the motherboard, and hence to the CPU and internal
memory. Although the difference between networks and buses is largely conceptual rather
than practical, network connections such as Ethernet are not generally regarded as buses.
The arrival of technologies such as InfiniBand and HyperTransport has further blurred the
boundary between networks and buses.
6.6.2 Bus Topology
If data is to be transferred, the requesting computer sends a message to the scheduler,
which puts the request into a queue; in such a network, the master scheduler therefore has
control over the data traffic. A message comprising an identification code is broadcast to
all nodes of the network. The scheduler works on the priorities and notifies the receiver
of the availability of the bus. The identified node, on receipt of the message, performs
the data transfer between the two computers. On completion of the data transfer, the bus
becomes free for the next request in the scheduler's queue. The advantage of the bus
topology is that a computer can be accessed directly and messages can be sent in a
relatively simple and fast way. The disadvantage of the bus topology is that a scheduler
is required to organize the traffic by assigning frequencies and priorities to each
signal.
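The queueing behaviour described above can be illustrated with a short simulation. This is
a toy model of the scheduler's priority queue, not any real bus protocol; the node names
and priority values are invented for the example.

    import heapq

    # Toy model of the bus-topology scheduler described above: nodes send
    # transfer requests to a master scheduler, which queues them by priority
    # and grants the (single, shared) bus to one transfer at a time.

    class BusScheduler:
        def __init__(self):
            self._queue = []
            self._order = 0  # tie-breaker so equal priorities stay first-come-first-served

        def request(self, priority, sender, receiver):
            """Queue a transfer request; lower numbers mean higher priority."""
            heapq.heappush(self._queue, (priority, self._order, sender, receiver))
            self._order += 1

        def run(self):
            """Grant the bus to each queued transfer in priority order."""
            while self._queue:
                priority, _, sender, receiver = heapq.heappop(self._queue)
                # The scheduler broadcasts the grant; only the identified
                # nodes perform the transfer, then the bus is free again.
                print(f"bus granted: {sender} -> {receiver} (priority {priority})")

    sched = BusScheduler()
    sched.request(2, "node A", "node C")
    sched.request(1, "node B", "node D")   # higher priority, served first
    sched.request(2, "node C", "node A")
    sched.run()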

CHECK YOUR PROGRESS


1. What are computer buses?
2. What are the advantages of bus topology?

6.7 SUMMARY
• Earlier desktop computers lacked the power and storage necessary for multimedia. Games
and demo scenes on these systems were nonetheless able to achieve high sophistication
and technical polish using only simple, blocky graphics and digitally generated sound.
• The Amiga 1000 (A1000) was the first multimedia computer from Commodore International.
The animation, graphics and sound technologies of this computer enabled multimedia
content to be developed.
• Atari's computer division developed a home computer based on the 6502 CPU before the
introduction of the ST computers.
• The Atari ST was part of the 16/32-bit generation of home computers. It was based on
the Motorola 68000 CPU, with 512 KB of RAM or more, and 3½-inch double-density floppy
disks as the storage unit (nominally 720 KB). The mixture of sound and images with
text and graphics is known as multimedia.
• Earlier, video material was recorded into the computer using a video camera or a video
recorder. Since video data requires large storage space, the video segments in
personal computer applications were often limited to only a few seconds.
• A monitor or display, also sometimes called a visual display unit, is an electrical
device which displays images generated by other devices, such as computers, without
producing a permanent record.
• In computer architecture, a bus is a subsystem that transfers data between different
computer components inside a computer or between two or more computers.
• Modern computer buses can use both parallel and bit-serial connections, and can also
be wired in either a multidrop (electrical parallel) or daisy chain topology.

6.8 KEY TERMS


• Amiga 1000: The first multimedia computer from Commodore.
• ST SHIFTER: A video shift register chip that enabled bitmap graphics using 32 KB of
contiguous memory for all resolutions.
• Mega ST (MEGA 2, MEGA 4): A motherboard with 2 MB or 4 MB of RAM in a case with a
detached keyboard.
• Graphic tablet: A digitizer with special processing options built into a tablet.
• Voice input: Also referred to as speech or voice recognition; this allows the user to
enter data and issue commands to the computer with spoken words.
• Natural language voice interface: An interface in which the user asks a question and
lets the computer not only convert the question into understandable words but also
interpret the question and give an appropriate response.
• Projected capacitive touch (PCT) technology: A capacitive technology that, by etching
the conductive layer, permits more accurate and flexible operation.
• Computer buses: The term now used for any physical arrangement that provides the same
logical functionality as a parallel electrical bus.

6.9 END QUESTIONS


Short-Answer Questions
1. How were multimedia computers different from earlier computers?
2. Write a note on the Atari ST.
3. Which computer became the first to offer inbuilt MIDI? What was its consequence?
4. What is voice input? Describe the four areas where voice inputs are used.
5. What are output devices?
6. What are the advantages and disadvantages of LCDs?
7. What are digital monitors?
8. Write a note on touchscreens.
9. What are end user hardware issues?
10. What is bus topology?

Long-Answer Questions
1. What is a multimedia computer? Trace its origin.
2. Explain the evolution of the Atari 520ST.
3. What are input devices? Discuss.
4. Explain output devices.
5. What are display interfaces? Discuss.
6. What are the end user hardware issues? Give a description of a bus.

6.10 BIBLIOGRAPHY
Chen, Sao-Jie, Guang-Huei Lin and Pao-Ann Hsiung. 2009. Hardware Software Co-design in a
Multimedia SOC Platform. New York: Springer.
Brice, Richard. 1997. Multimedia and Virtual Reality Engineering. London: Newnes.

UNIT 7 MULTIMEDIA IN
EDUCATION
Program Name: BSc(MGA)
Written by: Srajan
Structure:
7.0 Introduction
7.1 Unit Objectives
7.2 Introduction to Multimedia in Education
7.2.1 Core Functions: Nature of Education and Training
7.2.2 Nature of the Sector
7.2.3 Multimedia for Education
7.2.4 The Current Situation
7.2.5 Usage of the Term Multimedia
7.3 Education Online
7.3.1 Advantages of Online Education
7.3.2 Goals and Benefits of E-Learning
7.3.3 Market
7.3.4 Approaches to E-Learning Services
7.3.5 E-Learning Technology
7.3.6 Content Issues
7.3.7 Technology Issues
7.4 Future of Interactive Media in Education
7.4.1 A Vision for the Future
7.4.2 Meltdown Scenarios
7.4.3 Universities
7.4.4 Vocational/Further Education
7.4.5 Schools
7.5 Summary
7.6 Key Terms
7.7 Questions and Exercises
7.8 Further Reading

7.0 INTRODUCTION
In this unit, you will learn about multimedia in education. The development of
multimedia technologies in education offers new ways in which learning can be imparted in
schools and homes. Allowing teachers to have access to multimedia learning resources,
which support constructive concept development, helps them to focus more on being a
facilitator of learning while working with individual students. Such provision has the
potential to reduce the need for subject-specific teaching expertise and to smooth the
traditional transfer of students from primary to secondary schools. Extending the use of
multimedia learning resources to homes presents an opportunity for students to improve
learning.
For children, transfer between schools can adversely affect the rate of learning. It is
essential to find ways of reducing the impact of transfer on students and to ensure
continuity of learning during the student's transition from childhood to adolescence.
Multimedia
technologies have the potential to support continuity by:
• Providing access to multimedia learning resources for both primary and secondary
schools that are based on a common curriculum.
• Giving access to an extensive knowledge base in the form of multimedia learning
resources that could be provided in any classroom.
• Providing a basis for cross-phase project work.
• Allowing data sharing across school phases.

In this unit, you will also learn about the goals and benefits of e-learning, approaches
to e-learning services and a vision for the future.

7.1 UNIT OBJECTIVES:


After going through this unit, you will be able to:
• Understand the role of multimedia in education.
• Describe the importance of online education.
• Comprehend the goals and benefits of e-learning
• Explain computer-supported collaborative learning
• Discuss technology-enhanced learning
• Describe e-learning technology
• Understand learning management systems and learning content management systems.
• Appreciate the advantage of computer-aided assessment
• Explain communication technologies used in e-learning

7.2 INTRODUCTION TO MULTIMEDIA IN EDUCATION


With the burgeoning number of multimedia computer systems at home these days, students
have actually started to access facilities that are better than those provided at schools.
Learning at home offers stability and certain advantages that the school cannot. Student
pressure demands that schools be as well resourced as homes. Home environments that have
multimedia render going to school every day a debatable point. If adequate access to
learning resources is provided at home, which is a much quieter and more learning-inducing
environment, learning could one day become independent of place as well as time.
The education and training sector is a foremost industry with social, economic and
political significance. The sector is typified by service delivery that can be a highly
personal experience. The role of multimedia is big here: it can bring noteworthy changes
to education in the individual as well as the general socio-economic spheres. Moreover,
the education sector is strategic for the general development of multimedia, and vice
versa. Many important technologies and practices are actually pioneered in the field of
education, and the sector is a primary source of demand for various multimedia materials.
Within the sector, the next generation of multimedia authors and developers is trained.
More significantly, the multimedia users of the coming generation will be introduced to
the technology and skills in their learning, thus facilitating their use of multimedia in
other fields of life as well. Major government funding under various initiatives,
including Developing European Learning through Technological Advance (DELTA) in the EU and
the Teaching and Learning Technology Programme (TLTP) in the UK, has greatly contributed
to the expansion of technology and applications in this particular sector. Multimedia
should not be viewed merely as a factor of change in education, but rather as an enabling
factor in the overall transformation of education systems in line with other worldwide
priorities.
7.2.1 Core Functions: Nature of Education and Training
Unlike the activities of other sectors affected by multimedia, education is not just about
delivering 'content' to users in a packaged form. It is about the individual growth of
knowledge and the learning process by which that knowledge is acquired. According to one
definition, 'Knowledge is an emerging dimension which surpasses the fixed size-and-space
conceptions of media and information, just as it surpasses the notion that you can convey
it to students by "filling" them up from the teacher's vessel'. The social and
psychological theories and patterns of teaching and learning, or pedagogies, are
fundamental to the education process. Teaching and learning are also highly dependent upon
circumstances and individuals: different pedagogies will suit different individuals,
different educational ends and different power balances between educator and learner.
Education aspires to deliver context-independent strategic accomplishments, whereas
training embeds the learning, and the control of learning, within the constraints of
specific jobs or organizations.
In spite of the move towards 'learner'-focused training and education, the 'teacher' is a
highly important link in the education process, fulfilling many different roles that
cannot simply be reduced to a technical solution. The college, the school and the
classroom also have very significant broader purposes in socialization, as places where
certain values are imparted and people learn to act collectively, beyond the narrowly
conceived purpose of education.
7.2.2 Nature of the Sector
By any criteria, education is a major sector. Due to its vast political and social
significance, much of it is publicly funded but, in common with any service sector, it
supports a diverse array of supplier activities, most of which are private. In particular,
it is a major consumer and supplier of information activities, ranging from intangibles
(degrees, courses, knowledge, etc.) to the specific material forms in which such goods are
embodied: videos, books, articles, audio tapes, computer software, TV programmes, radio
programmes, exhibitions, etc. It thus overlaps substantially with several other sectors of
multimedia interest, such as publishing.
The number of roles that multimedia can enhance, replace or produce is further complicated
by the fragmentation of the sector across different learning groups served by a range of
institutions, each very differently motivated:
• Pre-school
• Primary
• Secondary
• Tertiary
• Special needs
• Vocational
• Continuing vocational/professional
• Recreational
• Self-development

These broad categories or constituencies overlap, and the growth and spread of multimedia
learning materials will certainly increase this overlap. This interaction among components
of different education sectors will be a crucial expression of the growth of multimedia.
Within these categories, training and education are also delivered in many different modes
that are often mixed. These include:
• Individual learning
• Teacher mediated learning
• Classroom learning
• Group learning
• Distance learning
• Open and ‘Closed’ learning

The balance and mix of different modes of learning is associated with the different needs
for learning and the available resources for learning. New priorities and changing
resources, including technology, will alter the opportunities and motivations open to
teachers, learners, society and institutions.
7.2.3 Multimedia for Education
Multimedia has had a long history in education and, effectively, educational multimedia
already exists: education is already accomplished through genuinely multiple media. There
are two primary ways in which multimedia can be used in education and training. First, it
increases the availability of information and resources in all media to learners and
teachers, especially through the delivery of hitherto distributed resources and programmes
of activity (such as exhibition visits, field trips, laboratory experiments, library
tours, etc.) through an electronic medium. Second, it permits the channelling of what
already happens into an electronic, more tightly technologically packaged form. This
process often involves the capture or crystallization of techniques presently residing in
people's skills, which are then distilled and concentrated for delivery. It thus offers
new opportunities of several sorts, not least for the commoditization of education in new
forms.
Not only does interactive multimedia bring together educational instruments, but it can
also draw directly on theories of teaching. Increased machine 'intelligence' could allow
for programs that adapt themselves to individual demands. Increasingly user-friendly
technology will facilitate user-centred systems. Multimedia and hypermedia could even
enable the entire traditional learning and teaching process to be reformulated, replacing
the lecture-absorb-test model and bringing about the 'mega change' that has happened in
every field of human endeavour except education. Current experience and research indicate
that multimedia facilities and packages can improve learning times and retention
substantially over many traditional approaches, and a great many developments and
experiments in the use of IT in all fields of education have had conclusive results. There
is certainly a great deal of anecdotal evidence that multimedia works: 'It is much faster
using IT, so that you get more done. With exercises, pupils will be capable of taking
their work much farther than they ever would manually.' Networked multimedia extends even
more opportunities to ease or enhance learning: access to distant information sources of
many types, to remote tutors and to virtual learning groups.
Multimedia impinges not only on individual learning but also on the many functions of the
education sector: assessment of students, accreditation of course material, teacher
training, teaching, and the development of teaching materials and pedagogies. Teachers and
institutions have to promote their services, and the education sector also has
administrative requirements. Finally, multimedia affects the supporting service industry
and other complementary sectors. Multimedia may improve the existing institutions of
learning, but it may also lead to a change in the accessibility, quality and experience of
education. Active economic, organizational and political pressures within education
probably mean that some functions and institutions will change radically and new ones will
emerge, but equally, some technical possibilities may remain undeveloped. So far,
multimedia has been regarded as a tool serving the changing purposes of the education
sector, but it also has an external aspect. Education prepares people for the 'outside'
world, which is likely to be heavily influenced by multimedia, for citizenship, work,
leisure, social functions, etc. The education sector will have to respond to the challenge
of how to prepare people for a progressively multimedia world and learn to deal with the
skills and attitudes that teachers and learners bring from 'outside'.
7.2.4 The Current Situation
Technology-enhanced learning (TEL) is concerned with any learning activity supported
through technology.
TEL is often used synonymously with e-learning, even though there are important
differences. The main difference between the two terms is that TEL focuses on the
technological support of any pedagogical approach that utilizes technology; on a broad
reading, that would include print technology, or the developments around books, journals
and libraries in the centuries before computers.
A learning activity can be described in terms of the following features:
• Learning resources: Compilation, creation, distribution, access, tools and services,
consumption of digital content.
• Actions: Interaction with software tools, communication and collaboration.
• Context: Surrounding people and location, time and duration.
• Roles: The various actors in changing roles (e.g. learning coach, human resource or
education manager, student, teacher, facilitator).
• Learning objective: To support every human in attaining her or his learning goals,
abiding by individual as well as organizational learning preferences.

Learning activities follow different pedagogical approaches and didactic concepts. The
main focus in TEL is on the interaction between these activities and the respective
technologies. This can vary from easing the access to, and authoring of, a learning
resource, to detailed software systems for managing learning content (e.g., learning
content management systems, learning management systems, adaptive learning hypermedia
systems, learning repositories) and for supporting the learning process of learners by
technical means (tools for self-directed learning, human resource management systems,
etc.).
The existing definitions of technology-enhanced learning are very broad and change
continually because of the dynamic nature of this developing research field. Hence, a
definition of TEL must be as extensive and universal as possible to encompass all facets:
'Technology-enhanced learning (TEL) has the aim of providing socio-technical innovations
(also improving efficiency and cost effectiveness) for learning practices, regarding
individuals and organizations, independent of pace, time and place. The field of TEL thus
describes the support of any learning activity through technology.'
7.2.5 Usage of the Term Multimedia
Significant anecdotal evidence suggests that discussions about multimedia regularly become
futile because those taking part are not using the same interpretations for their words.
Much of this trouble seems inescapable, because concrete definitions of frequently used
terms like online, multimedia, new media and others are invariably challenged by usage
before widespread understanding is in place. To aid clarity of communication, let us
discuss some of these terms. The purpose is not to provoke argument about their absolute
truth but merely to elucidate their meaning for the reader.
Multimedia
The term generally describes multiple media types being accessed interactively via
computer. It is often tied to CD-ROM (though it need not be) and is driven more by
marketing goals than utility.
Interactive media
This term is used because it is independent of the distribution mechanism (World Wide Web,
CD-ROM, etc.) and carries with it the most crucial attribute, interactivity, without the
prerequisite of multiple media types. There are many practical uses of interactive media
that employ one media type only.
Online
This term refers to material that is accessible via computer networks or
telecommunications, as opposed to material accessed on paper or another non-networked
medium.
New media
In many cases, the conversion from analogue to digital media domains permits greater
functionality and lends new features to the media type (such as compression, image
manipulation, etc.). This term is used to reflect that difference.
A multimedia learning environment requires a number of components or elements for learning
to take place; hardware and software are only part of the requirement. Having the correct
type of equipment, including software, will not inevitably produce the most appropriate
environment for learning.
Access to multimedia technologies, including online systems, is presently a major problem
for both primary and secondary schools. However, some potential connectivity solutions are
beginning to emerge from the IT industry, and these need to be combined with the
exploration of new systems of resource provision for schools.
Although there is current concern regarding security and access to unsuitable materials
via the Internet, technical solutions are beginning to become available to overcome these
issues.
There is also a need to help software producers develop an awareness of the array of
learning styles found in primary and secondary schools.

7.3 EDUCATION ONLINE


What precisely is online education? As per a definition on the Web, it is fundamentally
credit-granting courses or education training delivered chiefly via the Internet to
students at remote locations, including their homes. Online courses may or may not be
delivered synchronously. An online course may require that students and teachers meet once
or periodically in a physical setting for examinations, lectures or laboratories, so long
as the time spent in the physical setting does not exceed 25 per cent of the total course
time. Online education covers several degrees and courses. Through online education, one
can choose from many online degrees or online courses at the various online universities
that provide this facility.
7.3.1 Advantages of Online Education
Some of the advantages of online education are as follows:
• There is no need to give up one's present job. Getting an online degree may even help
in improving career prospects.
• One can study at one of the top colleges in any state of the country, or abroad,
without having to move and pay for boarding.
• One can get a recognized degree from universities recognized worldwide.
• For those with financial constraints, studying online is a beneficial option.

Electronic learning (e-learning) covers various forms of technology-enhanced learning
(TEL) or specific types of TEL, such as online or Web-based learning. Still, the term does
not have a universally accepted definition, and differences exist in the e-learning
industry over whether a technology-enhanced system can be termed e-learning if there is no
set pedagogy, as some define e-learning as 'pedagogy empowered by digital technology'.
The term e-learning is ambiguous to those outside the e-learning industry, and even to
those inside its variety of fields. In companies, for instance, e-learning frequently
denotes the strategies that use the company network to deliver training courses to
employees; in most universities these days, e-learning denotes a specific mode of
attending a course or program of study in which the students seldom or never meet face to
face, nor access on-campus educational facilities, because their education is achieved
online.
7.3.2 Goals and Benefits of E-Learning
E-learning can provide benefits for both organizations and individuals.
• Improved performance: A 12-year meta-analysis of research by the US Department of
Education determined that higher education students in online learning mostly did
better than those in face-to-face courses.
• Increased access: Instructors of the highest quality can impart their knowledge across
borders, permitting students to attend courses across economic, physical and
political boundaries. Distinguished experts have the chance of making information
available internationally, to anyone interested, at minimal cost. MIT OpenCourseWare,
for example, has made significant components of that university's curriculum and
lectures available for free online.
• Convenience and flexibility for learners: In many contexts, e-learning is self-paced
and the learning sessions are available round the clock. Learners are not bound to a
specific day or time to physically attend classes, and they can pause learning
sessions at their convenience. High-end technology is not essential for all online
courses: basic Internet access and audio and video capabilities are the usual
requirements.

A major argument for e-learning is that it empowers learners to develop essential skills
for knowledge-based work by embedding the use of information and communications
technologies within the curriculum. Using e-learning in this way has major implications
for course design and the assessment of learners.
7.3.3 Market
The worldwide e-learning industry is estimated to be worth over 38 billion euros, although
in the European Union only about 20 per cent of e-learning products are developed within
the common market. Developments in Internet and multimedia technologies are the basic
enablers of e-learning, with support, consulting, content, technologies and services
identified as the five key sectors of the e-learning industry.
7.3.3.1 Higher education
There are more than 3.5 million online students learning at institutions of higher
education in the United States. As per the Sloan Foundation reports, there was growth of
around 12-14 per cent per year on average in registrations for fully online learning over
the five years 2004-09 in the United States post-secondary system, compared with an
average of about 2 per cent growth per year in enrollments overall. According to another
study, almost a quarter of all students in post-secondary education were taking fully
online courses in 2008. A report by Ambient Insight Research estimated that in 2009, 44
per cent of post-secondary students in the United States were taking some or all of their
courses online, and projected that this figure would rise to 81 per cent by 2014. Thus, it
can be observed that e-learning is moving quickly from the margins to being a predominant
form of post-secondary education, at least in the United States.
Many public higher education institutions now provide online classes. By contrast, only
about half of private, non-profit schools offer them. The Sloan report, on the basis of a
poll of academic leaders, states that students generally appear to be at least as
satisfied with their online classes as they are with conventional ones. Private
institutions may become more involved with online presentations as the cost of instituting
such a system diminishes. Properly trained staff must also be employed to work with
students online; these staff members need to understand the content area and also be
highly trained in the use of the computer and Internet. Online education is increasing
rapidly, and online doctoral programs have even developed at leading research
universities.
7.3.4 Approaches to E-Learning Services
E-learning services have developed since computers were first used in education. There is
a trend to move towards blended learning services, where computer-based activities are
integrated with practical or classroom-based situations.
It has been proposed that the different types or forms of e-learning can be regarded as a
continuum: from no e-learning (i.e., no use of computers and/or the Internet for learning
and teaching); through classroom aids, such as making classroom lecture PowerPoint slides
available to students through a course website or learning management system; laptop
programs, where students bring laptops to class and use them as part of a face-to-face
class; and hybrid learning, where classroom time is reduced but not abolished and more
time is devoted to online learning; through to fully online learning, a form of distance
education. This classification is rather similar to that of the Sloan Commission reports
on e-learning's status, which use 'web enhanced' and 'web dependent' to reflect a growing
intensity of technology use.
7.3.4.1 Computer-based learning
Computer-based learning (CBL) denotes the use of computers as a key component of the
educational environment. This includes the use of computers in classrooms; however, the
term more generally indicates a structured environment in which computers are used for
educational purposes. The idea is usually regarded as being somewhat different from uses
of computers where learning is at best a marginal element of the experience (e.g.,
computer games, browsing, etc.).
7.3.4.2 Computer-based training
Computer-based trainings (CBTs) are self-paced learning activities accessible via a
computer or handheld device. CBTs typically present content in a linear manner, much like
reading an online book or manual. For this reason, they are often used to teach static
processes, such as using software or completing mathematical equations. The term
computer-based training is often used interchangeably with web-based training (WBT), the
primary difference being the delivery method: CBTs are typically delivered via CD-ROM,
whereas WBTs are delivered via the Internet using a web browser. Assessing learning in a
CBT usually comes in the form of multiple-choice questions or other assessments that can
be easily scored by a computer.
CBTs can be a good alternative to printed learning materials because rich media, including
animations or videos, can easily be embedded to enhance the learning. Another advantage of
CBTs is that they can be easily distributed to a wide audience at a relatively low cost
once the initial development is completed.
7.3.4.3 Computer-supported collaborative learning (CSCL)
Computer-supported collaborative learning (CSCL) is one of the most promising concepts to
improve teaching and learning with the aid of modern information and communication
technology. Collaborative or group learning denotes instructional methods whereby students
are encouraged or required to work together on learning tasks. It is widely agreed to
distinguish collaborative learning from the traditional 'direct transfer' model, in which
the instructor is assumed to be the distributor of knowledge and skills.
7.3.4.4 Technology-enhanced learning (TEL)
Technology-enhanced learning (TEL) has the aim of providing socio-technical innovations
(also improving efficiency and cost effectiveness) for e-learning practices, concerning
individuals and organizations, independent of pace, time and place. The field of TEL
therefore applies to the support of any learning activity through technology.
7.3.5 E-Learning Technology
A learning management system (LMS) is software for delivering, tracking and managing
training and education. LMSs range from systems for managing training and educational
records to software for distributing courses over the Internet and offering features for
online collaboration.
A learning content management system (LCMS) is software for authoring, editing and
indexing e-learning content (courses, reusable content objects). An LCMS may be solely
dedicated to producing and publishing content that is hosted on an LMS, or it can host the
content itself (remote AICC content hosting models).
7.3.5.1 Computer-aided assessment
Computer-assisted assessment (known less commonly as e-assessment) ranges from automated
multiple-choice tests to more sophisticated systems. This form of assessment is becoming
more and more common these days. In some high-end systems, feedback is geared towards a
student's specific mistakes, or the computer can guide the student through a series of
questions, adapting to what he or she appears to have learned or not learned.
The best example of this type of system is perhaps 'online formative assessment'. This
involves making an initial formative assessment by filtering out the incorrect answers.
The author/teacher then determines what the pupil should ideally have done with each
question, and the system offers the pupil practice at every slight variation of the
filtered-out questions; this can be regarded as the formative learning stage. The
subsequent stage is to make a quick assessment using a new set of questions that covers
only the topics already taught. Some systems take this up a level and repeat the cycle,
such as BOFA, which caters to the eleven-plus exam set in the UK.
The term learning design has sometimes come to denote the type of activity enabled by
software, such as the open-source system LAMS, which supports sequences of activities that
can be both adaptive and collaborative. The IMS Learning Design specification is intended
as a standard format for learning designs, and IMS LD Level A is supported in LAMS V2.
E-learning has been replacing traditional settings due to its cost-effectiveness.
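The filter-practise-reassess cycle described above can be sketched as a simple loop. This
is an illustrative model of online formative assessment, not the BOFA product; the
question bank, the notion of a 'variation' and the simulated pupil are all assumptions.

    import random

    # Toy model of the online formative assessment cycle described above:
    # 1) run a formative test, 2) offer practice on variations of each
    # missed topic, 3) re-assess with fresh questions on those topics only.

    # Assumed question bank: topic -> list of (question, answer) variations.
    BANK = {
        "fractions": [("1/2 + 1/4 = ?", "3/4"), ("2/3 + 1/6 = ?", "5/6")],
        "percent":   [("10% of 50 = ?", "5"), ("20% of 45 = ?", "9")],
    }

    def ask(question, answer):
        """Stand-in for pupil interaction: a real system would display the
        question, read the pupil's input and compare it with the answer."""
        return random.random() < 0.7  # simulated pupil, right 70% of the time

    def formative_cycle():
        # Stage 1: one question per topic; collect the topics answered wrongly.
        missed = [topic for topic, variations in BANK.items()
                  if not ask(*variations[0])]
        # Stage 2: practise every variation of each filtered-out topic.
        for topic in missed:
            for question, answer in BANK[topic]:
                ask(question, answer)
        # Stage 3: quick re-assessment covering only the topics just taught.
        return {topic: ask(*BANK[topic][-1]) for topic in missed}

    print(formative_cycle())  # e.g. {'fractions': True}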
7.3.5.2 Electronic performance support systems (EPSS)
Electronic performance support systems (EPSS) are 'computer-based systems that improve
worker productivity by providing on-the-job access to integrated information, advice and
learning experiences'.
7.3.6 Content Issues
Content is a core constituent of e-learning and includes issues such as pedagogy and learning
object re-use.
7.3.6.1 Pedagogical elements
Pedagogical elements are an attempt to define structures or units of educational material:
for example, an assignment, a lesson, a discussion group, a multiple-choice question, a
quiz or a case study. These units should be format-independent; so, although the material
may be delivered in any of these ways, pedagogical structures as such would not include a
video conference, a textbook, a web page or a podcast.
When beginning to create e-learning content, the pedagogical approaches need to be
evaluated. Simple pedagogical approaches make it easy to create content, but they lack
downstream functionality, flexibility and richness. On the other hand, complex pedagogical
approaches can be hard to set up and slow to develop, though they have the potential to
provide more engaging learning experiences for students. Somewhere between these extremes
is an ideal pedagogy that allows a particular educator to produce educational materials
efficiently while at the same time providing the most engaging educational experiences for
students.
7.3.6.2 Pedagogical approaches or perspectives
It is possible to use various pedagogical approaches for e-learning. They include the
following:
• Instructional design: The traditional pedagogy of instruction, which is
curriculum-focused and is developed by a centralized educating group or a single
teacher.
• Social constructivist: This pedagogy is especially well served by the use of wikis,
discussion forums, blogs and online collaborative activities. It is a collaborative
approach that opens educational content creation to a broader group, including the
students themselves. The One Laptop Per Child Foundation attempted to use a
constructivist approach in its project.
• Laurillard's conversational model: This is especially applicable to e-learning;
Gilly Salmon's Five-Stage Model, likewise, is a pedagogical approach to the use of
discussion boards.
• Cognitive perspective: This centers on the cognitive processes involved in learning
as well as how the brain functions.
• Emotional perspective: This centers on the emotional views of learning, like
engagement, motivation, fun, etc.
• Behavioral perspective: This concentrates on the skills and behavioural outcomes of
the learning process, for example, role-playing and application in on-the-job
settings.
• Contextual perspective: This concentrates on the environmental and social setting that
stimulates learning: interaction with other people, the importance of peer support, as
well as pressure and collaborative discovery.
7.3.6.3 Reusability: Standards and learning objects
Much research has been carried out into the technical packaging of electronic teaching
materials and into making and re-using learning objects. These are self-contained units
that are properly tagged with keywords or other metadata, and often stored in an XML file
format. Creating a course requires putting together a sequence of learning objects. There
are proprietary and open, non-commercial and commercial, peer-reviewed repositories of
learning objects, such as the Merlot repository.
A common standard data format for e-learning content is the shareable content object
reference model (SCORM), whereas other specifications allow for the transporting of
learning objects metadata (LOM).
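To give a flavour of what tagging a learning object 'with keywords or other metadata' in
an XML file format looks like, the sketch below builds a small metadata record. The
element names are simplified for illustration and are not the actual SCORM or IEEE LOM
schemas.

    import xml.etree.ElementTree as ET

    # Minimal sketch: wrapping a learning object in an XML metadata record.
    # The element names below are simplified; real SCORM packages and IEEE
    # LOM records use a richer, standardized schema.

    def describe_learning_object(title, keywords, language, resource_href):
        obj = ET.Element("learningObject")
        ET.SubElement(obj, "title").text = title
        ET.SubElement(obj, "language").text = language
        kw = ET.SubElement(obj, "keywords")
        for word in keywords:
            ET.SubElement(kw, "keyword").text = word
        ET.SubElement(obj, "resource", href=resource_href)
        return ET.tostring(obj, encoding="unicode")

    print(describe_learning_object(
        "Introduction to Fractions",
        ["mathematics", "fractions", "primary"],
        "en",
        "lessons/fractions01.html",  # hypothetical path
    ))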
These standards themselves are of recent origin. The Post-Secondary Education Standards
Council (PESC) in the USA is also making headway in developing standards and learning
objects for the higher education space, whereas the school interoperability framework
(SIF) is beginning to turn seriously towards instructional and curriculum learning
objects.
In the United States, there are numerous content standards that are critical as well; the
NCES data standards are a key instance. Each state government's content standards and
achievement benchmarks are critical metadata for linking e-learning objects in that space.
An excellent instance of e-learning that concerns knowledge management and reusability is
navy e-learning, which is available to eligible active duty or retired military members.
This online tool supplies certificate courses to improve the user's knowledge of several
subjects related to military training and civilian skill sets. The e-learning system not
only provides learning objectives but also assesses the progress of the student, and
credit can be earned towards higher learning institutions. This reuse is a splendid
example of knowledge retention and the cyclical process of knowledge transfer and use of
data and records.
7.3.7 Technology Issues
As early as 1993, W.D. Graziadei described an online computer-delivered lecture, tutorial
and assessment project using e-mail, two VAX Notes conferences and Gopher/Lynx, together
with various software programs that allowed students and instructor to create a virtual
instructional classroom environment in science (VICES) in research, education, service &
teaching (REST). In 1997, he and others published an article entitled 'Building
Asynchronous and Synchronous Teaching-Learning Environments: Exploring a Course/Classroom
Management System Solution'. They described a process at the State University of New York
(SUNY) of evaluating products and developing an overall strategy for technology-based
course development and management in teaching-learning. The project(s) had to be easy to
use and maintain, portable, replicable, scalable and immediately affordable, and they had
to have a high probability of success with long-term cost-effectiveness. Today many
technologies can be, and are, used in e-learning, from collaborative software and blogs to
e-portfolios and virtual classrooms. Most e-learning situations use combinations of these
techniques.
Along with the terms educational technology, learning technology and instructional
technology, the term e-learning is generally used to denote the use of technology in
learning in a much wider sense than the computer-based training or computer-aided
instruction of the 1980s. It is also wider than the terms online learning or online
education, which generally denote strictly web-based learning. In cases where mobile
technologies are used, the term m-learning has become more common. E-learning, however,
also has significance beyond the technology, and denotes the actual learning that takes
place using these systems.
E-learning is by nature suited to distance learning and flexible learning, but it can also
be used in conjunction with face-to-face teaching, in which case the term blended learning
is commonly used. E-learning pioneer Bernard Luskin argues that the 'e' must be understood
to have broad meaning if e-learning is to be effective. Luskin notes that the 'e' should
be interpreted to mean emotional, exciting, energetic, enthusiastic, extended, excellent
and educational, in addition to 'electronic', which is the traditional interpretation.
This broader interpretation allows for 21st-century applications and brings learning and
media psychology into the equation.
In higher education particularly, the increasing tendency is to create a virtual learning
environment (VLE) (which is sometimes combined with a management information system (MIS)
to create a managed learning environment) in which all facets of a course are handled
through a consistent user interface throughout the institution. A rising number of
physical universities, as well as newer online-only colleges, have begun to offer a select
set of academic degree and certificate programs through the Internet at a broad range of
levels and in a broad range of disciplines. While some programs require students to attend
some campus classes or orientations, many are delivered completely online. Several
institutions also offer online student support services, such as e-counseling, online
advising and registration, online textbook purchase, student governments and student
newspapers.
E-learning can also refer to educational websites, such as those offering interactive
exercises for children, learning scenarios and worksheets. The term is also used
extensively in the business sphere, where it generally refers to cost-effective online
training.
A recent fashion in the e-learning sector is screencasting. Many screencasting tools are
available, but the latest buzz is about the web-based screencasting tools that permit
users to create screencasts directly from their browser and make the video available
online, so that viewers can stream the video directly. The advantage of such tools is that
they give the presenter the ability to show his or her ideas and their flow, rather than
simply explain them, which may be more confusing when delivered via simple text
instructions. With the combination of video and audio, an expert can mimic the one-on-one
experience of the classroom and deliver complete and clear instructions. From the
learner's point of view, this provides the ability to pause and rewind, and lets the
learner move at his or her own pace, something a classroom cannot always provide. One
example of an e-learning platform based on screencasts is YoHelpOnline.
7.3.7.1 Communication technologies used in e-learning
Communication technologies are normally categorized as asynchronous or synchronous.
Asynchronous activities use technologies such as blogs, wikis and discussion boards. The
idea here is that participants may engage in the exchange of ideas or information without
depending on other participants being involved at the same time. E-mail is also
asynchronous, in that mail can be sent or received without both participants being
involved at the same time.
Synchronous activities involve the exchange of ideas and information with one or more
participants during the same period of time. A face-to-face discussion is an example of
synchronous communication. Synchronous activities occur with all participants joining in
at once, as with an online chat session or a virtual classroom or meeting.
Virtual classrooms and meetings can frequently use a mix of communication technologies.
In many models, the writing community and the communication channels relate to both the
e-learning and the m-learning communities. Both communities provide a general overview of
the basic learning models and the activities required for the participants to join the
learning sessions across the virtual classroom, or even across standard classrooms enabled
by technology. Many activities, indispensable for the learners in these environments,
require frequent chat sessions in the form of virtual classrooms and/or blog meetings.
Lately, context-aware ubiquitous technology has been providing an advanced means of
written and oral communication by using mobile devices with sensors and RFID readers and
tags.

CHECK YOUR PROGRESS


1. What is the core function of education?
2. What are the ways in which multimedia can be used in education?
3. What is online education?

7.4 FUTURE OF INTERACTIVE MEDIA IN EDUCATION


7.4.1 A Vision for the Future
The future of multimedia in education is difficult to foretell, given the pace of change
in both multimedia technologies and the education system. A range of forces is
encouraging, impeding and reshaping these developments (and technology is only one, and
not inevitably the key, influence). Given this complex interaction of forces and the
marked variations across the sub-sectors, there is unlikely to be a single trajectory.
Simple scenarios are hard to give at this stage and may turn out to be deceptively
simplistic. Instead, we can try to identify some particular trends that can be expected to
occur in certain parts of the sector.
Several factors seem to favour the uptake of multimedia, especially the expansion of open
and distance learning and the drive to train people to use multimedia in many aspects of
life. These trends affect the principal sub-sectors of education differently. Here, three
cases are considered: schools, universities and training. They are chosen as illustrating
three poles in the overall education system. Universities are characterized by high levels
of internal expertise (technological, educational and substantive) and resources, which
equip them to take part in the development of multimedia products and futures and thus
shape multimedia to their purposes. Schools, and particularly primary schools, are
characterized by much lower levels of internal resources and expertise.
They will chiefly be consumers of products developed elsewhere, though this may be
constrained by lack of money and technical knowledge. Training is presently furnished by
firms and further education institutions; the application of multimedia is likely to alter
the relationship among these institutions, and here the private sector appears to be at
the forefront of multimedia application. This exemplifies the diversity of patterns within
the sector. The existing divisions of the sector are anticipated to persist at least for
the next 15 years, as they have functions and values well beyond the reach of multimedia
technology.
7.4.2 Meltdown Scenarios
Michael Gell, of Stafford University, and Peter Cochrane, of BT Laboratories, propose a
future for education in which most of the sector is absorbed into the international
'experience' industry. Alterations in the global economy and the growth of new technology
will cause the 'meltdown' of educational organizations and the establishment of many
smaller and often virtual education organizations, as the education sector is no longer
constrained by distance, time or country. In tele-universities, students will have access
to expert lectures without having to meet in an overcrowded hall, and their learning will
be supported by local and distant tutors. Teaching institutions will mix more and more
with other organizations and communities. Learning sessions will be selected from many
sources, and customers will collect many sessions over a lifetime.
The competitive advantage will fall to those countries that move most quickly into the new
learning structure, and the successful learning enterprises will be those that can lead
their markets nationally and internationally.
7.4.3 Universities
Universities encompass different fields of education and are equally centres of research.
Many universities are pioneering moves to provide more vocational training and to take
more mature students; this will continue, with the expansion moving into electronic
delivery. It will also sharpen the differences between teaching universities and research
universities, as the ability to put together good multimedia teaching packages will be
even more of a specialist, time-consuming accomplishment than normal teaching. We can
expect an increased divergence between research and regular teaching, while leading-edge
researchers will contribute much more widely to telecourses. Distance learning will be
greatly improved, making it easier to follow courses. For distance learners, it will be
easier to choose courses from different universities, and price mechanisms will come to
determine access to the best teachers. The growth of price mechanisms for assembling
telecourses will spread the market system throughout the university system.
There will be a separation of teaching from certification and accreditation. Universities with
good reputations and an active commercial policy will strengthen their hold on the
certification of course material and on accreditation, whereas other institutions may handle
the actual teaching and management of resources. For purposes of accreditation, the content
of courses may become much more standardized. Standardization will reinforce the prospects
of commodification. With commodified packages, the style and ease of delivery will become
more crucial. Here, commercial collaborations will become significant in funding
development and distribution. However, standardization of content may be tempered by the
attractiveness of existing local institutions.
There will be a re-emergence of the role of tutor and teacher rather than lecturer: lectures
will be delivered remotely by national and international experts and expert teachers, and
students will have much greater access to course materials.
This will be promoted by the squeezing of government funding per student as student
numbers increase. Learners will be more likely to use the facilities of local colleges: the
existing pattern of only the better-off going away to university will be fortified. One of the
key groups in the development of multimedia will be the university technical support staff,
who will become significant intermediaries in the development and exploitation of all types
of multimedia. They are significant in introducing the technology to the teaching staff and
students, and will be significant in providing and commercializing solutions developed for
internal use. This new professional group, which will include multimedia authors, graphic and
video designers, and others with 'media' and pedagogic skills as well as computer and
audio-visual specialists, will become a significant new constituency and will call for
professional training and recognition.
7.4.4 Vocational/Further Education
Vocational education and training is the field most affected by the development of lifelong
learning, customer-oriented markets and multimedia. Further education colleges will find
their role growing in the short term, but face extinction as individual entities as the
technology matures. As companies become able to bring trainers in electronically, the need
for these institutions will diminish.
Commercial training companies will keep on growing. Being more market-oriented and
with national economies of scale, they will continue to be at the forefront of multimedia
development. Current multimedia packages are better suited to training than to 'education'
and will grow in use very rapidly. They will challenge further education colleges, especially
with the growth of voucher systems for payment of education.
A market could well grow in low-quality computer-based learning products for mass
consumers. Any voucher system would influence this development, depending on quality and
accreditation requirements. In the short term, companies and colleges may want to cream off
the top end of the market.
Schools and universities will get more involved in vocational training. They will appeal
more to 'portfolio' workers who want quality education rather than specific training.
Networked multimedia will be very significant in providing access to these institutions.
Virtual and open higher education institutions will grow to serve these markets.
Self-administered education will grow in importance. Commercial interests will serve this
market via Internet, CD and iTV material. Established brands with a reputation for quality
will still command the market. Standard multimedia courses developed for schools and for
university course support will be marketed through these dominant companies.
7.4.5 Schools
Schools perform an important socialization function in child-minding, furnishing a social
environment for individual development and providing a community focus. This is an
important reason why schools will not vanish into the ether.
There are a number of trajectories for schools, including the following:
1. In the short term, the use of technology is going to vary a great deal. Schools in fortunate
areas will step up their use of technology through the availability of hardware and
software, skilled and motivated teachers and better, more visionary management.
Students will be able to carry on schoolwork on computers at home. Poorer schools
will have low technology usage and, owing to lack of training and resources, poorer
usage of existing technology. Providing hardware and network connections will be
pointless unless training and support for teachers go with them. Students will gain
their multimedia know-how at home through interactive TV and games.
2. Schools will become progressively open for use in 'lifelong' education. There will
surely be a trend towards sharing facilities, especially technical facilities, with private
and public vocational training institutions. This will strengthen the role of the school
as a community node. In deprived areas, the school may be the principal point of
access to networked multimedia education, which in better-off areas would be more
likely to be available at home.

Other issues in schools may be categorized as follows:


Schools will tend to be net consumers of multimedia products – both content and technical
solutions – because of their lower resources. In the longer term, some schools could become
mere child-minding organizations – but with the technology to link students to good
education. Rural areas will adopt the technology eagerly, because of the apparent benefit, but
big schools with all the necessary expertise in-house may see fewer advantages. Schools will
become hosts for teachers of 'minority' subjects, the richer, better-equipped schools having
the teachers based on site, with other schools buying in the courses. In fact, for these subjects,
some teachers may then prefer to become independent or work from private teleschools or
multimedia publishing firms. Multimedia networks will certainly assist with teacher
education and teacher support; they will be welcomed by heads wishing to save on the costs
of releasing staff for training.

CHECK YOUR PROGRESS


4. What is technology-enhanced learning?
5. What is a learning management system (LMS)?
6. What is a learning content management system (LCMS)?

7.5 SUMMARY
• Multimedia is expected to pioneer new opportunities for educational activity and new
forms of delivery. However, these opportunities will only be part of wider-scale
changes in education and training.
• In general, multimedia is an add-on to the existing formal structure in education.
• Multimedia will enable the opening up of the education system, but it may reduce the
chances for many people to go through learning in a real institution. It will provide a
vehicle for researching the learning process and its ingredients more comprehensively
than hitherto, centering on the effectiveness of alternative teaching and delivery
modes.
• The effects of multimedia in education vary widely across the sector, making any
overall summary unmanageable. The methodology demands that major uncertainties,
retarding factors, drivers and constituencies be covered.
• The most significant components of the multimedia education constituency are not the
large institutions, although they may act as poles of attraction, but the decision makers
and educators in schools and training companies, as these are the people who decide
whether multimedia technology can provide better education at a reasonable personal
or institutional 'cost'.
• The education and training purchasers may have their own agenda in terms of
technology, but will be largely reactive to the professionals, in spite of the efforts of
technology and network firms.

7.6 KEY TERMS


• Technology-enhanced learning: The support of any learning activity with the help of
technology.
• Online: Material that can be accessed via a computer network or telecommunications,
instead of material accessed on paper or another non-networked medium.
• New media: Conversion from analogue to digital media domains, permitting greater
functionality and new features for the media type, such as compression, image
manipulation, etc.
• LMS: A software application for managing, delivering and tracking training/education.
It ranges from systems for managing training/educational records to software for
distributing courses over the Internet and offering features for online collaboration.
• LCMS: A software application for indexing, authoring and editing e-learning content,
courses and reusable content objects. It may be solely devoted to producing and
publishing content that is hosted on an LMS, or it can host the content itself.

7.7 END QUESTIONS


Short-Answer Questions
1. What is the potential of multimedia technologies?
2. What is technology-enhanced learning?
3. What is e-learning? Write a note on online education advantages.
4. Write a short note on the goals and benefits of e-learning.
5. How can computer-aided assessment be done?

Long-Answer Questions
1. What are the core functions of education? Explain the nature of education and
training.
2. Discuss multimedia in education.
3. Explain e-learning technology.
4. What are the goals and benefits of e-learning? Explain.
5. Explain the future scenario of e-learning.

7.8 BIBLIOGRAPHY
Allen, I.E. and J. Seaman. 2008. Staying the Course: Online Education in the United States.
Needham, MA: Sloan Consortium.
Allen, I.E. and J. Seaman. 2003. Sizing the Opportunity: The Quality and Extent of
Online Education in the United States. Wellesley, MA: The Sloan Consortium.
Mason, R. and A. Kaye. 1989. Mindweave: Communication, Computers and Distance
Education. Oxford, UK: Pergamon Press.
Bates, A. 2005. Technology, E-learning and Distance Education. London: Routledge.
Harasim, L., S. Hiltz, L. Teles and M. Turoff. 1995. Learning Networks: A Field Guide to
Teaching and Learning Online. Cambridge, MA: MIT Press.
Mayer, R.E. 2001. Multimedia Learning. New York: Cambridge University Press.
Paivio, A. 1971. Imagery and Verbal Processes. New York: Holt, Rinehart and Winston.

UNIT 8 MULTIMEDIA AND VIRTUAL REALITY
Program Name: BSc(MGA)
Written by: Srajan
Structure:
8.0 Introduction
8.1 Unit Objectives
8.2 Fundamental of Multimedia and Virtual Reality
8.2.1 Terminology and Concepts
8.2.2 Timeline
8.2.3 Future
8.2.4 Impact
8.2.5 Heritage and Archaeology
8.2.6 Mass Media
8.2.7 Implementation
8.2.8 Challenges
8.3 Technology Issues
8.4 Computer Science Aspects of VR
8.4.1 Hardware for Computer Graphics
8.4.2 Notable Graphics Workstations and Graphics Hardware
8.4.3 Graphics Architectures for VE Rendering
8.4.4 Computation and Data Management Issues in Visual Scene Generation
8.4.5 Graphics Capabilities in PC-Based VE Systems
8.4.6 Interaction Software
8.4.7 Visual Scene Navigation Software
8.4.8 Geometric Modeling: Construction and Acquisition
8.4.9 Dynamic Model Matching and Augmented Reality
8.5 User Interface
8.5.1 Introduction
8.5.2 Usability
8.5.3 User Interfaces in Computing
8.6 Interaction with Geographic Information
8.6.1 Applications
8.6.2 History of Development
8.6.3 GIS Software
8.6.4 Semantics
8.6.5 Society
8.7 Applications
8.8 Potential
8.9 Summary
8.10 Key Terms
8.11 Questions and Exercises
8.12 Further Reading

8.0 INTRODUCTION
In this unit, you will learn about virtual reality (VR). Virtual reality allows a user to
interact with a computer-simulated environment. The environment may be a simulation of the
real world or an imaginary world. Current VR environments are primarily visual experiences,
displayed either on a computer screen or through special stereoscopic displays. Some
simulations use additional sensory information, such as sound through speakers or
headphones. In medical and gaming applications, advanced haptic systems now include
tactile information, generally known as force feedback.
Users can interact with a virtual environment or a virtual artifact (VA) either through
standard input devices, such as a keyboard and mouse, or through multimodal devices, such
as a wired glove, the Polhemus boom arm and the omni-directional treadmill.
In addition, you will learn about technology issues, computer science aspects, user
interfaces, interaction with geographic information, and the applications and potential of
multimedia and VR.

8.1 UNIT OBJECTIVES:


After going through this unit, you will be able to:
• Understand the fundamentals of multimedia and related terminology
• Learn about technology issues related to multimedia and VR
• Know about user interfaces and interaction with geographic information
• Learn about the applications and potential of multimedia and VR

8.2 FUNDAMENTAL OF MULTIMEDIA AND VIRTUAL REALITY


The fundamentals of multimedia and VR are dealt with in the following sections:
8.2.1 Terminology and Concepts
The term 'artificial reality' has been in use since the 1970s, when it was coined by Myron
Krueger. However, the origin of the term 'virtual reality' (refer to Figure C15) can be traced
back to the French playwright, poet, actor and director Antonin Artaud's seminal book, The
Theatre and Its Double (1938). Artaud described theatre as 'la réalité virtuelle', a VR 'in
which characters, objects and images take on the phantasmagoric force of alchemy's
visionary internal dramas'. The term can also be traced to Damien Broderick's 1982
science-fiction novel The Judas Mandala, where the usage is somewhat different from that
defined above. The Oxford English Dictionary cites the earliest use of the term in a 1987
article titled 'Virtual reality', but the article was not about VR technology.
8.2.2 Timeline
It is believed that the idea of VR started during the 1560s with 360-degree art in the form of
panoramic murals. Baldassare Peruzzi's piece titled 'Sala delle Prospettive' is one example.
Vehicle simulators came into the picture in the 1920s. During the 1950s, Morton Heilig
wrote of an 'Experience Theatre' capable of engaging all human senses in an effective
manner, thus pulling the viewer into the onscreen activity. He built a model of his vision,
called the Sensorama, in 1962. The Sensorama displayed short films while involving multiple
senses (smell, sound, sight and touch). Much ahead of its time, the Sensorama was a
mechanical device, and it purportedly still works today.
8.2.3 Future
It is not possible to accurately predict the future of VR. Very soon, the graphics displayed
will be close to realism, and audio capabilities will reach unheard-of levels of
three-dimensional sound. This implies adding sound channels both below and above the
individual, as well as a holophonic approach.
8.2.4 Impact
There has been extensive interest in the possible social impact of innovative technologies
such as VR (as widely seen in utopian literature, the social sciences and popular culture).
Mychilo S. Cline, in his book Power, Madness and Immortality: The Future of Virtual
Reality, published in 2005, argues that VR will eventually lead to major changes in human
life as well as human activity. He argues that:
• VR will be integrated into our daily lives and activities and will be used in various
human ways.
• Techniques will be developed to influence human behavior, interpersonal
communication and cognition (that is, virtual genetics).
• As we spend more and more time in virtual space, there will be a gradual 'migration
to virtual space', resulting in significant changes in economics, worldview and
culture.
• The design of virtual environments may be used to extend basic human rights into
virtual space, to promote human freedom and well-being, and to promote social
stability as we move to further stages in socio-political development.

8.2.5 Heritage and Archaeology


In heritage and archaeology, VR has enormous potential in museum and visitor centre
applications. However, the use of VR has been tempered by the difficulty of presenting a
'quick to learn' real-time experience to numerous people at any given time. Many historic
reconstructions tend to be in a pre-rendered format on a shared video display. This allows
more than one person to view a computer-generated world, but limits the interaction that
full-scale VR can provide. In 1994, a VR presentation was used for the first time in a
heritage application, when a museum visitor interpretation provided an interactive
'walk-through' of a 3D reconstruction of Dudley Castle in England as it was in 1550.
8.2.5.1 VR reconstruction
VR enables heritage sites to be recreated with exact precision, so that the recreations can be
published in various media. The original heritage sites are often either inaccessible to the
public or no longer exist. This technology can help the user to develop virtual replicas of
caves, natural environments, old towns, monuments, sculptures and archaeological elements.
8.2.6 Mass Media
Over the years, mass media has aided as well as hindered the development of VR. During the
research 'boom' that started two decades back, the media proclaimed the potential of VR,
perhaps overexposing it by publishing the predictions of anyone who had one (whether or
not that person had a true perspective on the technology and its limits). This raised
expectations that were nigh impossible to fulfil under the then prevailing technology. The
entertainment media further added to these expectations with futuristic imagery that was
many generations beyond contemporary capabilities.
8.2.6.1 Fiction books
Many science fiction books and movies have dealt with characters that have been 'trapped in
virtual reality'. Daniel F. Galouye's novel Simulacron-3 is one of the earliest books to dabble
in the possibility. The novel was adapted as a German teleplay titled Welt am Draht ('World
on a Wire') in 1973, and later into mainstream cinema with the movie The Thirteenth
Floor in 1999. Science fiction books have promoted the idea as a partial substitute for the
sadness of reality (for example, a pauper in the real world can be a presence in VR). These
books have also touted VR as a method for creating spectacular virtual worlds in which one
may escape from Earth's now toxic atmosphere. In one such scenario, people's minds exist
within a shared and romanticized virtual world called Dream Earth, where they grow up, live
and die, never knowing they are living in a dream.
8.2.6.2 Television
Perhaps the earliest example of VR on television is the Doctor Who serial 'The Deadly
Assassin'. First broadcast in 1976, it introduced a dream-like computer-generated reality
known as the Matrix. Star Trek: The Next Generation was the first major American television
series to showcase VR. Several episodes featured a holodeck, a VR facility that enabled
its users to recreate and experience anything they wanted. In the holodeck, replicators, force
fields, holograms and transporters were used to actually recreate and place objects, rather
than the illusions of physical objects offered by current VR technology.
8.2.6.3 Motion pictures
The 1982 movie Tron by Steven Lisberger was the first mainstream Hollywood picture to
explore the idea. A year later, it was more fully expanded in the Natalie Wood film Brainstorm.
David Cronenberg's film eXistenZ dealt with the danger of confusion between reality and
VR. As seen in the movie The Lawnmower Man, cyberspace became something that most
movies completely misunderstood. Also, several episodes of the British comedy Red Dwarf
used the idea that life (or at least the life seen on the show) is a VR game.
8.2.6.4 Music videos
The lengthy music video for hard rock band Aerosmith's 1993 single 'Amazing' depicted VR.
It went as far as to show two young people participating in VR simultaneously from their
separate personal computers (without knowing who the other participant was).
8.2.6.5 Games
In 1991, Virtuality (originally W Industries, later renamed) licensed the Amiga 3000 for use
in their VR machines and released a VR gaming system called the 1000CS.

Fig. 8.1 Classic Virtual Reality HMD with Glove


It was a stand-up immersive head-mounted display (HMD) platform with a tracked 3D
joystick (Figure 8.1). The system featured several VR games, such as Dactyl Nightmare
(shoot-em-up), Legend Quest (adventure and fantasy), Hero (VR puzzle) and Grid Busters
(shoot-em-up).
8.2.6.6 Attractions
For the developers of theme-park-style attractions, VR technology was a major part of
the development of the hardware, taking them beyond simulation towards an immersive
entertainment experience. Of all these developments, the Walt Disney 'DisneyQuest' venue
is the major conceptual application; it was still operational in 2009. Mobility of VR attractions
has also been at the forefront of their consumer appeal. With improvements in
technology and its growing acceptance, various business and corporate events employ VR
providers to attract business and entertain their employees and guests.
8.2.6.7 Fine art
David Em was the first fine artist to create navigable virtual worlds, in the 1970s. He did his
early work on mainframes at III, JPL and Caltech. Jeffrey Shaw explored the potential of VR
in fine arts with his early works, such as Legible City (1989), Virtual Museum (1991) and
Golden Calf (1994). Canadian fine artist Char Davies created the immersive VR art pieces
Osmose (1996) and Ephemere (1998). Maurice Benayoun introduced metaphorical,
philosophical or political content, combining VR, networks, generation and intelligent
agents, in works such as Is God Flat (1994), The Tunnel under the Atlantic (1995)
and World Skin (1997).
8.2.6.8 Marketing
In the media, a chic image has been cultivated for VR. A side effect of this is that, over the
years, advertising and merchandise have been associated with VR to take advantage of the
buzz. With varying degrees of success, this is often seen in product tie-ins with cross-media
properties, especially gaming licenses. Early examples include Mattel's Power Glove for the
NES from the 1980s, the U-Force and, later, the Sega Activator.
8.2.6.9 Health care education
Virtual reality is finding its way into the training of health care professionals, though its use
is still not widespread. Use of VR ranges from anatomy instruction to surgery simulation.
Annual conferences are organized to examine the latest research in utilizing VR in the
medical fields.
8.2.6.10 Therapeutic uses
Virtual reality is used as a therapeutic tool in a variety of exposure therapies, ranging from
phobia treatments to approaches for treating PTSD. A basic VR simulation with
straightforward sight and sound models has proved invaluable in phobia treatment (e.g.,
zoophobias) as a step towards true exposure.
8.2.6.11 Radio
In 2009, the British digital radio station BBC Radio 7 broadcast Planet B, a science-fiction
drama set in a virtual world. Planet B is considered the largest-ever commission for
an original drama programme.
8.2.7 Implementation
A real-time virtual environment can be developed using a computer graphics library. This can
be done by using the library as an embedded resource coupled with a common programming
language, such as C++, Perl, Java or Python. OpenGL, Java 3D and VRML are among the
most popular computer graphics libraries/APIs/languages. Their use is directly influenced
by the performance demanded of the system, the program's purpose and the hardware
platform. Multithreading (for example, POSIX threads) can also be used to accelerate 3D
performance and enable cluster computing with multi-user interactivity.
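At its core, such a real-time VE program is a loop that reads input, advances the simulation and redraws the scene, held to a target frame rate. The following is a minimal sketch of that loop in Python; the three stub functions are hypothetical placeholders, not part of any particular graphics library.

    import time

    TARGET_FPS = 30                 # assumed update rate for the environment
    FRAME_BUDGET = 1.0 / TARGET_FPS

    def poll_devices():
        # Hypothetical: read trackers, gloves, keyboard; return input events.
        return []

    def step_simulation(events, dt):
        # Hypothetical: advance object behavior by dt seconds.
        pass

    def render_frame():
        # Hypothetical: submit the transformed scene to the graphics pipeline.
        pass

    last = time.perf_counter()
    for _ in range(3):              # a real system would loop until the user quits
        now = time.perf_counter()
        dt, last = now - last, now
        step_simulation(poll_devices(), dt)   # input and simulation first...
        render_frame()                        # ...then drawing, once per frame
        spare = FRAME_BUDGET - (time.perf_counter() - now)
        if spare > 0:
            time.sleep(spare)       # hold the loop to the target rate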
8.2.7.1 Manufacturing
Virtual reality can be used in new product design as an ancillary tool for engineering in
manufacturing processes, new product prototyping and simulation. Electronic design
automation, CAD, finite element analysis and computer-aided manufacturing are other
examples. Stereolithography and 3D printing show how computer graphics modeling
can be applied to create physical parts of real objects used in the naval, aerospace and
automotive industries. Beyond modeling assembly parts, 3D computer graphics techniques
are currently utilized in the research and development of medical devices for innovative
therapies, treatments, patient monitoring and early diagnosis of complex diseases.
8.2.8 Challenges
Virtual reality has been heavily criticized for its inefficiency in navigating non-geographical
information. Currently, the idea of ubiquitous computing is very popular in user interface
design, and this may be a reaction against VR and its problems. In reality, these two types of
interface have totally different goals and are complementary. Ubiquitous computing is aimed
at bringing the computer into the user's world, rather than forcing the user to go inside the
computer. A current trend is to merge the two user interfaces to create a fully immersive and
integrated experience.

8.3 TECHNOLOGY ISSUES


A VR system may be defined in terms of four types of components, which are as follows:
• Effectors (input and output devices)
• A reality simulator (the computer and sensory synthesis hardware)
• An application (that is, software)
• Geometry (information within the application that describes physical attributes of
objects).

Over the next 20 years, the technologies involved in each of these components will
undergo major development. Ongoing endeavors in miniaturization and nanotechnology
should make effectors ultra-lightweight and wireless by the end of this period. Moreover, the
application of nanotechnology to materials science should make it possible to simulate
textures on skin with a much higher degree of verisimilitude than is currently available.
Scientists are researching ways of controlling prostheses and robots directly through the
activity of the human brain (Zimmer, 2004); very soon, the same may be possible for control
of the virtual world. Reality simulators are expected to become much smaller and more
powerful over this period. Currently, there are hardware deficits, for example, the lack of
graphics processing power in the Quantum3D Thermite wearable computer (Knerr, Garrity
and Lampton, 2004). These deficits should be overcome in the course of normal
technological development, making only reasonable and conservative extrapolations from
present capabilities and development efforts. VR software is expected to see major
development during this period.

8.4 COMPUTER SCIENCE ASPECTS OF VR


The computer technology that enables us to develop three-dimensional virtual environments
(VEs) comprises both hardware and software. The current popular, technical and scientific
interest in VEs is inspired by the advent and availability of increasingly powerful and
affordable, visually oriented, interactive graphical display systems and techniques.
Graphical image generation and display capabilities that were not widely available earlier are
now found on the desktops of many professionals and are finding their way into households.
As these systems have become more affordable and available, and have been coupled with
more capable single-person-oriented viewing and control devices (for example, head-mounted
displays and hand controllers), they have seen an increased orientation toward
real-time interaction, making them both more individualized and more appealing to
individuals.
Limiting VE technology primarily to visual interactions, however, defines the technology
as merely a more personal and affordable variant of classical military and commercial
graphical simulation technology. VEs are much more interesting and potentially useful as a
significant subset of multimodal user interfaces – human-machine interfaces that actively or
purposefully use interaction and display techniques in multiple sensory modalities (for
example, visual, haptic and auditory). In this sense, VEs can be viewed as interactive and
spatially oriented multimodal user interfaces.
With regard to computer hardware, there are several senses of frame rate. They are
roughly classified as:
• Graphical
• Computational
• Data access

Graphical frame rates are critical to sustain the illusion of presence or immersion in a VE.
Note that the three rates may be independent: the graphical scene may change due to the
motion of the user's point of view without any new computation or data access. Experience
has shown that the graphical frame rate should be as high as possible; frame rates lower
than 10 frames per second severely diminish the illusion of presence. If the displayed
graphics rely on computation or data access, computation and data access rates of 8 to 10
frames per second are necessary to sustain the visual illusion that the user is watching the
time evolution of the VE.
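To see what these rates mean in time terms, each rate fixes a per-frame budget of 1/rate seconds. A quick check of the floors quoted above:

    # Per-frame time budgets implied by the minimum rates quoted above.
    for label, rate in [("graphics", 10), ("computation/data access", 8)]:
        print(f"{label}: {rate} frames/s -> {1000.0 / rate:.0f} ms per frame")
    # graphics: 10 frames/s -> 100 ms per frame
    # computation/data access: 8 frames/s -> 125 ms per frame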
8.4.1 Hardware for Computer Graphics
The key development behind the current push for VEs is the ubiquity of computer
graphics workstations capable of real-time, three-dimensional display at high frame rates. We
have had flight simulators with significant graphics capability for years, but these simulators
have been expensive, not widely available and, worse, not readily programmable. Flight
simulators are generally constructed with a specific objective in mind, such as providing
training for a particular military plane. To reduce the total number of graphics and central
processing unit cycles required, such simulators are microcoded and programmed in
assembly language. Systems programmed in this manner are difficult to change and
maintain, and hardware upgrades for them are usually major undertakings serving a small
customer base.
8.4.2 Notable Graphics Workstations and Graphics Hardware
Graphics performance is difficult to measure owing to the widely varying complexity of
visual scenes and the different hardware and software approaches to computing and
displaying visual imagery. The most straightforward measure is given in terms of
polygons/second, though this only crudely indicates the scene complexity that can be
displayed at useful interactive update rates. Polygons are the common building blocks for
creating graphic imagery; by one estimate, visual reality is 80 million polygons per picture.
If we need photorealistic VEs at 10 frames/s, this translates into 800 million polygons/s.
No current graphics hardware provides this, so approximations must be made for the
moment. This means living with less detailed virtual worlds, perhaps by judiciously using
hierarchical data structures or by off-loading some of the graphics requirements onto
available CPU resources instead.
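The arithmetic behind the 800 million figure is direct, combining the 80-million-polygon estimate with the 10 frames/s floor mentioned earlier:

    polygons_per_picture = 80_000_000   # the 'visual reality' estimate above
    frames_per_second = 10              # minimum rate for a sustained illusion
    print(f"{polygons_per_picture * frames_per_second:,} polygons/s")
    # 800,000,000 polygons/s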
8.4.3 Graphics Architectures for VE Rendering
In this section, the high-level computer architecture issues that determine the applicability of
a graphics system to VE rendering are explained. The following assumptions are made about
the systems included in our discussion:
• The systems use a z-buffer (or depth buffer) for hidden surface elimination. A
z-buffer stores the depth – or distance from the eye point – of the closest surface seen at
that pixel. When a new surface is scan-converted, the depth at each pixel is computed.
If the new depth at a given pixel is closer to the eye point than the depth
currently stored in the z-buffer at that pixel, the new depth and intensity information
are written into both the z-buffer and the frame buffer; otherwise, the new
information is discarded and the next pixel is examined. In this way, nearer
objects always overwrite more distant ones, and when every object has been scan-converted,
all surfaces have been correctly arranged in depth (see the sketch after this list).
• The systems use an application-programmable, general-purpose processor to cull the
database.
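The z-buffer test described in the first assumption reduces to a per-pixel depth comparison. Below is a minimal sketch of that comparison in Python; a real z-buffer lives in graphics hardware and operates on scan-converted fragments, so the list-of-rows layout here is purely illustrative.

    import math

    WIDTH, HEIGHT = 4, 3
    # Every depth starts 'infinitely far'; every pixel starts as background.
    z_buffer = [[math.inf] * WIDTH for _ in range(HEIGHT)]
    frame_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

    def write_fragment(x, y, depth, color):
        # Keep the fragment only if it is nearer the eye than what is stored.
        if depth < z_buffer[y][x]:
            z_buffer[y][x] = depth
            frame_buffer[y][x] = color
        # Otherwise it is discarded: a nearer surface has already won.

    write_fragment(1, 1, depth=5.0, color=(255, 0, 0))  # far red surface
    write_fragment(1, 1, depth=2.0, color=(0, 0, 255))  # nearer blue surface
    print(frame_buffer[1][1])   # (0, 0, 255) – the nearer surface is kept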

Fig. 8.2 The Graphics Pipeline


The performance expected from such a system can be substantial: millions of triangles per
second or hundreds of millions of fragments per second. The calculations involved in
performing this work require billions of operations per second. None of today's fastest
general-purpose processors can satisfy these demands; therefore, all modern high-performance
graphics systems are built on parallel architectures. Figure 8.2 is a general representation of
such a parallel architecture, in which the rendering operations of Figure 8.3 are replicated.
Whereas the implementation of such an architecture is attractively simple, it fails to solve the
rendering problem, because primitives in object coordinates cannot be easily separated into
groups corresponding to different sub-regions of the frame buffer. In general, there is a
many-to-many mapping between the primitives in object coordinates and the partitions of the
frame buffer.
To allow this many-to-many mapping, disjoint parallel rendering pipes must be combined at a
minimum of one point along their paths, and this point must come after the completion of
per-primitive operations. The crossbar can be located prior to rasterization (the primitive
crossbar), between rasterization and per-fragment operations (the fragment crossbar), or
following per-fragment operations at the pixel merge (the pixel merge crossbar).
There are four major graphics systems representing different architectures based on
crossbar location:
• Silicon Graphics RealityEngine is a flow-through architecture with a primitive
crossbar.
• Evans and Sutherland's Freedom series is a flow-through architecture with a fragment
crossbar.
• Pixel Planes 5 uses a tiled primitive crossbar.
• PixelFlow is a tiled, pixel merge machine.

8.4.4 Computation and Data Management Issues in Visual Scene Generation
Many important applications of VEs require extensive computational and data
management capabilities. The computations and data in the application primarily support the
tasks performed in the application. For example, in a simulation application, the computations
may support the physical behavior of objects in the VE, whereas in a visualization
application they may support the extraction of interesting features from a complex
precomputed dataset. Such computations may require on the order of millions of
floating-point operations. Currently, simulations demand only modest data management
capabilities; however, as the complexity of simulations increases, the data supporting them
may grow. By contrast, visualization applications often demand a priori unpredictable access
to gigabytes of data.
8.4.4.1 Strategies for meeting requirements
One strategy for meeting the data management requirements is to observe that, typically, only
a small fraction of the data is actually used in an application. In the particle injection example
mentioned above, only 16 accesses are required (with each access loading a few tens of
bytes) per particle per time step. These accesses are scattered in unpredictable ways across
the dataset. If only the data actually used are loaded, the bandwidth requirements of this
example are trivial. However, the seek-time requirements are a problem: 20,000 particles
would require 320,000 seeks per time step, or 3.2 million seeks per second. This is two orders
of magnitude beyond the seek-time capabilities of current disk systems.
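The seek figures quoted above follow directly from the access pattern; the per-second figure assumes the 10 updates per second needed to sustain the visual illusion:

    particles = 20_000
    seeks_per_particle = 16   # scattered accesses per particle per time step
    steps_per_second = 10     # update rate needed to sustain the illusion

    seeks_per_step = particles * seeks_per_particle        # 320,000
    seeks_per_second = seeks_per_step * steps_per_second   # 3,200,000
    print(f"{seeks_per_step:,} seeks/step, {seeks_per_second:,} seeks/s")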
Supercomputer systems have attained very high computational speeds in many ways, but
these methods typically work only for special computations. For example, Cray
supercomputers rely on a vectorized architecture, which is very fast for array-type operations
but would not be as fast for the particle example discussed earlier.
Another example is the massively parallel system, which distributes memory and computation
among many processors. Massively parallel systems are very fast for some computations but
are slow for computations that are not parallelizable or that require large amounts of data
movement. In a VE system, many types of computations may be required, implying that any
single specialized computational architecture will be unsuitable. To enhance versatility,
computations in VE systems should be based on a few parallel, high-power scalar processors
with large shared memory.
8.4.5 Graphics Capabilities in PC-Based VE Systems
Small VE systems have been built successfully around high-end personal computers
(PCs) with special-purpose graphics boards.
8.4.5.1 Software for the generation of three-dimensional VEs
There are many components of the software required for the real-time generation of VEs.
These include:
• Interaction software
• Navigation software
• Polygon flow minimization to the graphics pipeline software
• World modeling software (geometric, physical and behavioral)
• Hypermedia integration software

Each of these components is large in its own right, and all of them must act in concert and in
real time to create VEs. The objective behind the interconnectedness of these components is a
fully detailed, fully interactive and seamless VE. Seamless implies that you can drive a
vehicle across a terrain, stop in front of a building, get out of the vehicle, enter the building on
foot, go up the stairs, enter a room and interact with items on a desktop, all without delay or
hesitation in the system. To build seamless systems, substantial improvements in software
development are required. The following sections describe the construction of the software in
support of virtual worlds.
8.4.6 Interaction Software
Interaction software provides the mechanisms to:
• Construct a dialogue from various control devices (for example, trackers and haptic
interfaces).
• Apply that dialogue to a system or application, so that the multimodal display changes
appropriately.

The first part of this software accepts raw inputs from a control device and interprets
them. Several libraries are available, both as commercial products and as 'shareware'. These
libraries read the most common interface devices, such as the DataGlove and various
trackers.
The second part of building interaction software turns the information about a
system's state from a control device into a dialogue that is meaningful to the system or
application. At the same time, it filters out erroneous or unlikely portions of the dialogue that
might be generated by faulty data from the input device.
Interaction is a critical component of VE systems, involving both hardware and software.
In VEs, interface hardware provides the positions or states of various parts of the body. This
information is typically used to do the following (a minimal sketch follows the list):
• Map user actions to changes in the environment (for example, moving objects by
hand, and so on).
• Pass commands to the environment (for example, a hand gesture or button push).
• Provide information input (for example, speech recognition for spoken commands,
text or numerical input).
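A minimal sketch of this mapping layer in Python is given below. The device sample and the three handlers are hypothetical, standing in for whatever a tracker, glove or speech library actually reports; the point is only that one raw sample fans out into the three uses listed above.

    def dispatch(sample, environment):
        # 1. Direct manipulation: hand position moves the grabbed object.
        if sample.get("grabbing"):
            environment["held_object_pos"] = sample["hand_position"]
        # 2. Commands: a recognized gesture triggers an action.
        if sample.get("gesture") == "point":
            environment["fly_direction"] = sample["hand_orientation"]
        # 3. Information input: recognized speech becomes a command string.
        if "speech" in sample:
            environment["last_command"] = sample["speech"]

    env = {}
    dispatch({"grabbing": True, "hand_position": (0.1, 1.2, -0.4)}, env)
    dispatch({"speech": "delete my chair"}, env)
    print(env)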

The user’s intent must be inferred from the hardware output as read by the computer
system. Inaccuracies in the hardware providing the signal may complicate this inference.
8.4.6.1 Existing technologies
Despite the existence of several paradigms for interaction in VEs, including direct
manipulation, indirect manipulation, logical commands and data input, the problem of
realistic, real-time interaction is still comparatively unexplored. Generally, a combination of
these paradigms is used to perform tasks in VEs. Other paradigms certainly need development
to realize the potential of a natural interface. The following is an overview of some existing
technologies.
In direct manipulation, the position and orientation of a part of the user's body,
usually the hand, is mapped continuously to some aspect of the environment. Typically, the
position and orientation of an object in the VE is controlled through direct manipulation.
Pointing in order to move is another example of direct manipulation, in which orientation
information is used to determine a direction in the VE. Analogs of manual tasks, such as
picking and placing, require the display of forces as well and are therefore well suited to
direct manipulation, though more abstract aspects of the environment, such as background
lighting, can also be controlled in this way.
8.4.6.2 Design approaches and issues to be addressed
The choice of conceptual approach is a crucial decision in designing the interaction. Two of
the more important issues involved in interacting in a three-dimensional environment are line
of sight and acting at a distance. With regard to line of sight, VE applications have to
contend with the fact that some useful information might be obscured or distorted due to an
unfortunate choice of user viewpoint or object placement. In some cases, the result may lead
to misinformation, confusion and misunderstanding. Common pitfalls include obscuration
and unfortunate coincidences.
• Obscuration: At times, a user must interact with an object that is currently out of
sight or hidden behind other objects. How does dealing with this special case change
the general form of any user interface technique?
• Unfortunate coincidences: The archetypical example of this phenomenon is the
famous optical illusion in which a person stands on a distant hill while a friend stands
near the camera, aligning his hand so that it appears as if the distant friend is a small
person standing in the palm of his hand. Such devices, while amusing in some
contexts, may prove dangerous under circumstances such as air traffic control.

The following are a few potentially useful selection techniques for use in 3D computer-
generated environments:
• Pointing and ray casting: This allows selection of objects in clear view, but not those
inside or behind other objects.
• Dragging: This is analogous to 'swipe select' in traditional GUIs. Selections can be
made on the picture plane with a rectangle, or in an arbitrary space with a volume by
'lassoing'. Lassoing, which allows the user to select a space of any shape, is an
extremely powerful technique in the two-dimensional paradigm. A three-dimensional
input device is required to carry this idea over to three dimensions, and perhaps a
volume selector instead of a two-dimensional lasso.
• Naming: Voice input for selection techniques is vital in three-dimensional
environments. The powerful command archetype 'delete my chair' should not be
ignored. Naming is extremely important and difficult; it forms a subset of the
more general problem of naming objects by generalized attributes.
• Naming attributes: Specifying a selection set by a common attribute or set of
attributes ('all red chairs with arms') is a technique that should be utilized. As some
attributes are spatial in nature, it is easy to see how these might be specified with a
gesture as well as with voice, offering a fluid and powerful multimodal selection technique:
all red chairs, shorter than this (user gestures with two hands) in that room (user looks
over shoulder into adjoining room).
8.4.7 Visual Scene Navigation Software
Visual scene navigation software provides the means for moving the user through the three-
dimensional virtual world. There are many component parts of this software, including:
• Control device gesture interpretation (gesture messages from the input subsystem to
movement processing).
• Virtual camera viewpoint and view volume control.
• Hierarchical data structures for polygon flow minimization to the graphics pipeline.

In the software, these components act together in real time to produce each successive
frame in a continuous series of frames of coherent motion through the virtual world. The
following sections provide a survey of currently developed navigation software and a
discussion of special hierarchical data structures for polygon flow.
8.4.7.1 Survey of currently developed navigation software
Navigation refers to the problem of controlling the point and direction of view in the VE.
Using conventional computer graphics techniques, navigation can be reduced to the problem
of determining a position and orientation transformation matrix (in homogeneous graphics
coordinates) for the rendering of an object. This transformation matrix can be usefully
decomposed into the transformation due to the user's head motion and the transformation
due to motions over long distances (travel in a virtual vehicle). There may also be several
virtual vehicles concatenated together.
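A sketch of that decomposition using homogeneous 4x4 matrices is shown below (with NumPy; the translation values are invented examples, and rotations are omitted for brevity):

    import numpy as np

    def translation(tx, ty, tz):
        # Homogeneous 4x4 translation matrix.
        m = np.eye(4)
        m[:3, 3] = (tx, ty, tz)
        return m

    # Pose of the viewpoint: long-distance vehicle travel composed with the
    # user's head motion; more vehicles could be concatenated the same way.
    vehicle = translation(1000.0, 0.0, 0.0)
    head = translation(0.0, 1.7, 0.2)
    eye_pose = vehicle @ head

    # The matrix used for rendering maps world to eye coordinates: the inverse.
    view_matrix = np.linalg.inv(eye_pose)
    eye_position_world = np.array([1000.0, 1.7, 0.2, 1.0])
    print(view_matrix @ eye_position_world)   # [0. 0. 0. 1.] – at the eye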
8.4.7.2 Survey of hierarchical data structure techniques for polygon flow minimization
The back end of visual scene navigation comprises hierarchical data structures for
minimizing polygon flow to the graphics pipeline. Once a matrix representing the chosen
view is generated, the scene description transformed by that matrix must be sent to the visual
display. One key technique for getting the visual scene updated at interactive rates is to
minimize the total number of polygons sent to the graphics pipeline.
In a classic paper, Clark (1976) presents a general approach for solving the polygon flow
minimization problem. He lays stress on the construction of a hierarchical data structure for
the virtual world (Figure 8.4). The approach is to envision a world database for which a
bounding volume is known for each drawn object. The bounding volumes are organized
hierarchically in a tree that is used to rapidly discard large numbers of polygons.

Fig. 8.4 Hierarchical Data Structure for Polygon Flow Minimization


This part of the Clark paper gives a good start for anyone building a three-dimensional VE
for which the total number of polygons is significantly larger than the hardware is capable of
drawing; a sketch of the culling scheme follows.
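The following is a sketch of this culling idea, assuming the simplest possible bounding volumes (spheres) and a visibility test supplied by the caller; the names and the toy test are invented for illustration.

    class Node:
        # A node in the bounding-volume tree: a sphere plus children/polygons.
        def __init__(self, center, radius, children=(), polygons=()):
            self.center, self.radius = center, radius
            self.children, self.polygons = children, polygons

    def cull(node, is_visible, out):
        # Collect polygons only from subtrees whose sphere may be visible.
        if not is_visible(node.center, node.radius):
            return              # an entire subtree is discarded at one stroke
        out.extend(node.polygons)
        for child in node.children:
            cull(child, is_visible, out)

    # Toy test: visible if the sphere overlaps a slab around the viewer.
    visible = lambda center, radius: abs(center[0]) - radius < 10.0
    near = Node((2.0, 0, 0), 1.0, polygons=["near-poly"])
    far = Node((50.0, 0, 0), 1.0, polygons=["far-poly"])
    root = Node((26.0, 0, 0), 27.0, children=(near, far))

    drawn = []
    cull(root, visible, drawn)
    print(drawn)   # ['near-poly'] – the far subtree never reaches the pipeline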
The second part of Clark’s paper discusses the actual display of the polygons in the leaf
nodes of the tree. The objective is to send only minimal descriptions of objects through the
graphics pipeline (minimal based on the expected final pixel coverage of the object). In this
approach, multiple-resolution versions of each three-dimensional object are maintained,
along with software for rapidly determining which resolution to draw.
8.4.7.3 Application-specific solutions
Polygon flow minimization to the graphics pipeline is best understood by looking at
specific solutions. Some of the more interesting work has been done by Brooks at the
University of North Carolina at Chapel Hill on architectural walkthrough. The objective of
those systems was to provide an interactive walkthrough capability for a planned new
computer science building at the university, offering visualization of the internal spaces of
that building so that changes could be considered before construction.
8.4.7.4 World modeling
Models defining the form, behavior and appearance of objects are the core of any VE;
therefore, a host of modeling problems are central to the development of VE technology. A
significant technological challenge of multimodal VEs is to design and develop object
representation, simulation and rendering (RSR) techniques that support visual, haptic and
auditory interactions with the VE in real time. There are two major approaches to the RSR
process. First, a unified central representation may be used that captures all the geometric,
surface and physical properties required for physical simulation and rendering purposes. In
principle, methods such as finite element modeling could be used as the basis for representing
these properties and for physical simulation and rendering purposes. At the other extreme,
separate, spatially and temporally coupled representations can be maintained that represent
only those object properties relevant to simulating and rendering interactions in a single
modality (for example, auditory events).
8.4.8 Geometric Modeling: Construction and Acquisition
Detailed three-dimensional geometric models are required in computer-aided design (CAD),
in mainstream computer graphics and in various other fields. Geometric modeling is an
active area of academic and industrial research in its own right, and a wide range of
commercial modeling systems is available. In spite of the wealth of available tools, modeling
is generally regarded as an onerous task.
For VE construction, geometric modeling is a vital enabling technology whose
limitations may impede progress. Practically, the VE research community would benefit from
a shared, open modeling environment – one that includes physics. To understand this, you
need to look at how three-dimensional geometric models are currently acquired, which can
be done by looking at how several VE endeavors have reported their model acquisition
process.
8.4.9 Dynamic Model Matching and Augmented Reality
The term augmented reality refers to the use of transparent head-mounted displays that
superimpose synthetic elements on a view of the real surroundings. Unlike conventional
heads-up displays, in which the added elements have no direct relation to the background, the
synthetic objects in augmented reality are supposed to appear as part of the real environment.
That is, as nearly as possible, they should interact with the observer and with real objects as
if they too were real.

CHECK YOUR PROGRESS


1. What is the name of the first mainstream Hollywood picture to feature virtual reality?
2. What is VR?
3. What is augmented reality?
4. Define the term user interface.
5. How do graphical user interfaces work?
6. How do web-based user interfaces work?

8.5 USER INTERFACE


User interface (also known as human-computer interface or man-machine interface (MMI))
is the aggregate of means by which users interact with a system – a particular machine,
device, computer program or other complex tool. The user interface provides means of:
• Input, which allows the users to manipulate the system
• Output, which allows the system to indicate the effects of the users' manipulation

8.5.1 Introduction
To work with a system, users must be able to control the system and assess its state. For
example, when driving an automobile, the driver uses the steering wheel to control the
direction of the vehicle, and the accelerator pedal, brake pedal and gearstick to control its
speed. The driver identifies the position of the vehicle by looking through the windscreen
and knows the exact speed of the vehicle by reading the speedometer. The user interface of
the automobile is composed of the instruments the driver can use to accomplish the tasks of
driving and maintaining the automobile.
8.5.2 Usability
The design of a user interface affects the effort the user must expend to provide input for
the system and to interpret the output of the system, and how much effort it takes to learn how
to do this. Usability is the extent or degree to which the design of a particular user interface
takes account of the human psychology and physiology of the users, and makes the
process of using the system efficient and satisfying.
8.5.3 User Interfaces in Computing
In computer science and human-computer interaction, the user interface (of a computer
program) refers to the graphical, textual and auditory information the program presents to the
user, and the control sequences (such as keystrokes with the computer keyboard, movements
of the computer mouse and selections with the touch screen) the user employs to control the
program.
8.5.3.1 Types
At present, the following types of user interface are the most common:
• Graphical user interfaces (GUIs) accept input through devices such as the computer
keyboard and mouse and provide articulated graphical output on the computer
monitor. There are at least two different principles widely used in GUI design:
object-oriented user interfaces (OOUIs) and application-oriented interfaces.
• Web-based user interfaces or web user interfaces (WUIs) accept input and provide
output by generating web pages that are transmitted via the Internet and viewed by the
user using a web browser program. Newer implementations utilize Java,
AJAX, Adobe Flex, Microsoft .NET or similar technologies to provide real-time
control in a separate program, eliminating the need to refresh a traditional
HTML-based web browser. Administrative web interfaces for web servers, servers and
networked computers are often known as control panels.

The following user interfaces are common in various fields outside desktop computing:
• Command-line interfaces, where the user provides input by typing a command
string with the computer keyboard and the system provides output by printing text on
the computer monitor. These are used by programmers and system administrators in
engineering and scientific environments, and by technically advanced personal
computer users.
• Tactile interfaces supplement or replace other output forms with haptic feedback
methods. They are used in computerized simulators, and so on.
• Touch user interfaces are GUIs using a touch screen display as a combined input and
output device. They are used in many types of point-of-sale terminals, industrial
processes and machines, self-service machines, and so on.
Other types of user interfaces:
• Attentive user interfaces manage the user's attention, deciding when to
interrupt the user, the type of warnings, and the level of detail of the messages
presented to the user.
• Batch interfaces are non-interactive user interfaces. Here, the user specifies all
the details of the batch job in advance of batch processing and receives the
output when all the processing is done. The computer does not prompt for
further input once processing has started.
• Conversational interface agents personify the computer interface in the form
of an animated person, robot or other character (such as Microsoft's Clippy
the paperclip) and present interactions in a conversational form.
• Crossing-based interfaces are GUIs in which the primary task consists of
crossing boundaries instead of pointing.
• Gesture interfaces are GUIs that accept input in the form of hand gestures, or
mouse gestures sketched with a computer mouse or a stylus.
• Intelligent user interfaces are human-machine interfaces that aim to improve
the efficiency, effectiveness and naturalness of human-machine interaction by
representing, reasoning and acting on models of the user, domain, task,
discourse and media (for example, graphics, natural language, gesture).
• Motion tracking interfaces monitor the user's body motions and translate them
into commands, as currently developed by Apple.
• Multi-screen interfaces utilize multiple displays to provide a more flexible
interaction. This is often applied in computer game interaction, in both
commercial arcades and, more recently, the handheld markets.
• Non-command user interfaces observe the user to infer his/her needs and
intentions, without requiring that he/she formulate explicit commands.
• Object-oriented user interfaces (OOUIs).
• Reflexive user interfaces allow the users to control and redefine the entire
system via the user interface alone, for instance to change its command verbs.
Typically, this is only feasible with very rich GUIs.
• Tangible user interfaces lay greater emphasis on touch and the physical
environment or its elements.
• Task-focused interfaces are user interfaces that address the information
overload problem of the desktop metaphor by making tasks, not files, the
primary unit of interaction.
• Text user interfaces are user interfaces that provide text as output but accept
forms of input other than typed command strings.
• Voice user interfaces accept input and provide output by generating voice
prompts. The user provides input by pressing keys or buttons, or by responding
verbally to the interface.
• Natural-language interfaces are used in search engines and on web pages. The
user types in a question and waits for a response.
• Zero-input interfaces receive input from a set of sensors instead of querying
the user with input dialogs.
• Zooming user interfaces are GUIs in which information objects are
represented at different levels of scale and detail, and in which the user can
change the scale of the viewed area to show more detail.
• Archy is a keyboard-driven user interface by Jef Raskin, arguably more
efficient than mouse-driven user interfaces for document editing and
programming.

8.5.3.2 Modalities and modes


A modality is a path of communication used by the user interface to carry input and output. Examples of modalities:
• Input: A computer keyboard allows the user to enter typed text, and a digitizing tablet allows the user to create free-form drawings.
• Output: A computer monitor allows the system to display text and graphics (the vision modality), and a loudspeaker (refer to Figure C8) allows the system to produce sound (the auditory modality).

A mode is a distinct method of operation within a computer program, in which the same input may produce different perceived results depending on the state of the program. Heavy use of modes often reduces the usability of a user interface, because the user must expend effort to remember the current mode state and switch between mode states as necessary. A minimal sketch of mode-dependent input handling appears below.
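As a minimal, hypothetical Python sketch (not drawn from any particular product), the following fragment shows how the same keystroke yields different results depending on the current mode, which is exactly the state the user must keep track of:

# A minimal, hypothetical sketch of modal input handling (vi-style).
# The same key produces different results depending on the current mode.
class ModalEditor:
    def __init__(self):
        self.mode = "command"   # the mode state the user must remember
        self.text = ""

    def press(self, key):
        if self.mode == "command":
            if key == "i":          # 'i' switches to insert mode...
                self.mode = "insert"
            elif key == "x":        # ...but 'x' deletes a character
                self.text = self.text[:-1]
        else:  # insert mode: every key, including 'i' and 'x', is typed text
            if key == "ESC":
                self.mode = "command"
            else:
                self.text += key

editor = ModalEditor()
for key in ["i", "h", "i", "ESC", "x"]:
    editor.press(key)
print(editor.text)  # prints "h": the two 'i' presses meant different things

Here the user who forgets the current mode cannot predict what a keypress will do, which is the usability cost described above.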

8.6 INTERACTION WITH GEOGRAPHIC INFORMATION


The geographic information system (GIS) or geographical information system captures,
stores, analyses, manages and presents data that is linked to locations. Technically, a GIS
includes mapping software and its application to remote sensing, land surveying, aerial photography, mathematics, photogrammetry, geography, and tools that can be implemented with GIS software. Still, many refer to a 'geographic information system' as GIS even if it does not cover all tools connected to topology.
8.6.1 Applications
GIS technology can be used for scientific investigations, resource management, asset
management, archaeology, environmental impact assessment, urban planning, cartography,
criminology, geographic history, marketing, logistics, prospectivity mapping, and other
purposes. For example, GIS might allow emergency planners to easily compute emergency response times (that is, logistics) in the event of a natural disaster. GIS might be used to find wetlands that require protection from pollution. GIS can be used by a company to locate a new business site to take advantage of a previously under-served market.
8.6.2 History of Development
About 15,500 years ago, hunters drew pictures of the animals they hunted on the walls of
caves near Lascaux, France. Track lines and tallies were associated with the animal drawings
presumably to depict migration routes (Figure 8.5). While simplistic in comparison to modern technologies, these early records replicate the two-element structure of modern GIS: an image associated with attribute information.
In 1854, John Snow depicted a cholera outbreak in London using points to represent the locations of individual cases, possibly the earliest use of the geographic method. His study of the distribution of cholera helped him identify the source of the disease: a contaminated water pump (the Broad Street pump, whose handle he had disconnected, thus terminating the outbreak) within the heart of the cholera outbreak.
Fig. 8.5 E.W. Gilbert's Version (1958) of John Snow's 1855 Map of the Soho Cholera Outbreak Showing the Clusters of Cholera Cases in the London Epidemic of 1854.
8.6.3 GIS Software
Using numerous software applications, geographic information can be accessed, transferred, overlaid, processed and displayed. Within industry, prominent commercial offerings from companies such as Autodesk, Bentley Systems, ESRI, Intergraph, Manifold System, MapInfo and Smallworld offer entire suites of tools. Government and military departments often use custom software, open source products such as GRASS or uDig, or more specialized products that meet a well-defined requirement. Although free tools are available to view GIS datasets, public access to geographic information is dominated by online resources, such as Google Earth and interactive web mapping.
8.6.3.1 Background
Up to the late 1990s, when GIS data was mostly held on large computers and used to maintain internal records, GIS software was a stand-alone product. However, as access to the Internet and networks grew, along with the demand for distributed geographic data, GIS software gradually shifted its entire outlook towards delivering data over a network. GIS software is now often marketed as a combination of various interoperable applications and APIs. This helps in automating many complex processes without worrying about the underlying algorithms and processing steps of conventional GIS software.
8.6.3.2 Relating information from different sources
Location may be annotated by x, y and z coordinates of longitude, latitude and elevation, or by other geocode systems, such as ZIP Codes, or by highway mile markers. Any variable that can be spatially located can be fed into a GIS. Several computer databases that can be directly entered into a GIS are produced by government agencies and non-government organizations. Different types of data in map form can be fed into a GIS.
8.6.3.3 Data representation
GIS data represents real-world objects (roads, land use, elevation) with digital data. Real-world objects can be categorized into two abstractions: discrete objects (a house) and continuous fields (rainfall amount or elevation). There are two broad techniques to store data in a GIS for both abstractions: raster and vector.
Raster
Any digital image represented in grids is essentially a raster image. Anyone familiar with digital photography will recognize the pixel as the image's smallest individual unit. A combination of pixels produces an image, as distinct from the scalable vector graphics on which the vector model is based. A digital image concerns itself with the output as a depiction of reality, whereas in a photograph transferred to a computer, the raster data type reflects an abstraction of reality. Aerial photos are perhaps the most commonly used form of raster data, with one primary purpose: to display a comprehensive image on a map, as well as for digitization. Other raster data sets will contain information such as elevation, as in a DEM (Figure 8.6), or the reflectance of a wavelength of light, as in LANDSAT imagery.

Fig. 8.6 Digital Elevation Model, Map (image) and Vector Data
The raster data type comprises columns and rows of cells, each cell holding a single value. Raster data may be images (raster images), with each pixel or cell having a colour value. Other values captured for each cell may be a discrete value (e.g., land use), a continuous value (e.g., temperature), or null if no data is available. Although a raster cell stores a single value, it can be extended by using raster bands to represent RGB (red, green, blue) colours, colourmaps (a mapping between a thematic code and an RGB value), or an extended attribute table with one row for each unique cell value. The resolution of a raster data set is expressed by its cell width in ground units.
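As a rough illustration of this cell-based model, the following Python sketch (all values invented for illustration) represents a single-band raster with a null cell, and a three-band RGB raster, as NumPy arrays:

# A minimal sketch of the raster model using NumPy (illustrative values only).
import numpy as np

# A single-band raster: rows x columns of cells, one value per cell.
# The values here might be temperatures; NaN marks a null "no data" cell.
temperature = np.array([
    [21.5, 22.0, 22.4],
    [21.9, np.nan, 22.8],   # null cell: no reading available
    [22.3, 22.7, 23.1],
])

# An RGB image raster: three bands stacked, one value per cell per band.
red   = np.full((3, 3), 200, dtype=np.uint8)
green = np.full((3, 3), 150, dtype=np.uint8)
blue  = np.full((3, 3), 100, dtype=np.uint8)
rgb = np.stack([red, green, blue])   # shape: (bands, rows, cols)

cell_width_m = 30.0   # resolution: ground width of one cell, in metres
print(temperature.shape, rgb.shape, cell_width_m)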

Fig. 8.7 Simple Vector Map


A simple vector map uses each of the vector elements: points for wells, lines for rivers, and a polygon for the lake (Figure 8.7).
In a GIS, geographical features are often expressed as vectors, treating those features as geometrical shapes. Different geographical features are represented by different types of geometry, as follows (a short code sketch of the three types appears after the list):
• Points
Zero-dimensional points are used for geographical features that can be expressed by a
single point reference, in other words, simple location. For example, the location of wells,
peak elevations, features of interest or trailheads. Points provide the least amount of
information of these file types.
• Lines or poly lines

One-dimensional lines or polylines are used for linear features, such as rivers, roads, railroads, trails and topographic lines. As with point features, linear features displayed at a small scale are represented as lines rather than as polygons. Line features can measure distance.
• Polygons

Two-dimensional polygons are used for geographical features that cover a particular area of the earth's surface. Such features may include lakes, park boundaries, buildings, city boundaries, or land uses. Polygons convey the most information of the file types. Polygon features can measure perimeter and area.
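The following minimal Python sketch, using the open-source Shapely library, illustrates the three geometry types and the measurements they support; the coordinates are purely illustrative:

# A minimal sketch of the three vector geometry types using Shapely.
from shapely.geometry import Point, LineString, Polygon

well = Point(12.5, 41.9)                          # zero-dimensional: a location
river = LineString([(0, 0), (3, 4), (6, 5)])      # one-dimensional: a linear feature
lake = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])  # two-dimensional: an area

print(river.length)   # lines can measure distance (about 8.16 here)
print(lake.area)      # polygons can measure area (12.0)
print(lake.length)    # ...and perimeter (14.0)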
Advantages and disadvantages
The following are certain important advantages and disadvantages of using a raster or
vector data model to represent reality:
• Raster datasets capture a value for all points in the area covered that may require more
storage space than representing data in a vector format that can store data only where
required.
• Raster data facilitates easy implementation of overlay operations that are more difficult
with vector data.
• Vector data can be displayed as vector graphics implemented on traditional maps.
Whereas raster data will appear as an image that may have a blocky appearance for object
boundaries depending on the resolution of the raster file.
• Vector data can be easier to register, scale and re-project, which can simplify combining vector layers from different sources.
• Vector data is more compatible with relational database environments where they can be
part of a relational table as a normal column and can be processed using a multitude of
operators.
• Vector files are usually smaller in size than raster data, which can be 10 to 100 times larger than vector data (depending on resolution).
• Vector data is simpler to update and maintain, whereas a raster image must be completely reproduced (for example, when a new road is added).
• Vector data enhances analysis capability, especially for ‘networks’ such as roads, power,
rail, telecommunications, and so on. (Examples: Best route, largest port, airfields
connected to two-lane highways). Raster data will not contain all the characteristics of the
features it displays.

Non-spatial data
Additional non-spatial data can also be stored along with the spatial data, which is represented by the coordinates of a vector geometry or the position of a raster cell. In vector data, the additional data comprises attributes of the feature. For example, a forest inventory polygon may have an identifier value as well as information about tree species. In raster data, the cell value can store attribute information, but it can also be used as an identifier that relates to records in another table, as in the sketch below.
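A minimal Python sketch of this identifier-to-attribute-table idea follows; the land-cover codes and attribute records are hypothetical:

# A minimal sketch: raster cell values used as identifiers that relate to
# attribute records in a separate table (all values are illustrative).
import numpy as np

land_cover = np.array([
    [1, 1, 2],
    [1, 3, 2],
    [3, 3, 2],
])

# Attribute table keyed by cell value.
attributes = {
    1: {"class": "forest",   "species": "oak"},
    2: {"class": "water",    "species": None},
    3: {"class": "farmland", "species": None},
}

value = land_cover[1, 1]              # look up one cell...
print(attributes[value]["class"])     # ...and its related attributes: farmland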
8.6.3.4 Data capture
Data capture, that is, feeding information into the system, consumes much of the time of GIS practitioners. A variety of methods is used to enter data into a GIS, where it is stored in a digital format.
To produce digital data, existing data printed on paper or PET film maps can be
digitized or scanned. A digitizer produces vector data in a way similar to an operator who
traces points, lines and polygon boundaries from a map. Scanning a map produces output in
raster data that could be further processed to produce vector data.
Coordinate Geometry (COGO) is a technique used to enter survey data directly into a GIS from digital data collection systems on survey instruments. Positions from a Global Navigation Satellite System (GNSS), such as the Global Positioning System (GPS), another survey tool, can also be fed directly into a GIS.
Remotely sensed data also plays a vital role in data collection; it comes from sensors attached to a platform. Sensors include cameras, digital scanners and LIDAR, while platforms usually comprise aircraft and satellites.
8.6.3.5 Raster-to-vector translation
A GIS can perform data restructuring to convert data into different formats. For example, a GIS can be used to convert a satellite image map to a vector structure by generating lines around all cells with the same classification, while determining the cell spatial relationships, such as adjacency or inclusion. A minimal sketch of this boundary-finding idea appears below.
More advanced data processing can take place with image processing, a technique developed in the late 1960s by NASA and the private sector to provide contrast enhancement, false colour rendering and a variety of other techniques, including the use of two-dimensional Fourier transforms.
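The following NumPy sketch illustrates only the core idea behind such a conversion, namely locating the cell edges where the classification changes (which is where boundary lines would be generated); production GIS tools do considerably more:

# A minimal sketch of the idea behind raster-to-vector conversion:
# find the cell edges where the classification changes, which is where
# boundary lines would be traced.
import numpy as np

classes = np.array([
    [1, 1, 2, 2],
    [1, 1, 2, 2],
    [3, 3, 3, 2],
])

# Horizontal class changes: compare each cell with its right neighbour.
h_edges = classes[:, :-1] != classes[:, 1:]
# Vertical class changes: compare each cell with the cell below it.
v_edges = classes[:-1, :] != classes[1:, :]

print(np.argwhere(h_edges))  # column boundaries between classes
print(np.argwhere(v_edges))  # row boundaries between classes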
8.6.3.6 Projections coordinate systems and registration
A property ownership map and a soils map might display data at different scales. In a GIS, map information must be manipulated so that it registers, or fits, with information gathered from other maps. Prior to analysis, digital data may have to undergo other manipulations (projection and coordinate conversions, for example) that integrate them into a GIS.
Projection is a basic component of map making. A projection is a mathematical means of transferring information from a model of the Earth, which represents a three-dimensional curved surface, to a two-dimensional medium: paper or a computer screen. Different projections are used for different map types because each projection particularly suits specific uses. A projection that represents the shapes of the continents precisely, for example, will distort their relative sizes.
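As a small illustration of a coordinate conversion, the following sketch uses the pyproj library to transform geographic longitude/latitude into planar Web Mercator coordinates; the sample point is arbitrary:

# A minimal sketch of a projection/coordinate conversion using pyproj:
# geographic coordinates on the curved surface (longitude/latitude) are
# transformed into planar Web Mercator coordinates in metres.
from pyproj import Transformer

to_mercator = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)

lon, lat = -0.1276, 51.5072           # London, in degrees
x, y = to_mercator.transform(lon, lat)
print(x, y)                           # planar coordinates in metres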
8.6.3.7 Spatial analysis with GIS
Given the vast range of techniques for spatial analysis that have been developed over the past half century, any summary or review can only cover the subject to a limited depth. This is a rapidly changing field. GIS packages increasingly include analytical tools as standard built-in facilities, optional toolsets, add-ins or 'analysts'. In many instances, such facilities are provided by the original software suppliers (commercial vendors or collaborative non-commercial development teams), while in other cases facilities have been developed and are provided by third parties.
Data modeling
It is cumbersome to relate wetland maps to rainfall amounts captured at different points, such as airports, television stations and high schools. However, a GIS can be used to depict two- and three-dimensional characteristics of the Earth's surface, subsurface and atmosphere from information points. For example, a GIS can quickly generate a map with isopleths or contour lines that shows differing amounts of rainfall.
Topological modeling
A GIS can identify and analyze the spatial relationships that exist within the digitally stored
spatial data. These topological relationships enable complex spatial modeling and analysis.
Topological relationships between geometric entities traditionally contain adjacency (what
adjoins what), containment (what encloses what) and proximity (how close something is to
something else).
Networks
If all the factories near a wetland were to release chemicals accidentally into the river at the same time, how long would it take for a damaging amount of pollutant to enter the wetland reserve? A GIS can simulate the routing of materials along a linear network. Values such as slope, speed limit or pipe diameter can be applied in network modeling to represent the flow of the phenomenon more accurately. Network modeling is commonly employed in transportation planning, hydrology modeling and infrastructure modeling. A minimal routing sketch appears below.
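As an illustration of the underlying routing computation, the following Python sketch runs Dijkstra's shortest-path algorithm over a small, hypothetical road graph; the edge weights could encode distance, travel time or capacity:

# A minimal sketch of network routing on a GIS-style road graph:
# Dijkstra's algorithm finds the best (cheapest) route between nodes.
import heapq

roads = {   # hypothetical road network: node -> [(neighbour, cost), ...]
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}

def shortest_route(graph, start, goal):
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, weight in graph[node]:
            heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return None

print(shortest_route(roads, "A", "D"))  # (8, ['A', 'C', 'B', 'D'])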

Fig. 8.8 Cartographic Modelling


An example of using layers in a GIS application (Figure 8.8). In this example, the forest
cover layer (light green) is at the bottom with the topographic layer over the forest cover
layer. Next up is the stream layer, then the boundary layer and then the road layer. The order
is very important to properly display the final result. Note that the pond layer was located just
below the stream layer, so that a stream line can be seen overlying one of the ponds.
Map overlay
The combination of several spatial datasets (points, lines or polygons) creates a new output vector dataset, which is visually similar to stacking several maps of the same region. These overlays are similar to mathematical Venn diagram overlays. A union overlay combines the geographic features and attribute tables of both inputs into a single new output. An intersect overlay defines the area where both inputs overlap and retains a set of attribute fields for each. A symmetric difference overlay defines an output area that contains the total area of both inputs except the overlapping area. A sketch of these three operations follows.
Data extraction is a GIS process similar to vector overlay, and it can be used in either vector or raster data analysis. Rather than combining the properties and features of both datasets, data extraction involves using a 'clip' or 'mask' to pull out the features of one dataset that fall within the spatial extent of another dataset.
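A minimal Shapely sketch of the three overlay operations follows; the two rectangles stand in for input layers:

# A minimal sketch of the three overlay operations using Shapely.
from shapely.geometry import Polygon

a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])   # first input layer
b = Polygon([(2, 2), (6, 2), (6, 6), (2, 6)])   # second input layer

union = a.union(b)                      # all of both inputs combined
intersect = a.intersection(b)           # only where both inputs overlap
sym_diff = a.symmetric_difference(b)    # both inputs minus the overlap

print(intersect.area)   # 4.0: the 2x2 overlapping square
print(sym_diff.area)    # 24.0: 16 + 16 minus twice the overlap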
Automated cartography
Both digital cartography and GIS encode spatial relationships in structured formal representations. In digital cartography, GIS is used as a (semi-)automated process of making maps, so-called automated cartography. In practice, this can be a subset of a GIS, within which it is equivalent to the visualization stage, because in most cases not all GIS functionality is used. Cartographic products can be in either hardcopy or digital format.
Geostatistics
Geostatistics is a point-pattern analysis that generates field predictions from data points. It is one way of looking at the statistical properties of spatial data. It differs from general applications in that it uses graph theory and matrix algebra to reduce the number of parameters in the data. Only the second-order properties of the GIS data are examined.
When phenomena are measured, the observation methods control the accuracy of any subsequent analysis. Due to the nature of the data (for example, traffic patterns in an urban environment, or weather patterns over the Pacific Ocean), a constant or dynamic degree of accuracy is always lost in the measurement. This loss of accuracy is determined by the scale and distribution of the data collection.
To determine the statistical relevance of the analysis, an average is computed so that points (gradients) outside any immediate measurement can be used to predict behavior. This is on account of the limitations of the applied statistics and data collection methods, and interpolation is required to predict the behavior of particles, points and locations that are not directly measurable.
Fig. 8.9 Hillshade
Hillshade model derived from a Digital Elevation Model (DEM) of the Valestra area in
the northern Apennines (Italy) (Figure 8.9).
Interpolation is the process by which a surface, often a raster dataset, is created through the input of data collected at a number of sample points. Depending on the properties of the data set, there are several forms of interpolation, each of which treats the data differently. In comparing interpolation methods, the first consideration is whether the source data will change (exact or approximate). The next is whether the method is subjective (a human interpretation) or objective. Further, the nature of the transitions between points must be considered: are they abrupt or gradual? Finally, a method may be global (it uses the entire data set to form the model) or local (an algorithm is repeated for each small section of terrain). A minimal sketch of one such method follows.
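As a concrete example of a simple, objective, global method, the following sketch implements inverse distance weighting (IDW), in which nearby samples influence the estimate more than distant ones; the sample readings are invented:

# A minimal sketch of inverse distance weighting (IDW) interpolation:
# nearby sample points influence the estimate more than distant ones.
import math

samples = [  # (x, y, measured value), e.g. rainfall readings
    (0.0, 0.0, 10.0),
    (5.0, 0.0, 20.0),
    (0.0, 5.0, 30.0),
]

def idw(x, y, points, power=2):
    num = den = 0.0
    for px, py, value in points:
        d = math.hypot(x - px, y - py)
        if d == 0:
            return value            # exactly on a sample point
        w = 1.0 / d ** power        # closer points get larger weights
        num += w * value
        den += w
    return num / den

print(idw(1.0, 1.0, samples))   # pulled strongly toward the nearest sample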
Address geocoding
Geocoding is interpolating spatial locations (X and Y coordinates) from street addresses or any other spatially referenced data, such as ZIP Codes, parcel lots and address locations. A reference theme, such as a road centerline file with address ranges, is required to geocode individual addresses. Historically, individual address locations have been interpolated, or estimated, by examining address ranges along a road segment. These are usually provided in the form of a table or a database. The GIS then places a dot approximately where that address belongs along the segment of centerline. For example, an address point of 500 will be at the middle of a line segment that starts with address 1 and ends with address 1000. Geocoding can also be applied to actual parcel data, typically from municipal tax maps. In this case, the result of geocoding is an actually positioned space rather than an interpolated point. This approach is widely used to provide more precise location information.
Reverse geocoding
Reverse geocoding is the process of returning an estimated street address number for a given coordinate. For example, a user can click on a road centerline theme (thereby providing a coordinate) and get back the estimated house number. This house number is interpolated from the range allocated to that road segment. If the user clicks at the middle of a segment that starts with address 1 and ends with 100, the returned value will be somewhere near 50. Note that reverse geocoding does not return actual addresses, only estimates of what should be there based on the predetermined range. A minimal sketch of both directions appears below.
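The following sketch illustrates both directions of this interpolation on a single hypothetical road segment with an address range of 1 to 1000:

# A minimal sketch of interpolated address geocoding and its reverse,
# using one road segment with a known address range (all values invented).
def geocode(address, seg_start, seg_end, addr_lo=1, addr_hi=1000):
    """Estimate (x, y) for an address along a centerline segment."""
    t = (address - addr_lo) / (addr_hi - addr_lo)   # fraction along segment
    x = seg_start[0] + t * (seg_end[0] - seg_start[0])
    y = seg_start[1] + t * (seg_end[1] - seg_start[1])
    return x, y

def reverse_geocode(t, addr_lo=1, addr_hi=1000):
    """Estimate the address at fraction t (0..1) along the segment."""
    return round(addr_lo + t * (addr_hi - addr_lo))

print(geocode(500, (0.0, 0.0), (100.0, 0.0)))  # about the middle: (~50, 0)
print(reverse_geocode(0.5))                    # about 500 (an estimate only)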
8.6.3.8 Data output and cartography
Cartography is the design and production of maps, or the visual representation of spatial data. The vast majority of modern cartography is done with the help of computers, usually using a GIS. However, production-quality cartography is also achieved by importing layers into a design program for refinement. Most GIS software gives the user substantial control over the appearance of the data.
Cartographic work serves two major functions:
• It produces graphics on the screen or on paper that convey the results of analysis to the people who make decisions about resources. Wall maps and other graphics can be produced, helping the viewer to visualize and thereby understand the results of analyses or simulations of potential events. Web map servers facilitate the distribution of generated maps through web browsers using various web-based application programming interfaces (AJAX, Java, Flash, and so on).
• Other database information can be produced for further analysis or use. An example could be a list of all addresses within one mile (1.6 km) of a toxic spill.

8.6.3.9 Graphic display techniques


Traditional maps are abstractions of the real world, a sampling of important elements portrayed on a sheet of paper with symbols to represent physical objects. People who use these maps must interpret these symbols. Topographic maps display the shape of the land surface with contour lines or with shaded relief.
Today, graphic display techniques such as shading based on altitude in a GIS can make relationships among map elements visible, heightening one's ability to extract and analyze information. For example, to produce a perspective view of a portion of San Mateo County, California, two types of data were combined in a GIS (a minimal shading sketch follows the list):
• The digital elevation model, comprising surface elevations recorded on a 30-metre horizontal grid, shows high elevations as white and low elevations as black.
• The accompanying Landsat Thematic Mapper image displays a false-colour infrared image looking down at the same area in 30-metre pixels, or picture elements, for the same coordinate points, pixel by pixel, as the elevation information.
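As a rough sketch of altitude-based shading, the following fragment computes a simple hillshade from a small, invented DEM grid using the standard slope/aspect illumination formula:

# A minimal sketch of altitude-based shading (hillshade) from a DEM grid.
# The elevations are illustrative values on a 30-metre grid.
import numpy as np

dem = np.array([
    [100.0, 105.0, 110.0],
    [102.0, 108.0, 115.0],
    [104.0, 112.0, 120.0],
])
cell = 30.0                                   # grid spacing in metres

dz_dy, dz_dx = np.gradient(dem, cell)         # surface slopes per axis
slope = np.arctan(np.hypot(dz_dx, dz_dy))
aspect = np.arctan2(dz_dy, -dz_dx)

azimuth, altitude = np.radians(315.0), np.radians(45.0)   # light from NW
shade = (np.sin(altitude) * np.cos(slope) +
         np.cos(altitude) * np.sin(slope) * np.cos(azimuth - aspect))
print(np.clip(shade * 255, 0, 255).astype(np.uint8))      # brightness grid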

8.6.3.10 Spatial ETL


Spatial ETL tools provide the data processing features of traditional extract, transform, load (ETL) software, but their primary focus is on the ability to manage spatial data. They give GIS users the ability to translate data between different standards and proprietary formats, while geometrically transforming the data en route.
8.6.3.11 The Future
Fig. 8.10 The Future: GeaBios – Tiny WMS/WFS Client (Flash/DHTML)
Many disciplines can benefit from GIS technology. An active GIS market has helped reduce costs and continually improve the hardware and software components of GIS (Figure 8.10). In turn, these developments will result in a much wider use of the technology throughout science, government, business and industry, with applications including real estate, public health, crime mapping, national defense, sustainable development, natural resources, landscape architecture, archaeology, regional and community planning, and transportation and logistics.
8.6.3.12 OGC standards
The Open Geospatial Consortium (OGC) is an international consortium of 384 companies, government agencies, universities and individuals participating in a consensus process to develop publicly available geo-processing specifications. OpenGIS specifications define open interfaces and protocols that support interoperable solutions that 'geo-enable' the Web, wireless and location-based services, and mainstream IT, and that enable technology developers to make complex spatial information and services accessible and useful with all kinds of applications. OGC protocols include the web map service (WMS) and the web feature service (WFS); a sketch of a WMS request follows.
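As an illustration, the following sketch constructs a WMS 1.1.1 GetMap request; the server URL and layer name are hypothetical, but the query parameters follow the WMS standard:

# A minimal sketch of an OGC WMS GetMap request. The server URL and the
# layer name are hypothetical; the parameters follow the WMS 1.1.1 standard.
from urllib.parse import urlencode

params = {
    "service": "WMS",
    "version": "1.1.1",
    "request": "GetMap",
    "layers": "example:roads",         # hypothetical layer name
    "srs": "EPSG:4326",
    "bbox": "-0.5,51.3,0.3,51.7",      # minx,miny,maxx,maxy
    "width": 800,
    "height": 400,
    "format": "image/png",
}
url = "https://example.com/wms?" + urlencode(params)
print(url)   # fetching this URL would return a rendered map image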
GIS products are divided by the OGC into two categories, based on how completely
and accurately the software follows the OGC specifications (Figure 8.11).
Fig. 8.11 OGC Standards Help GIS Tools Communicate
Compliant products are software products that comply with the OpenGIS Specifications of the OGC. When a product has been tested and certified as compliant by the OGC testing program, it is automatically registered as 'compliant' on the OGC site.
8.6.3.13 Web mapping
In recent years, the Web has seen an explosion of mapping applications, such as Google Maps and Bing Maps. These websites give the public access to huge amounts of geographic data.
Some of these sites, such as Google Maps and OpenLayers, expose an API that allows users to create custom applications. These toolkits commonly provide street maps, aerial/satellite imagery, geocoding, searches and routing functionality.
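A minimal sketch of a custom web map follows, using the folium Python library (a wrapper around the Leaflet JavaScript mapping toolkit); the coordinates and marker are illustrative:

# A minimal sketch of a custom web map built with folium.
import folium

m = folium.Map(location=[51.5072, -0.1276], zoom_start=13)  # London
folium.Marker(
    [51.5007, -0.1246],
    popup="Hypothetical point of interest",
).add_to(m)
m.save("map.html")   # open in a browser to pan and zoom the street map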
8.6.3.14 Global change, climate history program and prediction of its impact
Conventionally, maps have been deployed to explore the Earth in order to exploit its resources. GIS technology, as an extension of cartographic science, has augmented the competence and investigative power of conventional mapping. Now, as the scientific community comes to terms with the environmental consequences of anthropogenic activities that are influencing climate change, GIS technology is quickly emerging as a necessary tool for recognizing the impacts of these changes over time. GIS facilitates combining several sources of information, such as current maps and the latest information from earth observation satellites, with the outputs of climate change models. This can aid understanding of the impact of such changes on complex natural systems. A good example is the study of the melting of Arctic ice.
8.6.3.15 Adding the dimension of time
The condition of the Earth's surface, atmosphere and sub-surface can be determined by feeding satellite data into a GIS; the technology offers researchers the facility to examine variations in the Earth over time. For instance, the changes in vegetation over the course of a growing season can be animated to determine when drought was most widespread in a particular region. The resulting graphic, known as a normalized vegetation index, provides a rough measure of plant health. Working with two variables over time allows researchers to detect regional differences in the lag between a decline in rainfall and its consequence on vegetation. GIS technology, as well as the availability of digital data on regional and global scales, helps to accomplish such analyses. The satellite sensor output used in generating a vegetation graphic is produced, for instance, by the advanced very high-resolution radiometer (AVHRR). This sensor system measures the amount of energy reflected from the Earth's surface across different bands of the spectrum for surface areas of about 1 square kilometre. The satellite sensor produces images of a specific location on the Earth twice a day. AVHRR and, more recently, the moderate-resolution imaging spectroradiometer (MODIS) are two of the various sensor systems used for Earth surface analysis. More sensors are bound to follow, capable of generating even greater amounts of data.
8.6.4 Semantics
In information systems, tools and technologies emerging from the W3C’s Semantic Web
activity are proving useful for data integration problems. Correspondingly, such technologies
have been proposed to facilitate interoperability and data reuse among GIS applications and
also to enable new analysis mechanisms.
8.6.5 Society
With the growing popularity of GIS in decision-making, scholars have begun to scrutinize the social implications of GIS. It has been argued that the production, distribution, utilization and representation of geographic information are largely related to the social context. Other related topics include discussions of copyright, privacy and censorship. A more optimistic social approach to GIS adoption is to use it as a tool for public participation.
In the past, the design of geographic information systems (GIS) has been scrutinized in a bottom-up manner. At the same time, little consideration has been given to those system components with which users have immediate contact, such as languages for querying spatial objects or the user interface. Considerations about the interaction between users and spatial data are of primary significance for these issues. The domain of this discussion is the investigation of interactive spatial query languages that allow users to question a geographic information system in an ad hoc manner. It is motivated by the observation that traditional database query languages are insufficient for the treatment of spatial properties. The methodology is based on the user's interactions with spatial objects, which are graphically presented on a screen, and their pertinent operations. Objects and operations are rendered at the conceptual level of the user interface and complemented by the selection of appropriate techniques for interacting with spatial objects rendered on a screen. A number of spatial concepts crucial to the design of a GIS query language are presented, followed by a series of interface snapshots incorporating them into a human interface, simulating the interaction between a user and a GIS.

8.7 APPLICATIONS
Virtual reality is often used to describe a wide variety of applications, commonly associated with its immersive, highly visual, 3D environments. The development of CAD software, graphics hardware acceleration, head-mounted displays, data gloves and miniaturization have helped popularize the notion of VR. Michael Heim, in his book The Metaphysics of Virtual Reality, has identified seven different concepts of VR:
• Simulation
• Interaction
• Artificiality
• Immersion
• Telepresence
• Full-body immersion
• Network communication

The definition still has a futuristic romanticism attached to it. People often identify VR with head-mounted displays and data suits.

8.8 POTENTIAL
Virtual reality is a powerful technology that has potential for far-ranging social and
psychological impact. Disciplinary psychology and other social sciences should take a
proactive stance in relation to VR and conduct research to determine the outlines of this
potential impact, with the hope of affecting its direction. There are potential psychosocial
effects of a ‘seamless VR’ in relation to several societal domains: private experience, home
and family, and religion and spirituality. Engineering and social science professionals must
cooperate in research regarding the potential societal effects of VR.

CHECK YOUR PROGRESS


1. What is tactile interface?
2. What is attentive user interface?
3. Define modality.
4. Define mode.
5. What is raster?
6. Explain cartography.
7. What are OGC standards?

8.9 SUMMARY
• Virtual reality is a technology that allows a user to interact with a computer simulated
environment.
• The current virtual reality environments often are primarily visual experiences
displayed either on a computer screen or through special or stereoscopic displays.
Any user can interact with a virtual environment or a virtual artifact (VA) either through standard input devices (e.g., a keyboard and mouse) or through multimodal devices (e.g., a wired glove, the Polhemus boom arm or an omni-directional treadmill).
• The computer technology that enables us to develop three-dimensional virtual
environment (VEs) comprises both hardware and software.
• User interface (also known as human-computer interface or man-machine interface (MMI)) is the aggregate of means by which users interact with a system: a particular machine, device, computer program or other complex tool.
• A geographic information system (GIS), or geographical information system, captures, stores, analyses, manages and presents data that is linked to locations. Technically, a GIS includes mapping software and its application to remote sensing, land surveying, aerial photography, mathematics, etc.
• Virtual reality is often used to describe a wide variety of applications, commonly
associated with its immersive, highly visual and 3D environments.
• Virtual reality is a powerful technology that has potential for far-ranging social and
psychological impact. Disciplinary psychology and other social sciences should take a
proactive stance in relation to VR and conduct research to determine the outlines of
this potential impact, with the hope of affecting its direction.

8.10 KEY TERMS


• Virtual reality: A technology that allows a user to interact with a computer simulated
environment.
• Advanced very high-resolution radiometer: An instrument that measures the amount of energy reflected from the earth's surface across various bands of the spectrum for surface areas of about 1 square kilometre.
• Attentive user interfaces: An interface that manages the user’s attention deciding
when to interrupt the user, the type of warnings and the level of detail of the messages
presented to the user.
• Augmented reality: The use of transparent head-mounted displays that superimpose
synthetic elements on a view of the real surroundings.
• Batch interfaces: Non-interactive user interfaces. Here, the user specifies all the
details of the batch job in advance to batch processing and receives the output when
all the processing is done.
• Cartography: The design and production of maps or visually representing spatial
data. With the help of computers, a vast majority of modern cartography is done,
usually using a GIS.
• Command-line interfaces: Interfaces used by programmers and system administrators in engineering and scientific environments, and by technically advanced personal computer users.
• Graphical user interface: An interface that accepts input through devices such as the computer keyboard and mouse, and provides articulated graphical output on the computer monitor.
• Modality: A path of communication used by the user interface to carry input and
output.
• Mode: A distinct method of operation within a computer program, in which the same input may produce different perceived results depending on the state of the computer program.
• OGC standards: Publicly available geo-processing specifications developed by the Open Geospatial Consortium, an international consortium of 384 companies, government agencies, universities and individuals participating in a consensus process.
• Touch user interface: GUIs using a touchscreen display as a combined input and
output device. These are used in many types of point of sale, industrial processes and
machines, self-service machines, and so on.
• Web-based user interfaces or web user interfaces (WUI): Interfaces that accept
input and provide output by generating web pages that are transmitted via the Internet
and viewed by the user using a web browser program.

8.11 END QUESTIONS


Short –Answer Questions
1. What are the hardware components used in computer graphics?
2. What are the computation and data management issues in visual scene generation?
3. What is geometric modeling? How is construction and acquisition done?
4. What are GIS software programs?
5. What are the applications of multimedia and VR?
6. Write a short note on advanced very high resolution radiometer (AVHRR).

Long-Answer Questions
1. Discuss the fundamentals of multimedia and VR.
2. Explain the technological issues in multimedia and VR.
3. Discuss graphics capabilities in PC-based VE systems.
4. What is dynamic model matching and augmented reality?
5. What is user interface? Explain user interface in computing.
6. What do you understand by geographic information?

8.12 BIBLIOGRAPHY
Castranova, E. 2007. Exodus to the Virtual World: How Online Fun is Changing Reality. New York: Palgrave Macmillan.
Burdea, G. and P. Coffet. 2003. Virtual Reality Technology, 2nd edition. New Jersey: Wiley-IEEE Press.
Goslin, M. and J. F. Morie. 1996. 'Virtopia: Emotional Experiences in Virtual Environments', Leonardo, Vol. 29, No. 2, pp. 95-100.
UNIT 9 MULTIMEDIA:
APPLICATION AND FUTURE
Program Name:BSc(MGA)
Written by: Srajan
Structure:
9.0 Introduction
9.1 Unit Objectives
9.2 Applications of Multimedia
9.3 Future Applications
9.3.1 Bokode: The Better Barcode
9.3.2 Chameleon Guitar
9.3.3 GIRLs Involved in Real-Life Sharing
9.3.4 TOFU: A Squash and Stretch Robot
9.3.5 Merry Miser
9.3.6 Mycrocosm
9.3.7 Quickies
9.3.8 SixthSense
9.4 Summary
9.5 Key Terms
9.6 End Questions
9.7 Further Reading

9.0 INTRODUCTION
In this unit we are going to learn about multimedia, its applications in daily life and its future applications. As we know, multimedia is a combination of different media elements like text, graphics, audio and video. Nowadays multimedia is used in every field: education, advertising, medicine, business, etc. The rapid evolution of multimedia is the result of the emergence and convergence of all these technologies.
We are going to learn about different applications like Bokode, TOFU, Mycrocosm and Merry Miser, and how they are helpful to us in daily life.

9.1 UNIT OBJECTIVES:


After studying this unit you will be able to
• Describe the applications of multimedia
• Discuss the future of multimedia
9.2 APPLICATIONS OF MULTIMEDIA
Today, multimedia is ubiquitous in daily life, be it for pleasure and entertainment, work, learning, communication or whatever other purpose one can think of. From the smallest applications on the mobile phone to large-scale presentations in theatre halls, multimedia has made its presence felt in all spheres of life. It can be used for entertainment, corporate presentations, education, training, simulations, digital publications, museum exhibits, concerts and theatre, and much more.
Creation of multimedia has also become easier with a host of multimedia authoring
applications such as Flash, Shockwave and Director, which let authors create products that
are limited only by imagination.

9.3 FUTURE APPLICATIONS


It is noteworthy that augmented reality (AR) is an expensive development in technology. As a result, the future of AR depends on reducing its costs in one way or another. Once affordable, AR technology will be widespread. For now, however, major industries are the sole buyers, as they have the opportunity and resources to utilize it, as follows:
• Expanding a PC screen into the real environment: In real space, program windows and icons appear as virtual devices and are operated by eye or gesture, either by gazing or pointing. A single AR display can concurrently simulate a hundred conventional PC screens or application windows all around a user.
• Virtual devices of all kinds, for example, replacements for traditional screens and control panels, and entirely new applications that are impossible with real hardware, such as 3D objects that interactively change their shape and appearance based on the current task or requirement.
• Enhanced media applications, such as pseudo-holographic virtual screens,
virtual surround cinema, virtual holodecks (allowing interaction between
computer-generated imagery and live entertainers and audience).
• Virtual conferences in holodeck style.
• Replacement of cell phone and car navigator screens: Eye-dialing, inputting
information directly into the environment, for example, guiding lines directly
on the road and enhancements, such as X-ray views, etc.
• Virtual plants, panoramic views, wallpapers, decorations, artwork, illumination, and so on, enhancing everyday life. A virtual window, for example, can be displayed on a regular wall to show the live feed of a camera placed on the outer side of the building, effectively allowing the user to toggle the wall's transparency.
• With the emergence of AR systems in the mass market, Christmas decorations, advertisement towers, virtual window dressings, posters, traffic signs and much more may be created. These may be fully interactive even at a distance, for example, by eye pointing.
• Virtual gadgetry becomes possible. Any physical device currently produced to assist in data-oriented tasks, such as the clock, radio, PC, arrival/departure board of an airport, stock ticker, PDA, PMP, informational posters/fliers/billboards, in-car navigation systems, and so on, could be replaced by virtual devices that are inexpensive to produce aside from the cost of coding the software. Examples include a virtual wall clock, or a to-do list for the day docked by your bed to look at first thing in the morning.
• Group-specific AR feeds that can be subscribed to; for example, a construction manager at his site can create and dock instructions, including diagrams, at specific locations. The workers can refer to this feed of AR items as they work. Another example is patrons at a public event subscribing to a feed of direction- and information-oriented AR items.
9.3.1 Bokode: The Better Barcode

Fig 9.1: Bokode

Bokode is a new optical data-storage tag that takes up only 3 mm of space yet can store a million times more data than a barcode (Figure 9.1).
The typical barcodes on product packaging do one thing: disseminate information to the scanner at the checkout counter. Now, researchers at the Media Lab have come up with a tiny new barcode that can disseminate a variety of useful information to shoppers as they scan the shelves, and that can even lead to new devices for business meetings, classroom presentations, videogames or motion-capture systems.
The tiny labels, just 3 mm across, are about the size of the @ symbol on a computer keyboard, yet they can contain thousands of bits of information, far more than an ordinary barcode. Currently, they need a lens and a built-in LED light source; however, future versions could be made reflective, like the holographic images on credit cards, which would be much cheaper and less obtrusive.
One of the key advantages of the new labels is that, unlike today's barcodes, they can be read from a distance of up to a few metres. In addition, unlike today's labels, which require laser scanners, they can be read by any standard digital camera, such as the ones built into cell phones around the world.
9.3.2 Chameleon Guitar
Fig. 9.2: Chameleon Guitar
(Source: Webb Chappell)
This research implements a special guitar that combines physical acoustic properties with virtual capabilities. The acoustic sound is created by a wooden resonator, a replaceable and unique piece of wood that carries the acoustic values (Figure 9.2). The acoustic signal this wooden heart creates is digitally processed in a virtual sound box to create a flexible sound design.
Today's musical or graphical tools and instruments fall into two distinct classes, each
with its unique benefits and drawbacks:
• Traditional physical instruments: These instruments offer a richness and
uniqueness of qualities because of the unique properties of the physical
materials used. The hand-crafted construction qualities are also very important
for these tools.
• Electronic and computer-based instruments: These instruments lack the richness and uniqueness of traditional physical instruments. They produce predictable and generic results, but at the same time offer the advantage of flexibility; for example, many instruments can be embedded into one.
Here, a novel approach is proposed to design and build instruments that attempt to combine the best of both. The approach is characterized by a sampling of the instrument's physical matter along with its physical properties, complemented by a physically simulated, virtual shape. This method of building digital objects retains some of the rich qualities and variations found in real instruments (the blend of natural materials with craft) while maintaining the flexibility and open-endedness of digital ones.

9.3.3 GIRLs Involved in Real-Life Sharing


Fig 9.3: Reflecting Emotions

Girls Involved in Real-Life Sharing (GIRLS) allows users to actively reflect on the emotions related to their situations by constructing pictorial narratives (Figure 9.3). The system applies common-sense reasoning to derive effective digital content to support emotional reflection based on the users' stories. Users are able to gain new knowledge and understanding about themselves and others by exploring personal and authentic experiences. Currently, this project is being converted into an online system for school counselors to use.

9.3.4 TOFU: A Squash and Stretch Robot

Fig. 9.4: TOFU

TOFU aims to explore new ways of robotic social expression by using the techniques that 2D animators have been using for decades (Figure 9.4). Disney Animation Studios pioneered animation tools such as 'squash and stretch' and 'secondary motion' in the 1950s. Since then, animators have widely used such techniques, which are not commonly applied to the design of robots. TOFU, named after the similarly squashy and stretchy food product, can likewise squash and stretch. Clever use of elastic coupling complemented by compliant materials provides a vibrant yet robust actuation method. Instead of eyes actuated by motors, TOFU uses inexpensive OLED displays, which are highly dynamic and lifelike in motion.
Check your progress-1
1. What is augmented reality (AR)?
2. What is TOFU?
3. What is Bokode?
4. Define traditional physical instruments.
5. Define electronic and computer-based instruments.

9.3.5 Merry Miser


Merry Miser, a mobile application, helps users make better spending decisions. The application uses the context provided by a user's financial history and location to provide personalized interventions when the user is about to make an expenditure. The interventions, motivated by prior research in positive psychology, shopping psychology and persuasive technology, consist of:
• Information displays about context-relevant spending history
• Subjective assessments of past purchases
• Personal budgets
• Savings goals
Merry Miser provides interventions when a user is about to make an expenditure.
Locations and messages are personalized using users' financial history (see Figure 9.5).

Fig. 9.5: The Merry Miser Application

Advertisers and marketers use innovative ways to hawk their products to you, and never stop searching for new avenues. The chief manner of advertisement has been to appeal to our emotions, our irrationality and our fears; such techniques have been in use since the 1930s, convincing us and pushing us to the brink of the thought that we cannot be happy without the advertised products. Many of us might think that we are not susceptible to this manipulation, but studies and advertisers' success show that the strategy succeeds far too often.
Merry Miser attempts to work against this trend by providing contextual information that can help users to:
• Track their finances
• Maintain budgets
• Track how past purchases have made them feel
It relates users' expectations of how good a purchase is going to be to how good the purchase actually ends up being, allowing users to train themselves about their own assessments. It also promotes long-term, intelligent thinking in the face of manipulative marketers.

9.3.6 Mycrocosm
Fig 9.6: Sharing Data with Mycrocosm (example chart: 'Minutes Late for Work')


The weblog has grown in popularity, and the development of its many variants, such as vlogs, moblogs, photoblogs and tumblelogs, indicates that people are increasingly willing to share what they are seeing, thinking and doing. Micro-blogging has opened this space even further to those who would not regard themselves as authors at all. Services such as Twitter, and the status updates common to social networking sites, open up a form of publication whose scope is wide and whose audience is fundamentally amateur. Mycrocosm is a web service that applies the visual language of statistics to share small chunks of personal information, including individual numbers and words (Figure 9.6); such information is full of meaning in our lives. Mycrocosm allows users to record a wide variety of the minutiae of their daily lives, building up a rich online picture of the small things they find meaningful.

9.3.7 Quickies

Fig. 9.7: Sticky Notes


Quickies brings one of the most useful and ubiquitous inventions of the twentieth century, the sticky note, into the digital age (Figure 9.7). Sticky (Post-it) notes help us manage our to-do lists and hold short reminders and information needed in the near future; keeping track of them, though, is a task in itself. Quickies enriches Post-it notes, making them traceable and manageable: the stickies are given intelligence so that they can remind us of a task at exactly the right time. The application of RFID and ink-recognition technologies can be used to:
• Create intelligent sticky notes that are searchable
• Send reminders and messages
• Seamlessly connect physical and informational experiences.

9.3.8 SixthSense

SixthSense is a wearable gestural interface that enhances our physical world with digital information and lets us use natural hand gestures to interact with that information.
SixthSense brings intangible, digital information out into the tangible world and allows us to interact with it through natural hand gestures. By freeing information from its confines and seamlessly integrating it with reality, SixthSense makes the entire world your computer.

Fig. 9.8 Sixth Sense and Some of its Applications: Watching News Video,
Taking Photographs, Using a Map, Checking the Time, Drawing,
and Recognizing Gestures

The SixthSense prototype comprises a pocket projector, a mirror and a camera worn as a pendant-like mobile device. Both the camera and the projector are connected to a mobile computing device in the user's pocket. The system turns any surface into a digital one by projecting information onto the surfaces and physical objects around us. Using computer-vision-based techniques, the camera recognizes and tracks the user's hand gestures and physical objects. SixthSense employs simple computer-vision techniques to process the video stream data from the camera, following the locations of the coloured markers on the user's fingertips (which are used for visual tracking). The software then interprets the data as gestures for interaction with the projected application interfaces. A rough sketch of this kind of marker tracking appears below.
The recent SixthSense prototype supports various types of gesture-based interactions, demonstrating the viability, flexibility and usefulness of the system. The cost of building the current prototype is approximately $350.
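The actual SixthSense implementation is not reproduced in this text; the following is only an illustrative OpenCV sketch of the general technique of tracking a coloured fingertip marker in a video stream:

# An illustrative sketch (not the actual SixthSense code) of tracking a
# coloured fingertip marker in a live video stream with OpenCV.
import cv2

cap = cv2.VideoCapture(0)                      # default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Threshold for a red-ish marker; the exact bounds depend on the marker.
    mask = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))
    m = cv2.moments(mask)
    if m["m00"] > 0:                           # marker found: compute centroid
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        print("marker at", cx, cy)             # would feed gesture recognition
    cv2.imshow("marker mask", mask)
    if cv2.waitKey(30) & 0xFF == 27:           # press ESC to quit
        break
cap.release()
cv2.destroyAllWindows()

The stream of centroid positions over time is the raw material a gesture recognizer would interpret, much as the prototype interprets its marker tracks.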

Check your progress-2


1. What is Mycrocosm?
2. What is Merry Miser?

9.4 SUMMARY
• Multimedia comprises the integration of text, graphics, animation, video and audio to provide the user with high levels of control.
• Some new developments in multimedia technology include the Bokode and the Chameleon Guitar.

9.5 KEY TERMS


• Augmented reality (AR): Live direct or indirect view of the real world
environment whose elements are augmented by virtual computer generated
imagery.
• Bokode: An optical data-storage tag taking up only 3 mm of space that can store a million times more data than a barcode.

9.6 END QUESTIONS


1) Write a note on Bokode.
2) Explain the applications of multimedia.
3) Write a brief note on the future of multimedia systems.
4) Write the advantages and disadvantages of both traditional and computer-based instruments.
5) What is the Merry Miser application? Why is it important?
6) What is Mycrocosm?
7) How are sticky notes important?
8) Write a note on SixthSense.

Answer to check your progress questions


Check your progress-1:
1. Augmented reality (AR) is a live direct or indirect view of the real-world environment whose elements are augmented by virtual computer-generated imagery.
2. TOFU aims to explore new ways of robotic social expression by using the techniques that 2D animators have been using for decades.
3. Bokode is an optical data-storage tag taking up only 3 mm of space that can store a million times more data than a barcode.
4. Traditional physical instruments offer a richness and uniqueness of qualities because of the unique properties of the physical materials used.
5. Electronic and computer-based instruments produce predictable and generic results, but at the same time offer the advantage of flexibility; for example, many instruments can be embedded into one.
Check your progress-2:
1. Mycrocosm is a web service that applies the visual language of statistics to share small chunks of personal information, including individual numbers and words.
2. Merry Miser is a mobile application that helps users make better spending decisions. It uses the context provided by a user's financial history and location to provide personalized interventions when the user is about to make an expenditure.

9.7 FURTHER READING

1. Castranova, E. 2007. Exodus to the Virtual World: How Online Fun is Changing Reality. New York: Palgrave Macmillan.
2. Burdea, G. and P. Coffet. 2003. Virtual Reality Technology, 2nd edition. New Jersey: Wiley-IEEE Press.
3. Goslin, M. and J. F. Morie. 1996. 'Virtopia: Emotional Experiences in Virtual Environments', Leonardo, Vol. 29, No. 2, pp. 95-100.
4. Grau, Oliver. 2003. Virtual Art: From Illusion to Immersion (Leonardo Book Series). Cambridge, Mass.: MIT Press.
5. Hillis, Ken. 1999. Digital Sensations: Space, Identity and Embodiment in Virtual Reality. Minneapolis, MN: University of Minnesota Press.
6. Kalawsky, R. S. 1993. The Science of Virtual Reality and Virtual Environments: A Technical, Scientific and Engineering Reference on Virtual Environments. Reading, Mass.: Addison-Wesley.
