Introduction to Multimedia and Its Applications
1.0 INTRODUCTION
In this first unit of the course, we are going to learn what multimedia actually is. This lesson introduces the preliminary concepts of multimedia and discusses its various benefits and applications. After going through this chapter, you will be able to understand why multimedia is important in today's world.
The term 'multimedia' is a combination of two words: 'multiple' and 'media'. Multimedia is a combination of different media elements such as text, video, audio and graphics. Nowadays multimedia is used in every field: education, advertising, medicine, business and so on. The rapid evolution of multimedia today is the result of the emergence and convergence of all these technologies.
In this unit you will come to know about the various trends in multimedia systems, the challenges faced by multimedia systems, the features of multimedia systems and the various components of a multimedia system.
Multimedia
Multimedia is an application that uses a collection of multiple media sources, e.g. text, graphics, images, sound/audio and video. Described in simple language, multimedia is "more than one medium." In other words, multimedia is an integration of text, images, sound and movement.
Multimedia System
A multimedia system is a computer system that has the ability to store, compress, decompress, capture, digitize and present information. The main purpose of a multimedia system is to provide a creative and effective way of producing, storing and communicating information. Marketing, training, education and entertainment are among the areas where multimedia is used.
Multimedia systems may have to render a variety of media at the same time, and there is a material relationship between many forms of media (e.g. video and audio). This may create problems such as the following:
The main issues that multimedia systems need to deal with are as follows:
• How to represent and store temporal information
• How to strictly maintain the temporal relationships on playback/retrieval
• What processes are involved
To digitize many initial sources of data, one must translate the data from an analog source into a digital representation. This involves sampling (for audio and video) and scanning (for graphics and still images), although digital cameras now allow direct scene-to-digital capture of images and video.
The data involved is large: audio and video easily run to several megabytes, so storage, transfer (bandwidth) and processing overheads are high. Data compression techniques are therefore very common and are regularly employed to reduce these large volumes of data.
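To get a feel for why compression is needed, the data rates involved can be worked out directly. The following Python sketch (the function names are ours, chosen for illustration) computes the uncompressed data rate for CD-quality audio and for standard-definition video:

```python
def audio_bytes_per_second(sample_rate, channels, bytes_per_sample):
    """Uncompressed (PCM) audio data rate in bytes per second."""
    return sample_rate * channels * bytes_per_sample

def video_bytes_per_second(width, height, bytes_per_pixel, fps):
    """Uncompressed video data rate in bytes per second."""
    return width * height * bytes_per_pixel * fps

# CD-quality stereo audio: 44,100 samples/s, 2 channels, 2 bytes per sample
cd_audio = audio_bytes_per_second(44_100, 2, 2)     # 176,400 bytes/s (~10 MB per minute)

# Uncompressed 640x480 true-color video at 25 frames per second
sd_video = video_bytes_per_second(640, 480, 3, 25)  # 23,040,000 bytes/s (~22 MB per second)
```

Even at this modest resolution, one minute of raw video exceeds a gigabyte, which is why compression is employed so routinely.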
Capture Devices
The various capture devices required for multimedia include video cameras, video recorders, audio microphones, keyboards, mice, graphics tablets, 3D input devices, tactile sensors, VR devices, and digitizing/sampling hardware.
Storage Devices
The storage devices required include hard disks, CD-ROMs, Jaz/Zip drives, DVDs, etc.
Communication Networks
Ethernet, Token Ring, FDDI, ATM, intranets, the Internet.
Computer Systems
Multimedia Desktop machines, Workstations, MPEG/VIDEO/DSP Hardware
Display Devices
The display devices include CD-quality speakers, HDTV, SVGA and high-resolution monitors, color printers, etc.
These technologies are developing very fast to support the ever-increasing needs of multimedia. Switching, protocol, carrier, application, coding/compression, database, processing and system integration technologies are at the forefront of this development.
• It can be used to help students and teachers to teach and learn given topics easily.
• It can be used by different kinds of people; knowledge can easily be spread all over the world, from one person to a whole group.
• It is very user friendly, and you can multitask while using multimedia.
• It is easy to carry multimedia files from place to place, as they can be stored on cheap and light storage devices such as CD-ROMs.
• It is integrated and interactive, and can be used for any subject and for anyone.
• It can be used in television, the film industry and for personal entertainment.
• It is also used on the Internet to build interactive web-page content.
• A lasting impression can be made on the intended audience on a specific topic through the use of multimedia.
• Colored pictures, motion pictures and other graphics can be shown on monitors and other big screens so that many people can view them and form an impression.
• Multimedia systems are generally very interactive, which makes them interesting to use.
1.7 SUMMARY
The newspaper was the first mass communication medium to use multimedia as a tool.
Multimedia also describes the electronic media devices employed to store and experience multimedia content.
To support multimedia applications over a computer network, multimedia enables distributed applications.
2.0 INTRODUCTION
In this unit we are going to learn what a multimedia system actually is and how multimedia is used in different fields. Multimedia is mainly divided into two categories: (i) interactive and (ii) non-interactive. After going through this unit you will come to know how interactivity is important in multimedia, and you will understand the different interactive mediums.
In today's world multimedia is very important, and we often use it in our daily work. In this unit we will come to know about the different sectors where multimedia is used.
C. Presentations:
A presentation is the practice of explaining the content of a topic to a learner or an audience. A presentation program, such as Microsoft PowerPoint, is commonly used to create the presentation content.
Useful examples of presentations:
• BCG matrix: This diagram is extremely helpful in marketing, although most students only learn the fundamentals of the diagram at the undergraduate level.
• SWOT analysis: This is useful in business and states a problem effectively and precisely.
D. Computer Games:
A computer game, also known as a PC game, is a game played on a personal computer, an arcade machine or a video game console. PC games are developed by one or more game developers, often in collaboration with other specialists. These games may then be distributed on physical media, such as CDs and DVDs, or through online delivery services, possibly as freely redistributable software. PC games normally require specialized hardware in the user's computer in order to play advanced games, such as a specific generation of graphics processing unit, and an Internet connection for online play.
E. Movie Theatre:
A movie theater, cinema theatre or picture theatre is a place, normally a building, for viewing motion pictures, called films or movies. Movie theaters are commercial operations serving the common people for their entertainment. With the help of a movie projector, the movie is projected onto a large projection screen at the front of the auditorium, while the dialogue, sounds and music are played through a number of wall-mounted speakers. Some movie theatres are now equipped for high-end digital cinema projection.
F. Spatial navigation:
Spatial navigation is part of our daily life: we navigate through real objects such as buildings, trees, etc. In the field of computing, spatial navigation is the ability to navigate between focusable elements, such as form controls and hyperlinks, within a user interface. Spatial navigation is widely used in multimedia applications such as web pages, interactive 2D maps and computer games.
Previously, web browsers used tabbing navigation to change the focus within an interface: pressing the Tab key on a computer keyboard focuses the next element (and Shift + Tab the previous one). The order is based on that of the source document. With unstyled HTML this method normally works, as the spatial location of an element follows its order in the source document. However, with the introduction of styling through presentational attributes or style sheets such as CSS, this type of navigation has become less useful. Spatial navigation instead uses the arrow keys (with one or more modifier keys held) to navigate on the "2D plane" of the interface. For example, pressing the "up" arrow key focuses the closest focusable element above the current one. In many cases, a lot of key presses can be saved.
This feature is available in Opera's web browser. It gives users a faster way to jump to different areas of long articles or web pages without manually scanning and scrolling with their eyes. Moreover, when scanning long text pages, Opera also permits users to jump directly to sub-headers using the S key and to move back up using the W key. Different applications support different levels of spatial navigation.
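The arrow-key behaviour described above can be sketched in a few lines. The Python function below is a simplified, hypothetical model (real browsers use more elaborate geometry): given the screen positions of the focusable elements, it picks the nearest one in the direction of the pressed key.

```python
def navigate(elements, current, direction):
    """Pick the nearest focusable element in the given direction.

    elements: dict of name -> (x, y) screen position
    direction: 'up', 'down', 'left' or 'right'
    """
    cx, cy = elements[current]
    if direction == "up":
        candidates = {n: p for n, p in elements.items() if p[1] < cy}
    elif direction == "down":
        candidates = {n: p for n, p in elements.items() if p[1] > cy}
    elif direction == "left":
        candidates = {n: p for n, p in elements.items() if p[0] < cx}
    else:  # "right"
        candidates = {n: p for n, p in elements.items() if p[0] > cx}
    if not candidates:
        return current  # nothing focusable in that direction; focus stays put
    # Choose the candidate closest to the current element (squared distance)
    return min(candidates,
               key=lambda n: (candidates[n][0] - cx) ** 2 + (candidates[n][1] - cy) ** 2)

# A hypothetical page with four focusable elements
links = {"header": (100, 10), "menu": (10, 50), "article": (100, 50), "footer": (100, 200)}
navigate(links, "article", "up")    # -> "header"
navigate(links, "article", "left")  # -> "menu"
```

This illustrates why spatial navigation saves key presses: one arrow press replaces a whole chain of Tab presses through intervening elements.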
Multimedia is used in the field of mass media, i.e. journalism; there are many magazines and newspapers that are published periodically. Publishing houses also use multimedia for newspaper design and other work. Nowadays it is not only text that we see in a newspaper; we also see photographs, which shows how important and worthwhile multimedia is today.
b. Commercial
Much of the electronic old and new media used by commercial artists is multimedia. In the field of advertising, multimedia plays a great and vital role: exciting presentations are used to grab and keep attention. Multimedia is a very powerful tool for business communication and helps to increase its quality. Commercial multimedia developers may also be hired to design for nonprofit and government service applications as well.
c. Entertainments and fine arts:
Multimedia is heavily used in the entertainment industry, especially to develop special effects in movies; movies such as Jurassic Park, Ice Age and Avatar are remembered for their special effects and animation. Multimedia games, basically software programs available on CD or online, are very popular among children, and some video games are also based on multimedia features. We can also see the use of multimedia in art galleries, where different pictures are displayed. Though multimedia display material may be volatile, the preservation of the content is as strong as with traditional media.
2.5.2 Education
Multimedia is important in the area of education as well. Talking particularly about schools, multimedia education is very important for children. In education, multimedia is used to produce computer-based training courses (popularly called CBTs) and reference books such as encyclopedias and almanacs. A CBT allows the user to go through a series of presentations, text about a particular topic, and associated illustrations in various information formats. 'Edutainment' is an informal term used to describe the mixing of entertainment with education, especially multimedia entertainment.
In the past decade, learning methods have improved a lot because of the introduction of multimedia. Various fields of research have evolved (e.g. multimedia learning and cognitive load, and the list goes on). The possibilities for learning and instruction are nearly endless.
2.7 SUMMARY
Multimedia finds application in various areas including education, advertisements, art, engineering, medicine, entertainment, business, mathematics, scientific research and spatio-temporal applications.
The e-learning (electronic learning) field creates a dynamic environment that stimulates learners through self-directed training. E-learning accomplishes self-directed learning by utilizing animation authoring tools to develop interactive multimedia.
PC games are developed by one or more game developers, often in collaboration with other specialists. These games may then be distributed on physical media, such as CDs and DVDs, or through online delivery services, possibly as freely redistributable software.
The WWW is the best example of hypermedia, whereas a non-interactive cinema presentation is a classic example of standard multimedia because it lacks hyperlinks.
Multimedia and the Internet require a completely new approach to writing. The style of writing suitable for the 'online world' is highly optimized and designed to be quickly scanned by readers.
3.0 INTRODUCTION
In the last two units we learned about multimedia systems and their applications. In this unit we are going to learn about computer graphics. Computer graphics has in fact become a bridge between computers and humans. Today almost everything has become digital, and the old ways of doing things are almost gone. In the 1950s computer graphics was used only on a small scale due to limitations in technology, but in 1951 the Massachusetts Institute of Technology (MIT) developed a mainframe computer called Whirlwind that became the origin of computer graphics. Computer graphics has grown on a large scale over the past 40-45 years and has now become an important medium of communication in computer applications.
In our day-to-day lives, computer graphics play an important role. Computer imagery is found everywhere: in newspapers, photography, television, weather reports and medical applications for all kinds of surgical procedures. Computer graphics can be a single image or a series of images called video. The main purpose of computer graphics is to visualize real objects with the help of a computer. Computer graphics are divided into three types: two-dimensional (2D), three-dimensional (3D) and animated graphics.
Computer graphics are generated using computers and, in general, involve the representation and manipulation of pictorial data by a computer. Computer graphics are used in every field. Computer graphics can be a single image or a series of images called video, and can be categorized into two-dimensional (2D), three-dimensional (3D) and animated graphics. A specialized subfield of computer science has emerged that studies methods for manipulating and digitally synthesizing visual content.
Instead of drawing on paper, many artists and designers use computer graphics. Using computer graphics, it becomes easier to scale, save and edit an image. We can easily swap the colors, print the image and upload it to the web.
3.2.1 History of Computer Graphics
In 1950, Verne Hudson and William Fetter, graphic designers working at Boeing, created computer graphics as a visualization tool. They developed computer graphics tools for engineers and scientists; this is also called computer-generated imagery, or CGI. In 1951, Jay Forrester and Robert Everett of the Massachusetts Institute of Technology (MIT) developed a mainframe computer called Whirlwind. The Whirlwind was the first computer with a real-time video display.
In 1955, the light pen was introduced; a light pen can work with any CRT-based display. In 1961, Ivan Sutherland developed the Sketchpad software, also called Robot Draftsman. The Sketchpad application allowed the user to draw sketches on a computer using a light pen. Ivan Sutherland is considered the 'grandfather' of the graphical user interface (GUI) and interactive computer graphics because of his development of Sketchpad. Later, virtual reality equipment and flight simulators were also developed by Sutherland.
In 1977, graphic designers and artists began to understand the importance of the personal computer. The Apple II was the first graphics personal computer. In 1981, Quantel, a UK-based company, developed software called Paintbox, a revolutionary computer graphics program. With the help of this application, filmmakers and TV producers could edit and manipulate video images digitally.
Fig 3.2 Apple-II Personal Computer
In the late 1980s, Silicon Graphics (SGI) computers were employed to design some of the computer-generated short films at Pixar. In 1982, TRON was the first movie in which computer graphics were used extensively; in TRON, computer graphics imagery is mixed with live action.
The first version of Photoshop, one of the most popular graphics programs, was developed in 1990; another graphics program, Paintshop, was launched the same year. In the 1990s, 3D graphics became popular in multimedia, gaming and animation. 1996 saw the release of fully 3D games; Quake was one of those games and became very popular. In 1995, Pixar Animation Studios produced the movie Toy Story, which used CGI graphics in an impressive way. Due to the advancement of 3D modeling software and powerful graphics hardware, computer graphics have become more interactive and true to life.
Generally, such data are obtained by sampling in a regular pattern. This is an example of a regular volumetric grid, with each voxel (volumetric element) represented by a single value obtained by sampling the close area that surrounds the voxel.
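As a small illustration of such a grid, the Python sketch below builds a regular n x n x n volume in which each voxel holds one sampled scalar value. The sampled field here (the distance of the voxel from the grid's centre) is an assumption chosen purely for illustration:

```python
def make_voxel_grid(n):
    """Return an n x n x n regular grid of scalar samples as nested lists.

    Each voxel stores one value: its distance from the grid's centre,
    a simple stand-in for real sampled volume data (e.g. a CT scan).
    """
    c = (n - 1) / 2.0  # coordinate of the grid's centre
    return [[[((x - c) ** 2 + (y - c) ** 2 + (z - c) ** 2) ** 0.5
              for z in range(n)]
             for y in range(n)]
            for x in range(n)]

grid = make_voxel_grid(3)
grid[1][1][1]  # the centre voxel samples distance 0.0
```

Every voxel is addressed by three indices, which is exactly what "regular volumetric grid" means: the samples sit at evenly spaced positions in all three dimensions.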
3.3.10 3D Modeling
The process of developing a mathematical, wireframe representation of any three-dimensional object using specialized software is called 3D modeling. Models are created automatically or manually; the manual modeling process of preparing geometric data for 3D computer graphics is similar to that of the plastic arts, such as sculpting. 3D models are created using multiple approaches, such as:
i. NURBS curves for generating accurate and smooth surface patches.
ii. Polygonal mesh modeling for manipulation of faceted geometry.
iii. Polygonal mesh subdivisions are advanced tessellations of polygons that result in
generation of smooth surfaces like that of NURBS models.
A 3D model can be displayed as a two-dimensional image through the process of 3D rendering. Models can also be used in a computer simulation of physical phenomena, or animated directly for other purposes. To physically create the model, a 3D printing device is required.
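A polygonal mesh of the kind mentioned in approach (ii) can be illustrated with the usual indexed-face-set layout: a list of vertices and a list of faces that index into it. The Python sketch below (the variable names are ours) describes a unit cube:

```python
# Vertex list: the eight corners of a unit cube as (x, y, z) coordinates
cube_vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),  # bottom four corners
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),  # top four corners
]

# Face list: each face is a quad given as four indices into cube_vertices.
# Sharing vertices between faces is what makes the mesh compact and editable.
cube_faces = [
    (0, 1, 2, 3),  # bottom
    (4, 5, 6, 7),  # top
    (0, 1, 5, 4),  # front
    (2, 3, 7, 6),  # back
    (1, 2, 6, 5),  # right
    (0, 3, 7, 4),  # left
]

len(cube_vertices), len(cube_faces)  # (8, 6)
```

Moving one entry in the vertex list deforms every face that references it, which is exactly how polygonal mesh modeling lets an artist manipulate faceted geometry.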
What is a pixel?
What is ray tracing?
Define shading.
What is texture mapping?
What is rendering?
3.4 2D GRAPHICS
2D computer graphics are computer-based conceptions of digital images, mostly produced from two-dimensional models (such as 2D geometric models and digital images) and by techniques specific to them.
2D graphics are used in many applications that are based on printing and drawing technologies, such as cartography, typography and advertising. Two-dimensional models are preferred by many professionals because controlling the image is easier than with three-dimensional computer graphics.
In this section you will learn about two types of 2D computer graphics.
3.4.1 Raster Graphics (Pixel Art)
In computer graphics, raster images are made up of a grid of pixels, where a pixel is a packet of color. Together these pixels form boxes of color that create an overall finished image. A raster image is also called a bitmap image because it contains information that is directly mapped to the display grid. Most of the pictures or images imported from a digital camera are raster images.
Raster images have a finite set of pixels, called picture elements. This type of digital image contains a fixed number of rows and columns, and each pixel is assigned a specific value that determines its color. The raster image system uses the red, green, blue (RGB) color system. In general, the pixels form a two-dimensional array: the image has a width and a height, and the ratio of this width to this height is called the aspect ratio.
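The structure just described can be illustrated directly: a raster image is a fixed grid of rows and columns in which every pixel stores an (R, G, B) value. The Python sketch below builds a tiny, assumed 4 x 2 image of uniform red pixels:

```python
WIDTH, HEIGHT = 4, 2          # fixed number of columns and rows
RED = (255, 0, 0)             # one pixel value in the RGB color system

# The image is a two-dimensional array: HEIGHT rows of WIDTH pixels each
image = [[RED for _ in range(WIDTH)] for _ in range(HEIGHT)]

image[0][0]     # -> (255, 0, 0): the pixel at row 0, column 0
WIDTH / HEIGHT  # -> 2.0: the width-to-height (aspect) ratio of this image
```

Because the pixel count is fixed, enlarging such an image can only stretch the existing grid, which is why raster images lose sharpness when scaled up.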
3.4.2.3 Applications
These days, the term 'vector graphics' is commonly used in the context of two-dimensional computer graphics. It is one of the ways for an artist to produce an image on a raster display; other ways include text, 3D rendering and multimedia. Almost all modern 3D rendering is achieved through extensions of 2D vector graphics techniques.
The bitmap data is an array of bytes that comes after the color table. The array is organized as consecutive rows, called 'scan lines'. Each scan line contains the bytes that represent its pixels, and the scan order always goes from left to right; the number of bytes in a scan line depends on the pixel width and color format. The scan lines are stored starting in the lower-left corner, going from left to right and then row by row from the bottom to the top. So the first byte represents the pixels in the lower-left corner and the last byte represents those in the upper-right corner.
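This layout can be made concrete. The Python sketch below assumes an uncompressed 24-bit BMP, where each scan line is padded to a multiple of 4 bytes and rows are stored bottom-up; the padding rule is part of the BMP format, while the function names are ours:

```python
def scan_line_bytes(width, bytes_per_pixel=3):
    """Bytes per scan line, padded up to a 4-byte boundary (BMP rule)."""
    return ((width * bytes_per_pixel + 3) // 4) * 4

def pixel_offset(x, y, width, height, bytes_per_pixel=3):
    """Byte offset of pixel (x, y) in the array; y = 0 is the TOP row."""
    stored_row = height - 1 - y  # rows are stored bottom-up
    return stored_row * scan_line_bytes(width, bytes_per_pixel) + x * bytes_per_pixel

scan_line_bytes(5)        # 5 pixels * 3 bytes = 15, padded to 16 bytes
pixel_offset(0, 3, 5, 4)  # bottom-left pixel comes first in the array: offset 0
pixel_offset(0, 0, 5, 4)  # top-left pixel sits in the LAST stored row: 3 * 16 = 48
```

The bottom-up ordering explains the statement above: the first byte of the array belongs to the lower-left corner of the picture, not the upper-left.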
3.8.3 Usage of BMP Format and Related Formats
The simplicity of the BMP file format is the main reason for its widespread usage in Windows and other operating systems. The fact that this format is relatively well documented and free of patents makes it a convenient format for image processing programs.
The X Window System uses the similar XBM format for black-and-white images and XPM (pixelmap) for color images. The Portable Pixmap (PPM) and Truevision TGA formats also exist but are used less often, or only for special purposes; for example, TGA can also contain transparency information. There are also a variety of 'raw' formats, which save raw data with no other information.
Joint Photographic Experts Group (JPEG) is mostly used for images created by digital photography. JPEG is a compression method, and JPEG-compressed images are usually stored in the JFIF (JPEG File Interchange Format) format. JPEG is a very common format for storing and transmitting photographic images on the web. JPEG compression is a lossy compression. The JPEG filename extension is .jpg in the DOS operating system; other operating systems may use the .jpeg extension.
II. Exif
Exchangeable image file format (Exif) is a file standard similar to the JFIF format, with .tiff extensions. It is incorporated in the JPEG-writing software used in most cameras. The metadata recorded for individual images include the name of the camera, camera settings, time and date, shutter speed, exposure, compression, image size, color information, etc.
III. TIFF
V. PNG
The Portable Network Graphics (PNG) file format was created as the free, open-source successor to GIF. The PNG format supports true color (16 million colors), while GIF supports only 256 colors.
VI. GIF
Graphics Interchange Format (GIF) is limited to an 8-bit palette, or 256 colors. This makes the GIF format suitable for storing graphics with relatively few colors, such as simple diagrams, shapes, logos and similar stylized images.
VII. BMP
The BMP file format handles graphics files within the Microsoft Windows OS. In
general, BMP files are uncompressed and large. The advantage is their simplicity and
wide acceptance in Windows programs.
VIII. PPM,PBM and PGM
The Netpbm format is a family that includes the portable pixmap file format (PPM), the portable bitmap file format (PBM) and the portable graymap file format (PGM).
Others
Other image file formats for raster type are as follows:
• Inter Leaved Bit Map (ILBM)
• TARGA
• Personal Computer eXchange (PCX)
• Chasys Draw Image (CD5)
• Enhanced Compression Wavelet (ECW)
• Flexible Image Transport System (FITS)
3.10.1.3 3D Formats
Some of the 3D formats are as follows:
I. PNS
The PNG Stereo (.pns) format consists of a side-by side image based on Portable
Network Graphics (PNG).
II. JPS
The JPEG Stereo (.jps) format consists of a side-by-side image format based on
JPEG.
III. MPO
Also known as Multi Picture Object (MPO), this file format was first used in the FinePix REAL 3D W1 camera made by FujiFilm. The format has been proposed as an open standard, CIPA DC-007-2009, by the Camera and Imaging Products Association (CIPA).
3.11 SUMMARY
Computer graphics are generated using computers and, in general, involve the representation and manipulation of pictorial data by a computer.
Graphics is a combination of text, color and illustration. Graphics refers to the visual presentation of any object on a surface such as a wall, canvas, computer screen, map, drawing or photograph. Some more examples of graphics are line art, graphs, numbers, geometric designs, typography and engineering drawings. The main objective of graphics is to create effective communication, drawing on other cultural elements to create a unique style.
A pixel in digital imaging is a single point in a raster image.
Rendering is the process of producing output from the computer; specifically, it is the process of generating a 2D image from a 3D model.
Shading is depicting the depth in 3D models or illustrations by varying the levels of
darkness.
2D computer graphics are computer-based conceptions of digital images, mostly produced from two-dimensional models (such as 2D geometric models and digital images) and by techniques specific to them.
A raster image is also called a bitmap image because it contains information that is directly mapped to the display grid. Most of the pictures or images imported from a digital camera are raster images.
Vector graphics are made up of geometric primitives such as points, lines, curves and shapes, all based on mathematical equations, to represent images in computer graphics.
3D computer graphics utilize a three-dimensional representation of geometric data that is stored in the computer for the purpose of performing calculations and rendering 2D images. Such images may be stored for later display or viewed in real time.
Vector editors are often compared with raster graphics editors. Vector editors are better for page layout, graphic design, typography, sharp-edged artistic illustrations, logos, technical illustrations, and making diagrams and flow charts.
Raster editors are more suitable for retouching, photo realistic illustrations, photo
processing and collage. Several contemporary illustrators use Corel Photo-Paint and
Photoshop software to create all kinds of illustrations.
1.0 INTRODUCTION
Animation is one of the most universal and all-permeating forms of visual communication today, seen everywhere from the TV channels dedicated exclusively to cartoons, to the title sequences of our favorite movies, to the reactive graphic interfaces of our smartphones. Animation is a fast-growing and exciting area.
Animation is the method of creating the illusion of motion and change by means of the rapid display of a sequence of static images that differ slightly from each other. The illusion of movement in animation is created by a physiological phenomenon called persistence of vision: as many images pass before the eye in quick succession, the eye retains each one briefly and the brain interprets the sequence as continuous motion. Each image is shown for only a small fraction of a second.
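To put numbers on this, the time each frame stays on screen follows directly from the frame rate. A small Python sketch (the function name and the sample rates are illustrative):

```python
def frame_duration_ms(fps):
    """How long a single frame stays on screen, in milliseconds."""
    return 1000.0 / fps

frame_duration_ms(24)  # film runs at 24 frames/s: about 41.7 ms per frame
frame_duration_ms(50)  # at 50 frames/s each frame lasts exactly 20 ms
```

At these durations each individual image is gone before the eye's retained impression of the previous one fades, which is what persistence of vision requires for smooth apparent motion.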
Computer animation is the use of computers to create animations. There are a few
different ways to make computer animations. An early step in the history of computer
animation was the sequel to the 1973 film Westworld, a science-fiction film about a society in
which robots live and work among humans.
Developments in CGI technologies are reported each year at SIGGRAPH, an annual
conference on computer graphics and interactive techniques that is attended by thousands of
computer professionals each year.
In this unit, you will learn about computer animation along with some early animation
techniques and their differences. You will learn various types of animation and software used
for it. You will also see the brief history of leading animation studios.
• Zoetrope: It is a device that produces the illusion of motion from a rapid succession of static pictures. A circle of paper with drawings on it was used, each drawing slightly different from the previous one. This paper was placed in the zoetrope, which had small slits around the outside. When the zoetrope was spun around, the pictures would pass each of the slits at speed, creating the illusion of movement.
Other animation approaches
• Hydrotechnics: It is a technique that contains lights, water, fire, fog, and lasers, with
high-definition projections on mist screens.
• Drawn-on-film animation: This technique involves scratching or etching directly onto an exposed film reel.
• Paint-on-glass animation: It is a technique for making animated films by
manipulating slow drying oil paints on sheets of glass.
• Pinscreen animation: It makes use of a screen filled with movable pins that can be moved in or out by pressing an object onto the screen. The screen is lit from the side so that the pins cast shadows. The technique has been used to create animated films with a range of textural effects that are difficult to achieve with traditional cel animation.
• Sand animation: In this kind of method, sand is moved around on a back- or front-
lighted piece of glass to create each frame for an animated film. This creates an
interesting effect when animated because of the light contrast.
3D Animation Software
1. Maya: Maya is owned by Autodesk. It is currently regarded as the industry standard for 3D animated movies, games, television and computer-generated 3D effects used in live entertainment. Maya is always up to date and fully featured, and is an ideal program for those who are motivated to become proficient animators. It allows a huge array of shading and lighting effects, because of which Maya is fast becoming the favored choice of software for many filmmakers. Another of its best features is easy customization, meaning you can easily integrate third-party software.
Features of Maya
The main advantage is that anyone can make, modify and re-upload content to and from the 3D Warehouse free of charge. All the models in 3D Warehouse are free, so anyone can download files for use in SketchUp or even other software such as AutoCAD, Revit and ArchiCAD, all of which have apps allowing the retrieval of models from 3D Warehouse.
Since 2014, Trimble has provided a version of 3D Warehouse where companies may have an official page with their own 3D catalog of products. Trimble is currently investing in creating 3D developer partners in order to have more professionally modeled products available in 3D Warehouse. According to Trimble, 3D Warehouse is the most popular 3D content site on the web.
6. ZBrush: Unlike other traditional 3D animation software, ZBrush is a digital sculpting
tool. It combines 3D/2.5D modeling, texturing and painting. It uses a proprietary "pixol"
technology which stores lighting, color, material, and depth information for all objects on
the screen.
ZBrush is used for creating high-resolution models (able to reach 40+ million
polygons) for use in movies, games, and animations, by various companies. ZBrush uses
dynamic levels of resolution to allow sculptors to make global or local changes to their
models.
ZBrush is best known for being able to sculpt medium- to high-frequency details that were traditionally painted in bump maps. The resulting mesh details can then be exported as normal maps to be used on a low-poly version of that same model. They can also be exported as a displacement map, although in that case the lower-poly version generally requires more resolution. Or, once completed, the 3D model can be projected to the
background, becoming a 2.5D image (upon which further effects can be applied). Work
can then begin on another 3D model which can be used in the same scene. This feature
lets users work with complicated scenes without heavy processor overhead.
Features of ZBrush
• Pixol: Like a pixel, each pixol contains information on X and Y position and color
values. Additionally, it contains information on depth (or Z position), orientation and
material. ZBrush related files store pixol information, but when these maps are
exported (e.g., to JPEG or PNG formats) they are flattened and the pixol data is lost.
• 3D Brushes: ZBrush comes with many features to aid in the sculpting of models and
polygon meshes. The initial ZBrush download comes with 30 default 3D sculpting
brushes with more available for download. Each brush offers unique attributes as well
as allowing general control over hardness, intensity, and size.
• Polypaint: It allows users to paint on an object's surface without the need to first
assign a texture map by adding color directly to the polygons.
• Transpose: ZBrush also has a feature that is similar to skeletal animation in other 3D
programs. The transpose feature allows a user to isolate a part of the model and pose
it without the need of skeletal rigging.
• ZSpheres: A user can create a base mesh with uniform topology and then convert it
into a sculptable model by starting out with a simple sphere and extracting more
"ZSpheres" until the basic shape of the desired model is created.
• GoZ: Introduced in ZBrush 3.2 OSX, GoZ automates setting up shading networks for
normal, displacement, and texture maps of the 3D models in GoZ-enabled
applications. Upon sending the mesh back to ZBrush, GoZ will automatically remap
the existing high-resolution details to the incoming mesh. GoZ will take care of
operations such as correcting points & polygons order. The updated mesh is
immediately ready for further detailing, map extractions, and transferring to any other
GoZ-enabled application.
• Best Preview Render: ZBrush also includes a full render suite known as Best Preview
Render, which allows the use of full 360° environment maps to light scenes using HDRI
images. BPR includes a light manipulation system called LightCaps. With it, one
can not only adjust how the lights in the scene are placed around the model, but also
generate environments based on them for HDRI rendering later on. It also allows for
material adjustments in real time.
• DynaMesh: It allows ZBrush to quickly generate a new model with uniform polygon
distribution, to improve the topology of models and eliminate polygon stretching.
• Fibermesh: A feature that lets users grow polygon fibers out of their models or
make various botanical items. It is also a way to edit and manipulate large amounts of
polygons at once with Groom brushes.
• ZRemesher: An automatic retopology system, previously called QRemesher, that
creates new topology based on the original mesh. The new topology is generally
cleaner and more uniform. The process can also be guided by the user to make the new
topology follow curves in the model and retain more detail in specified areas.
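The pixol structure described earlier can be sketched as a simple record. The field names below are illustrative assumptions — ZBrush's actual internal format is proprietary:

```python
from dataclasses import dataclass

@dataclass
class Pixol:
    # A pixel augmented with depth and shading data (field names illustrative)
    x: int
    y: int
    color: tuple          # (r, g, b)
    z: float              # depth
    orientation: tuple    # surface normal (nx, ny, nz)
    material_id: int

def flatten(p: Pixol) -> tuple:
    """Exporting to JPEG/PNG keeps only position and color; the depth,
    orientation and material information is lost."""
    return (p.x, p.y, p.color)

p = Pixol(10, 20, (255, 128, 0), 0.5, (0.0, 0.0, 1.0), 3)
print(flatten(p))   # -> (10, 20, (255, 128, 0))
```

The `flatten` step is exactly why ZBrush-native files preserve pixol data while exported image formats do not.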
7. Houdini: Houdini is a 3D animation application developed by Side Effects Software,
based in Toronto. Side Effects adapted Houdini from the PRISMS suite of procedural
generation software tools. Its exclusive attention to procedural generation distinguishes it
from other 3D computer graphics software.
Features of Houdini
• Modeling: All standard geometry entities including Polygons, (Hierarchical)
NURBs/Bézier Curves/Patches & Trims, Metaballs
• Animation: Keyframed animation and raw channel manipulation (CHOPs), motion
capture support
• Dynamics: Rigid Body Dynamics, Fluid Dynamics, Wire Dynamics, Cloth
Simulation, Crowd simulation.
• Lighting: Node-based shader authoring, lighting and re-lighting in an IPR viewer
• Rendering: Houdini ships with its native and powerful rendering engine, Mantra, but
the Houdini Indie license (the Houdini version for indie developers) supports other
third-party rendering engines such as RenderMan, Octane and Arnold.
• Volumetrics: With its native CloudFx and PyroFx toolsets, Houdini can create
cloud, smoke and fire simulations.
• Compositing: Full compositor of floating-point deep (layered) images.
• Houdini is an open environment and supports a variety of scripting APIs. Python is
increasingly the scripting language of choice for the package, and is intended to
replace its original CShell-like scripting language, Hscript. However, any major
scripting language that supports socket communication can interface with Houdini.
4.5.1 Renderers
Rendering is the final process of creating the actual 2D image or animation from the
prepared scene. In other words, 3D rendering is the 3D computer graphics process of
automatically converting 3D wireframe models into 2D images, with photorealistic or
non-photorealistic effects, on a computer.
Rendering can be compared to taking a photo or filming a scene after the setup is
finished in real life. Several diverse, and often specialized, rendering methods have been
developed. These range from the distinctly non-realistic wireframe rendering through
polygon-based rendering to more advanced techniques such as scanline rendering, ray
tracing, or radiosity. The time required for rendering may vary from fractions of a second to
days for a single image/frame. In general, different methods are better suited for either
photorealistic rendering or real-time rendering.
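To make the idea of ray tracing concrete, here is a minimal ray-sphere intersection test — the core geometric primitive of a ray tracer. This is a toy sketch under our own names, not any renderer's actual code:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest intersection
    with the sphere, or None if the ray misses it."""
    # Vector from sphere center to ray origin
    oc = [origin[i] - center[i] for i in range(3)]
    # Quadratic coefficients for |oc + t*direction|^2 = radius^2
    a = sum(d * d for d in direction)
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                       # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearest of the two roots
    return t if t >= 0 else None

# A ray from the origin along +z hits a unit sphere centred at z=5
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0
```

A full ray tracer repeats this test for every pixel against every object, then spawns further rays for shadows, reflection and refraction — which is why render times grow so quickly with scene complexity.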
Rendering has uses in film, architecture, video games, simulators, visual effects, and
design visualization, each employing a different balance of features and techniques. A
wide variety of renderers are available as products. Some are integrated into larger
modeling and animation packages, some are stand-alone, and some are free open-source projects.
Real-time rendering:
Rendering for interactive media, such as games and simulations, is calculated and
displayed in real time, at rates of approximately 20 to 120 frames per second. In real-time
rendering, the aim is to show as much information as the eye can process in a
fraction of a second. The primary goal is to achieve the highest possible degree of
photorealism at an acceptable minimum rendering speed.
Rendering software may simulate such visual effects as lens flares, depth of field or
motion blur. The rapid increase in computer processing power has allowed a progressively
higher degree of realism even for real-time rendering, including techniques such as HDR
rendering.
Non real-time rendering:
Animations for non-interactive media, such as feature films and video, are rendered much
more slowly. Non-real-time rendering enables the leveraging of limited processing power
to obtain higher image quality. Rendering times for individual frames may vary from a
few seconds to several days for complex scenes. Rendered frames are stored on a hard disk
and can then be transferred to other media such as motion picture film or optical disc. These
frames are then displayed sequentially at high frame rates, typically 24, 25, or 30 frames per
second, to achieve the illusion of movement. When the goal is photorealism, techniques such
as ray tracing or radiosity are employed. This is the basic method employed in digital media
and artistic works.
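Why non-real-time rendering takes so long is easy to see with some back-of-the-envelope arithmetic; the film length and per-frame render time below are assumed, illustrative figures:

```python
fps = 24                       # a typical film frame rate from the text
film_minutes = 90              # assumed feature length
frames = film_minutes * 60 * fps
print(frames)                  # -> 129600 frames to render

hours_per_frame = 2            # assumed render time for a complex scene
total_hours = frames * hours_per_frame
print(total_hours / 24)        # -> 10800.0 machine-days on a single machine
```

Numbers like these are the reason studios spread rendering across large render farms rather than single workstations.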
Let’s look at a few of them.
• Mental Ray:
It is a production-quality rendering application developed by Mental Images (Berlin,
Germany). As the name suggests, it supports ray tracing to produce images. Mental
Images was bought in December 2007 by NVIDIA.
The key feature of Mental Ray is its achievement of high performance through
parallelism on both multiprocessor machines and across render farms. The software uses
acceleration techniques such as scanline for primary visible surface determination and
binary space partitioning for secondary rays. It also supports caustics and physically
correct simulation of global illumination employing photon maps. Any combination of
diffuse, glossy (soft or scattered), and specular reflection and transmission can be
simulated.
It was designed to be integrated into a third-party application using an API or be used
as a standalone program using the .mi scene file format for batch-mode rendering.
Presently there are many programs integrating it such as Autodesk Maya, 3D Studio Max,
AutoCAD, Cinema 4D etc. Since 2010 Mental Ray also includes the iray rendering
engine, which added GPU acceleration to the product. In 2013, the ambient occlusion
pass was also accelerated by CUDA, and since 2015 the GI Next engine can be used to
compute all indirect/global illumination on GPUs.
To date, Mental Ray has been used in many feature films, including Hulk, The
Matrix Reloaded & Revolutions, Star Wars: Episode II – Attack of the Clones, The Day
After Tomorrow and many more. In 2003, Mental Images was awarded an Academy
Award for their contributions to the Mental Ray rendering software for motion pictures.
• RenderMan:
It is proprietary photorealistic 3D rendering software produced by Pixar Animation
Studios. RenderMan is used by Pixar to render all of their in-house 3D animated movie
productions and is also available as a commercial product licensed to third parties.
It was previously known as PhotoRealistic RenderMan. In May 2014, Pixar announced it
would offer a free non-commercial version of RenderMan, and since March 2015
RenderMan has been available for non-commercial use.
RenderMan defines cameras, geometry, materials, and lights using the RenderMan
Interface Specification. This specification facilitates communication between 3D
modeling and animation applications and the render engine that generates high quality
images. Additionally RenderMan supports Open Shading Language to define textural
patterns.
RenderMan has been used to create digital visual effects for Hollywood
blockbuster movies such as Beauty and the Beast, The Lion King, Terminator 2:
Judgment Day, Toy Story, Jurassic Park, Avatar etc.
• V-Ray:
It is a commercial rendering plug-in for 3D computer graphics software applications. It is
developed by Chaos Group, a Bulgarian company based in Sofia, established in 1997. V-
Ray is used in media, entertainment, and design industries such as film and video game
production, industrial design, product design and architecture.
V-Ray is a rendering engine that uses advanced techniques, for example global
illumination algorithms such as path tracing, photon mapping, and directly computed
global illumination. The use of these techniques often makes it preferable to the
conventional renderers provided as standard with 3D software, and renders produced with
these techniques generally appear more photorealistic, as actual lighting effects are more
realistically emulated. It supports many 3D applications such as 3ds Max, Cinema 4D,
Maya and Modo.
• Octane Render:
It is a real-time 3D unbiased rendering application started by the New Zealand-
based company Refractive Software, Ltd and taken over by OTOY in March 2012. It is the
first commercially available unbiased renderer to work entirely on the GPU (Graphics
Processing Unit) and to be released to the public. It can work in real time, which
lets users modify materials, lighting and render settings “on the fly”, because the
rendering viewport updates immediately whenever a change is made. It uses the graphics
card to calculate all measures of light, reflection and refraction.
Check Your Progress 4
1. What is Adobe Animate’s previous name?
2. Which is the most commonly used 2D animation software?
3. What is the use of Maya fur?
4. What is the built-in scripting language of Max?
5. What is pixol?
6. What is Rendering?
5.0 INTRODUCTION
In this unit, You will learn about interactive media. It is a type of collaborative media that
allows dynamic involvement by the recipient. You will also learn about the development of
the World Wide Web, which is a network of interconnected hypertext documents contained
on the Internet, the safety and security of the Web, and Internet forums. An Internet forum, or
message board, is an online discussion website, From a technological point of view, forums
or boards are web applications dealing user-generated content.
In addition, you will learn about computer or PC games. Computer games have
developed from the simple graphics and gameplay of early titles such as Spacewar, to a
wide range of more visually advanced titles. This unit also explains the concept of mobile
telephony. Since the debut of mobile phones, concerns (both scientific and public) have been
raised about the potential health impacts of regular use. In this unit, you will also be
learning about interactive television. This form of television ranges from low interactivity
(e.g., volume, TV on/off, channel changing) to moderate interactivity (e.g., movies on
demand without player controls) and high interactivity where, for example, an audience
member can influence the program that is being telecast. Finally, you will learn about
hypermedia. The term is used as a logical extension of the term ‘hypertext’, in which graphics,
audio, hyperlinks, plain text and video interlace to create a generally non-linear medium of
information.
Further publications provide definitions of other essential technologies for the WWW,
including, but not restricted to, the following:
• HyperText Transfer Protocol (HTTP)
• HTTP Authentication
Uniform Resource Identifier: The Web is used for obtaining information as well as providing
information and interacting with society. Therefore, it is essential that the Web is accessible
and provides equal access and equal opportunity to people with impairments. According to
Tim Berners-Lee, ‘The power of the Web is in its universality. Access by everyone regardless
of disability is an essential aspect.’ Many countries mandate Web accessibility as a requirement
for websites. International collaboration in the W3C Web Accessibility Initiative resulted in simple
guidelines that web content authors as well as software developers can use to make the
Web accessible to persons who may or may not be using assistive technology.
Internationalization and statistics
The W3C Internationalization activity ensures that web technology will work in all
languages, scripts and cultures. Beginning in 2005, Unicode gained ground and eventually,
in December 2007, surpassed both ASCII and Western European as the Web’s most-used
character encoding. Originally RFC 3986 permitted resources to be identified by URL in a
subset of US-ASCII. RFC 3987 allows more characters – any character in the Universal
Character Set – and now a resource can be identified by an IRI in any language.
According to a 2001 study, there were 550 billion text files on the Web, mostly in the
invisible Web. A 2002 survey of 2,024 million web pages found that by far the most
web content was in English (56.4 per cent), followed by pages in German (7.7 per cent),
French (5.6 per cent) and Japanese (4.9 per cent). A later study, using web searches in 75
different languages to sample the Web, found that there were more than 11.5 billion web
pages in the publicly indexable Web.
Technology of the WWW: Speed issues and caching
Frustration over congestion issues in the Internet infrastructure and the high response times
resulting in slow browsing has led to people calling it the World Wide Wait. Speeding up the
Internet is an ongoing discussion over the use of peering and quality of service (QoS)
technologies. Other solutions to reduce the wait can be found at the W3C. The standard
guidelines for ideal Web response times are as follows:
• 0.1 second (one tenth of a second). Ideal response time. The user does not sense any
interruption.
• 1 second. Highest acceptable response time. Download times above 1 second interrupt
the user experience.
• 10 seconds. Unacceptable response time. The user experience is interrupted and the
user is likely to leave the site or system.
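The thresholds above can be captured in a tiny helper function (the function name and its messages are our own illustration):

```python
def response_quality(seconds):
    # Thresholds taken from the Web response-time guidelines quoted above
    if seconds <= 0.1:
        return "ideal: no perceptible interruption"
    if seconds <= 1.0:
        return "acceptable: upper limit before flow suffers"
    if seconds < 10.0:
        return "poor: user experience is interrupted"
    return "unacceptable: user is likely to leave"

print(response_quality(0.05))   # ideal
print(response_quality(4))      # poor
```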
If a user returns to a web page after a short interval, the page data usually will not
need to be retrieved again from the main web server. Web browsers tend to cache recently
received data, typically on the local hard drive. HTTP requests sent by a browser
generally only ask for data that has changed since the last download; current locally
cached data are usually reused. Caching helps reduce Web traffic on the Internet. The
decision to expire a downloaded file is taken independently for each file, whether it be an image,
JavaScript, style sheet, HTML etc. Thus, even sites with highly dynamic content need not
frequently refresh their basic resources. Website architects tend to combine resources, like
JavaScript and CSS data, into a few site-wide files so that they can be cached efficiently. This
tends to lower demands on the web server and helps minimize page download times.
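The caching behaviour described above can be sketched as a toy in-memory cache keyed by URL; the class and its fields are our own simplification of what browsers actually do:

```python
import time

class TinyCache:
    """Toy browser-style cache: each entry remembers when it was
    fetched and how long it may be reused (its max-age)."""
    def __init__(self):
        self.entries = {}   # url -> (body, fetched_at, max_age)

    def put(self, url, body, max_age):
        self.entries[url] = (body, time.time(), max_age)

    def get(self, url):
        """Return the cached body if still fresh, else None
        (signalling that a re-download or revalidation is needed)."""
        entry = self.entries.get(url)
        if entry is None:
            return None
        body, fetched_at, max_age = entry
        if time.time() - fetched_at < max_age:
            return body         # fresh: no network traffic at all
        return None             # stale: the browser would revalidate

cache = TinyCache()
cache.put("https://example.com/site.css", "body { margin: 0 }", max_age=3600)
print(cache.get("https://example.com/site.css"))  # served from the cache
```

Each file carries its own `max_age`, mirroring the text's point that expiry is decided independently per image, script or style sheet.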
Poll
Most forums implement an opinion poll system for threads. Most implementations allow
single-choice or multiple-choice (sometimes limited to a certain number) when selecting
alternatives, as well as private or public display of voters.
RSS and ATOM
RSS and ATOM feeds permit a minimalistic way of subscribing to the forum. Common
implementations only allow RSS feeds listing the last few threads updated for the forum
index and the last posts in a thread.
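Such a feed is plain XML, so reading the latest thread titles takes only the standard library; the feed below is invented for illustration:

```python
import xml.etree.ElementTree as ET

# A minimal, made-up RSS 2.0 feed of the last threads updated in a forum
feed = """<rss version="2.0"><channel>
  <title>Example Forum - Latest Threads</title>
  <item><title>Welcome thread</title><link>http://example.com/t/1</link></item>
  <item><title>Forum rules</title><link>http://example.com/t/2</link></item>
</channel></rss>"""

root = ET.fromstring(feed)
# Collect the title of every <item> element in the feed
titles = [item.findtext("title") for item in root.iter("item")]
print(titles)   # -> ['Welcome thread', 'Forum rules']
```

A feed reader would fetch this XML over HTTP on a schedule; the parsing step stays the same.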
Other forum features
An ignore list permits members to hide posts of other members that they do not want to
see or have trouble with. In most implementations this is referred to as a foe list or ignore
list. Normally, the posts are not hidden, but minimized, with only a small bar indicating that
a post from a user on the ignore list is there.
Comparison with other web applications
One major difference between forums and electronic mailing lists is that the latter
automatically deliver new messages to the subscriber; forums, on the other hand, require
the member to visit the website and look for new posts. Because members sometimes miss
threads they are interested in, many forums have an ‘e-mail notification’ feature through
which members are notified of new posts in a thread. The main difference between
newsgroups and forums is that extra software, called a newsreader, is required to take part in
newsgroups. Visiting and taking part in forums generally requires no additional software
beyond the web browser.
Hardware
Modern computer games place great demands on the computer’s hardware, often
necessitating a fast central processing unit (CPU) to run properly. CPU makers
historically relied mainly on increasing clock rates to improve the performance of their
processors, but had started to move steadily towards multi-core CPUs by 2005. These
processors permit the computer to simultaneously perform multiple tasks, called threads,
enabling the use of more complex graphics, artificial intelligence and in-game physics.
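The idea of splitting per-frame work into threads can be sketched with a standard thread pool (the subsystem names here are invented; a real game engine would use native threads in C++ for CPU-bound parallelism):

```python
from concurrent.futures import ThreadPoolExecutor

def update_subsystem(name):
    # Stand-in for one independent per-frame task (AI, physics, audio...)
    return f"{name} updated"

subsystems = ["physics", "ai", "audio", "particles"]

# Each task can run on its own thread, and on a multi-core CPU
# genuinely in parallel for suitable workloads (note that for
# CPU-bound Python code the GIL limits this; engines use native code).
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(update_subsystem, subsystems))

print(results)
```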
Sound cards are also available to render better audio in computer games.
These cards supply improved 3D audio and audio enhancement that is mostly not
available with integrated alternatives, at the cost of marginally lower overall performance.
Physics processing units (PPUs), such as the Nvidia PhysX (formerly AGEIA
PhysX) card, are also available to accelerate physics simulations in modern computer
games. PPUs allow the computer to process more complex interactions among objects
than is feasible using only the CPU, potentially allowing players a much greater
degree of control over the world in games designed to use the card.
Software
Computer games also depend on third-party software such as device drivers, an
operating system (OS), libraries and more to run. These days, the vast majority of computer
games are designed to run on the Microsoft Windows OS. Whereas earlier games written
for MS-DOS would include code to communicate directly with hardware, today application
programming interfaces (APIs) provide an interface between the game and the OS,
simplifying game design.
Multiplayer
Local area network gaming
Multiplayer gaming was largely restricted to local area networks (LANs) before cost-
effective broadband Internet access became available, due to their typically higher
bandwidth and lower latency than the dial-up services of the time. These advantages
permitted more players to join any given computer game, and LAN gaming has persisted
today because of the higher latency of most Internet connections and the costs associated
with broadband Internet.
Online games
Online multiplayer games have gained popularity largely as a result of growing
broadband adoption among consumers. Affordable high-bandwidth Internet connections
permit large numbers of players to play together, and thus have found special use in
massively multiplayer online RPGs such as Tanarus and persistent online games such as
World War II Online.
Although it is possible to take part in online computer games using dial-up
modems, broadband Internet connections are generally considered essential to minimize
the latency between players (commonly known as ‘lag’). Such links require a broadband-
compatible modem connected to the personal computer through a network interface card
(generally incorporated into the computer’s motherboard), optionally separated by a
router.
Emulation
Emulation software, used to run software without the original hardware, is popular for
its ability to play legacy video games without the consoles or operating system for
which they were designed. Console emulators such as NESticle and MAME are
relatively commonplace, although the complexity of advanced consoles such as the Xbox
or PlayStation makes them far more difficult to emulate, even for the original
manufacturers.
Controversy
PC games have long been a source of controversy, particularly related to the violence
that has become commonly associated with video gaming in general. The debate
centres on the influence of objectionable content on the social development of minors,
with organizations such as the American Psychological Association concluding that
video game violence increases children’s aggression, a concern that motivated a further
investigation by the Center for Disease Control.
Video game addiction is another cultural aspect of gaming to draw criticism,
as it can have a negative influence on health and on social relations. The problem of
addiction and its health risks seems to have grown with the rise of Massively
Multiplayer Online Role Playing Games (MMORPGs). Besides the social and health
problems associated with computers, game addiction has raised similar concerns about the
effect of computer games on education.
CHECK YOUR PROGRESS
1. What is the difference between the Internet and WWW?
2. What is caching?
3. What is spamming?
4. What is a private message?
5. Why is online gaming popular?
Fig. 5.9 Use of Mobile Phone is Prohibited in Some Train Company Carriages
Mobile phone use can be a significant matter of social offence: phones ringing during
funerals or weddings, in toilets, cinemas and theatres. Some libraries, book shops, bathrooms,
cinemas, places of worship and doctors’ offices forbid their use, so that other patrons are not
disturbed by conversations. Some facilities install signal-jamming devices to prevent their
use, although in many countries, including the US, such equipment is illegal.
Trains, particularly those involving long-distance services, often offer a ‘quiet
carriage’ where phone use is banned, much like the designated non-smoking carriage of the
past (see Figure 5.9). In the United Kingdom, however, many users tend to ignore this as it is
rarely enforced, especially if the other carriages are crowded and they have no choice but to
go in the ‘quiet carriage’.
Mobile phone use on aircraft is starting to be permitted, with several airlines already
offering the ability to use phones during flights. Mobile phone use during flights used to be
banned, and many airlines still claim in their in-plane announcements that the prohibition is
due to possible interference with aircraft radio communications.
Law enforcement has used mobile phone evidence in a number of different ways. Evidence
about the physical location of an individual at a given time can be obtained by triangulating
the individual’s cellphone among several cellphone towers. This surveying technique can be
used to show that an individual’s cellphone was at a certain location at a certain time.
Concerns over terrorism and terrorist use of technology motivated an inquiry by the British
House of Commons Home Affairs Select Committee into the use of evidence from mobile
phone devices, prompting leading mobile telephone forensic specialists to clarify the
forensic evidence available in this area.
In the United Kingdom in 2000, it was claimed that recordings of mobile phone
conversations made on the day of the Omagh bombing were important to the police
investigation. In particular, calls made on two mobile phones, which were tracked from
south of the Irish border to Omagh and back on the day of the bombing, were thought to be
of vital significance. A further example of criminal investigation using mobile phones is the
initial location and eventual identification of the terrorists of the 2004 Madrid train
bombings. In the attacks, mobile phones had been used to set off the bombs.
Disaster response
The Finnish government decided in 2005 that the quickest way to warn citizens of
disasters was the mobile phone network. In Japan, mobile phone companies provide prompt
notice of earthquakes and other natural disasters to their customers free of charge. In the
event of an emergency, disaster crews can locate trapped or injured people using the
signals from their mobile phones. An interactive menu accessible through the phone’s
Internet browser informs the company whether the user is safe or in distress.
However, most mobile phone networks operate close to capacity during normal
times, and spikes in call volumes caused by widespread emergencies often overload the
system just when it is needed the most. Examples reported in the media where this has taken
place include the 11 September 2001 attacks, the 2003 Northeast blackouts, the 2005 London
Tube bombings, the 2007 Minnesota bridge collapse, the 2006 Hawaii earthquake, and
Hurricane Katrina.
Interactivity with a TV set is already very common, beginning with the use of the remote
control to change channel-browsing behavior and developing to include video-on-demand,
video cassette recorder (VCR)-like pause, rewind and fast forward, and digital video
recorders’ (DVRs) commercial skipping and the like. It does not alter any content or its
inherent linearity, but only how users control the viewing of that content. DVRs permit users
to time-shift content in a way that is impractical with VHS. Though this form of iTV is
common, many people claim that this does not truly make a television interactive. In the not
too distant future, however, the question of what constitutes real interaction with the TV
will be hard to answer. Panasonic already has face-identification technology implemented in
its prototype Panasonic Life Wall. The Life Wall is literally a wall in the house that doubles
as a screen.
2. Interactivity with TV programme content
Interactivity with TV programme content is essentially what ‘interactive TV’ means, but
it is also the most ambitious form to develop (see Figures 5.10 and 5.11).
User Interaction
Interactive TV is often depicted by marketing gurus as ‘lean back’ interaction, as
users are typically relaxing in the living room environment with a remote control in one hand.
This is a very simple description of interactive television that is less and less descriptive of
interactive television services in various stages of market introduction. It is contrasted
with the similarly slick, marketing-invented descriptor of the personal computer-oriented
‘lean forward’ experience of a keyboard, mouse and monitor. This depiction is becoming
more distracting than useful: video game users, for example, do not lean forward while they
are playing video games on their television sets, a precursor to interactive TV. A more useful
mechanism for categorizing the differences between PC- and TV-based user interaction is
measuring the distance the user is from the device. Typically a TV viewer is ‘leaning back’
in the sofa, using only a remote control as a means of interaction, while a PC user is 2 or 3
feet from his high-resolution screen using a mouse and keyboard.
In the case of two-screen-solution interactive TV, the distinctions between ‘lean-back’ and
‘lean-forward’ interaction become increasingly blurred. There has been an increasing trend
towards media multitasking, in which multiple media devices are used at the same time
(especially among younger viewers). This has renewed interest in two-screen services, and is
creating a new level of multitasking in interactive TV.
For one-screen services, interactivity is provided through the API of the
particular software installed on a set-top box, referred to as ‘middleware’ due to its
intermediary position in the operating environment. Software programs are distributed to
the set-top box in a ‘carousel’.
Interactive TV sites need to deliver interactivity directly from Internet
servers, and therefore need the set-top box’s middleware to support some sort of TV
browser, content translation system or content-rendering system. Some middleware, like
Liberate, is based on a version of HTML/JavaScript and has rendering capabilities
built in, whereas other middleware such as OpenTV and DVB-MHP can load microbrowsers
and applications to present content from TV sites.
Typically the distribution system for standard-definition digital TV is based on
the MPEG-2 specification, whereas high-definition distribution is likely to be based on
MPEG-4, meaning that the delivery of HD often requires a new device or set-top box, which
typically is then also able to decode Internet video through broadband return paths.
Interactive television projects
Some interactive television projects are consumer electronics boxes providing set-top
interactivity, whereas other projects are furnished by the cable television
companies (or multiple system operators, MSOs) as a system-wide solution. Still other,
newer approaches integrate the interactive functionality in the TV itself, thus eliminating
the need for a separate box. Some examples of interactive television include the following:
• Hospitality and healthcare solutions.
• MSOs
o Cox Communications (US)
o Time Warner (US)
o Comcast (US)
o Cablevision (US)
• Two-screen Solutions or ‘enhanced TV’ solutions
o Enhanced TV
• Previous MSO trials or demos
o TELE TV from Bell Atlantic, Nynex, Pac Tel, CAA (US) – no longer in
operation.
o Full Service Network from Time Warner (US) – no longer in operation
o Smartbox from TV Cabo, Novabase and Microsoft (OT) – this is also no longer
in operation, although some of the equipment is still used for the digital TV
service. This was the pioneer project.
o LodgeNet
• Consumer electronics solutions
o TiVo
o Replay
o UltimateTV
o Miniweb
o Microsoft Windows XP Media Center
o Philips Net TV
5.8 HYPERMEDIA
Hypermedia is used as a logical extension of the term ‘hypertext’, in which graphics,
audio, hyperlinks, plain text and video interlace to create a generally non-linear medium of
information. This contrasts with the broader term multimedia, which may be used to describe
non-interactive linear presentations as well as hypermedia. It is also linked to the field of
electronic literature – a term first used in a 1965 article by Ted Nelson.
The WWW is a classic instance of hypermedia, whereas a non-interactive cinema
presentation is an example of standard multimedia because of the absence of hyperlinks.
The first hypermedia work was, arguably, the Aspen Movie Map. Atkinson’s
HyperCard popularized hypermedia writing. Most modern hypermedia is delivered
through electronic pages from a variety of systems, including web browsers, media players
and stand-alone applications. Audio hypermedia is emerging with voice-command devices
and voice browsing.
5.8.1 Hypermedia Development Tools
5.9 SUMMARY
• A system of interconnected hypertext documents contained on the Internet is known
as the World Wide Web.
• An Internet forum, or message board, is an online discussion website. It arose as the
modern equivalent of the traditional bulletin board, and a technological evolution of the
dial-up bulletin board system.
• Most current mobile phones connect to a cellular network of base stations (cell sites),
which is in turn interconnected with the public switched telephone network (PSTN) (the
exception is satellite phones).
• In the case of computer games, there is acoustic, visual and haptic communication
between the user (player) and the game.
• In mobile telephony, communication occurs between two people and is, at first glance,
strictly acoustic.
• Interactive television spans a continuum from low interactivity (TV on/off, volume,
changing channels) to moderate interactivity (simple movies on demand without
player controls) and high interactivity in which, for example, an audience member
influences the programme being telecast.
• Hypermedia is used as a logical extension of the term ‘hypertext’, in which graphics,
audio, hyperlinks, plain text and video are interwoven to create a generally non-linear
medium of information.
5.12 BIBLIOGRAPHY
‘Tim Berners-Lee: Time 100 People of the Century’, Time Magazine,
http://www.time.com/time/time100/scientist/profile/bernerslee.html.
Wardrip-Fruin, Noah and Nick Montfort (eds). 2003. The New Media Reader.
Cambridge, MA: The MIT Press.
Kunert, Tibor. 2009. User-Centered Interaction Design Patterns for Interactive Digital
Television Applications. London: Springer.
6.0 INTRODUCTION
In this unit, you will learn about multimedia computers. A computer which is optimized
for high multimedia performance is known as a multimedia computer. The Amiga 1000 was
the first multimedia computer from Commodore International.
In addition, you will learn about input and output devices. In order to capture sound,
images or any graphical data, special input devices are required. For personal computers,
such input devices consist primarily of electronics on a separate card, such as a sound card or
a video card, which is normally installed in the computer. A monitor or display, sometimes
also called a visual display unit, is an electronic device that displays images generated
by other devices such as computers, without producing a permanent record. The main
component of a monitor is the display device.
This unit will also introduce you to end-user hardware issues. In earlier times,
computer buses were literally parallel electrical buses with many connections; these days
the term is used for any physical arrangement that provides the same logical
functionality as a parallel electrical bus.
Fig C.3 A Screenshot of the User Interface in Blender, a Completely Free 3D Modeling
and Rendering Solution
Fig. C.4 A Scene from Lotte Reiniger’s 1926 Film The Adventures of Prince Achmed, One
of the First Animation Films to be Made
Fig. C5 Walt Disney Cartoons Were Very Popular in the Golden Age of Animation
Fig. C.6 The Lord of the Rings Trilogy Made Extensive Use of 3D Modeling and
Rendering Techniques
Fig. C.7 A Scene From Pixar’s Toy Story (1995). The First 3D Animated Film to be
Made
Fig. C.8 Bose Speakers, a Music Lover’s Delight
Fig. C.9 Second Life, a Virtual 3D World on the Internet Where Users can Interact With
One Another
Fig. C.10 Pacman, One of the Earliest Computer Games to be Developed
Fig. C.11 Screenshot From the Groundbreaking 3D First-Person Shooter Game QUAKE
Fig. C.12 Screenshot from Microsoft Flight Simulator 2004, a Detailed and Realistic
Flight Simulation Program
Fig. C.13 Google Earth Makes Satellite Imagery of Every Location on Earth Available to
Internet Users for Free
Fig. C.14 The iPhone
Fig. C.15 The Matrix (1999) Dealt With Topics Related to Simulation and Virtual Reality
A series of related models followed the Macintosh II (see Figure 6.5): the
Macintosh IIx and Macintosh IIfx, all of which used the Motorola 68030 processor. A
Macintosh II could be upgraded to a Macintosh IIx or IIfx by swapping the motherboard.
The Macintosh II was the first system to play the Chimes of Death along with the Sad
Mac logo whenever a serious hardware error occurred.
Units produced until about November 1987 shipped with a defect that was later
corrected. The original ROMs in the Macintosh II contained a bug which
prevented the system from recognizing more than one megabyte of memory address space on
a NuBus card. For example, if a video card with four megabytes of video RAM was installed,
only one megabyte of video RAM would be recognized by the system. New extensions
introduced for the Macintosh II during this period were A/ROSE and the Sound
Manager.
The input devices are summarized in Table 6.1. While using any of these devices to
input data, it is important to ensure that the data is entered accurately. Various procedures
are used to help ensure data accuracy.
Table 6.1 Summary of Input Devices
DEVICE DESCRIPTION
Keyboard Most commonly used input device; special keys may include
numeric keypad, cursor control keys and function keys.
Touch screen User interacts with computer by touching screen with finger.
Graphics tablet Digitizer with special processing options built into tablet
Areas where voice input data is used include data entry, command and control, speaker
recognition and speech-to-text. Voice data entry suits questions that can be answered with a
limited number of acceptable responses; for example, product inspection in manufacturing
companies often uses voice data entry systems. Rather than manually recording data or using
a keyboard, the inspector dictates the completed product information into a microphone,
which lets the inspector stay focused on the item being inspected. Command and control
applications use a limited vocabulary of words that cause the computer to perform a
specific action, such as saving or printing a document. Command and control applications
can also be used to operate certain types of industrial machinery.
6.4.2.2 Voice input system
Various hardware and software system are used to convert spoken words into data for
entry into the computer system. This process of conversion of voice input is as follows:
(i) The user’s voice consists of sound waves that are converted into digital form by
digital signal processing (DSP) circuits that are usually on a separate board added
to the computer.
(ii) The digitized voice input is compared with the patterns stored in the voice
system’s database.
(iii) Word conflicts are solved by using grammar rules. Based on how a word is used,
the computer can usually identify the correct word in cases of words that sound
similar, such as to, too and two.
(iv) Words that are not recognized by the computer are left for the user to identify,
especially in lower-cost systems with limited vocabularies. In many voice input
systems, the user has to train the system to recognize his or her voice.
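The four steps above can be sketched in miniature. This is a toy illustration only: the pattern database, distance measure and grammar rule below are assumptions made for the example, not any real speech-recognition API.

```python
# Toy sketch of the voice-input process: digitized input is matched against
# stored patterns (step ii), homophones are resolved by a grammar rule
# (step iii), and unknown words are flagged for the user (step iv).

PATTERN_DB = {           # assumed stored voice patterns (toy feature vectors)
    "print": [0.8, 0.2],
    "save":  [0.7, 0.6],
    "to":    [0.1, 0.9],
}

def match_pattern(digitized, db=PATTERN_DB, threshold=0.5):
    """Return the nearest stored word, or None if nothing is close enough,
    so that the user can identify the word (step iv)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    word, d = min(((w, dist(digitized, p)) for w, p in db.items()),
                  key=lambda t: t[1])
    return word if d <= threshold else None

def resolve_homophone(word, next_word):
    """Pick among sound-alike words using a simple grammar rule (step iii):
    a number, not a preposition, fits before a plural noun."""
    if word == "to" and next_word in {"items", "copies"}:
        return "two"
    return word
```

In a real system the "feature vectors" would come from digital signal processing of the microphone input, and the grammar rules would be far richer.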
6.4.2.3 Natural Language voice interface
A natural language voice interface allows a user to ask a question and have the
computer not only convert the question to recognizable words but also interpret the question
and give an appropriate response. It is a step beyond continuous voice recognition. For
example, think how easy it would be to use a system if you could simply ask, ‘How soon can
we ship 200 red stainless steel widgets to Delhi?’ Imagine how many different pieces of
information the computer would need to generate a correct response. Such natural language
voice recognition systems are not yet commercially available; however, they are under
development using powerful computers and sophisticated software.
6.4.3 Digital Camera
Digital cameras record photographs in the form of digital data that can be stored on a
computer. Digital cameras have now almost entirely replaced chemical-based film.
Some digital cameras are portable and look similar to traditional film cameras. Some other
digital cameras are stationary and are connected directly to a computer. Most companies have
now started using digital cameras to record images of their products for computer-based
catalogues and also to record photos of their employees for their personnel records.
6.4.4 Helping People with Special Needs
Working with standard computers can be difficult or impossible for physically challenged
and disabled individuals. Keeping this in mind, developers have created special
software, called adaptive or assistive technology, to enable many of these individuals to
make use of computers productively. A wide range of adaptive hardware and software
products has been developed to help users make the computer meet their special needs. For
people with motor disabilities who cannot use a standard keyboard, a number of alternative
input devices have been developed. Most of these devices involve the use of a switch
controlled by any reliable movement; one type of switch is even activated by breathing into
a tube.
6.4.4.1 For blind individuals
In the case of blind individuals, voice recognition programs allow for verbal input of
data. Various software packages have also been developed to convert data into Braille; this
software converts text to Braille and sends it to Braille printers. Both blind and non-verbal
people use speech synthesis equipment to convert text documents into spoken words. For
people with low vision, i.e., people who cannot see properly, several programs are available
which magnify the information on the screen.
6.4.5 Data Accuracy
Data must be entered accurately into the system. The procedures developed for controlling
the input of data are important because accurate data must be entered into a computer to
ensure data integrity. The computer jargon term GIGO, which stands for Garbage In,
Garbage Out, expresses the idea that inaccurate information produced from inaccurate data is
often worse than no information at all. Procedures and documentation must be clear, since
users often interact directly with the computer during the input process. Computer programs
and procedures must be designed to check for accurate data and to specify the steps to take if
the data is not valid.
Different applications have specific criteria for validating input data, and several tests are
performed before the data is processed by the computer. Some of these tests are as follows:
(i) Tests for data type and format: If data must be of a particular type, such as
alphabetic or numeric, a test verifies that it is. A data type test is often combined
with a data format test. For example, in India the PIN – the postal index number –
is six digits, whereas in the US the ZIP code is either five digits or five digits
followed by a dash and four more digits. Data that does not fit the expected
format is rejected.
(ii) Tests for data reasonableness: A reasonableness check makes sure that the data
entered is within normal or accepted boundaries. Suppose that in a company an
employee can work at most 80 hours per week. If the value entered in the hours-
worked field is greater than 80, the value in the field would be flagged as a
probable error.
(iii) Tests for data consistency: In some cases, the data entered cannot, by itself, be
found to be invalid. If, however, the data is examined in relation to other data
entered for the same transaction, discrepancies may be found. For example, in a
hotel reservation system both the check-in and check-out dates are entered, and a
check-out date earlier than the check-in date indicates an error.
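The three tests above can be expressed as small validation functions. The PIN/ZIP formats and the 80-hour limit follow the examples in the text; the function names are illustrative assumptions.

```python
import re
from datetime import date

def valid_pin(pin: str) -> bool:
    """Type/format test: an Indian PIN is exactly six digits."""
    return re.fullmatch(r"\d{6}", pin) is not None

def valid_zip(zip_code: str) -> bool:
    """Format test: a US ZIP is five digits, optionally dash plus four."""
    return re.fullmatch(r"\d{5}(-\d{4})?", zip_code) is not None

def reasonable_hours(hours: float, limit: float = 80) -> bool:
    """Reasonableness test: weekly hours must fall within accepted bounds."""
    return 0 <= hours <= limit

def consistent_stay(check_in: date, check_out: date) -> bool:
    """Consistency test: the check-out date must not precede check-in."""
    return check_out >= check_in
```

A data-entry program would run each applicable test before processing a transaction and report any field that fails.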
Fig. 6.7 A 19” (48.3 cm Tube, 45.9 cm Viewable) ViewSonic CRT Computer Monitor
Many hardware technologies are used for displaying computer-generated output. They
are as follows:
• TFT Liquid Crystal Display (LCD) is the most popular display device for
computers.
o Passive LCDs are known for poor contrast and slow response; they were
generally used in laptops until the mid-1990s.
o All modern LCD monitors use thin-film transistor (TFT) panels.
• Cathode ray tube (CRT) (see Figure 6.7)
o The most popular display technology for older computers. Raster scan
CRT monitors produce images using pixels.
o Vector displays were used on the Vectrex, in scientific and radar
applications and on several other early machines; for example, Asteroids
used a vector CRT display. These require a deflection system, although a
raster-based display may also be used.
o Television sets were used as displays for personal and home computers
by connecting a composite video signal to the television set through a
modulator. The image quality and resolution of the television display
were limited.
o The penetron was used in military aircraft displays.
• Plasma display
• Video projectors use CRT, DLP, LCoS and various other technologies to emit
light onto a projection screen. Projectors come in two forms: front projectors,
which use screens as reflectors to send light back towards the viewer, and rear
projectors, which use screens as diffusers to refract light forward. Rear
projectors are often built into the same case as their screen.
• Surface-conduction electron-emitter display (SED) and field emission display
(FED)
• Organic light-emitting diode display (OLED).
Fig. 6.8 Comparison of a 21” CRT TV Monitor with a 17” CRT PC Monitor
The CRT is generally the picture tube of a monitor. The end of the tube has a
negatively charged terminal called the cathode. An electron gun shoots electrons
down the tube and onto a charged screen. The screen is coated with a pattern of
phosphor dots that glow when struck by the electron stream. Each cluster has three dots of
different colours and is known as a pixel (see Figure 6.8).
The image that you see on the monitor is usually made up of at least tens of
thousands of such tiny glowing dots. The smaller the distance between the pixels, the sharper
the image on screen. The distance between two pixels on a computer monitor screen is
known as dot pitch and is measured in millimetres (mm). Most monitors have a dot pitch of
0.28 mm (equivalent to about 0.011 inches) or less.
6.5.3 Performance Measurements
The parameters by which the performance of the monitor can be measured are as follows:
• The first measurement is luminance, which is measured in candelas per square
metre.
• The viewable image size is measured diagonally. For CRTs, the viewable size is
typically 1 inch (2.5 cm) smaller than the tube itself.
• The aspect ratio is the ratio of the horizontal length to the vertical length; 4:3 is
the standard aspect ratio. For example, a screen with a width of 1024 pixels will
have a height of 768 pixels. On a widescreen display with an aspect ratio of 16:9,
a display 1024 pixels wide has a height of 576 pixels.
• Display resolution is the number of distinct pixels in each dimension that can be
displayed.
• Maximum resolution is limited by dot pitch. Dot pitch is the distance in millimeters
between two pixels. Generally, the smaller the dot pitch, the sharper the image.
• The refresh rate is the number of times per second the display is illuminated. The
maximum refresh rate is limited by the response time.
• Response time is the time a pixel takes to change state and return to its initial
state again, measured in milliseconds. A lower number means faster transitions
and therefore fewer visible image artefacts.
• Contrast ratio is the ratio of the luminosity of the brightest colour (white) to that
of the darkest colour (black) that the monitor is capable of producing.
• Power consumption by the system is measured in watts.
• Viewing angle is measured in degrees horizontally and vertically. It is the maximum
angle at which images on the monitor can be viewed, without excessive degradation
to the images.
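Two of the measurements above reduce to simple arithmetic, shown here as a short sketch (the function names are ours):

```python
def height_for_width(width_px: int, ratio_w: int, ratio_h: int) -> int:
    """Pixel height implied by a width and an aspect ratio such as 4:3 or 16:9."""
    return width_px * ratio_h // ratio_w

def dot_pitch_inches(pitch_mm: float) -> float:
    """Convert dot pitch from millimetres to inches (1 inch = 25.4 mm)."""
    return pitch_mm / 25.4
```

This reproduces the figures quoted earlier: a 1024-pixel-wide screen is 768 pixels high at 4:3 and 576 pixels high at 16:9, and a 0.28 mm dot pitch is about 0.011 inches.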
6.5.3.1 Comparison
CRT monitors
The advantages and disadvantages of CRT monitors are as follows:
Advantages
• High contrast ratio of about 20,000:1 or greater, which is much higher than many
modern LCDs and plasma displays.
• High speed response
• Excellent additive colour, wide range of scales and low black level.
• Can display in almost any resolution and refresh rate.
• Nearly zero colour, saturation, contrast or brightness distortion and excellent viewing
angle.
• No input lag.
• A reliable and proven display technology
Disadvantages
• Large size; a unit of about 40” can weigh over 200 lbs
• Geometric distortion in non-flat CRTs.
• Older CRTs are more prone to burn-in
• Warm-up time is required before peak luminance and proper colour rendering
are reached
• More power consumption than similar-sized LCD displays
• Most effective only at the highest resolutions
• Intolerant of damp conditions, with dangerous wet-failure characteristics
• Small risk of implosion, due to internal vacuum, if the picture tube is broken in aging
sets.
• Noticeable flickers under lower refresh rates
• The high voltages involved can be dangerous, even fatal
• The flyback transformer produces a very high-pitched noise, audible at close range
• Increasingly difficult to obtain models at HDTV resolutions
• Spare parts of monitor are not readily available. Standard electronic components
suppliers rarely carry stock of these parts.
• Maximum brightness is not as high as that of LCDs
• Lower contrast ratio in bright conditions, due to the grey phosphor and limited
brightness
LCD Monitors
The advantages and disadvantages of LCD monitors are as follows:
Advantages
• Very compact and light to carry
• Low power consumption
• No geometric distortion
• Very little or no flickering, depending on the backlight
Disadvantages
• Contrast ratio is low in older LCDs
• The viewing angle is limited, causing colour, saturation, contrast and brightness
to vary with changes in posture, even within the intended viewing angle
• Uneven backlighting in some monitors, which causes distortion in brightness,
especially towards the edges
• The response time is slower, which causes smearing and ghosting artefacts;
however, many modern LCDs have response times of 8 ms or even less
• Only one native resolution. In order to display other resolutions it requires a video
scaler, which degrades image quality at lower resolutions
• Many cheaper LCDs are incapable of true colour and have fixed bit depth
• Input lag
• Dead pixels that occur during manufacture may be present
Plasma
The advantages and disadvantages of plasma monitors are as follows:
Advantages
• Compact in size and light in weight
• High contrast ratio, i.e., 10,000:1 or greater
• High speed response
• Excellent colour, wide range of scales and low black level
• Excellent viewing angle with nearly zero colour, saturation and contrast or brightness
distortion
• No geometric distortion
• Highly scalable, with less weight gain per increase in size (from less than 30
inches wide up to the world’s largest)
• Inputs include DVI, VGA, HDMI or even S-Video
Disadvantages
• Larger pixel pitch for low resolution or large screen
• Flickering can be noticed when viewed at close range
• The operating temperature is higher
• Costlier than LCDs
• Power consumption is comparatively higher
• Only one native resolution. In order to display other resolutions it requires a video
scaler, which degrades image quality at lower resolutions
• Fixed bit depth
• Input lag
• Older PDPs are prone to burn-in
• Dead pixels that occur during manufacture may be present
Penetron monitors
The advantages and disadvantages of penetron monitors are as follows:
Advantages
• Transparent yet self-lighting (transparent LCDs are not self-lighting), giving a
see-through effect for transparent HUDs
• Very high contrast ratios
• The image is extremely sharp
Disadvantages
• Limited colour display; only about four tints are available.
• An order of magnitude more expensive than other display technologies.
Dead pixels
Some LCD monitors are produced with dead pixels. Manufacturers tend to sell monitors
with dead pixels owing to the need for affordable monitors. They generally tend not to take
responsibility, with warranty clauses claiming that monitors with fewer than a set number of
dead pixels are not considered broken and will not be replaced. Dead pixels are typically
stuck green, red and/or blue sub-pixels.
Stuck Pixels
LCD monitors lack phosphor screens and are thus resistant to phosphor burn-in. They do,
however, have a condition called image persistence, in which the monitor’s pixels can
remember a specific colour, become stuck and be incapable of changing. Unlike phosphor
burn-in, image persistence can be reversed partially or, at times, even completely. This is
done by rapidly displaying varying colours in order to wake up the stuck pixels.
Phosphor burn-in
This condition is localized ageing of the phosphor layer of a CRT screen that has
displayed a still, bright image for years. The consequence is a faint permanent image on the
screen, visible even after the monitor has been turned off. In extreme cases it is even possible
to read some of the text, although this occurs only when the displayed text has remained the
same for many years.
It was common to see this in single-purpose business computers, and it is still a problem
with CRT displays. However, modern computers are no longer used in this fashion, so the
problem is less significant now. Only systems displaying the same image for years suffered
from this defect. Burn-in was not an obvious effect while the system was in use, since it
corresponded with the displayed image completely; it became visible in three situations:
1. When some heavily used monitors were reused at home.
2. Monitors were re-used for display purposes.
3. In some high-security applications, where high-security data was displayed, the
image did not change for years at a time.
Screen savers were developed in order to avoid burn-in, but they are largely pointless for
CRTs today, notwithstanding their popularity.
Plasma burn-in
Plasma burn-in was an issue with early plasma displays, which were noticeably more
susceptible to it than CRTs. Screen savers with moving images can be used to minimize
localized burn-in. Periodically changing the colour scheme in use also helps reduce burn-in.
Glare
Glare occurs because of the relationship between lighting and the screen, or through use of
monitors in bright sunlight. Matte-finish LCDs and flat-screen CRTs are less likely to reflect
glare than traditional curved CRTs. CRTs curved in just one axis are also more glare-resistant
than those curved on both axes.
Colour misregistration
Apart from correctly aligned video projectors and stacked LEDs, display technologies,
especially LCD, have an intrinsic misregistration of the colour channels: the centres of the
red, green and blue dots are not aligned perfectly. Sub-pixel rendering exploits this, and its
performance depends on the technology’s misalignment. Technologies making use of this
include the Apple II from 1976 and, more recently, Microsoft (ClearType, 1998) and
XFree86 (X Rendering Extension).
Incomplete spectrum
Not all colours can be shown by an RGB display; it produces only most of the visible colour
spectrum. This can cause a problem where good colour matching to a non-RGB image is
needed. The problem is common to all monitor technologies with three colour channels.
6.5.4 Display Interfaces
Computer terminals
Early CRT-based visual display units (VDUs), such as the DEC VT05, were without any
graphics capabilities and gained the label ‘glass teletypes’ because of their functional
similarity to their electromechanical predecessors. Some historic computers had no screen
display at all, using a teletype, modified electric typewriter or printer instead.
Composite signal
Earlier home computers such as the Apple II and the Commodore 64 used a composite signal
output to drive a CRT monitor or TV. This resulted in low resolution, owing to compromises
in the broadcast TV standards. The method is still used with video game consoles. The
S-Video input of Commodore monitors helped improve resolution.
Digital monitors
Early digital monitors are now sometimes called TTL monitors, because the voltages on the
red, green and blue inputs are compatible with TTL logic chips. Later digital monitors
support the LVDS or TMDS protocols.
TTL monitors
Fig. 6.9 IBM PC with Green Monochrome Display
In early IBM PCs, the monitors used with the MDA, Hercules and CGA graphics
adapters and their clones were controlled via TTL logic (see Figure 6.9). Such monitors
are usually identified by a male DB9 connector on the video cable. The disadvantage of
TTL monitors was that only a limited number of colours was available, due to the low
number of digital bits used for video signalling.
Modern monochrome monitors use the same 15-pin VGA connector as a
standard colour monitor and, interfacing with modern computers, are capable of
displaying 32-bit greyscale at 1024 x 768 resolution.
Only five pins out of nine were used by the TTL monochrome monitor: one
pin served as ground and two other pins were used for horizontal and vertical
synchronization. The electron gun was controlled by two separate digital signals, a
video bit and an intensity bit, which together controlled the brightness of the drawn pixels.
The four possible shades were black, dim, medium and bright.
In a signalling method known as red, green and blue plus intensity (RGBI),
CGA monitors used four digital signals to control the three electron guns used in colour
CRTs. Each of the three RGB colours can be switched on or off independently as
required. The intensity bit increases the brightness of all guns that are switched on; if
no colours are switched on, the intensity bit will switch on all guns at a very low
brightness to produce a dark grey. A CGA monitor is capable of displaying only
16 colours, and this signalling method was not used exclusively by PC-based hardware.
Many CGA monitors were also capable of displaying composite video via a separate jack;
the Commodore 128, therefore, could also utilize CGA monitors.
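The way four RGBI bits yield 16 colours can be sketched as follows. The numeric levels are illustrative approximations of the scheme described above, not exact CGA hardware values (real CGA also has a special case that turns dark yellow into brown).

```python
# Sketch: derive a 16-entry palette from the four RGBI bits.
# base = assumed brightness of a gun that is "on"; boost = intensity bit's
# contribution. Both values are assumptions for illustration.

def rgbi_to_rgb(r: int, g: int, b: int, i: int):
    base, boost = 0xAA, 0x55
    if (r, g, b) == (0, 0, 0) and i:
        # no colour guns on: intensity alone drives all guns at low
        # brightness, producing dark grey
        return (boost, boost, boost)
    return tuple(c * base + i * boost for c in (r, g, b))

# 2^4 bit patterns -> 16 colours
palette = {(r, g, b, i): rgbi_to_rgb(r, g, b, i)
           for r in (0, 1) for g in (0, 1) for b in (0, 1) for i in (0, 1)}
```

For example, bits (1, 0, 0, 0) give a medium red and (1, 1, 1, 1) gives full white, matching the idea that the intensity bit brightens whichever guns are on.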
Single colour screens
In the 1980s, display colours other than white were popular on monochrome monitors
because they caused less strain on the eye. This was a significant issue at the time owing
to the lower refresh rates, which caused flickering, and the less comfortable colour
schemes used by most software compared with today’s. Green screens and amber
displays were the most popular colours available. Paper white was also in use, and was
known as warm white.
6.5.5 Modern Technology
Analogue Monitors
Most modern computer displays can show the various colours of the RGB colour space by
changing red, green and blue analogue video signals in continuously variable intensities.
These displays have used exclusively progressive scanning since the mid-1980s. Many
earlier plasma and liquid crystal displays had exclusively analogue connections, even though
all signals in such monitors passed through a completely digital section prior to display.
IBM PCs and compatible systems were standardized on the VGA connector, while
similar connectors such as 13W3 and BNC were used on other platforms.
Digital and analogue combination
The first popular external digital monitor connectors, such as DVI-I and the various breakout
connectors based on it, carried both analogue signals compatible with VGA and digital
signals compatible with new flat screens in the same connector.
Digital monitors
Newer connectors have only digital video signals. Many of these, like HDMI and
DisplayPort, also feature integrated audio and data connections. One of the less popular
features most of these connectors share is DRM-encrypted signals. The HDCP technology
responsible for implementing the protection was developed to meet cost constraints and
is primarily a barrier aimed at dissuading average consumers from creating exact
duplicates without a noticeable loss in image quality.
6.5.6 Flexible Display Monitors
Flexible display monitors can be bent, are much thinner and consume less power. The
Flexible Display Centre (FDC) at Arizona State University and Universal Display
Corporation together carried out research and introduced the first a-Si:H active-matrix
flexible organic light-emitting diode (OLED) display. This flexible display is manufactured
directly on DuPont Teijin’s polyethylene naphthalate (PEN) substrate. Built using Universal
Display Corporation’s phosphorescent OLED (PHOLED) technology and materials, together
with the FDC’s bond-debond manufacturing technology, the 4.1-inch monochrome quarter
video graphics array (QVGA) display represents a significant milestone, enabling a
manufacturable solution for flexible OLEDs.
6.5.7 Configuration and Usage
Multiple monitors
One or more monitors can be attached to the same device. Each display can function and
operate in one of the following basic configurations:
• The simpler of the two main configurations is mirroring or cloning, in which at
least two displays show the same image. This is generally used for presentations.
Hardware with only one video output can achieve it with an external splitter
device, which is commonly built into many video projectors as a pass-through
connection.
• The more sophisticated configuration is extension, which allows each monitor to
display a different image, so as to form a contiguous area of arbitrary shape. This
requires software support and extra hardware, and may be disabled in low-end
products by crippleware.
• Primitive software that is not capable of recognizing multiple displays can use
spanning, in which a very large virtual display is created and then split into
multiple video outputs for separate monitors. Hardware with only a single video
output can be tricked into doing this with an expensive external splitter device.
This is most often used for very large composite displays made from many
smaller monitors placed edge to edge.
Major Manufacturers
Some of the major manufacturers of monitors are as follows:
• Acer
• Apple Inc.
• BenQ
• Dell
• Eizo
• Gateway
• Hewlett-Packard
• HannStar Display Corporation
• Iiyama Corporation
• LG
• NEC
• Samsung
• Sony
• Tyco Electronics
6.5.14 Speakers
Computer speakers, or multimedia speakers, are speakers external to a computer (Figure 6.12).
They are provided with a low-power internal amplifier. The standard audio connection is a
3.5 mm (1/8 inch) stereo jack plug, colour-coded lime green (following the PC 99 standard)
for computer sound cards. Sometimes an RCA connector, also known as a phono connector, is
used for input; this plug and socket for a two-wire (signal and ground) coaxial cable is
widely used to connect analog audio and video components, and rows of RCA sockets are found
on the backs of stereo amplifiers and numerous A/V products. The RCA prong is 1/8 inch thick
and 5/16 inch long. USB speakers are powered from the 5 volts at 200 milliamperes provided
by the USB port, which allows about half a watt of output power.
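The "about half a watt" figure follows from simple arithmetic on the USB power budget. In this sketch, the 50 per cent amplifier efficiency is an assumed round number used for illustration, not a measured value:

```python
# Sketch: the USB power budget behind the "about half a watt" figure.
voltage = 5.0    # volts supplied by a USB port
current = 0.200  # amperes available to the device

power_in = voltage * current  # electrical power drawn from the port: 1.0 W

# Assumption: a small amplifier converts roughly half of the input
# power into audio output, the rest being lost as heat.
amp_efficiency = 0.5
power_out = power_in * amp_efficiency  # ~0.5 W of audio output

print(power_in, power_out)  # 1.0 0.5
```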
Common features
Features vary from one manufacturer to the other; however these may include the following:
• An LED power indicator.
• A 3.5-mm (1/8-inch) headphone jack
• Controls for volume and sometimes bass and treble
• A remote volume control.
6.7 SUMMARY
• Earlier desktop computers lacked the power and storage necessary for multimedia.
Games and demo scenes on these systems were nevertheless able to achieve high
sophistication and technical polish using only simple, blocky graphics and digitally
generated sound.
• The Amiga 1000 (A1000) was the first multimedia computer from Commodore
International. The animation, graphics and sound technologies of this computer
enabled multimedia content to be developed.
• Atari’s computer division developed a home computer based on the 6502 CPU before
the introduction of the ST computers.
• Atari ST was part of the 16/32-bit generation of home computers. It was based on
the Motorola 68000 CPU, with 512 KB of RAM or more, and 3½-inch floppy disks as
storage (nominally 720 KB). The mixture of sound and images with the effect of text
and graphics is known as multimedia.
• Earlier, video material was recorded into the computer using a video camera or a
video recorder. Since video data requires large storage space, the video
segments in personal computer applications are often limited to only a few seconds.
• A monitor or display, also sometimes called a visual display unit, is an
electrical device which displays images generated by other devices, such as
computers, without producing a permanent record.
• In computer architecture, a bus is a subsystem that transfers data between different
computer components inside a computer or between two or more computers.
• Modern computer buses can use both parallel and bit-serial connections, and can also
be wired in either a multidrop (electrical parallel) or daisy chain topology.
Long-Answer Questions
1. What is a multimedia computer? Trace its origin.
2. Explain the evolution of the Atari 520ST.
3. What are input devices? Discuss.
4. Explain output devices.
5. What are display interfaces? Discuss.
6. What are the end-user hardware issues? Give a description of a bus.
6.10 BIBLIOGRAPHY
Chen, Sao-Jie, Guang-Huei Lin and Pao-Ann Hsiung. 2009. Hardware Software Co-design
in a Multimedia SOC Platform. New York: Springer.
Brice, Richard. 1997. Multimedia and Virtual Reality Engineering. London: Newnes.
UNIT 7 MULTIMEDIA IN
EDUCATION
Program Name: BSc(MGA)
Written by: Srajan
Structure:
7.0 Introduction
7.1 Unit Objectives
7.2 Introduction to Multimedia in Education
7.2.1 Core Functions: Nature of Education and Training
7.2.2 Nature of the Sector
7.2.3 Multimedia for Education
7.2.4 The Current Situation
7.2.5 Usage of the Term Multimedia
7.3 Education Online
7.3.1 Advantages of Online Education
7.3.2 Goals and Benefits of E-Learning
7.3.3 Market
7.3.4 Approaches to E-Learning Services
7.3.5 E-Learning Technology
7.3.6 Content Issues
7.3.7 Technology Issues
7.4 The Future of Interactive Media in Education
7.4.1 A Vision for the Future
7.4.2 Meltdown Scenarios
7.4.3 Universities
7.4.4 Vocational/Further Education
7.4.5 Schools
7.5 Summary
7.6 Key Terms
7.7 Questions and Exercises
7.8 Further Reading
7.0 INTRODUCTION
In this unit, you will learn about multimedia in education. The development of
multimedia technologies in education offers new ways in which learning can be imparted in
schools and homes. Allowing teachers access to multimedia learning resources that
support constructive concept development helps them to focus more on being
facilitators of learning while working with individual students. Such provision has the
potential to reduce the need for subject-specific teaching expertise and to ease the
traditional transfer of students from primary to secondary schools. Extending the use
of multimedia learning resources to homes presents an opportunity for students to improve learning.
For children, transfer between schools can adversely affect the rate of learning. It is
essential to find ways of reducing the impact of transfer on students and to ensure
continuity of learning during the student's transition from childhood to adolescence. Multimedia
technologies have the potential to support continuity by:
• Providing access to multimedia learning resources for both primary and secondary
schools that are based on a common curriculum.
• Giving access to an extensive knowledge base in the form of multimedia learning
resources that could be provided in any classroom.
• Providing a basis for cross-phase project work.
• Allowing data sharing across school phases.
In this unit, you will also learn about the goals and benefits of e-learning, approaches to
e-learning services and a vision for the future.
These broad categories or constituencies overlap, and the growth and dispersion of
multimedia learning materials will certainly increase this overlap. This interaction among
components of different education sectors will be a crucial expression of the growth of
multimedia. Within these categories, training and education are also delivered in many
different modes that are often mixed. These include:
• Individual learning
• Teacher mediated learning
• Classroom learning
• Group learning
• Distance learning
• Open and ‘Closed’ learning
The balance and mix of different modes of learning is associated with the different needs
for learning and the resources available for learning. New priorities and changing resources,
including technology, will alter the opportunities and motivations open to teachers, learners,
society and institutions.
7.2.3 Multimedia for Education
Multimedia has had a long history in education and, effectively, educational multimedia
already exists: education is already accomplished through genuinely multiple media. There
are two primary ways that multimedia can be used in education and training. It increases the
availability of information and resources in all media to learners and teachers, especially
through the electronic delivery of hitherto distributed resources and programmes of activity
(such as exhibition visits, field trips, laboratory experiments, library tours, etc.). It also
permits the channelling of what already happens into an electronic, more tightly
technologically packaged form. This process often involves the codification and capture, or
crystallization, of techniques presently residing in people's skills, which are then
distilled and concentrated for delivery. It thus offers new opportunities of several sorts,
not least for the commoditization of education in new terms.
Not only does interactive multimedia bring together educational instruments, but
it can also draw directly on theories of teaching. Increased machine 'intelligence' could
allow programs that adapt themselves to individual demands. Increasingly user-friendly
technology will facilitate user-centred systems. Multimedia and hypermedia could even enable
the entire traditional learning and teaching process to be reformulated, replacing the
lecture-absorb-test model and bringing the 'mega change' that has happened in every field
of human endeavour except education. Current experience and research suggest that multimedia
facilities and packages can improve learning times and retention substantially over many
traditional approaches, and a great many evaluations and experiments on the use of IT in all
fields of education have conclusive results. There is certainly a great deal of anecdotal
evidence that multimedia works: 'It is much faster using IT, so you get more done. With
exercises, pupils are capable of taking their work much further than they ever would
manually.' Networked multimedia extends even more opportunities to ease or enhance learning:
access to distant information sources of many types, to remote tutors and to virtual learning groups.
Multimedia not only affects individual learning but also the many
functions of the education sector: assessment of students and accreditation of course material,
teacher training, teaching, and the development of teaching materials and pedagogies. Teachers
and institutions have to promote their services, and the education sector also has
administrative requirements. Finally, it affects the supporting service industry and other
complementary sectors. Multimedia may improve the existing institutions of learning, but may
also lead to a change in the accessibility, quality and experience of education. Active
economic, organizational and political pressures within education probably mean that some
functions and institutions will change radically and new ones will emerge, but equally some
technical possibilities may remain undeveloped. So far, multimedia has been regarded as a
tool for the internal purposes of the education sector, but it also has an external aspect.
Education prepares people for the 'outside' world, which is likely to be heavily influenced
by multimedia, for citizenship, work, leisure, social functions, etc. The education sector
will have to respond to the challenge of how to prepare people for an increasingly multimedia
world and learn to deal with the skills and attitudes that teachers and learners bring from
'outside'.
7.2.4 The Current Situation
Technology-enhanced learning (TEL) is concerned with any learning activity supported by
technology.
TEL is often used synonymously with e-learning, even though there are important
differences. The main difference between the two terms is that TEL focuses on the
technological support of any pedagogical approach that utilizes technology, however broadly
that is interpreted, including print technology and the developments around journals,
libraries and books in the centuries before computers.
A learning activity can be depicted in terms of the following features.
• Learning resources: Compilation, creation, distribution, access, tools and services,
consumption of digital content.
• Actions: Interaction with software tools, communication and collaboration.
• Context: Surrounding people and location, time and duration.
• Roles: The various actors in changing roles (e.g. learning coach, human resource or
education manager, student, teacher, facilitator).
• Learning objective: To support every human in attaining her or his learning goals,
abiding by individual as well as organizational learning preferences.
Learning activities follow different pedagogical approaches and didactic concepts. The
main focus in TEL is on the interaction between these activities and the respective
technologies. This can vary from facilitating access to and authoring of a learning resource,
to detailed software systems for delivering (e.g., learning content management systems,
learning management systems, adaptive learning hypermedia systems, learning repositories,
etc.) and managing (tools for self-directed learning, human resource management systems,
etc.) the learning process of learners by technical means.
The existing definitions of technology-enhanced learning are very broad and
change continually because of the dynamic nature of this developing research field. Hence, the
definition of TEL must be as extensive and universal as possible to encompass all facets:
'Technology-enhanced learning (TEL) has the aim of providing socio-technical
innovations (also improving efficiency and cost-effectiveness) for learning practices
regarding individuals and organizations, independent of pace, time and place. The field of
TEL thus describes the support of any learning activity through technology.'
7.2.5 Usage of the Term Multimedia
Significant anecdotal evidence suggests that discussions about multimedia regularly become
futile because the participants are not using the same interpretations of their words.
Much of this trouble seems inescapable because concrete definitions of frequently used terms
like online, multimedia, new media and others are invariably challenged by usage
before widespread understanding is in place. To aid clarity of communication in this paper,
let us discuss some of these terms. The purpose is not to provoke argument about their
absolute truth but merely to elucidate their meaning in this paper for the reader.
Multimedia
The term generally describes multiple media types being accessed interactively via computer.
It is often tied to CD-ROM (though it need not be) and is driven more by
marketing goals than by utility.
Interactive media
This term is used because it is independent of the distribution mechanism (World Wide
Web, CD-ROM, etc.) and carries with it the most crucial attribute, interactivity, without the
prerequisite of multiple media types. There are many practicable uses of interactive media
that employ one media type only.
Online
This term refers to material that is accessible via a computer network or
telecommunications, as opposed to material accessed on paper or another non-networked
medium.
New media
In many cases, the conversion from the analogue to the digital media domain permits greater
functionality and lends new features to the media type (such as compression, image
manipulation, etc.). This term is used to reflect that difference.
A multimedia learning environment involves a number of components or elements to
enable learning to take place. Hardware and software are only part of the requirement.
Having the correct type of equipment, including software, will not inevitably produce the
most appropriate environment for learning.
Access to multimedia technologies, including online systems, is presently a major problem
for both primary and secondary schools. However, some potential connection
solutions are beginning to emerge from the IT industry, which need to be combined with the
exploration of new systems of resource provision for schools.
Although there is current concern regarding security and access to unsuitable materials
via the Internet, technical solutions are beginning to become available to overcome these
issues.
There is a need to help software producers develop an awareness of the
range of learning styles found in primary and secondary schools.
A major argument for e-learning is that it empowers learners to develop essential skills for
knowledge-based work by embedding the use of information and communications
technologies within the curriculum. Using e-learning in this way has major implications for
course design and the assessment of learners.
7.3.3 Market
The worldwide e-learning industry is estimated to be worth over 38 billion euros, although in
the European Union only about 20 per cent of e-learning products are produced within the
common market. Developments in Internet and multimedia technologies are the basic enablers of
e-learning, with support, consulting, content, technologies and services being named as the
five key sectors of the e-learning industry.
7.3.3.1 Higher education
There are more than 3.5 million online students learning at institutions of higher education in
the United States. As per the Sloan Foundation reports, there has been a growth of around
12-14 per cent per year on average in registrations for fully online learning over the five
years 2004-09 in the United States post-secondary system, compared with an average increase
of about 2 per cent per year in overall enrolments. According to another study,
almost a quarter of all students in post-secondary education were taking fully online courses
in 2008. A report by Ambient Insight Research estimated that in 2009, 44 per cent of post-
secondary students in the United States were taking some or all of their courses online, and
projected that this figure would rise to 81 per cent by 2014. Thus, it can be observed that
e-learning is moving quickly from the margins to being a predominant form of post-secondary
education, at least in the United States.
Many institutions of higher education now provide online classes. By contrast, only about
half of private, non-profit schools offer them. The Sloan report, based on a poll of academic
leaders, states that students generally appear to be at least as satisfied with their online
classes as with conventional ones. Private institutions may become more involved with online
presentations as the cost of instituting such a system diminishes. Properly trained staff
must also be employed to work with students online. These staff members need to understand
the content area and also be highly trained in the use of the Internet and computers. Online
education is increasing rapidly, and online doctoral programs have even developed at leading
research universities.
7.3.4 Approaches to E-Learning Services
E-learning services have developed since computers were first used in education. There is a
trend towards blended learning services, where computer-based activities are
integrated with practical or classroom-based situations.
It has been proposed that different types or forms of e-learning can be regarded as a
continuum: from no e-learning (i.e., no use of computers and/or the Internet for learning and
teaching), through classroom aids, such as making classroom lecture PowerPoint slides
available to students through a course website or learning management system; to laptop
programs, where students bring laptops to class and use them as part of a face-to-face class;
then hybrid learning, where classroom time is reduced but not abolished and more time is
given to online learning; through to fully online learning, a form of distance education. This
classification is rather similar to that of the Sloan Commission reports on the status of
e-learning, which use 'web enhanced' and 'web dependent' to reflect a growing intensity of
technology use.
7.3.4.1 Computer-based learning
Computer-based learning (CBL) denotes computer use as a primary component of the
educational environment. This includes the use of computers in classrooms; however, the
term more generally indicates a structured environment in which computers are used for
educational purposes. The idea is usually regarded as somewhat different from uses
of computers where learning is at most a marginal element of the experience (e.g., computer
games, browsing, etc.).
7.3.4.2 Computer-based training
Computer-based trainings (CBTs) are self-paced learning activities accessible via a
computer or handheld device. CBTs typically present content in a linear manner, much like
reading a manual or an online book. For this reason, they are often used to teach static
processes, such as using software or completing mathematical equations. The term computer-
based training is often used interchangeably with web-based training (WBT), the primary
difference being the delivery method: CBTs are typically delivered via CD-ROM, whereas
WBTs are delivered via the Internet using a web browser. Assessing learning in a CBT
usually takes the form of multiple-choice questions or other assessments that can be
easily marked by a computer.
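The kind of computer marking mentioned above can be sketched in a few lines of Python; the question identifiers and answer key here are invented examples, not taken from any real CBT:

```python
# Minimal sketch of automatic multiple-choice marking in a CBT.

def mark_quiz(answer_key, responses):
    """Compare a learner's responses to the answer key and
    return (score, total_questions)."""
    score = sum(1 for question, correct in answer_key.items()
                if responses.get(question) == correct)
    return score, len(answer_key)

# Hypothetical three-question quiz:
answer_key = {"q1": "b", "q2": "d", "q3": "a"}
responses = {"q1": "b", "q2": "c", "q3": "a"}
print(mark_quiz(answer_key, responses))  # (2, 3)
```

Because the marking rule is purely mechanical, the same computation scales to any number of learners at no extra cost, which is the economic point made below about distributing CBTs widely.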
CBTs can be a good alternative to printed learning materials because rich media, including
animations and videos, can easily be embedded to enhance the learning. Another advantage of
CBTs is that they can be easily distributed to a wide audience at a relatively low cost once
the initial development is finished.
7.3.4.3 Computer-supported collaborative learning (CSCL)
Computer-supported collaborative learning (CSCL) is one of the most promising concepts for
improving learning and teaching with the aid of modern information and communication
technology. Collaborative or group learning refers to instructional methods whereby
students are encouraged or required to work together on learning tasks. It is widely agreed
to distinguish collaborative learning from the traditional 'direct transfer' model in which
the instructor is assumed to be the distributor of knowledge and skills.
7.3.4.4 Technology-enhanced learning (TEL)
Technology-enhanced learning (TEL) has the aim of providing socio-technical innovations
(also improving efficiency and cost-effectiveness) for e-learning practices, concerning
individuals and organizations, independent of pace, time and place. The field of TEL
therefore applies to the support of any learning activity by technology.
7.3.5 E-Learning Technology
A learning management system (LMS) is a software program for managing, delivering and
tracking training and education. LMSs range from systems for managing training and
educational records to software for distributing courses over the Internet and offering
features for online collaboration.
A learning content management system (LCMS) is a software program for authoring,
editing and managing e-learning content (courses, reusable content objects). An LCMS may
be solely devoted to producing and publishing content that is hosted on an LMS, or it can
host the content itself (remote AICC content hosting models).
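As a rough illustration of the record-keeping side of an LMS described above, the following Python sketch tracks enrolments and course completion; the course and learner names are invented examples, and a real LMS would of course persist these records in a database:

```python
# Toy sketch of LMS record-keeping: enrolment and completion tracking.

class SimpleLMS:
    def __init__(self):
        self.enrollments = {}  # course name -> set of learner names
        self.completed = {}    # (course, learner) -> True when finished

    def enroll(self, course, learner):
        self.enrollments.setdefault(course, set()).add(learner)

    def complete(self, course, learner):
        # Only enrolled learners can be marked as having completed.
        if learner in self.enrollments.get(course, set()):
            self.completed[(course, learner)] = True

    def progress(self, course):
        """Return (completed_count, enrolled_count) for a course."""
        enrolled = self.enrollments.get(course, set())
        done = sum(1 for l in enrolled if self.completed.get((course, l)))
        return done, len(enrolled)

lms = SimpleLMS()
lms.enroll("Multimedia 101", "asha")
lms.enroll("Multimedia 101", "ben")
lms.complete("Multimedia 101", "asha")
print(lms.progress("Multimedia 101"))  # (1, 2)
```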
7.3.5.1 Computer-aided assessment
Computer-assisted assessment (less commonly known as e-assessment) ranges from
automated multiple-choice tests to more intricate systems. This form of assessment is
becoming increasingly common. In some high-end systems, feedback is geared
towards students' specific mistakes, or the computer can guide students through a series of
questions, adapting to whether or not a student has understood something.
The best example of this type of system is perhaps 'online formative assessment'. This
involves making an initial formative assessment by filtering out the incorrect answers. The
author/teacher then works out what the pupil should ideally have done with each question, and
the system offers the pupil practice on every slight variation of the filtered-out questions.
This can be regarded as the formative learning stage. The subsequent stage is to make a quick
assessment with a new set of questions that covers only the topics already taught. Some
systems take this up a level and repeat the cycle, such as BOFA, which caters to the
eleven-plus exam set in the UK.
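The filter-and-practise cycle described above can be sketched in Python. The topics and answers here are invented examples for illustration, not the format of any real system such as BOFA:

```python
# Sketch of the formative-assessment filter: find the topics the
# pupil got wrong, so practice can focus on variations of those.

def topics_to_practise(answer_key, responses):
    """Return the topics whose questions were answered incorrectly."""
    return [topic for topic, correct in answer_key.items()
            if responses.get(topic) != correct]

# Hypothetical initial assessment, one question per topic:
answer_key = {"fractions": "3/4", "ratios": "2:1", "percentages": "25%"}
responses = {"fractions": "3/4", "ratios": "1:2", "percentages": "25%"}
print(topics_to_practise(answer_key, responses))  # ['ratios']
```

The formative stage would then generate variations of the 'ratios' questions only, and the follow-up quick assessment would draw its new questions from that same filtered topic list.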
The term learning design has sometimes come to denote the type of activity enabled
by software, such as the open-source system LAMS, which supports sequences of activities
that can be both adaptive and collaborative. The IMS Learning Design specification is
intended as a standard format for learning designs, and IMS LD Level A is supported in LAMS
V2. E-learning has been replacing traditional settings due to its cost-effectiveness.
7.3.5.2 Electronic performance support systems (EPSS)
Electronic performance support systems (EPSS) are 'computer-based systems that improve
worker productivity by providing on-the-job access to integrated learning, information and
advice experiences'.
7.3.6 Content Issues
Content is a core constituent of e-learning and includes issues such as pedagogy and learning
object re-use.
7.3.6.1 Pedagogical elements
Pedagogical elements are an attempt to define structures or units of educational material:
for example, an assignment, a lesson, a discussion group, a multiple-choice
question, a quiz or a case study. These units should be format independent: although a unit
may be delivered by any such method, the pedagogical structure itself is not a video
conference, a textbook, a web page or a podcast.
When beginning to create e-learning content, the pedagogical approaches need to be
evaluated. Simple pedagogical approaches make it easy to create content but lack
downstream functionality, flexibility and richness. On the other hand, intricate pedagogical
approaches can be hard to set up and slow to develop, though they have the potential to
provide more engaging learning experiences for students. Somewhere between these extremes
is an ideal pedagogy that permits a particular educator to efficiently produce educational
materials while at the same time providing the most engaging educational experiences for
students.
7.3.6.2 Pedagogical approaches or perspectives
It is possible to use various pedagogical approaches for e-learning. They include the
following:
• Instructional design: The traditional pedagogy of education which is curriculum
focused and is formulated by a centralized educating group or a single teacher.
• Social constructivist: This pedagogy is especially well supported by the use of wikis,
discussion forums, blogs and online collaborative activities. It is a collaborative
approach that opens educational content creation to a broader group, including the
students themselves. The One Laptop Per Child Foundation attempted to use a
constructivist approach in its project.
• Laurillard's conversational model: This is especially applicable to e-learning;
similarly, Gilly Salmon's five-stage model is a pedagogical approach to the use of
discussion boards.
• Cognitive perspective: This centers on the cognitive processes involved in learning
as well as how the brain functions.
• Emotional perspective: This centers on the emotional views of learning, like
engagement, motivation, fun, etc.
• Behavioral perspective: This concentrates on the skills and behavioural outcomes of
the learning process, e.g., role-playing and application to on-the-job settings.
• Contextual perspective: This concentrates on the environmental and social setting
that stimulates learning: interaction with other people, the importance of peer support,
as well as pressure and collaborative discovery.
7.3.6.3 Reusability: Standards and learning objects
Much research has been done into the technical reuse of electronic teaching materials and
into creating or re-using learning objects. These are self-contained units that are properly
tagged with keywords or other metadata and often stored in an XML file format. Creating a
course requires putting together a sequence of learning objects. There are proprietary and
open, non-commercial and commercial, peer-reviewed repositories of learning objects, such as
the Merlot repository.
A common standard format for e-learning content is the shareable content object
reference model (SCORM), whereas other specifications allow for the transporting of learning
object metadata (LOM).
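As a rough illustration of how a learning object can be tagged with metadata and stored as XML, the following Python sketch uses the standard library's xml.etree. The element names here are simplified for illustration and are not the full SCORM or IEEE LOM schema:

```python
# Sketch: serializing a learning object's metadata as XML,
# as learning-object repositories commonly do.
import xml.etree.ElementTree as ET

def learning_object(title, keywords):
    """Build a minimal, illustrative metadata record for one
    self-contained learning object."""
    lo = ET.Element("learningObject")
    ET.SubElement(lo, "title").text = title
    kw = ET.SubElement(lo, "keywords")
    for word in keywords:
        ET.SubElement(kw, "keyword").text = word
    return lo

# Hypothetical learning object:
lo = learning_object("Intro to Fractions", ["mathematics", "fractions"])
xml_text = ET.tostring(lo, encoding="unicode")
print(xml_text)
```

Because each record is self-contained and keyword-tagged, a repository can search the metadata to assemble a course as a sequence of matching objects, which is the reuse model described above.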
These standards themselves are of recent origin. The Postsecondary Education Standards
Council (PESC) in the USA is also making headway in developing standards and learning
objects for the higher education space, whereas the Schools Interoperability Framework (SIF)
is beginning to turn seriously towards instructional and curriculum learning objects.
In the United States, there are numerous content standards that are critical as well; the
NCES data standards are a key instance. Each state government's content standards and
achievement benchmarks are decisive metadata for linking e-learning objects in that
space.
An excellent instance of e-learning that concerns knowledge management and reusability
is Navy e-learning, which is available to eligible active-duty or retired military members.
This online tool provides certificate courses to improve the user's knowledge of several
subjects related to military training and civilian skill sets. The e-learning system not only
provides learning objectives but also assesses the progress of the student, and credit can be
earned towards higher learning institutions. This reuse is a splendid example of knowledge
retention and the cyclical process of knowledge transfer and the use of data and records.
7.3.7 Technology Issues
As early as 1993, W.D. Graziadei described an online computer-delivered assessment, lecture
and tutorial project using e-mail, two VAX Notes conferences and Gopher/Lynx, together with
various software programs that allowed students and instructor to create a virtual
instructional classroom environment in science (VICES) in research, education, service and
teaching (REST). In 1997, he and others published an article entitled 'Building Asynchronous
and Synchronous Teaching-Learning Environments: Exploring a Course/Classroom
Management System Solution'. They described a process at the State University of New York
(SUNY) of evaluating products and formulating an overall strategy for technology-based
course development and management in teaching-learning. The project(s) had to be easy to
use and maintain, portable, immediately affordable, replicable and scalable, and they had to
have a high probability of success with long-term cost-effectiveness. Today many technologies
can be, and are, used in e-learning, from collaborative software to blogs, e-portfolios and
virtual classrooms. Most e-learning situations use combinations of these techniques.
Along with the terms educational technology, learning technology and instructional
technology, the term e-learning is generally used to refer to the use of technology in
learning in a much wider sense than the computer-based training or computer-aided
instruction of the 1980s. It is also wider than the terms online learning or online education,
which generally refer to purely web-based learning. In cases where mobile technologies
are used, the term m-learning has become more common. E-learning, however, also has
meanings beyond the technology and refers to the actual learning that takes place using
these systems.
E-learning is by nature suited to distance learning and flexible learning, but can also be
used in conjunction with face-to-face teaching, in which case the term blended learning is
commonly used. E-learning pioneer Bernard Luskin argues that the 'E' must be understood to
have broad meaning if e-learning is to be effective. Luskin notes that the 'e' should be
interpreted to mean emotional, exciting, energetic, enthusiastic, extended, excellent and
educational, in addition to 'electronic', which is the traditional interpretation. This
broader interpretation allows for 21st-century applications and brings learning and media
psychology into the equation.
In higher education especially, the increasing tendency is to create a virtual learning
environment (VLE) (which is sometimes combined with a management information system
(MIS) to create a managed learning environment) in which all aspects of a course are managed
through a consistent user interface throughout the institution. A growing number of physical
universities, as well as newer online-only colleges, have begun to offer a select set of
academic degree and certificate programs via the Internet at a broad range of levels and
in a broad range of disciplines. While some programs require students to attend some campus
classes or orientations, many are delivered completely online. Student support services, such
as e-counseling, online advising and registration, online textbook purchase, student
newspapers and student governments, are often provided online as well.
E-learning can also refer to educational websites such as those offering interactive
exercises for children, learning scenarios and worksheets. The term is also used extensively
in the business sphere, where it generally refers to cost-effective online training.
The recent fashion in the e-learning sector is screencasting. There are many screencasting
tools available, but the latest buzz is all about the web-based screencasting tools that
permit users to create screencasts directly from their browser and make the video
available online so that viewers can stream it directly. The advantage of such tools
is that the presenter can show his ideas and flow of thought rather than simply
explain them, which may be more confusing when delivered via simple text instructions.
With the combination of video and audio, an expert can mimic the one-on-one experience of
the classroom and deliver complete, clear instructions. From the learner's point of view,
this provides the ability to pause and rewind, and gives the learner the advantage of moving
at their own pace, something a classroom cannot always provide. One such example of an
e-learning platform based on screencasts is YoHelpOnline.
7.3.7.1 Communication technologies used in e-learning
Communication technologies are normally categorized as asynchronous or synchronous.
Asynchronous activities use technologies such as blogs, wikis and discussion boards. The
idea is that participants may engage in the exchange of ideas or information without
depending on other participants being involved at the same time. E-mail is also
asynchronous, in that mail can be sent or received without both participants being involved
at the same time.
Synchronous activities require the exchange of ideas and information among one or more
participants during the same period of time. A face-to-face discussion is an example of
synchronous communication. Synchronous activities occur with all participants joining in at
once, as with an online chat session or a virtual classroom or meeting.
Virtual classrooms and meetings can frequently use a mix of communication
technologies.
In many models, the writing community and the communication channels relate to the
e-learning and m-learning communities. Both communities provide a general overview of
the basic learning models and the activities required for participants to join the learning
sessions across the virtual classroom, or even across standard classrooms enabled by
technology. Many activities essential for learners in these environments require frequent
chat sessions in the form of virtual classrooms and/or blog meetings. Recently, context-
aware ubiquitous technology has been providing an innovative way to support written and
oral communication, using mobile devices with sensors and RFID readers and tags.
7.5 SUMMARY
• Multimedia is expected to pioneer new opportunities for educational activity and new
forms of delivery. However, these opportunities will only be part of the wider scale
changes in education and training.
• In general multimedia is an add-on to the existing formal structure in education.
• Multimedia will enable the opening up of the education system, but it may reduce the
chances for many people to go through learning in a real institution. It will provide a
vehicle for researching the learning process and its ingredients more comprehensively
than hitherto, centring on the effectiveness of alternative teaching and delivery
modes.
• The effects of multimedia in education vary widely across the sector, making any
overall summary difficult. The methodology demands that major uncertainties,
retardation factors and drivers, and constituencies be covered.
• The most significant components of the multimedia education constituency are not the
large institutions, although they may act as poles of attraction, but the decision makers
and the educators in schools and training companies, as these are the people who
decide whether multimedia technology can allow for better education at a reasonable
personal or institutional ‘cost’.
• The education and training purchasers may have their own agenda in terms of
technology, but will be largely reactive to the professionals, in spite of the efforts of
technology and network firms.
Long-Answer Questions
1. What are the core functions of education? Explain the nature of education and
training.
2. Discuss multimedia in education.
3. Explain e-learning technology.
4. What are the goals and benefits of e-learning? Explain.
5. Explain the future scenario of e-learning.
7.8 BIBLIOGRAPHY
Allen, I.E. and J. Seaman. 2008. Staying the Course: Online Education in the United
States. Needham, MA: Sloan Consortium.
Allen, I.E. and J. Seaman. 2003. Sizing the Opportunity: The Quality and Extent of
Online Education in the United States. Wellesley, MA: The Sloan Consortium.
Mason, R. and A. Kaye. 1989. Mindweave: Communication, Computers and Distance
Education. Oxford, UK: Pergamon Press.
Bates, A. 2005. Technology, E-learning and Distance Education. London: Routledge.
Harasim, L., S. Hiltz, L. Teles and M. Turoff. 1995. Learning Networks: A Field Guide
to Teaching and Learning Online. Cambridge, MA: MIT Press.
Mayer, R.E. 2001. Multimedia Learning. New York: Cambridge University Press.
Paivio, A. 1971. Imagery and Verbal Processes. New York: Holt, Rinehart and Winston.
8.0 INTRODUCTION
In this unit, you will learn about virtual reality (VR). Virtual reality allows a user to
interact with a computer-simulated environment. The environment may be a simulation of the
real world or an imaginary world. Current VR environments are often primarily visual
experiences, displayed either on a computer screen or through special stereoscopic
displays. Some simulations use additional sensory information, such as sound through
speakers or headphones. In medical and gaming applications, advanced haptic systems
now include tactile information, generally known as force feedback.
Users can interact with a virtual environment or a virtual artifact (VA) either through
standard input devices, such as a keyboard and mouse, or through multimodal devices, such
as a wired glove, the Polhemus boom arm and the omni-directional treadmill.
In addition, you will learn about technological issues, computer science aspects, user
interface, interaction with geographic information, application and potential of multimedia
and VR.
Over the next 20 years, the technologies involved in each of these components will
undergo major development. Ongoing endeavors in miniaturization and nanotechnology
should make effectors ultra lightweight and wireless by the end of this period. Moreover, the
application of nanotechnology to materials science should make it possible to simulate
textures on skin with a much higher degree of verisimilitude than is currently available.
Responsible scientists are researching ways of controlling prostheses and robots directly
through the activity of the human brain (Zimmer, 2004). Very soon the same may be possible
for the control of the virtual world. Reality simulators are expected to become much smaller
and more powerful over this period. Currently, there are hardware deficits, for example, the
lack of graphic processing power in the Quantum 3D Thermite wearable computer
(Knerr, Garrity and Lampton, 2004). These deficits should be overcome in the course of
normal technological development, making only reasonable and conservative extrapolations
from present capabilities and development efforts. VR software is also expected to see major
development during this period.
Graphical frame rates are critical to sustain the illusion of presence or immersion in a VE.
Note that the graphical frame rate and the computation and data-access rates may be
independent: the graphical scene may change with the motion of the user’s point of view
without requiring new computation or data access. Experience has shown that, while the
graphical frame rate should be as high as possible, frame rates lower than 10 frames per
second severely diminish the illusion of presence. If the graphics displayed rely on
computation or data access, computation and data-access frame rates of 8 to 10 frames per
second are necessary to sustain the visual illusion that the user is watching the time
evolution of the VE.
8.4.1 Hardware for Computer Graphics
The key development behind the current push for VEs today is the ubiquity of computer
graphics workstations capable of real-time, three-dimensional display at high frame rates. We
have had flight simulators with significant graphics capability for years, but these
simulators have been expensive, not widely available and, worse, not readily
programmable. Flight simulators are generally constructed with a specific objective in mind,
such as providing training for a particular military plane. To reduce the total number of
graphics and central processing unit cycles required, such simulators are microcoded and
programmed in assembly language. Systems programmed in this manner are difficult to
change and maintain, and hardware upgrades for them are usually major undertakings with
a small customer base.
8.4.2 Notable Graphics Workstations and Graphics Hardware
Graphics performance is difficult to measure owing to the widely varying complexity of
visual scenes and the different hardware and software approaches to computing and
displaying visual imagery. The most straightforward measure is given in terms of
polygons/second. However, this only crudely indicates the scene complexity that can be
displayed at useful interactive update rates. Polygons are the common building blocks for
creating graphic images; one estimate puts visual reality at 80 million polygons per picture.
If we need photorealistic VEs at 10 frames/s, this translates into 800 million polygons/s.
No current graphics hardware provides this, so approximations must be made for the
moment. This means living with less detailed virtual worlds, perhaps by judiciously using
hierarchical data structures or by off-loading some of the graphics requirements onto
available CPU resources instead.
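The arithmetic behind this estimate is straightforward:

```python
# Polygon throughput implied by the figures quoted above.
polygons_per_frame = 80_000_000   # photorealistic scene estimate
frames_per_second = 10            # minimum rate to sustain the visual illusion

required_throughput = polygons_per_frame * frames_per_second
print(f"{required_throughput:,} polygons/s")  # 800,000,000 polygons/s
```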
8.4.3 Graphics Architectures for VE Rendering
In this section, the high-level computer architecture issues are explained. These issues
determine the applicability of a graphics system to VE rendering. The following assumptions
are made about the systems included in our discussion:
• The systems use a z-buffer (or depth buffer) for hidden surface elimination. A z-
buffer stores the depth – or distance from the eye point – of the closest surface seen at
each pixel. When a new surface is scan-converted, the depth at each pixel is computed.
If the new depth at a given pixel is closer to the eye point than the depth
currently stored in the z-buffer at that pixel, the new depth and intensity information
are written into both the z-buffer and the frame buffer. Otherwise, the new
information is discarded and the next pixel is examined. In this way, nearer
objects always overwrite more distant ones, and when every object has been scan-
converted, all surfaces have been correctly arranged in depth.
• The systems use an application-programmable, general-purpose processor to cull the
database.
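The nearer-surface-wins rule of the z-buffer described in the first bullet can be sketched in a few lines of Python; the tiny 4×4 resolution and the colour values are illustrative only:

```python
# A minimal z-buffer sketch: each incoming fragment (one pixel sample of a
# scan-converted surface) carries a depth; the nearer fragment wins.

WIDTH, HEIGHT = 4, 4
FAR = float("inf")

z_buffer = [[FAR] * WIDTH for _ in range(HEIGHT)]            # depth of closest surface
frame_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]  # colour at each pixel

def write_fragment(x, y, depth, colour):
    """Keep the fragment only if it is nearer than what the z-buffer holds."""
    if depth < z_buffer[y][x]:
        z_buffer[y][x] = depth
        frame_buffer[y][x] = colour

# A distant red fragment, then a nearer blue one at the same pixel:
write_fragment(1, 1, depth=9.0, colour=(255, 0, 0))
write_fragment(1, 1, depth=2.5, colour=(0, 0, 255))
print(frame_buffer[1][1])  # the nearer (blue) surface survives: (0, 0, 255)
```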
Each of these components is large in its own right. All of them must act in consort and in
real time to create VEs. The objective behind the interconnection of these components is a
fully detailed, fully interactive and seamless VE. Seamless implies that you can drive a
vehicle across a terrain, stop in front of a building, get out of the vehicle, enter the building
on foot, go up the stairs, enter a room and interact with items on a desktop, all without delay
or hesitation in the system. To build seamless systems, substantial improvements in software
development are required. The following sections describe the construction of software in
support of virtual worlds.
8.4.6 Interaction Software
Interaction software provides the mechanisms to:
• Construct a dialogue from various control devices (for example, trackers and haptic
interfaces).
• Apply that dialogue to a system or application, so that the multimodal display changes
appropriately.
The first part of this software accepts raw inputs from a control device and interprets
them. Several libraries are available, both as commercial products and as ‘shareware’. These
libraries read the most common interface devices, such as the DataGlove and various
trackers.
The second part of building interaction software turns the information about a
system’s state from a control device into a dialogue that is meaningful to the system or
application. At the same time, it filters out erroneous or unlikely portions of the dialogue
that might be generated by faulty data from the input device.
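As a hypothetical illustration of this filtering step, the sketch below discards tracker samples that jump implausibly far between frames; the threshold and sample data are invented:

```python
# Sketch of filtering faulty input data: a tracker sample that jumps
# implausibly far between frames is treated as erroneous and replaced
# by the last good reading.

MAX_JUMP = 0.5  # metres a hand can plausibly move in one frame (assumed)

def filter_samples(samples):
    cleaned = [samples[0]]
    for pos in samples[1:]:
        last = cleaned[-1]
        jump = sum((a - b) ** 2 for a, b in zip(pos, last)) ** 0.5
        cleaned.append(pos if jump <= MAX_JUMP else last)
    return cleaned

raw = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (5.0, 5.0, 5.0), (0.2, 0.0, 0.0)]
print(filter_samples(raw))
# the 5.0-metre spike is replaced by the previous good sample
```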
Interaction is a critical component of VE systems, involving both hardware and software.
In VEs, interface hardware provides the positions or states of various parts of the body. This
information is typically used to:
• Map user actions to changes in the environment (for example, moving objects by
hand, and so on).
• Pass commands to the environment (for example, a hand gesture or button push).
• Provide information input (for example, speech recognition for spoken commands,
text or numerical input).
The user’s intent must be inferred from the hardware output as read by the computer
system. Inaccuracies in the hardware providing the signal may complicate this inference.
8.4.6.1 Existing technologies
Despite the existence of several paradigms for interaction in VEs, including direct
manipulation, indirect manipulation, logical commands and data input, the problem of
realistic, real-time interaction is still comparatively unexplored. Generally, a combination of
these paradigms performs tasks in VEs. Other paradigms certainly need development to
realize the potential of a natural interface. The following is an overview of some existing
technologies.
With direct manipulation, the position and orientation of a part of the user’s body,
usually the hand, is mapped continuously to certain aspects of the environment. Typically,
the position and orientation of an object in the VE is controlled through direct manipulation.
Pointing to move is another example of direct manipulation, in which orientation information
is used to determine a direction in the VE. Analogs of manual tasks, such as picking and
placing, require display of forces as well and are therefore well suited to direct
manipulation, though more abstract aspects of the environment, such as background lighting,
can also be controlled in this way.
8.4.6.2 Design approaches and issues to be addressed
The choice of conceptual approach is a crucial decision in designing the interaction. Two of
the more important issues involved in interacting in a three-dimensional environment are line
of sight and acting at a distance. With regard to line of sight, VE applications have to
contend with the fact that some useful information might be obscured or distorted due to an
unfortunate choice of user viewpoint or object placement. In some cases, the result may lead
to misinformation, confusion and misunderstanding. Common pitfalls include obscuration
and unfortunate coincidences.
• Obscuration: At times, a user must interact with an object that is currently out of
sight or might be hidden behind other objects. How does dealing with this special case
change the general form of any user interface techniques?
• Unfortunate coincidences: The archetypical example of this phenomenon is the
famous optical illusion in which a person stands on a distant hill while a friend stands
near the camera, aligning his hand so that the distant friend appears to be a small
person standing in the palm of his hand. Such effects, while amusing in some contexts,
may prove dangerous under circumstances such as air traffic control.
The following are a few potentially useful selection techniques for use in 3D computer-
generated environments:
• Pointing and ray casting: It allows selection of objects in clear view but not those
inside or behind other objects.
• Dragging: Analogous to ‘swipe select’ in traditional GUIs. Selections can be made
on the picture plane with a rectangle or in an arbitrary space with a volume by
‘lassoing’. Lassoing allows the user to select a space of any shape. It is an extremely
powerful technique in the two-dimensional paradigm. A three-dimensional input
device is required to carry this idea over to three dimensions and perhaps a volume
selector instead of a two-dimensional lasso.
• Naming: Voice input for selection techniques is vital in three-dimensional
environments. ‘Delete my chair’ is a powerful command archetype that should not be
ignored. Naming is extremely important and difficult. It forms a subset of the
more general problem of naming objects by generalized attributes.
• Naming attributes: Specifying a selection set by a common attribute or set of
attributes (‘all red chairs with arms’) is a technique that should be utilized. As some
attributes are spatial in nature, it is easy to see how these might be specified with a
gesture and with voice, offering a fluid and powerful multimodal selection technique:
all red chairs, shorter than this (user gestures with two hands) in that room (user looks
over shoulder into adjoining room).
8.4.7 Visual Scene Navigation Software
Visual scene navigation software provides the means for moving the user through the three-
dimensional virtual world. There are many component parts of this software, including:
• Control device gesture interpretation (gesture message from the input subsystem to
movement processing).
• Virtual camera viewpoint and view volume control.
• Hierarchical data structures for polygon flow minimization to the graphics pipeline.
In the software, all of these parts act together in real time to produce each next frame in a
continuous series of frames of coherent motion through the virtual world. The following
sections provide a survey of currently developed navigation software and a discussion on
special hierarchical data structures for polygon flow.
8.4.7.1 Survey of currently developed navigation software
Navigation refers to the problem of controlling the point and direction of view in the VE.
Using conventional computer graphics techniques, navigation can be reduced to the problem
of determining a position and orientation transformation matrix (in homogeneous graphics
coordinates) for rendering an object. This transformation matrix can be usefully decomposed
into the transformation due to the user’s head motion and the transformation due to motions
over long distances (travel in a virtual vehicle). There may also be several virtual
vehicles concatenated together.
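The decomposition can be illustrated with homogeneous matrices; the sketch below uses 3×3 matrices for 2D translations to keep it short (a real system would use 4×4 matrices), and the displacements are invented:

```python
# Concatenating a vehicle transform with a head-motion transform into one
# viewing transform, using 3x3 homogeneous matrices for 2D translations.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(dx, dy):
    return [[1, 0, dx], [0, 1, dy], [0, 0, 1]]

vehicle = translate(100, 0)   # long-distance travel in a virtual vehicle
head = translate(0.2, 0.1)    # small motion of the user's head
view = matmul(vehicle, head)  # concatenated into one viewing transform
print(view[0][2], view[1][2])  # 100.2 0.1
```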
8.4.7.2 Survey of hierarchical data structure techniques for polygon flow minimization
The back end of visual scene navigation comprises hierarchical data structures for the
minimization of polygon flow to the graphics pipeline. When a matrix representing the
chosen view is generated, you need to send the scene description transformed by that matrix
to the visual display. One key technique for updating the visual scene in real time at
interactive update rates is to minimize the total number of polygons sent to the graphics
pipeline.
In a paper, Clark (1976) presents a general approach for solving the polygon flow
minimization problem. He lays stress on the construction of a hierarchical data structure for
the virtual world (Figure 8.4). The approach is to visualize a world database for which a
bounding volume is known for each drawn object. The bounding volumes are organized
hierarchically in a tree that is used to rapidly discard large numbers of polygons.
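Clark's idea can be sketched as follows, using one-dimensional intervals as stand-in bounding volumes; the tree and polygon counts are invented:

```python
# Hierarchical culling sketch: every node carries a bounding volume; if the
# volume lies entirely outside the view, the whole subtree (and all its
# polygons) is discarded without further tests.

class Node:
    def __init__(self, lo, hi, polygons=0, children=()):
        self.lo, self.hi = lo, hi   # bounding interval of the subtree
        self.polygons = polygons    # polygons stored at this node
        self.children = children

def visible_polygons(node, view_lo, view_hi):
    if node.hi < view_lo or node.lo > view_hi:  # bounding volume outside view
        return 0                                # prune the entire subtree
    return node.polygons + sum(visible_polygons(c, view_lo, view_hi)
                               for c in node.children)

world = Node(0, 100, children=(
    Node(0, 40, polygons=500),
    Node(60, 100, polygons=800),
))
print(visible_polygons(world, 0, 50))  # only the left subtree's 500 polygons
```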
8.5.1 Introduction
To work with a system, users must be able to control the system and assess its state. For
example, when driving an automobile, the driver uses the steering wheel to control the
direction of the vehicle, and the accelerator pedal, brake pedal and gearstick to control its
speed. The driver perceives the position of the vehicle by looking through the windscreen
and the speed of the vehicle by reading the speedometer. The user interface of the
automobile is composed of the instruments the driver can use to accomplish the tasks of
driving and maintaining the automobile.
8.5.2 Usability
The design of a user interface affects the effort the user must expend to provide input to
the system and to interpret its output, and how much effort it takes to learn to do this.
Usability is the degree to which the design of a particular user interface takes account of
the human psychology and physiology of the users, and makes the process of using the
system efficient and satisfying.
8.5.3 User Interfaces in Computing
In computer science and human-computer interaction, the user interface (of a computer
program) refers to the graphical, textual and auditory information the program presents to the
user, and the control sequences (such as keystrokes with the computer keyboard, movements
of the computer mouse and selections with the touch screen) the user employs to control the
program.
8.5.3.1 Types
At present the following types of user interface are the most common:
• Graphical user interfaces (GUI) accept input through devices, such as computer
keyboard and mouse and provide articulated graphical output on the computer
monitor. There are at least two different principles widely used in GUI design:
Object-oriented user interfaces (OOUIs) and application-oriented interfaces.
• Web-based user interfaces or web user interfaces (WUI) accept input and provide
output by generating web pages that are transmitted via the Internet and viewed by the
user using a web browser program. Relatively new implementations utilize Java,
AJAX, Adobe Flex, Microsoft .NET or similar technologies to provide real-time
control in a separate program, eliminating the need to refresh a traditional HTML-
based web browser. Administrative web interfaces for web servers, servers and
networked computers are often known as control panels.
The following user interfaces are common in various fields outside desktop computing:
• Command line interfaces, where the user provides the input by typing a command
string with the computer keyboard and the system provides output by printing text on
the computer monitor. These are used by programmers and system administrators in
engineering and scientific environments, and by technically advanced personal
computer users.
• Tactile interfaces supplement or replace other output forms with haptic feedback
methods. They are used in computerized simulators and similar systems.
• Touch user interfaces are GUIs using a touch screen display as a combined input and
output device. They are used in many types of point-of-sale terminals, industrial
processes and machines, self-service machines, and so on.
Other types of user interfaces:
• Attentive user interfaces manage the user’s attention deciding when to
interrupt the user, the type of warnings, and the level of detail of the messages
presented to the user.
• Batch interfaces are non-interactive user interfaces. Here, the user specifies all
the details of the batch job in advance to batch processing, and receives the
output when all the processing is done. The computer does not prompt for
further input till the processing has started.
• Conversational interface agents personify the computer interface in the form
of an animated person, robot, or other character (such as Microsoft’s Clippy
the paperclip), and present interactions in a conversational form.
• Crossing-based interfaces are GUIs in which the primary task consists of
crossing boundaries instead of pointing.
• Gesture interfaces are GUIs that accept input in the form of hand gestures, or
mouse gestures sketched with a computer mouse or a stylus.
• Intelligent user interfaces are human-machine interfaces aimed to improve the
efficiency, effectiveness, and naturalness of human-machine interaction by
representing , reasoning , and acting on models of the user, domain, task,
discourse, and media (for example, graphics, natural language, gesture).
• Motion tracking interfaces monitor the user’s body motions and translate them
into commands, as currently developed by Apple.
• Multi-screen interfaces utilize multiple displays to provide a more flexible
interaction. This is often applied in computer game interaction, in both the
commercial arcades and, more recently, the handheld markets.
• Non-command user interfaces observe the user to infer his/her needs and
intentions, without requiring that he/she formulate explicit commands.
• Object-oriented user interfaces (OOUI).
• Reflexive user interfaces allow the users to control and redefine the entire
system via the user interface alone, for instance to change its command verbs.
Typically this is only feasible with very rich GUIs.
• Tangible user interfaces lay greater emphasis on touch and the physical
environment or its elements.
• Task-focused interfaces are user interfaces that address the information
overload problem of the desktop metaphor by making tasks, not files, the
primary unit of interaction.
• Text user interfaces are user interfaces that provide text as output, but accept
forms of input other than typed command strings.
• Voice user interfaces accept input and provide output by generating voice
prompts. The user provides input by pressing keys or buttons, or by responding
verbally to the interface.
• Natural-language interfaces are used in search engines and on web pages. The
user types in a question and waits for a response.
• Zero-input interfaces receive inputs from a set of sensors instead of querying
the user with input dialogs.
• Zooming user interfaces are GUIs in which information objects are
represented at different levels of scale and detail, and where the user can
change the scale of the viewed area to show more detail.
• Archy is a keyboard-driven user interface by Jef Raskin, arguably more
efficient than mouse-driven user interfaces for document editing and
programming.
A mode is a distinct method of operation within a computer program, in which the same
input may produce different perceived results depending on the state of the program.
Heavy use of modes often reduces the usability of a user interface, because the user must
expend effort to remember current mode states and switch between them as necessary.
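A minimal sketch of a moded interface, using an invented two-mode editor in which the same keystroke yields different results:

```python
# Sketch of a moded interface: the same input ('x') produces different
# perceived results depending on the program's current mode.

class Editor:
    def __init__(self):
        self.mode = "insert"
        self.text = ""

    def key(self, ch):
        if self.mode == "insert":
            self.text += ch             # in insert mode, 'x' inserts a character
        elif self.mode == "command" and ch == "x":
            self.text = self.text[:-1]  # in command mode, the same 'x' deletes one

ed = Editor()
ed.key("a"); ed.key("b")
ed.mode = "command"
ed.key("x")
print(ed.text)  # "a": identical input, different perceived result
```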
Fig. 8.6 Digital Elevation Model, Map (image) and Vector Data
Raster data types consist of rows and columns of cells, with each cell storing a single
value. Raster data may be images (raster images), in which each pixel or cell holds a colour
value. Other values recorded for each cell may be a discrete value (e.g., land use), a
continuous value (e.g., temperature) or a null value where no data is available. Although a
raster cell stores only a single value, it can be extended by using raster bands to represent
RGB (red, green, blue) colours, colourmaps (a mapping between a thematic code and an
RGB value) or an extended attribute table with one row for each unique cell value. The
resolution of a raster data set is expressed as its cell width in ground units.
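A minimal sketch of these raster ideas, with invented land-use codes: a single-band grid plus an attribute table keyed by unique cell value, and a cell width in ground units:

```python
# A single-band raster grid; each cell stores one value.
raster = [
    [1, 1, 2],
    [1, 3, 2],
]

# Extended attribute table: one row per unique cell value.
attribute_table = {
    1: {"land_use": "forest"},
    2: {"land_use": "water"},
    3: {"land_use": "urban"},
}

cell_size = 30  # resolution: cell width in ground units (metres, assumed)

value = raster[1][1]
print(attribute_table[value]["land_use"], "cell covers", cell_size, "m")
```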
One-dimensional lines or polylines are used for linear features, such as rivers, roads,
railroads, trails and topographic lines. As with point features, linear features displayed at
a small scale are represented as linear features rather than as polygons. Line features
can be used to measure distance.
• Polygons
Non-spatial data
Additional non-spatial data can also be stored along with the spatial data, which are
represented by the coordinates of vector geometry or the position of a raster cell. In vector
additional data comprises attributes of the feature. For example, a forest inventory polygon
may also have an identifier value as well as information about tree species. In raster data, the
cell value can store attribute information. However, it can also be used as an identifier that
can relate to records in another table.
8.6.3.4 Data capture
Data capture, i.e., feeding information into the system, consumes much of the time of GIS
practitioners. A variety of methods is used to enter data into a GIS, where it is stored in a
digital format.
To produce digital data, existing data printed on paper or PET film maps can be
digitized or scanned. A digitizer produces vector data in a way similar to an operator who
traces points, lines and polygon boundaries from a map. Scanning a map produces output in
raster data that could be further processed to produce vector data.
Coordinate Geometry (COGO) is a technique used to enter survey data directly into a
GIS from digital data collection systems on survey instruments. Positions from a Global
Navigation Satellite System (GNSS), such as the Global Positioning System (GPS), another
survey tool, can also be fed directly into a GIS.
Remotely sensed data also plays a vital role in data collection; it comes from sensors
attached to a platform. Sensors include cameras, digital scanners and LIDAR, while
platforms usually comprise aircraft and satellites.
8.6.3.5 Raster-to-vector translation
A GIS can perform data restructuring to convert data into different formats, for example, a
GIS can be used to convert a satellite image map to a vector structure by generating lines
around all cells with the same classification, while determining the cell spatial relationships,
such as adjacency or inclusion.
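A highly simplified sketch of the line-generation idea: walk the grid and emit a segment wherever two horizontally adjacent cells differ in classification (a real GIS also handles the other edge directions, inclusion and full topology):

```python
# Raster-to-vector sketch: emit a vertical boundary segment between any two
# horizontally adjacent cells with different classifications.  Segments are
# ((x1, y1), (x2, y2)) in cell coordinates.

grid = [
    [1, 1, 2],
    [1, 2, 2],
]

def vertical_boundaries(grid):
    segments = []
    for r, row in enumerate(grid):
        for c in range(len(row) - 1):
            if row[c] != row[c + 1]:  # classification changes: boundary here
                segments.append(((c + 1, r), (c + 1, r + 1)))
    return segments

print(vertical_boundaries(grid))  # [((2, 0), (2, 1)), ((1, 1), (1, 2))]
```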
More advanced data processing can take place with image processing, a technique
developed in the late 1960s by NASA and the private sector to provide contrast enhancement,
false-colour rendering and a variety of other techniques, including the use of two-
dimensional Fourier transforms.
8.6.3.6 Projections, coordinate systems and registration
A property ownership map and a soils map might display data at different scales. In a GIS,
map information must be manipulated so that it registers, or fits, with information gathered
from other maps. Before digital data can be analysed, they may have to undergo other
manipulations (projection and coordinate conversions, for example) that integrate them into
a GIS.
Projection is a basic component of map making. A projection is a mathematical
means of transferring information from a model of the Earth, which represents a three-
dimensional curved surface, to a two-dimensional medium, such as paper or a computer
screen. Different projections are used for different map types because each projection is
particularly suited to specific uses. For example, a projection that represents the shapes of
the continents precisely will distort their relative sizes.
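As a minimal illustration (not a projection named in the text), the equirectangular projection, one of the simplest, maps latitude and longitude on the curved surface to flat x/y coordinates:

```python
import math

# Equirectangular projection sketch: latitude/longitude in degrees to x/y in
# km on a flat map.  It preserves neither shape nor area everywhere, which
# illustrates why each projection suits only specific uses.

R = 6371.0  # Earth's mean radius in km

def equirectangular(lat_deg, lon_deg, ref_lat_deg=0.0):
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    x = R * lon * math.cos(math.radians(ref_lat_deg))
    y = R * lat
    return x, y

x, y = equirectangular(45.0, 90.0)
print(round(x), round(y))  # 10008 5004 (km on the flat map)
```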
8.6.3.7 Spatial analysis with GIS
Given the vast range of techniques for spatial analysis that have been developed over the past
half century, any summary or review can only cover the subject to a limited depth. This is a
rapidly changing field. GIS packages increasingly include analytical tools as standard
built-in facilities, optional toolsets, add-ins or ‘analysts’. In many instances, such
facilities are provided by the original software suppliers (commercial vendors or
collaborative non-commercial development teams), while in other cases facilities have been
developed and are provided by third parties.
Data modeling
It is difficult to relate wetlands maps to rainfall amounts recorded at different points,
such as airports, television stations and high schools. However, a GIS can be used to depict
two- and three-dimensional characteristics of the Earth’s surface, subsurface and atmosphere
from information points. For example, a GIS can quickly generate a map with isopleths or
contour lines that show differing amounts of rainfall.
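One simple way to derive such a surface from point data is inverse-distance weighting, sketched below with invented stations; a GIS would then trace isopleths over the interpolated grid:

```python
# Inverse-distance weighting (IDW): estimate rainfall at any grid cell from
# the values recorded at a few stations, weighting nearer stations more.

stations = [((0, 0), 10.0), ((10, 0), 30.0), ((0, 10), 20.0)]  # (x, y), mm rain

def idw(x, y, power=2):
    num = den = 0.0
    for (sx, sy), rain in stations:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0:
            return rain          # exactly at a station: use its value
        w = 1.0 / d2 ** (power / 2)
        num += w * rain
        den += w
    return num / den

print(round(idw(5, 5), 1))  # 20.0: equidistant from all three stations
```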
Topological modeling
A GIS can identify and analyze the spatial relationships that exist within the digitally stored
spatial data. These topological relationships enable complex spatial modeling and analysis.
Topological relationships between geometric entities traditionally include adjacency (what
adjoins what), containment (what encloses what) and proximity (how close something is to
something else).
Networks
If all the factories near a wetland were to release chemicals accidentally into the river at the
same time, how long would it take for a damaging amount of pollutant to enter the wetland
reserve? A GIS can simulate the routing of materials along a linear network. Values such as
slope, speed limit or pipe diameter can be applied in network modeling to represent the flow
of the phenomenon more accurately. Network modeling is commonly employed in
transportation planning, hydrology modeling and infrastructure modeling.
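Routing along a linear network reduces to a shortest-path computation; the sketch below applies Dijkstra's algorithm to an invented network whose edge weights are travel times in minutes:

```python
import heapq

# Quickest route from a factory to the wetland along a linear network.
# Edge weights stand in for travel times derived from, e.g., length and flow
# speed; the network itself is invented for illustration.

edges = {                      # node -> [(neighbour, minutes)]
    "factory": [("junction", 30)],
    "junction": [("wetland", 45), ("lake", 20)],
    "lake": [("wetland", 60)],
    "wetland": [],
}

def quickest(start, goal):
    heap, seen = [(0, start)], set()
    while heap:
        t, node = heapq.heappop(heap)
        if node == goal:
            return t
        if node in seen:
            continue
        seen.add(node)
        for nxt, dt in edges[node]:
            heapq.heappush(heap, (t + dt, nxt))
    return None

print(quickest("factory", "wetland"), "minutes")  # 75 minutes via the junction
```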
8.7 APPLICATIONS
Virtual reality is often used to describe a wide variety of applications, commonly associated
with its immersive, highly visual 3D environments. The development of CAD software,
graphics hardware acceleration, head-mounted displays, data gloves and miniaturization
have helped popularize the notion of VR. Michael Heim, in his book The Metaphysics of
Virtual Reality, has identified seven different concepts of VR:
• Simulation
• Interaction
• Artificiality
• Immersion
• Telepresence
• Full-body immersion
• Network communication
The definition still has a futuristic romanticism attached. People often identify VR with
head-mounted displays and data suits.
8.8 POTENTIAL
Virtual reality is a powerful technology that has potential for far-ranging social and
psychological impact. Disciplinary psychology and other social sciences should take a
proactive stance in relation to VR and conduct research to determine the outlines of this
potential impact, with the hope of affecting its direction. There are potential psychosocial
effects of a ‘seamless VR’ in relation to several societal domains: private experience, home
and family, and religion and spirituality. Engineering and social science professionals must
cooperate in research regarding the potential societal effects of VR.
8.9 SUMMARY
• Virtual reality is a technology that allows a user to interact with a computer simulated
environment.
• Current virtual reality environments are often primarily visual experiences,
displayed either on a computer screen or through special stereoscopic displays.
A user can interact with a virtual environment or a virtual artifact (VA) either
through standard input devices (e.g., a keyboard and mouse) or through multimodal
devices (e.g., a wired glove, the Polhemus boom arm and an omni-directional
treadmill).
• The computer technology that enables us to develop three-dimensional virtual
environments (VEs) comprises both hardware and software.
• A user interface (also known as a human-computer interface or man-machine interface
(MMI)) is the aggregate of means by which users interact with a system – a
particular machine, device, computer program or other complex tool.
• A geographic information system (GIS) or geographical information system captures,
stores, analyses, manages and presents data that is linked to locations. Technically, a
GIS includes mapping software and its application to remote sensing, land surveying,
aerial photography, mathematics, etc.
• Virtual reality is often used to describe a wide variety of applications, commonly
associated with its immersive, highly visual and 3D environments.
• Virtual reality is a powerful technology that has potential for far-ranging social and
psychological impact. Disciplinary psychology and other social sciences should take a
proactive stance in relation to VR and conduct research to determine the outlines of
this potential impact, with the hope of affecting its direction.
Long-Answer Questions
1. Discuss the fundamentals of multimedia and VR.
2. Explain the technological issues in multimedia and VR.
3. Discuss graphics capabilities in PC-based VE systems.
4. What is dynamic model matching and augmented reality?
5. What is user interface? Explain user interface in computing.
6. What do you understand by geographic information?
8.12 BIBLIOGRAPHY
Castranova, E. 2007. Exodus to the Virtual World: How Online Fun is Changing Reality.
New York: Palgrave Macmillan.
Burdea, G. and P. Coffet. 2003. Virtual Reality Technology, 2nd edition. New Jersey:
Wiley-IEEE Press.
Goslin, M. and J. F. Morie. 1996. 'Virtopia: Emotional Experiences in Virtual
Environments', Leonardo, Vol. 29, No. 2, pp. 95–100.
UNIT 9 MULTIMEDIA:
APPLICATION AND FUTURE
Program Name: BSc (MGA)
Written by: Srajan
Structure:
9.0 Introduction
9.1 Unit Objectives
9.2 Applications for Multimedia
9.3 Future Applications
9.3.1 Bokode: The Better Barcode
9.3.2 Chameleon Guitar
9.3.3 Girls Involved in Real-Life Sharing
9.3.4 TOFU: A Squash and Stretch Robot
9.3.5 Merry Miser
9.3.6 Mycrocosm
9.3.7 Quickies
9.3.8 SixthSense
9.4 Summary
9.5 Key Terms
9.6 End Questions
9.0 INTRODUCTION
In this unit we are going to learn about multimedia, its applications in daily life and its
future applications. As we all know, multimedia is a combination of different media elements
like text, graphics, audio and video. Nowadays multimedia is used in every field: education,
advertising, medicine, business, etc. These days, the rapid evolution of multimedia is the
result of the emergence and convergence of all these technologies.
We are going to learn about different applications like Bokode, TOFU, Mycrocosm,
Merry Miser and how they are helpful to us in the daily life.
9.3.1 Bokode: The Better Barcode
Bokode is a new optical data-storage tag that, while taking up only 3 mm of space, can
store a million times more data than a barcode (Figure 9.1).
The typical barcodes on product packaging do one thing: disseminate information to the
scanner at the checkout counter. Now, researchers at the Media Lab have come up with a
tiny new barcode that can disseminate a variety of useful information to shoppers as
they scan the shelves, and could even lead to new devices for business meetings, classroom
presentations, videogames or motion-capture systems.
The tiny labels, just 3 mm across, are about the size of the @ symbol on a computer
keyboard. Yet, they can contain thousands of bits of information, far more than an
ordinary barcode. Currently, they need a lens and a built-in LED light source. However,
future versions could be made reflective, just like the holographic images on credit cards,
which would be much cheaper and less obtrusive.
One of the key advantages of the new labels is that, unlike today's barcodes, they can
be read from a distance of a few meters. In addition, unlike the laser scanners that are
required to read today's labels, these can be read by any standard digital camera, such
as the ones built into cell phones around the world.
9.3.2 Chameleon Guitar
Fig. 9.2: Chameleon Guitar
(Source: Webb Chappell)
This research implements a special guitar that combines physical acoustic properties with
virtual capabilities. The acoustical values come from a wooden resonator: a replaceable,
unique piece of wood that creates the acoustic sound (Fig 9.2). The acoustic signal this
wooden heart creates is digitally processed in a virtual sound box to create a flexible
sound design.
Today's musical or graphical tools and instruments fall into two distinct classes, each
with its unique benefits and drawbacks:
• Traditional physical instruments: These instruments offer a richness and
uniqueness of qualities because of the unique properties of the physical
materials used. The hand-crafted construction qualities are also very important
for these tools.
• Electronic and computer-based instruments: These instruments lack the
richness and uniqueness of traditional physical instruments. They produce
predictable and generic results, but at the same time offer the advantage of
flexibility; for example, many instruments can be embedded into one.
Here, a novel approach is proposed to design and build instruments that attempts to
combine the best of both. The approach will be characterized by a sampling of the
instrument's physical matter along with its physical properties and complemented by a
physically simulated, virtual shape. This method of building digital objects retains some of
the rich qualities and variations found in real instruments (the blend of natural materials
with craft) while maintaining the flexibility and open-endedness of digital ones.
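The "virtual sound box" stage described above amounts to digital filtering of the sampled signal from the wooden resonator. The sketch below is a minimal illustration of that idea, not the project's actual processing chain: a one-pole low-pass filter darkening a synthetic string tone, with illustrative coefficients.

```python
# Minimal "virtual sound box" sketch: a one-pole low-pass filter applied to
# a sampled signal. A synthetic 440 Hz sine stands in for the resonator's
# pickup signal; the filter coefficient is illustrative.
import math

def one_pole_lowpass(samples, alpha=0.2):
    """y[n] = y[n-1] + alpha * (x[n] - y[n-1]); smaller alpha -> darker tone."""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

rate, freq = 8000, 440.0  # sample rate (Hz) and "string" frequency
signal = [math.sin(2 * math.pi * freq * n / rate) for n in range(rate // 10)]
shaped = one_pole_lowpass(signal)
print(len(shaped) == len(signal), max(shaped) < max(signal))
```

In a real instrument this stage would chain many such filters (equalization, body resonance models, effects) over the live audio stream.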
9.3.3 Girls Involved in Real-Life Sharing
Girls Involved in Real-Life Sharing (GIRLS) allows users to actively reflect on the emotions
related to their situations by constructing pictorial narratives (Fig 9.3). The system applies
common-sense reasoning to derive effective digital content to support emotional reflection
from the users' stories. The users are able to gain new knowledge and understanding about
themselves and others by exploring personal and authentic experiences. Currently, this
project is being converted into an online system for school counselors to use.
9.3.4 TOFU: A Squash and Stretch Robot
TOFU aims to explore new ways of robotic social expression by using the techniques that
2D animators have been using in their projects for decades (Figure 9.4). Disney Animation
Studios pioneered animation tools, such as 'squash and stretch' and 'secondary motion',
in the 1950s. Since then, animators have been widely using such techniques, which are not
commonly used for designing robots. TOFU, which is named after the similarly squashable
food product, can squash and stretch. Clever use of elastic coupling complemented by
compliant materials provides a vibrant yet robust actuation method. Instead of using eyes
actuated by motors, TOFU uses inexpensive OLED displays, which are highly dynamic and
lifelike in motion.
Check your progress-1
What is augmented reality (AR)?
What is TOFU?
What is Bokode?
Define Traditional physical instrument.
Define electronic and computer-based instruments.
9.3.5 Merry Miser
Advertisers and marketers use innovative ways to hawk their products to you, and never
stop searching for new avenues. Since the 1930s, the chief manner of advertisement has
been to appeal to our emotions, our irrationality and our fears, convincing us that we
cannot be happy without their products. Many of us might think that we are not susceptible
to this manipulation, but studies and advertisers' success show that this strategy succeeds
far too often.
Merry Miser attempts to counter this trend by providing contextual information that can
help users to:
• Track their finances
• Maintain budgets
• Track how past purchases have made them feel
It relates users' expectations of how good a purchase will be to how good the purchase
actually ends up being. It allows users to calibrate their own assessments, and it promotes
long-term, intelligent thinking in the face of manipulative marketers.
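The expectation-versus-outcome idea can be sketched as a small journal of purchases. Everything below (the class and method names, the 1–10 satisfaction scale, the sample entries) is hypothetical, invented for illustration, and not taken from the actual Merry Miser project.

```python
# Hypothetical sketch of Merry Miser's core idea: log a predicted
# satisfaction before buying and the felt satisfaction afterwards, then
# surface the gap per purchase category. All names and data are invented.
class PurchaseJournal:
    def __init__(self):
        self.entries = []  # (category, expected 1-10, actual 1-10, price)

    def log(self, category, expected, actual, price):
        self.entries.append((category, expected, actual, price))

    def bias(self, category):
        """Average of (expected - actual): positive means over-optimism."""
        gaps = [e - a for c, e, a, _ in self.entries if c == category]
        return sum(gaps) / len(gaps) if gaps else 0.0

    def spent(self, category):
        """Total money spent in a category, for budget tracking."""
        return sum(p for c, _, _, p in self.entries if c == category)

journal = PurchaseJournal()
journal.log("gadgets", 9, 4, 120.0)
journal.log("gadgets", 8, 5, 60.0)
journal.log("books",   6, 7, 15.0)
print(journal.bias("gadgets"), journal.spent("gadgets"))  # 4.0 180.0
```

A positive bias for a category is exactly the kind of contextual cue the system could show at the moment of a tempting purchase.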
9.3.6 Mycrocosm
(Figure: a sample Mycrocosm chart, 'Minutes Late for Work')
9.3.7 Quickies
9.3.8 SixthSense
The SixthSense gestural interface is a wearable device that enhances our physical world
with digital information and lets us use hand gestures to interact with that information.
SixthSense brings intangible, digital information out into the tangible world and allows
us to interact with it through natural hand gestures. SixthSense frees information from
its confinement, seamlessly integrating it with reality and thus turning the entire world
into your computer.
Fig. 9.8 Sixth Sense and Some of its Applications: Watching News Video,
Taking Photographs, Using a Map, Checking the Time, Drawing,
and Recognizing Gestures
The SixthSense prototype consists of a pocket projector, a mirror and a camera worn as
a pendant-like mobile device. Both the camera and the projector are connected to a mobile
computing device in the user's pocket. The system converts any surface into a digital one
by projecting information onto the surfaces and physical objects around us. Using
computer-vision-based techniques, the camera recognizes and tracks the user's hand
gestures and physical objects. SixthSense employs simple computer-vision techniques to
process the video stream data from the camera, following the locations of the colored
markers on the user's fingertips (which are used for visual tracking). The software then
interprets this data into gestures to use for interaction with the projected application
interfaces.
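The marker-tracking step described above can be sketched as a search for the centroid of marker-colored pixels in a frame. The sketch below is a toy illustration on a tiny nested-list "frame", not the prototype's actual code; the marker colour and tolerance are assumptions.

```python
# Toy colored-marker tracker: find the centroid of pixels matching a marker
# colour in one RGB frame (a nested list). Real systems process live video
# with calibrated thresholds; all values here are illustrative.
MARKER = (255, 0, 0)  # a red fingertip cap

def marker_centroid(frame, colour, tol=30):
    """Average (x, y) of pixels within `tol` of `colour` on every channel."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, px in enumerate(row):
            if all(abs(px[i] - colour[i]) <= tol for i in range(3)):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # marker not visible in this frame
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A tiny 4x4 "frame" with a 2-pixel red marker in the top-right corner.
B = (0, 0, 0)
frame = [
    [B, B, MARKER, MARKER],
    [B, B, B, B],
    [B, B, B, B],
    [B, B, B, B],
]
print(marker_centroid(frame, MARKER))  # (2.5, 0.0)
```

Tracking the centroid from frame to frame yields the fingertip trajectories that the gesture recognizer would then classify.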
The recent SixthSense prototype supports various types of gesture-based interactions,
showing the viability, flexibility and usefulness of the system. The cost incurred in building
the current prototype system would be approximately $350.
9.4 SUMMARY
• Multimedia comprises the integration of text, graphics, animation, audio and video to
provide the user with high levels of control.
• Some new developments in multimedia technology include the Bokode and the
Chameleon Guitar.
1. Castranova, E. 2007. Exodus to the Virtual World: How Online Fun is Changing
Reality. New York: Palgrave Macmillan.
2. Burdea, G. and P. Coffet. 2003. Virtual Reality Technology, 2nd edition. New Jersey:
Wiley-IEEE Press.
3. Grau, Oliver. 2003. Virtual Art: From Illusion to Immersion (Leonardo Book Series).
Cambridge, Mass.: MIT Press.
4. Hillis, Ken. 1999. Digital Sensations: Space, Identity and Embodiment in Virtual
Reality. Minneapolis, MN: University of Minnesota Press.
5. Kalawsky, R. S. 1993. The Science of Virtual Reality and Virtual Environments: A
Technical, Scientific and Engineering Reference on Virtual Environments. Reading,
Mass.: Addison-Wesley.