
UNIT-2

• Introduction
• Organizational Support for Design
• The Design Process
• Design Frameworks
• Design Methods
• Design Tools, Practices, and Patterns
• Social Impact Analysis
• Legal Issues
Introduction
Designing a user interface (UI) is a crucial aspect
of creating a positive user experience for any
digital product, whether it's a website, mobile
app, or software application. Here are some
essential principles and guidelines to consider
when designing a user interface:
• User-Centered Design
• Simplicity and Clarity
• Consistency
• Hierarchy and Visual Organization
• Navigation
• Feedback and Responsiveness
• Accessibility
• Visual Appeal
• Mobile Responsiveness
• Error Handling
• User Testing and Iteration
• Loading Times
• Microinteractions
• Prioritize Content
• User-Centered Design: The most important
aspect of UI design is to keep the user at the
center. Conduct user research, interviews, and
usability testing to gain insights into user
behavior and expectations.
• Simplicity and Clarity: Keep the interface simple,
clean, and easy to understand. Avoid clutter and
unnecessary elements. Use concise and clear
language in labels, instructions, and error
messages.
• Consistency: Maintain consistency throughout the
interface to create a sense of familiarity for users.
Consistent visual elements, layout, and interactions help
users understand how to navigate and use the product.
• Hierarchy and Visual Organization: Use visual cues such
as size, color, and typography to establish a clear
hierarchy of information. Important elements should
stand out, and related items should be grouped
together logically.
• Navigation: Design an intuitive navigation system that
allows users to move seamlessly through the product.
Use standard navigation patterns when possible, such as
a top menu or a hamburger menu for mobile devices.
• Feedback and Responsiveness: Provide instant and
meaningful feedback to user actions. For example,
buttons should change appearance when clicked, and
loading indicators should be displayed during lengthy
processes.
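As a concrete illustration of this principle, the following TypeScript sketch disables a button and shows a loading indicator while an asynchronous action runs. The element IDs and the /api/save endpoint are assumptions made for the example, not part of any particular product.

```ts
// Minimal sketch: give immediate, visible feedback for a long-running action.
// The element IDs ("save-button", "spinner") and the endpoint are assumed.
const saveButton = document.getElementById("save-button") as HTMLButtonElement;
const spinner = document.getElementById("spinner") as HTMLElement;

saveButton.addEventListener("click", async () => {
  saveButton.disabled = true;   // visible state change as soon as the button is clicked
  spinner.hidden = false;       // loading indicator during the lengthy process
  try {
    await fetch("/api/save", { method: "POST" });  // placeholder request
  } finally {
    spinner.hidden = true;      // always restore the idle state
    saveButton.disabled = false;
  }
});
```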
• Accessibility: Ensure that your UI is accessible to all
users, including those with disabilities. Use proper
color contrast, provide alternative text for images, and
make sure the interface is navigable with a keyboard.
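A minimal TypeScript sketch of these accessibility practices, assuming hypothetical element IDs and an openMenu() handler: it adds alternative text to an image and makes a custom control reachable and operable with the keyboard.

```ts
// Minimal sketch: keyboard- and screen-reader-friendly custom control.
// Element IDs and the openMenu() behavior are illustrative assumptions.
const logo = document.querySelector("img#logo") as HTMLImageElement;
logo.alt = "Company logo";                  // alternative text for images

const menuToggle = document.getElementById("menu-toggle") as HTMLElement;
menuToggle.setAttribute("role", "button");  // announce the element as a button
menuToggle.tabIndex = 0;                    // reachable with the keyboard

function openMenu(): void {
  console.log("menu opened");               // stand-in for the real behavior
}

menuToggle.addEventListener("click", openMenu);
menuToggle.addEventListener("keydown", (e: KeyboardEvent) => {
  if (e.key === "Enter" || e.key === " ") { // activate like a native button
    e.preventDefault();
    openMenu();
  }
});
```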
• Visual Appeal: While simplicity is essential, aesthetics
matter too. Use a visually appealing color scheme,
appropriate typography, and relevant imagery to
create an engaging interface.
• Mobile Responsiveness: If designing for multiple
devices, ensure the interface adapts well to
different screen sizes and resolutions. Optimize
touch controls for mobile devices.
• Error Handling: Design error messages that are
clear, helpful, and offer solutions to resolve the
issue. Avoid technical jargon that might confuse
users.
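One way to put this into practice is to map internal error codes to plain-language messages that suggest a next step. The codes and wording below are illustrative assumptions, not a standard scheme.

```ts
// Minimal sketch: translate technical failures into clear, actionable messages.
// The error codes and the message wording are assumed for illustration.
function toUserMessage(error: { code: string }): string {
  switch (error.code) {
    case "NETWORK_TIMEOUT":
      return "We couldn't reach the server. Check your connection and try again.";
    case "VALIDATION_EMAIL":
      return "That e-mail address doesn't look right. Please re-enter it (e.g., name@example.com).";
    default:
      // Avoid jargon such as "Error 0x80070057"; say what the user can do next.
      return "Something went wrong. Please try again, or contact support if it keeps happening.";
  }
}

console.log(toUserMessage({ code: "NETWORK_TIMEOUT" }));
```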
• User Testing and Iteration: Test your interface
with real users and gather feedback. Use this
feedback to make iterative improvements to the
design.
• Loading Times: Optimize the UI for fast loading
times, as slow interfaces can lead to user frustration.
• Microinteractions: Add subtle animations and microinteractions to improve user engagement and delight. For example, a button changing color on hover or a subtle transition when opening a menu.
• Prioritize Content: Ensure that the most important
content and features are easily accessible and
prominently displayed.
Remember that UI design is an iterative process, and
it's essential to continuously gather feedback and
make improvements to create a user-friendly and
visually appealing interface.
Organizational Support for Design

• Organizational support for design in digital interaction is crucial for creating successful products and services. When an organization prioritizes and fosters a design-centric culture, it leads to better user experiences, increased customer satisfaction, and ultimately, higher business success.
Here are some key elements of organizational support for
design:
• Design Leadership and Advocacy: Having design leaders
at the executive level who understand the value of design
and advocate for its integration across the organization is
essential. These leaders can champion design initiatives,
allocate resources, and ensure that design is considered
in strategic decision-making.
• Design Team and Expertise: Employing skilled and
diverse design professionals who can bring a human-
centered approach to problem-solving is crucial. This
includes designers with expertise in various domains such
as graphic design, user experience (UX) design, industrial
design, etc.
• Cross-Functional Collaboration: Encouraging
collaboration between design teams and other
departments (e.g., engineering, marketing,
product management) helps break down silos and
ensures that design is integrated throughout the
entire product development or service delivery
process.
• User-Centric Approach: Organizations should
promote a user-centric mindset where
understanding the needs and pain points of
customers or end-users is prioritized. User
research and feedback should be regularly
incorporated into the design process.
• Design Thinking Workshops and Training: Providing
employees with opportunities to learn about design
thinking methodologies can foster a culture of
innovation and problem-solving. Training programs
can help non-designers understand the value of
design and how it complements their work.
• Design Guidelines and Standards: Developing and
adhering to design guidelines and standards ensures
consistency and a cohesive brand identity across
products and services.
• Investment in Design Tools and Technology:
Equipping design teams with the necessary tools and
software can boost productivity and enable them to
create high-quality designs efficiently.
• Recognition and Rewards: Recognizing and
rewarding design contributions and successes
can motivate teams and individuals to continue
striving for excellence in their work.
• Prototyping and Iteration: Supporting iterative
design processes, where prototypes are created,
tested, and refined based on user feedback, can
lead to better end products and services.
• Top-Down Support: Organizational support for
design should come from the top management,
ensuring that design initiatives are given the
attention and resources they need to succeed.
The Design Process
• Design is inherently creative and
unpredictable, regardless of discipline. In the
context of interactive systems, successful
designers blend a thorough knowledge of
technical feasibility with an uncanny aesthetic
sense of what attracts and satisfies users. One
way to define design is by its operational
characteristics:
• Design is a process; it is not a state, and it
cannot be adequately represented statically.
• The design process is nonhierarchical; it is
neither strictly bottom-up nor strictly top-down.
• The process is radically transformational; it
involves the development of partial and interim
solutions that may ultimately play no role in the
final design.
• Design intrinsically involves the discovery of new goals.
These characterizations of design convey the dynamic nature of the process. An iterative design process based on this operational definition would consist of four distinct phases:
• Requirements analysis (Phase 1)
• Preliminary and detailed design (Phase 2)
• Build and implementation (Phase 3)
• Evaluation (Phase 4)
Phase 1: Requirements analysis
• This phase collects all of the necessary requirements
for an interactive system or device and yields a
requirements specification or document as its
outcome. In general, soliciting, capturing, and
specifying user requirements are major keys to success
in any development activity (Selby, 2007). Methods to
elicit and reach agreement upon interaction
requirements differ across organizations and
industries, but the end result is the same: a clear
specification of the user community and the tasks the
users perform.
• Thus, even requirements documents written specifically for user experience and interaction design aspects are often specified in terms of three components (sketched below):
• Functional requirements: define specific behavior that
the system should support (often captured in so-called
use cases, see below);
• Non-functional requirements: specify overall criteria
governing the operation of the interactive system without
being tied to a specific action or behavior (hardware,
software, system performance, reliability, etc.); and
• User experience requirements: explicitly specify non-
functional requirements for the user interaction and user
interface of the interactive system (navigation, input,
colors, etc.).
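A minimal sketch of how these three components might be recorded, using illustrative TypeScript types; the field names are assumptions, not a standard requirements template.

```ts
// Minimal sketch: one way to record the three requirement categories in code.
// The interfaces and field names are illustrative assumptions.
interface FunctionalRequirement {
  id: string;
  useCase: string;          // specific behavior the system should support
  description: string;
}

interface NonFunctionalRequirement {
  id: string;
  category: "performance" | "reliability" | "hardware" | "software";
  criterion: string;        // overall criterion, not tied to one action
}

interface UserExperienceRequirement {
  id: string;
  aspect: "navigation" | "input" | "colors";
  guideline: string;        // non-functional requirement for the interaction/UI
}

interface RequirementsSpecification {
  functional: FunctionalRequirement[];
  nonFunctional: NonFunctionalRequirement[];
  userExperience: UserExperienceRequirement[];
}

const spec: RequirementsSpecification = {
  functional: [{ id: "F1", useCase: "Search catalog", description: "User can search products by keyword." }],
  nonFunctional: [{ id: "N1", category: "performance", criterion: "Search results return within 2 seconds." }],
  userExperience: [{ id: "U1", aspect: "navigation", guideline: "Search is reachable from every page." }],
};
```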
Phase 2: Preliminary and detailed design
The design phase in turn consists of two stages: a preliminary
stage, where the high-level design or architecture of the
interactive system is derived, and a detailed stage, where the
specifics of each interaction are planned out. The outcome
from the design phase is a detailed design document.
• The preliminary design is also known as architectural design,
and in engineering settings this stage often entails deriving
the architecture of the system. Preliminary design can also
be called conceptual design, particularly in software
engineering, because it is sometimes useful to organize the
high-level concepts into a conceptual map with their
relations.
Phase 3: Build and implementation
• The implementation phase is where all of the
careful planning gets turned into actual,
running code. The outcome from this phase is
a working system, albeit not necessarily the
final one.
Some suitable software development platforms for interactive applications, based on the computing platform:
• Mobile: Building mobile apps typically requires using the
SDK (software development kit) and development
environment provided by the manufacturer of the
operating system: the Android SDK in Java, the Apple iOS
SDK in Objective-C, and the Windows Phone/Mobile SDKs.
Most of these SDKs require registering as a developer to
have access to the app exchange for making your app
available to general users. Since mobile app development
typically is cross-platform—the development is actually
conducted on a personal computer—all of these SDKs
include emulators for testing the app on a virtual phone
residing on the personal computer itself.
• Web: The browser has become a ubiquitous
information access platform, and modern web
technologies are both pervasive and full-
featured to the point that they can emulate or
replace traditional computer software. Web
applications and services typically consist of both
client and server software: Client-side software
runs in the user’s browser and is accordingly
built in JavaScript—the programming language
of the browser—whereas server-side software
runs on the web server or connected hosts and
is often implemented in languages such as PHP,
Ruby, Java, or even JavaScript (using Node.js).
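A minimal sketch of the client/server split described above, written in TypeScript for Node.js (version 18 or later, which provides a global fetch). The /api/greeting endpoint and the port are assumptions; in a real web application the fetch call would run in the user's browser rather than in the same process.

```ts
// Minimal sketch: server-side and client-side halves of a tiny web application.
import { createServer } from "node:http";

// "Server-side" part: responds to requests from the browser or another host.
const server = createServer((req, res) => {
  if (req.url === "/api/greeting") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ message: "Hello from the server" }));
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(3000, async () => {
  // "Client-side" part: in a real web app this fetch would run in the browser;
  // here it runs in the same process just to exercise the API.
  const response = await fetch("http://localhost:3000/api/greeting");
  const data = (await response.json()) as { message: string };
  console.log(data.message);   // -> "Hello from the server"
  server.close();
});
```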
• Personal Computers: Developing dedicated
applications for a personal computer typically
requires using the native SDKs for the specific
operating system. Development environments
such as Microsoft’s Visual Basic/C++ are easy
to get started with yet have an excellent set of
features.
Phase 4: Evaluation
• In the final phase of the design cycle, developers
test and validate the system implementation to
ensure that it conforms to the requirements and
design set out earlier in the process. The outcome
of the validation process is a validation report
specifying test performance. Depending on this
outcome, the design team can decide to proceed
with production and deployment of the system or
to continue another cycle through the design
process. Validation is a vital part of the design
process.
Design Frameworks
• While the design process discussed above
generally should remain the same for all your
projects, the approach to performing it may
vary. The concept of design frameworks captures this idea: the specific flavor and approach the design team takes in conducting the design process.
User-centered design
• Many software development projects fail to achieve their
goals; some estimates of the failure rate put it as high as
50% (Jones, 2005). Much of this problem can be traced
to poor communication between developers and their
business clients or between developers and their users.
The result is often systems and interfaces that force the
users to adapt and change their behavior to fit the
interface rather than an interface that is customized to
the needs of the users. User-centered design (UCD) is a
counterpoint to this fallacy and prescribes a design
process that primarily takes the needs, wants, and
limitations of the actual end users into account during
each phase of the design process.
Participatory design
• Going beyond user-centered design, participatory design
(PD) (also known as cooperative design in Scandinavia) is the
direct involvement of people in the collaborative design of
the things and technologies they use. The arguments in favor
suggest that more user involvement brings more accurate
information about tasks and an opportunity for users to
influence design decisions. The sense of participation that
builds users’ ego investment in successful implementation
may be the biggest influence on increased user acceptance
of the final system.
• Participatory design experiences are usually positive, and advocates can point to many important contributions that would have been missed without user participation.
Agile interaction design
• Traditional design processes can be described
as heavyweight in that they require significant
investments in time, manpower, and resources
to be successful. In particular, such processes
are often not sufficiently reactive to today’s
fast-moving markets and dynamic user
audiences. Originally hailing from software
engineering, agile development is a family of
development methods for self-organizing,
dynamic teams that facilitate flexible, adaptive, and rapid development.
Design Methods
• Design methods are the practical building blocks that form the actual day-to-day activities in the design process. There are dozens of design methods in the literature, but designers may want to focus on the most common ones.
Ideation and creativity
• One way to think about design is as an
incremental fixation of the solution space,
where the range of possible solutions is
gradually whittled down until only a single
solution exists. This is the final product or
service that then goes on to ship and be
deployed. Gradually reducing the solution
space in this manner is called convergence or
convergent thinking, particularly for teams of
designers who each bring their own expertise
and visions to the table. For example, two designers given the same brief might produce quite different concept sketches of a personal hovercraft.
Surveys, interviews, and focus groups
• The most straightforward way to elicit
requirements and desires from users is simply
to ask them. Surveys—online or paper-based
—are the simplest and cheapest approach and
simply entail distributing a questionnaire to
representative users. Online surveys can have
significant reach but often yield a low
response rate. Furthermore, the feedback
received is often superficial in nature.
Ethnographic observation
• The early stages of most methodologies include
observation of users. Since interface users form a
unique culture, ethnographic methods for observing
them in the workplace are becoming increasingly
important.
• Ethnographers join work or home environments to
listen and observe carefully, sometimes stepping
forward to ask questions and participate in
activities.
• The goal of ethnographic observation for interaction
design is to obtain the necessary data to influence
interface redesign.
Scenario development and storyboarding

• Scenario development builds on the use-case concept and allows for developing specific scenarios in which a user engages the interactive system to solve a particular task. Storyboarding is the use of graphical sketches and illustrations to convey important steps in a scenario.
Prototyping
• Prototypes, or physical sketches as Buxton (2007)
calls them, are particularly powerful design tools
because they allow users and designers alike to see
and hold (for physical prototypes) representations of
the intended interface. They also allow the design
team to play out specific scenarios and tests using the
prototype. For example, a printed version of the
proposed displays can be used for pilot tests, whereas
an interactive display with an active keyboard and
mouse can be used for more realistic tests.
• Here are some examples of prototypes at different
levels of fidelity:
• Low-fidelity prototypes are generally created by
sketching, using sticky notes, or cutting and gluing
pieces of paper together (paper mockups);
• Medium-fidelity prototypes, often called wireframes, provide some standardized elements (such as buttons, menus, and text fields), even if potentially drawn in a sketchy fashion, and have some basic navigation functionality; and
• High-fidelity prototypes look almost like the final
product and may have some rudimentary
computational capabilities; however, the prototype is
typically not complete and may not be fully functional.
Design Tools, Practices, and Patterns
• Beyond the theoretical frameworks and
methods discussed above, design activities
today are supported by current design
practice: tools that have arisen to support
many design methods, guidelines and
standards to inform working designers, and
patterns that provide reusable solutions to
commonly occurring problems encountered
by interaction designers.
Design tools
• Creating prototypes beyond paper mockups
requires using computer programs to prototype a
specific interface or app. The simplest approach is
to use general purpose drawing and drafting
applications for this purpose. For example,
prototypes have been developed with simple
drawing or word-processing tools or even
Microsoft PowerPoint® presentations of screen
drawings manipulated with PowerPoint slideshows
and other animation. Other design tools that can
be used are Adobe InDesign®, Photoshop®, or
Illustrator®.
• Many design tools use the actual buttons, dropdown menus, and scrollbars used in the interfaces on the specific platform.
• Finally, dedicated design tools—so-called graphical user-
interface builders—also exist for the final
implementation phase when the development team is
realizing the planned interface. Many of these builders
use a drag-and-drop graphical editor where the
interaction designer can construct the final interface by
assembling existing interface elements from a library of
elements. Builders often automatically generate the
necessary source code from the graphical specification,
requiring the developer only to write his or her own
source code to manage the events resulting from the
user interaction.
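The division of labor might look like the following sketch: the builder generates the widget layout, and the developer hand-writes only the event-handling logic. The element IDs are assumed to come from a hypothetical generated layout.

```ts
// Minimal sketch: the kind of event-handling code a developer adds on top of a
// GUI builder's generated layout. The widget IDs are illustrative assumptions.
const submitButton = document.getElementById("submitButton") as HTMLButtonElement;
const nameField = document.getElementById("nameField") as HTMLInputElement;
const statusLabel = document.getElementById("statusLabel") as HTMLElement;

// The builder lays out the widgets; the developer writes only the event logic.
submitButton.addEventListener("click", () => {
  if (nameField.value.trim() === "") {
    statusLabel.textContent = "Please enter a name.";
  } else {
    statusLabel.textContent = `Welcome, ${nameField.value}!`;
  }
});
```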
Design guidelines and standards
• Early in the design process, the interaction design team
should generate a set of working guidelines. Two people
might work for one week to produce a 10-page
document, or a dozen people might work for two years
to produce a 300-page document. One component of
Apple’s success with the original Macintosh was the
machine’s early and readable guidelines document,
which provided a clear set of principles for the many
application developers to follow and thus ensured
harmony in design across products. Microsoft’s
Windows User Experience Guidelines, which have been
refined over the years, also provide a good starting point
and an educational experience for many programmers.
Interaction design patterns
• Design patterns, originally proposed for urban
planning (Alexander, 1977) and later software
engineering (Freeman et al., 2004), are best-practice
solutions to commonly occurring problems specified
in such a way that they can be reused and applied to
slightly different variations of a problem over and
over again. Regardless of discipline, patterns help
address a common problem for novice designers:
They have very little experience of past work to draw
upon when tackling a new problem. In this way,
design patterns constitute valuable experience-in-a-
can, ready to be used when needed.
Social Impact Analysis
• Interactive systems often have a dramatic impact on
large numbers of users. To minimize risks, a
thoughtful statement of anticipated impacts
circulated among stakeholders can be a useful
process for eliciting productive suggestions early in
the development when changes are easiest.
Governments, utilities, and publicly regulated
industries increasingly require information systems
to provide services. A social impact statement,
similar to an environmental impact statement,
might help to promote high-quality systems in
government-related applications.
• An outline for a social impact statement might
include these sections (Shneiderman and Rose,
1996):
---Describe the new system and its benefits.
• Convey the high-level goals of the new system.
• Identify the stakeholders.
• Identify specific benefits.
---Address concerns and potential barriers.
• Anticipate changes in job functions and potential
layoffs.
• Address security and privacy issues.
• Discuss accountability and responsibility for system
misuse and failure.
• Avoid potential biases.
• Weigh individual rights versus societal
benefits.
• Assess tradeoffs between centralization
and decentralization.
• Preserve democratic principles.
• Ensure diverse access.
• Promote simplicity and preserve what
works.
---Outline the development process.
• Present an estimated project schedule.
• Propose a process for making decisions.
• Discuss expectations of how stakeholders will be
involved.
• Recognize needs for more staff, training, and
hardware.
• Propose a plan for backups of data and
equipment.
• Outline a plan for migrating to the new system.
• Describe a plan for measuring the success of the
new system.
Legal Issues
• As user interfaces have become more prominent in
society, serious legal issues have emerged. Every developer
of software and information should review legal issues that
may affect design, implementation, deployment,
marketing, and use. For more information, Baase (2013)
gives an in-depth overview of such social, legal,
philosophical, ethical, political, constitutional, and
economic implications of computing.
• Privacy and security are always a concern whenever
computers are used to store data or to monitor activity.
Medical, legal, financial, and other data often have to be
protected to prevent unapproved access, illegal tampering,
inadvertent loss, or malicious mischief.
• A second concern encompasses safety and
reliability. User interfaces for aircraft,
automobiles, medical equipment, military
systems, utility control rooms, and the like can
affect life-or-death decisions. If air traffic
controllers are confused by the situation
display, they can make fatal errors. If the user
interface for such a system is demonstrated to
be difficult to understand, it could leave the
designer, developer, and operator open to a
lawsuit alleging improper design.
• A third issue is copyright or patent protection for
software (Lessig, 2006; Samuelson and Schultz,
2007; McJohn, 2015). Software developers who
have spent time and money developing a
package are understandably frustrated when
potential users make illegal copies of the package
rather than buying it. Technical schemes have
been tried to prevent copying, but clever hackers
can usually circumvent the barriers. It is unusual
for a company to sue an individual for copying a
program, but cases have been brought against
corporations and universities.
• A fourth concern is with copyright protection for
online information, images, or music. If customers
access an online resource, do they have the right
to store the information electronically for later
use? Can the customer send an electronic copy to
a colleague or friend? Who owns the “friends” list
and other shared data in social networking sites?
Do individuals, their employers, or network
operators own the information contained in e-mail
messages? The expansion of the web, with its vast
digital libraries, has raised the temperature and
pace of copyright discussions.
• A fifth issue is freedom of speech in electronic
environments. Do users have a right to make
controversial or potentially offensive statements through
e-mail or social media? Are such statements protected by
freedom of speech laws, such as the U.S. First
Amendment? Are networks similar to street corners,
where freedom of speech is guaranteed, or are networks
similar to television broadcasting, where community
standards must be protected? Should network operators
be responsible for or prohibited from eliminating
offensive or obscene jokes, stories, or images?
Controversy has raged over whether Internet service
providers have a right to prohibit e-mail messages that are
used to organize consumer rebellions against themselves
• Other legal concerns include adherence
to laws requiring equal access for users
with disabilities and attention to
changing laws in countries around the
world. Do Yahoo! and eBay have to
enforce the laws of every country in
which they have customers? These and
other issues mean that developers of
online services must be sure to consider
all the legal implications of their design
decisions.
Direct manipulation
• Direct manipulation as a concept has been
around since before computers. The metaphor
of direct manipulation works well in
computing environments and was introduced
in the early days of Xerox PARC and then
widely disseminated by Shneiderman (1983).
Direct-manipulation designs can provide the
capability for differing populations and easily
stretch across international boundaries.
• A favorite example of direct manipulation is driving an
automobile. The scene is directly visible through the
front window, and performance of actions such as
braking and steering has become common
knowledge in our culture. To turn left, for example,
the driver simply rotates the steering wheel to the
left. The response is immediate and the scene
changes, providing feedback to refine the turn. Now
imagine how difficult it would be trying to accurately
turn a car by typing a command or selecting “turn left
30 degrees” from a menu. The graceful interaction in
many applications is due to the increasingly elegant
application of direct manipulation.
• Driverless cars may soon respond to
commands like “take me to Baltimore airport,”
but they are a long way from matching the
skills of drivers at the wheel while navigating
snow-covered roads or police hand signals at
accident sites.
• Before designing for current devices, it makes sense to
reflect where early design has been. In the early days of
office automation, there was no such thing as a direct-
manipulation word processor or a presentation system
like PowerPoint. Word processors were command-line–
driven programs where the user typically saw a single
line at a time. Keyboard commands were used along
with inserting special commands to provide instructions
for viewing and printing the documents often as a
separate operation. Similarly, with presentation
programs, specialized commands were used to set the
font style, color, and size. Obviously, these were very
limited compared to the numerous font families
available today. Most users today are used to a
WYSIWYG (What You See Is What You Get) environment
enhanced by direct-manipulation widgets.
The three principles and attributes of direct
manipulation
• The attraction of direct manipulation is apparent in the
enthusiasm of the users. Each example has problematic
features, but they demonstrate the potent advantages of
direct manipulation, which can be summarized by three
principles:
1. Continuous representations of the objects and actions of
interest with meaningful visual metaphors
2. Physical actions or presses of labeled interface objects (i.e., buttons) instead of complex command syntax
3. Rapid, incremental, reversible actions whose effects on
the objects of interest are visible immediately
• Using these three principles, it is possible to design systems
that have these beneficial attributes:
• Novices can learn basic functionality quickly, usually through a
demonstration by a more experienced user.
• Experts can work rapidly to carry out a wide range of tasks, even defining new functions and features.
• Knowledgeable intermittent users can retain operational concepts.
• Error messages are rarely needed.
• Users can immediately see whether their actions are
furthering their goals, and if the actions are
counterproductive, they can simply change the direction of
their activity.
• Users experience less anxiety because the interface is comprehensible and because actions can be reversed easily (see the undo sketch below).
• Users gain a sense of confidence and mastery because they
are the initiators of action, they feel in control, and they can
predict the interface’s responses.
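The reversibility attribute in particular is often implemented with an undo stack of small action objects. The sketch below is one minimal way to do this in TypeScript; the Canvas model and action names are illustrative assumptions.

```ts
// Minimal sketch: rapid, reversible actions via a simple undo stack.
// The Canvas model and the AddItem action are illustrative assumptions.
interface Action {
  doIt(): void;
  undoIt(): void;
}

class Canvas {
  items: string[] = [];
}

class AddItem implements Action {
  constructor(private canvas: Canvas, private item: string) {}
  doIt(): void { this.canvas.items.push(this.item); }
  undoIt(): void { this.canvas.items.pop(); }
}

class ActionHistory {
  private done: Action[] = [];
  perform(action: Action): void {
    action.doIt();              // effect is visible immediately
    this.done.push(action);
  }
  undo(): void {
    const action = this.done.pop();
    action?.undoIt();           // any action can be reversed easily
  }
}

const canvas = new Canvas();
const history = new ActionHistory();
history.perform(new AddItem(canvas, "circle"));
history.perform(new AddItem(canvas, "square"));
history.undo();
console.log(canvas.items);      // -> ["circle"]
```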
Translational distances with direct
manipulation
• The effectiveness and reality of the direct-
manipulation interface are based on the validity
and strength of the metaphor chosen to represent
the actions and objects. Using familiar metaphors
creates easier learning conditions for users and
lessens the number of mistakes and incorrect
actions. Adequate testing is needed to validate the
metaphor. Special attention needs to be paid to the
user characteristics such as age, reading level,
educational background, prior experiences, and any
physical disabilities.
Examples of translational distances (strength).
• Weak—early video game controllers (Fig. 7.5)
• Medium—touch screens, multi-touch (Fig.
7.1)
• Strong—data glove, gesturing, manipulating
tangible objects (Fig. 7.2)
• Immersive—virtual reality, e.g., Oculus Rift (Fig. 7.14)
• Weak direct manipulation is what can be
described as basic direct manipulation. There
is a mouse, trackpad, joystick, or similar device
translating the user’s physical action into
action in the virtual space using some
mapping function.
• Medium direct manipulation is the next step
moving along the continuum. The translational
distance is reduced. Instead of communicating with
the virtual space with the device, the user reaches
out and touches, moves, and grabs the entities in
the on-screen representation. Examples of this
include touchscreens (mobile, kiosk, and desktop).
• Strong direct manipulation involves actions such as gesture recognition with various body parts. It may be the user's hand, foot, head, or full body (whatever controls the action) that is "virtually" placed inside the physical space.
Disadvantages of Direct Manipulation
• May be hard to code.
• High resource usage.
• The requirement for a lot of screen space may be cumbersome.
• Pointing may be slower than typing.
• May increase difficulty for the visually impaired.
• May require a graphics display and pointing devices.
Some Examples of Direct Manipulation
• Geographical systems including GPS (global
positioning systems)
• Video games
• Computer-aided design and fabrication
• Direct-manipulation programming and
configuration
Geographical systems including GPS (global positioning
systems)
• For centuries, travelers have relied on maps and globes to better
understand the Earth and geographical systems. As graphic- and
image-capture capabilities increased (both real-world and human-
generated), it was a natural progression to create systems to
represent both a current location—”where we are”—and a target
location—”where we want to go.” Of course, as prices dropped, these
types of systems became available as commercial GPS systems for
cars, for walking, and even for the mobile phone. Being able to
directly see the alternatives on the devices as well as how to move
from the current location to the target location including
manipulating the routes is another application of direct manipulation.
• Google Maps™, MapQuest, Google Street View, Garmin, National
Geographic, and Google Earth™ combine geographic information
from aerial photographs, satellite imagery, and other sources to
create a vast database of graphical information that can easily be
viewed and displayed.
Video games
• For many people, the most exciting, well-engineered,
and commercially successful application of the direct-
manipulation concepts lies in the world of video
games. The early but simple and popular game Pong®
(created in 1972) required the user to rotate a knob
that moved a white rectangle on the screen. A white
spot acted as a ping-pong ball that ricocheted off the
wall and had to be hit back by the movable white
rectangle. Users developed speed and accuracy in
placing the “paddle” to keep the increasingly speedy
ball from getting past, while the computer speaker
emitted a ponging sound when the ball bounced.
• Some cataloguers state that we are in the eighth
generation of video games. Parkin (2014) provides an
illustrated history of five decades of video games. Last
generation’s Nintendo Wii, Sony PlayStation 3, and
Microsoft Xbox 360™ have given way to this
generation’s Nintendo Wii U, Sony PlayStation 4, and
Microsoft Xbox One in a very short time, and
continued advances are expected. These gaming
platforms have brought powerful 3-D graphics
hardware to the home and have created a remarkable
international market. Gaming experiences are being
enhanced by combining 3-D user-interface
technologies, such as stereoscopic 3-D, head tracking,
and finger-count gestures.
• Some web-based game environments may
involve millions of users and thousands of user-
constructed “worlds,” such as schools,
shopping malls, or urban neighborhoods. Game
devotees may spend dozens of hours per week
immersed in their virtual worlds, chatting with
collaborators or negotiating with opponents.
World of Warcraft (developed and published by
Blizzard Entertainment) has been the mainstay
and most popular of the MMORPG games with
more than 5.6 million subscribers as of 2015.
• Most games continuously display a numeric score
so that users can measure their progress and
compete with their previous performance, with
friends, or with the highest scorers. Typically, the
10 highest scorers get to store their initials in the
game for public display. This strategy provides
one form of positive reinforcement that
encourages mastery. Studies with elementary-
school children have shown that continuous
display of scores is extremely valuable. Machine-generated feedback, such as "Very good" or "You're doing great!", is another common form of encouragement.
Computer-aided design and fabrication
• Most computer-aided design (CAD) systems for
automobiles, electronic circuitry, aircraft, or
mechanical engineering use principles of direct
manipulation. Building and home architects now have
at their disposal powerful tools, provided by
companies such as Autodesk, that provide
components to handle structural engineering, floor
plans, interiors, landscaping, plumbing, electrical
installation, and much more. With such applications,
the designer may see a circuit schematic on the screen
and, with mouse clicks, be able to move components
into or out of the proposed circuit.
• There are large manufacturing companies using
AutoCAD® and similar systems, but there are also
other specialized design programs for kitchen and
bathroom layouts, landscaping plans, and other
homeowner-type situations. These programs
allow users to control the angle of the sun during
the various seasons to see the impact of the
landscaping and shadows on various portions of
the house. They allow users to view a kitchen
layout and calculate square footage estimates for
floors and countertops and even print out
materials lists directly from the software.
• Another emerging use of direct manipulation involves
home automation. Since so much of home control
involves floor plans, direct-manipulation actions naturally
take place on a display of the floor plan with selectable
icons for each status indicator (such as a burglar alarm,
heat sensor, or smoke detector) and for each activator
(such as controls for opening and closing curtains or
shades, for air conditioning and heating, or for audio and
video speakers or screens). For example, users can route a
recorded TV program being watched in the living room to
the bedroom and kitchen by merely dragging the on-
screen icon into those rooms, and they can adjust the
volume by moving a marker on a linear scale. The action is
usually immediate and visible and can be easily reversed
as well.
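A minimal TypeScript sketch of such a floor-plan interaction, using the browser's drag-and-drop events; the element IDs and the logged routing behavior are illustrative assumptions.

```ts
// Minimal sketch: routing a program by dragging its icon onto a room of the
// floor plan, plus a volume slider. Element IDs are illustrative assumptions.
const programIcon = document.getElementById("tv-program") as HTMLElement;
const bedroom = document.getElementById("room-bedroom") as HTMLElement;
const volumeSlider = document.getElementById("volume") as HTMLInputElement;

programIcon.draggable = true;
programIcon.addEventListener("dragstart", (e: DragEvent) => {
  e.dataTransfer?.setData("text/plain", "tv-program");
});

bedroom.addEventListener("dragover", (e: DragEvent) => e.preventDefault());
bedroom.addEventListener("drop", (e: DragEvent) => {
  e.preventDefault();
  const programId = e.dataTransfer?.getData("text/plain");
  console.log(`Route ${programId} to the bedroom`);   // immediate, visible effect
});

volumeSlider.addEventListener("input", () => {
  console.log(`Set bedroom volume to ${volumeSlider.value}`);  // linear-scale marker
});
```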
Direct-manipulation programming and configuration
Performing tasks by direct manipulation is not the only
goal. It should be possible to do programming by
direct manipulation as well, at least for certain
problems. How about moving a drill press or a
surgical tool through a complex series of motions
that are then repeated exactly? Automobile seating
positions and mirror settings can be set as a group of
preferences for a particular driver and then adjusted
as the driver settles in place. Likewise, some
professional television-camera supports allow the
operator to program a sequence of pans or zooms
and then to replay it smoothly when required.
• Programming of physical devices by direct manipulation
seems quite natural, and an adequate visual
representation of information may make direct
manipulation programming possible in other domains.
Spreadsheet packages such as Excel™ have rich
programming languages and allow users to create portions
of programs by carrying out standard spreadsheet actions.
The result of the actions is stored in another part of the
spreadsheet and can be edited, printed, and stored in a
textual form. Database programs such as Access™ allow
users to create buttons that when activated will set off a
series of actions and commands and even generate a
report. Similarly, Adobe Photoshop records a history of user actions and then allows users to create programs with action sequences and repetition using direct manipulation (see the sketch below).
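A minimal sketch of this record-and-replay idea in TypeScript, loosely in the spirit of spreadsheet macros or Photoshop actions; the document model and the recorded actions are illustrative assumptions, not any product's actual API.

```ts
// Minimal sketch: record a history of user actions, then replay it as a small
// "program" on another document. Everything here is an illustrative assumption.
type RecordedAction = { name: string; apply: (doc: TextDocument) => void };

class TextDocument {
  constructor(public text: string) {}
}

class ActionRecorder {
  private history: RecordedAction[] = [];
  run(doc: TextDocument, action: RecordedAction): void {
    action.apply(doc);          // carry out the action directly...
    this.history.push(action);  // ...and remember it for later replay
  }
  replay(doc: TextDocument): void {
    for (const action of this.history) action.apply(doc);
  }
}

const upperCase: RecordedAction = { name: "upper-case", apply: d => { d.text = d.text.toUpperCase(); } };
const addBang: RecordedAction = { name: "add-bang", apply: d => { d.text += "!"; } };

const recorder = new ActionRecorder();
const first = new TextDocument("hello");
recorder.run(first, upperCase);
recorder.run(first, addBang);            // first.text === "HELLO!"

const second = new TextDocument("again");
recorder.replay(second);                 // the recorded sequence repeats exactly
console.log(second.text);                // -> "AGAIN!"
```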
2-D and 3-D Interfaces
• Some designers dream about building interfaces that
approach the richness of 3-D reality. They believe that the
closer the interfaces are to the real world, the easier usage
will be. This extreme interpretation of direct manipulation is
a dubious proposition, since user studies show that
disorienting navigation, complex user actions, and annoying
occlusions can slow performance in the real world as well as
in 3-D interfaces (Cockburn and McKenzie, 2002). Many
interfaces (sometimes called 2-D interfaces) are designed to
be simpler than the real world by constraining movement,
limiting interface actions, and ensuring visibility of interface
objects.
• Not every interface, though, should seek merely to mimic reality. For some computer-based tasks—such as medical imagery, architectural drawing, computer-aided design, chemical-structure modeling, and scientific simulations—pure 3-D representations are clearly helpful and have become major industries. However, even in these cases, the successes are often due to design features that make the interface better than reality.
The following enumeration of features for effective 3-D interfaces might serve as a checklist for designers, researchers, and educators:
• Use occlusion, shadows, perspective, and other 3-D techniques carefully.
• Minimize the number of navigation steps required for users to accomplish
their tasks.
• Keep text readable (better rendering, good contrast with background, and
no more than 30-degree tilt).
• Avoid unnecessary visual clutter, distraction, contrast shifts, and
reflections.
• Simplify user movement (keep movements planar, avoid surprises like
going through walls).
• Prevent errors (that is, create surgical tools that cut only where needed
and chemistry kits that produce only realistic molecules and safe
compounds).
• Simplify object movement (facilitate docking, follow predictable paths,
limit rotation).
• Organize groups of items in aligned structures to allow rapid visual
search.
• Enable users to construct visual groups to support spatial recall.
Breakthroughs based on clever ideas seem possible. Enriching
interfaces with stereo displays, haptic feedback, and 3-D sound
may yet prove beneficial in more than specialized applications.
Bigger payoffs are more likely to come sooner if these
guidelines for inclusion of enhanced 3-D features are followed:
• Provide overviews so users can see the big picture (plan view
display, aggregated views).
• Allow teleportation (rapid context shifts by selecting destination
in an overview).
• Offer x-ray vision so users can see into or beyond objects.
• Provide history keeping (recording, undoing, replaying, editing).
• Permit rich user actions on objects (save, copy, annotate, share,
send).
• Enable remote collaboration (synchronous, asynchronous).
• Give users control over explanatory text (pop-up, floating,
or excentric labels and screen tips) and let them view
details on demand.
• Offer tools to select, mark, and measure.
• Implement dynamic queries to rapidly filter out unneeded
items.
• Support semantic zooming and movement (simple action
brings an object front and center and reveals more details).
• Enable landmarks to show themselves even at a distance.
• Allow multiple coordinated views (users can be in more
than one place at a time and see data in more than one
arrangement at a time).
• Develop novel 3-D icons to represent concepts that are
more recognizable and memorable.
Teleoperation and Presence
• Teleoperation has two parents: direct manipulation in
personal computers and process control, where human
operators control physical processes in complex
environments. Typical tasks are operating power or
chemical plants, controlling manufacturing, surgery, flying
airplanes or drones, or steering vehicles. If the physical
processes take place in a remote location, we talk about
teleoperation or remote control. To perform the control
task remotely, the human operator may interact with a
computer, which may carry out some of the control tasks
without any interference by the human operator.
• There are great opportunities for the remote control or
teleoperation of devices if acceptable user interfaces
can be constructed. When designers can provide
adequate feedback in sufficient time to permit effective
decision making, attractive applications in
manufacturing, medicine, military operations, and
computer-supported collaborative work are viable.
Home-automation applications extend remote
operation of various devices to security and access
systems, energy control, and operation of appliances.
Scientific applications in space, underwater, or in
hostile environments enable new research projects to
be conducted economically and safely. The recent
introduction of affordable drones will be yet another
facet of teleoperation.
• A typical remote application is telemedicine,
or medical care delivered over communication
links (Sonnenwald et al., 2014). Telemedicine
can be used more broadly to allow physicians
to examine patients remotely and surgeons to
carry out operations across continents.
Telehealth is being widely used by the Veterans Administration: veterans can come into a local VA office, where visits with various medical personnel are conducted via telehealth technology.
The architecture of remote environments introduces several
complicating factors:
• Time delays. The network hardware and software cause delays in sending user actions and receiving feedback: a transmission delay, or the time it takes for the command to reach the remote device (for example, a remotely operated microscope), and an operation delay, or the time until the device responds. These delays in the system prevent the operator from knowing the current status of the system.
• Incomplete feedback. Devices originally designed for direct control may not have adequate sensors or status indicators. For instance, the remote microscope can transmit its current position, but it operates so slowly that it does not indicate the exact current position.
• Unanticipated interferences. Since the operated devices are remote, unanticipated interferences are more likely to occur than with physically present direct-manipulation environments.
• One solution to these problems is to make explicit the network
delays and breakdowns as part of the system. The user sees a
model of the starting state of the system, the action that has
been initiated, and the current state of the system as it carries
out the action. It may be preferable for users to specify a
destination (rather than a motion) and wait until the action is
completed before readjusting the destination if necessary.
Avenues for continuous feedback also are important.
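One minimal sketch of this idea in TypeScript: the interface tracks a commanded destination separately from the last confirmed state, so the operator always sees what has been requested and what has actually been acknowledged. The delay values and the device API are illustrative assumptions.

```ts
// Minimal sketch: make network and operation delays explicit by tracking a
// commanded (pending) state separately from the confirmed state reported by
// the remote device. Delay values and the RemoteStage API are assumed.
const delay = (ms: number) => new Promise<void>(resolve => setTimeout(resolve, ms));

class RemoteStage {
  private confirmedPosition = 0;             // last position reported by the device
  private pendingDestination: number | null = null;

  status(): string {
    return this.pendingDestination === null
      ? `at ${this.confirmedPosition}`
      : `moving to ${this.pendingDestination} (last confirmed: ${this.confirmedPosition})`;
  }

  async moveTo(destination: number): Promise<void> {
    this.pendingDestination = destination;   // user specifies a destination, not a motion
    await delay(300);                        // transmission delay to the remote device
    await delay(700);                        // operation delay while the device moves
    this.confirmedPosition = destination;    // feedback finally arrives
    this.pendingDestination = null;
  }
}

async function demo(): Promise<void> {
  const stage = new RemoteStage();
  const move = stage.moveTo(42);
  console.log(stage.status());   // "moving to 42 (last confirmed: 0)"
  await move;
  console.log(stage.status());   // "at 42"
}
demo();
```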
• Teleoperation is also commonly used by the military and by
civilian space projects. Military applications for unmanned
aircraft gained visibility during the recent wars in Afghanistan
and Iraq. Reconnaissance drones and teleoperated missile-
firing aircraft were widely used. Agile and flexible mobile
robots exist for many hazardous duty situations (Murphy,
2014). Military missions and harsh environments, such as
undersea and space exploration, are strong drivers for
improved designs.
Augmented and Virtual Reality
• Flight-simulator designers work hard to create the most
realistic experience for fighter and airline pilots. The cockpit
displays and controls are taken from the same production
line that creates the real ones. Then the windows are
replaced by high-resolution computer displays, and sounds
are choreographed to give the impression of engine start or
reverse thrust. Finally, the vibration and tilting during
climbing or turning are created by hydraulic jacks and
intricate suspension systems. This elaborate technology
may cost $100 million, but even so, it is a lot cheaper, safer,
and more useful for training than the $400-million jet that it
simulates.
• Flying a plane is a complicated and specialized
skill, but simulators are available for more
common—and some surprising—tasks under
the alluring name of virtual reality or the more
descriptive virtual environments.
Augmented reality
• Augmented reality enables users to see the real
world with an overlay of additional information; for
example, while users are looking at the walls of a
building, their semitransparent eyeglasses may
show the location of electrical wires and studwork.
Medical applications, such as allowing surgeons or their assistants to look at a patient while they see an overlay of a sonogram or other pertinent information to help locate a tumor, also seem compelling.
• An interior designer walking through a house
with a client should be able to pick up a
window-stretching tool or pull on a handle to
try out a larger window or to use a room-
painting tool to change the wall colors while
leaving the windows and furniture untouched.
Companies like IKEA are providing augmented
reality tools so customers can visualize the
products from their catalog in their own homes and rooms.
Virtual reality
• The presence aspect of virtual reality breaks the
physical limitations of space and allows users to act
as though they are somewhere else. Practical
thinkers immediately grasp the connection to
remote direct manipulation, remote control, and
remote vision, but the fantasists see the potential
to escape reality and to visit science-fiction worlds, cartoonlands, previous times in history, galaxies with different laws of physics, or unexplored emotional territories.
