HCI Unit 1 Notes
INTRODUCTION
• With today's technology and tools, and our motivation to create really effective and
usable interfaces and screens, why do we continue to produce systems that are inefficient and
confusing or, at worst, just plain unusable? Is it because:
1. We don't care?
2. We don't possess common sense?
3. We don't have the time?
4. We still don't know what really makes good design?
DEFINITION
• "Human-computer interaction is a discipline concerned with the design,
evaluation and implementation of interactive computing systems for human use and with the
study of major phenomena surrounding them."
GOALS
• A basic goal of HCI is
– to improve the interactions between users and computers
– by making computers more usable and receptive to the user's needs.
• A long term goal of HCI is
– to design systems that minimize the barrier between the human's
cognitive model of what they want to accomplish and the computer's understanding of the
user's task
• The user interface has essentially two components: input and output.
• Output is how the computer conveys the results of its computations and requirements to the user.
– Today, the most common computer output mechanism is the display screen,
followed by mechanisms that take advantage of a person's auditory capabilities: voice and sound.
• The use of the human senses of smell and touch for output in interface design still remains
largely unexplored.
• Proper interface design will provide a mix of well-designed input and output
mechanisms that satisfy the user's needs, capabilities, and limitations in the most effective way
possible.
• The best interface is one that is not noticed, one that permits the user to focus on the
information and task at hand, not on the mechanisms used to present the information and perform the
task.
• So why do we continue to produce poor designs? Is it because we don't care? Because we don't
possess common sense? Because we don't have the time? Or because we still don't know what really
makes good design?
• In truth, we never seem to have time to find out what makes good design, nor
to properly apply it. After all, many of us have other things to do in addition to designing
interfaces and screens.
• So we take our best shot given the workload and time constraints imposed upon us. The
result, too often, is woefully inadequate.
• If interface and screen design were really just a matter of common sense, we
developers would all be producing nearly identical screens to represent the real
world.
• Example of a bad design
– A completely solid wooden door: the user cannot see what lies behind it.
– Suggestion: a glass door.
THE IMPORTANCE OF THE USER INTERFACE
• The user interface is also the vehicle through which many critical tasks are presented. These tasks often
have a direct impact on an organization's relations with its customers, and on its profitability.
• A screen's layout and appearance affect a person in a variety of ways. If they are
confusing and inefficient, people will have greater difficulty in doing their jobs and will make
more mistakes.
• Poor design may even chase some people away from a system permanently. It can also
lead to aggravation, frustration, and increased stress.
• One estimate for a large system suggests that if poor clarity forced screen users to spend one
extra second per screen:
– Almost one additional person-year would be required to process all screens.
– Twenty extra seconds in screen usage time adds an additional 14 person-years.
• The benefits of a well-designed screen have also been under experimental scrutiny for many years.
– One researcher, for example, attempted to improve screen clarity
and readability by making screens less crowded.
– Separate items, which had been combined on the same display line to
conserve space, were placed on separate lines instead.
– The result: screen users were about 20 percent more productive with the
less crowded version.
• Proper formatting of information on screens does have a significant positive effect on
performance.
– In recent years, the productivity benefits of well-designed Web pages have also been
scrutinized.
• Training costs are lowered because training time is reduced.
• Support-line costs are lowered because fewer assist calls are necessary.
• Employee satisfaction is increased because aggravation and frustration are reduced.
• Ultimately, an organization's customers benefit because of the improved service they receive.
• Identifying and resolving problems during the design and development process also has
significant economic benefits
• How many screens are used each day in our technological world?
• How many screens are used each day in your organization? Thousands? Millions?
• Imagine the possible savings. Proper screen design might also, of course, lower the
costs of replacing "broken" PCs.
A BRIEF HISTORY OF THE HUMAN-COMPUTER INTERFACE
• The need for people to communicate with each other has existed since we first walked
upon this planet.
• The lowest and most common level of communication modes we share are movements
and gestures.
• Movements and gestures are language independent, that is, they permit people who do
not speak the same language to deal with one another.
• The next higher level, in terms of universality and complexity, is spoken language.
• Most people can speak one language, some two or more. A spoken language is a very
efficient mode of communication if both parties to the communication understand it.
• At the third and highest level of complexity is written language. While most people speak,
not all can write.
• But for those who can, writing is still nowhere near as efficient a means of communication
as speaking.
• In modern times, we have the typewriter, another step upward in communication complexity.
• Significantly fewer people type than write. (While a practiced typist can find typing
faster and more efficient than handwriting, the unskilled may not find this the case.)
• Spoken language, however, is still more efficient than typing, regardless of typing skill level.
• Through its first few decades, a computer's ability to deal with human communication
was inversely related to what was easy for people to do.
-- The computer demanded rigid, typed input through a keyboard; people responded slowly
using this device and with varying degrees of skill.
-- The human-computer dialog reflected the computer's preferences, consisting of one style
or a combination of styles using keyboards, commonly referred to as Command Language,
Question and Answer, Menu selection, Function Key Selection, and Form Fill-In.
• Throughout the computer's history, designers have been developing, with varying
degrees of success, other human-computer interaction methods that utilize more general,
widespread, and easier-to-learn capabilities: voice and handwriting.
--Systems that recognize human speech and handwriting now exist, although they still lack
the universality and richness of typed input.
INTRODUCTION OF THE GRAPHICAL USER INTERFACE
• The Xerox systems, the Alto and the Star, introduced the mouse, and pointing and selecting
as the primary human-computer communication method.
• The user simply pointed at the screen, using the mouse as an intermediary.
• These systems also introduced the graphical user interface as we know it. A new concept
was born, revolutionizing the human-computer interface.
• While developers have been designing screens since a cathode ray tube display (CRT)
was first attached to a computer, more widespread interest in the application of good design
principles to screens did not begin to emerge until the early 1970s, when IBM introduced its 3270
cathode ray tube text-based terminal.
• A typical 3270 screen consisted of many fields with very cryptic and often
unintelligible captions.
• It was visually cluttered, and often possessed a command field that challenged the user
to remember what had to be keyed into it.
• Effectively using this kind of screen required a great deal of practice and patience.
• Most early screens were monochromatic, typically presenting green text on black backgrounds.
• At the turn of the decade guidelines for text-based screen design were finally made
widely available and many screens began to take on a much less cluttered look through concepts
such as grouping and alignment of elements, as illustrated in Figure 1.2.
• User memory was supported by providing clear and meaningful field captions and by
listing commands on the screen, and enabling them to be applied, through function keys. Messages
also became clearer.
• These screens were not entirely clutter-free, however. Instructions and reminders to the
user had to be inscribed on the screen in the form of prompts or completion aids such as the codes
PR and Sc.
• Not all 1980s screens looked like this, however. In the 1980s, 1970s-type screens were
still being designed, and many still reside in systems today.
• The advent of graphics yielded another milestone in the evolution of screen design, as
illustrated in the figure above.
• While some basic design principles did not change (groupings and alignment, for example),
borders were made available to visually enhance groupings, and buttons and menus for
implementing commands replaced function keys.
• Multiple properties of elements were also provided, including many different font sizes and
styles, line thicknesses, and colors.
• The entry field was supplemented by a multitude of other kinds of controls, including list
boxes, drop-down combination boxes, spin boxes, and so forth.
• These new controls were much more effective in supporting a person's memory, now
simply allowing for selection from a list instead of requiring a remembered key entry.
• Completion aids disappeared from screens, replaced by one of the new listing controls.
Screens could also be simplified, the much more powerful computers being able to quickly present
a new screen.
• In the 1990s, our knowledge concerning what makes effective screen design continued to
expand. Coupled with ever-improving technology, the result was even greater improvements in the
user- computer screen interface as the new century dawned.
DIRECT MANIPULATION: DEFINITION
Direct manipulation is an interaction style in which the objects of interest in the UI are visible and
can be acted upon via physical, reversible, incremental actions that receive immediate feedback.
Let’s say that you’re looking at an image of yourself on a roller coaster and want to see if your
terrified expression has been caught on camera. What do you do? Most likely, you pinch or spread
your fingertips on the image.
The action of using your fingertips to zoom in and out of the image is an example of a direct-
manipulation interaction. Another classic example is dragging a file from a folder to another one in
order to move it.
Definition: Direct manipulation (DM) is an interaction style in which users act on displayed
objects of interest using physical, incremental, reversible actions whose effects are immediately
visible on the screen.
Ben Shneiderman first coined the term “direct manipulation” in the early 1980s, at a time when the
dominant interaction style was the command line. In command-line interfaces, the user must
remember the system label for a desired action, and type it in together with the names for the objects
of the action.
Moving a file in a command-line interface involves remembering the name of the command (“mv”
in this case), the names of the source and destination folders, as well as the name of the file to be
moved.
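As a concrete sketch of the move-file task described above (the file and folder names here are hypothetical, not taken from the original example):

```shell
# Moving a file at the command line: the user must recall the command
# name ("mv") and type the source and destination paths from memory.
workdir=$(mktemp -d)                 # scratch area for the demonstration
mkdir -p "$workdir/source" "$workdir/destination"
touch "$workdir/source/report.txt"

mv "$workdir/source/report.txt" "$workdir/destination/"

# There is no automatic feedback; to confirm the move, the user must
# explicitly list the destination folder:
ls "$workdir/destination"
```

Contrast this with drag-and-drop, where the file visibly leaves one folder and appears in the other with no extra verification step.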
Direct manipulation is one of the central concepts of graphical user interfaces (GUIs) and is
sometimes equated with “what you see is what you get” (WYSIWYG). These interfaces combine
menu-based interaction with physical actions such as dragging and dropping in order to help the
user use the interface with minimal learning.
In his analysis of direct manipulation, Shneiderman identified several attributes of this interaction
style that make it superior to command-line interfaces:
Continuous representation of the object of interest. Users can see visual representations
of the objects that they can interact with. As soon as they perform an action, they can see its
effects on the state of the system. For example, when moving a file using drag-and-drop,
users can see the initial file displayed in the source folder, select it, and, as soon as the action
is completed, see it disappear from the source and appear in the destination, an immediate
confirmation that their action had the intended result. Thus, direct-manipulation UIs satisfy,
by definition, the first usability heuristic: the visibility of the
system status. In contrast, in a command-line interface, users usually must explicitly check
that their actions had indeed the intended result (for example, by listing the content of the
destination directory).
Physical actions instead of complex syntax. Actions are invoked physically via clicks,
button presses, menu selections, and touch gestures. In the move-file example, drag-and-
drop has a direct analog in the real world, so this implementation for the move action has the
right signifiers and can be easily learned and remembered. In contrast, the command- line
interface requires users to recall not only the name of the command (“mv”), but also the names
of the objects involved (files and paths to the source and destination folders). Thus, unlike
DM interfaces, command-line interfaces are based on recall instead of recognition and
violate an important usability heuristic.
Continuous feedback and reversible, incremental actions. Because of the visibility of the
system state, it’s easy to validate that each action caused the right result. Thus, when users
make mistakes, they can see right away the cause of the mistake and they should be able to
easily undo it. In contrast, with command-line interfaces, one single user command may
have multiple components that can cause the error. For instance, in the example below, the
name of the destination folder contains a typo “Measuring Usablty” instead of “Measuring
Usability”. The system simply assumed that the file name should be changed to “Measuring
Usablty”. If users check the destination folder, they will discover that there was a problem,
but will have no way of knowing what caused it: did they use the wrong command, the
wrong source filename, or the wrong destination?
The command contains a typo in the destination name. Users have no way of identifying this error
and must do detective work to understand what went wrong.
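A minimal reconstruction of that failure mode (file names hypothetical): because the mistyped destination folder does not exist, mv silently treats the command as a rename.

```shell
workdir=$(mktemp -d)
mkdir -p "$workdir/Measuring Usability"    # the intended destination folder
touch "$workdir/report.txt"

# Typo in the destination ("Usablty"): no such folder exists, so mv
# renames the file instead of moving it, and reports no error at all.
mv "$workdir/report.txt" "$workdir/Measuring Usablty"

# The intended folder is still empty; a stray file named after the
# typo now sits beside it. The user gets no feedback about the cause.
ls "$workdir"
```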
This type of problem is familiar to everyone who has written a computer program. Finding a bug
when there are a variety of potential causes often takes more time than actually producing the code.
Rapid learning. Because the objects of interest and the potential actions in the system are
visually represented, users can use recognition instead of recall to see what they could do
and select an operation most likely to fulfill their goal. They don’t have to learn and
remember complex syntax. Thus, although direct-manipulation interfaces may require some
initial adjustment, the learning required is likely to be less substantial.
When direct manipulation first appeared, it was based on the office-desk metaphor — the computer
screen was an office desk, and different documents (or files) were placed in folders, moved around,
or thrown to trash. This underlying metaphor indicates the skeuomorphic origin of the concept. The
DM systems described originally by Shneiderman are also skeuomorphic — that is, they are based
on resemblance with a physical object in the real world. Thus, he talks about software interfaces
that copy Rolodexes and physical checkbooks to support tasks done (at the time) with these tools.
As we all know, skeuomorphism saw a huge revival in the early iPhone days, and has now come
out of fashion.
A skeuomorphic direct-manipulation interface for “playing” the piano on a phone
While skeuomorphic interfaces are indeed based on direct manipulation, not all direct-manipulation
interfaces need to be skeuomorphic. In fact, today’s flat interfaces are a reaction to skeuomorphism
and depart from the real-world metaphors, yet they do rely on direct manipulation.
DISADVANTAGES OF DIRECT MANIPULATION
Although direct manipulation has many virtues, each of its defining attributes also has a downside:
Continuous representation of the objects? It means that you can act only on the small number
of objects that can be seen at any given time. And objects that are out of sight, but not out
of mind, can be dealt with only after the user has laboriously navigated to the place that
holds those objects so that they can be made visible.
Physical actions? One word: RSI (repetitive strain injury). It’s a lot of work to move all
those icons and sliders around the screen. Actually, two more words: accidental activation,
which is particularly common on touchscreens, but can also happen on mouse-driven
systems.
Continuous feedback? Only if you attempt an operation that the system feels like letting you
do. If you want to do something that’s not available, you can push and drag buttons and
icons as much as you want with no effect whatsoever. No feedback, only frustration. (A good
UI will show in-context help to explain why the desired action isn’t available and how to
enable it. Sadly, UIs this good are not very common.)
Rapid learning? Yes, if the design is good, but in practice learnability depends on how well
designed the interface is. We’ve all seen menus with poorly chosen labels, buttons that did
not look clickable, or drop-down boxes with more options than the length of the screen.
And there are even more disadvantages:
DM is slow. If the user needs to perform a large number of actions, on many objects, using
direct manipulation takes a lot longer than a command-line UI. Have you encountered any
software engineers who use DM to write their code? Sure, they might use DM elements in
their software-development interfaces, but the majority of the code will be typed in.
Repetitive tasks are not well supported. DM interfaces are great for novices because they
are easy to learn, but because they are slow, experts who have to perform the same set of
tasks with high frequency usually rely on keyboard shortcuts, macros, and other command-language
interactions to speed up the process. For example, when you need to send an email
attachment to one recipient, it is easy to drag the desired file and drop it into the attachment
section. However, if you needed to do this for 50 different recipients with customized subject
lines, a macro or script will be faster and less tedious.
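The macro or script mentioned above might look something like the following sketch; `send_attachment` is a hypothetical stand-in for whatever mail command or API the system actually provides.

```shell
# Hypothetical helper; a real script would invoke the system's mail
# tool here with the attachment and a customized subject line.
send_attachment() {
    recipient=$1
    echo "Sending report.pdf to $recipient (subject: Weekly report for $recipient)"
}

# One loop replaces 50 separate drag-and-drop operations.
for recipient in alice@example.com bob@example.com carol@example.com; do
    send_attachment "$recipient"
done
```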
Some gestures can be more error-prone than typing. Whereas in theory, because of the
continuous feedback, DM minimizes the chance of certain errors, in practice, there are
situations when a gesture is harder to perform than typing equivalent information. For
example, good luck trying to move the 50th column of a spreadsheet into the 2nd position
using drag and drop. For this exact reason, Netflix offers 3 interaction techniques for
reordering subscribers’ DVD queues: dragging the movie to the desired position (easy for
short moves), a one-button shortcut for moving into the #1 position (handy when
you must watch a particular movie ASAP), and the indirect option of typing the number of
the desired new position (useful in most other cases).
Netflix allows 3 interactions for rearranging a queue: dragging a movie to the desired position (not
shown), moving it directly to top (Move to top option), or typing in the position where it needs to be
moved (Move to option).
Accessibility may suffer. DM UIs may fail visually impaired users or users with motor skill
impairments, especially if they are heavily based on physical actions, as opposed to button
presses and menu selections. (Workarounds exist, but it can be difficult to implement them.)
WEB USER INTERFACE – POPULARITY, CHARACTERISTICS – PRINCIPLES OF USER INTERFACE
WEB INTERFACE CHARACTERISTICS
Devices
• User hardware variations enormous.
• Screen appearance influenced by hardware being used.
User focus
• Information and navigation.
Data/information
• Full of unknown content.
• Source not always trusted.
• Often not placed onto the Web by users or known people and organizations.
• Highly variable organization.
• Privacy often suspect.
User tasks
• Link to a site, browse or read pages, fill out forms, register for services, participate in
transactions, download and save things.
• Movement between pages and sites very rapid.
• Familiarity with many sites not established.
User's conceptual space
• Infinite and generally unorganized.
Presentation elements
• Two components, browser and page.
• Within a page, any combination of text, images, audio, video, and animation.
• May not be presented as specified by the designer; dependent on browser, monitor, and
user specifications.
• Little standardization.
Navigation
• Through links, bookmarks, and typed URLs. A significant and highly visible concept.
• Few constraints, frequently causing a lost “sense of place.”
• Few standards.
• Typically part of page design, fostering a lack of consistency.
Context
• Poorer maintenance of a sense of context; single-page entities.
• Unlimited navigation paths.
• Contextual clues become limited or are difficult to find.
Interaction
• Basic interaction is a single click. This can cause extreme changes in context, which
may not be noticed.
Response time
• Quite variable, depending on transmission speeds, page content, and so on. Long
times can upset the user.
Visual style
• Fosters a more artistic, individual, and unrestricted presentation style.
• Complicated by differing browser and display capabilities, and bandwidth limitations.
• Limited personalization available.
System capability
• Limited by constraints imposed by the hardware, browser, software,
client support, and user willingness to allow features because of response-time,
security, and privacy concerns.
User assistance
• No similar help systems.
• The little available help is built into the page. Customer service support, if provided,
is oriented to the product or service offered.
Consistency
• Apparent for some basic functions within most Web sites (navigation, printing, and so on).
Integration
• Sites tend to achieve individual distinction rather than integration.
Reliability
• Susceptible to disruptions caused by the user, telephone line and cable providers, Internet
service providers, hosting servers, and remotely accessed sites.
PRINCIPLES OF USER INTERFACE DESIGN
• An interface must really be just an extension of a person. This means that the system
and its software must reflect a person's capabilities and respond to his or her specific needs.
• It should be useful, accomplishing some business objectives faster and more
efficiently than the previously used method or tool did.
• It must also be easy to learn, for people want to do, not learn to do.
• Finally, the system must be easy and fun to use, evoking a sense of
pleasure and accomplishment not tedium and frustration.
• The interface itself should serve as both a connector and a separator
• a connector in that it ties the user to the power of the computer, and a
separator in that it minimizes the possibility of the participants damaging one another.
• While the damage the user inflicts on the computer tends to be physical (a frustrated
pounding of the keyboard), the damage caused by the computer is more psychological.
• Throughout the history of the human-computer interface, various researchers
and writers have attempted to define a set of general principles of interface design.
• What follows is a compilation of these principles. They reflect not only what
we know today, but also what we think we know today.
• Many are based on research, others on the collective thinking of behaviorists
working with user interfaces.
• These principles will continue to evolve, expand, and be refined as our
experience with GUIs and the Web increases.
GENERAL PRINCIPLES
• The design goals in creating a user interface are described below.
• They are fundamental to the design and implementation of all effective
interfaces, including GUI and Web ones.
• These principles are general characteristics of the interface, and they apply to all aspects.
• The compilation is presented alphabetically, and the ordering is not intended to
imply degree of importance.
Aesthetically Pleasing
Provide visual appeal by following these presentation and graphic design principles:
• Provide meaningful contrast between screen elements.
• Create groupings.
• Align screen elements and groups.
• Provide three-dimensional representation.
• Use color and graphics effectively and simply.
Clarity
The interface should be visually, conceptually, and linguistically clear, including
• Visual elements
• Functions
• Metaphors
• Words and Text
Compatibility
Provide compatibility with the following:
- The user
- The task and job
- The Product
Comprehensibility
A system should be easily learned and understood: A user should know the following:
- What to look at
- What to do
- When to do it
- Where to do it
- Why to do it
- How to do it
The flow of actions, responses, visual presentations, and information should be in a
sensible order that is easy to recollect and place in context.
Consistency
A system should look, act, and operate the same throughout. Similar components should:
- Have a similar look.
- Have similar uses.
- Operate similarly.
Control
The user must control the interaction.
- Actions should result from explicit user requests.
- Actions should be performed quickly.
- Actions should be capable of interruption or termination.
- The user should never be interrupted for errors
• The context maintained must be from the perspective of the user.
• The means to achieve goals should be flexible and compatible with the user's skills,
experiences, habits, and preferences.
• Avoid modes since they constrain the actions available to the user.
• Permit the user to customize aspects of the interface, while always providing a
proper set of defaults.
Directness
Provide direct ways to accomplish tasks.
- Available alternatives should be visible.
- The effect of actions on objects should be visible.
Efficiency
Minimize eye and hand movements, and other control actions.
- Transitions between various system controls should flow easily and freely.
- Navigation paths should be as short as possible.
- Eye movement through a screen should be obvious and sequential.
Anticipate the user's wants and needs whenever possible.
Familiarity
• Employ familiar concepts and use a language that is familiar to the user.
• Keep the interface natural, mimicking the user's behavior patterns.
• Use real-world metaphors.
Flexibility
A system must be sensitive to the differing needs of its users, enabling a level and type of
performance based upon:
- Each user's knowledge and skills.
- Each user's experience.
- Each user's personal preference.
- Each user's habits.
- The conditions at that moment.
Forgiveness
• Tolerate and forgive common and unavoidable human errors.
• Prevent errors from occurring whenever possible.
• Protect against possible catastrophic errors.
• When an error does occur, provide constructive messages.
Predictability
• The user should be able to anticipate the natural progression of each task.
o Provide distinct and recognizable screen elements.
o Provide cues to the result of an action to be performed.
• All expectations should be fulfilled uniformly and completely.
Recovery
A system should permit:
- Commands or actions to be abolished or reversed.
- Immediate return to a certain point if difficulties arise.
Ensure that users never lose their work as a result of:
- An error on their part.
- Hardware, software, or communication problems.
Responsiveness
The system must rapidly respond to the user's requests. Provide immediate
acknowledgment for all user actions:
- Visual.
- Textual.
- Auditory.
Simplicity
Provide as simple an interface as possible.
Ways to provide simplicity:
- Use progressive disclosure, hiding things until they are needed.
- Present common and necessary functions first.
- Prominently feature important functions.
- Hide more sophisticated and less frequently used functions.
- Provide defaults.
- Minimize screen alignment points.
- Make common actions simple at the expense of uncommon actions being made harder.
- Provide uniformity and consistency.
Transparency
Permit the user to focus on the task or job, without concern for the mechanics of the interface.
- Workings and reminders of workings inside the computer should be invisible to the user.