Investigating Human Computer Interaction
ABSTRACT
Today, when an enormous number of computer-based systems exist, human activities are increasingly computer-mediated. Usually, when designing the interfaces to these systems, human-computer interaction is left without consideration. In this paper, the literature on human-computer interaction is reviewed and the technological aspects of human-computer interaction are analyzed. General design principles are also reviewed. On the basis of these issues, recommendations for designing a good human-computer interface for an e-learning programming environment are analyzed and proposed.
1 INTRODUCTION
Today, computers and information technologies play an important role in education through the use of e-learning environments and different computer-based systems. For their effective use, efficient human-computer interactions must therefore be designed. The involvement of ICT has shifted education from physical environments towards virtual learning environments (VLEs). The use of VLEs in learning is a new field of research, because virtual environments have become an attractive alternative for developing more realistic and interesting user interfaces. According to the research literature, the user interface is a crucial component that influences the efficiency and quality of use and of communication between the user and the virtual environment, as well as the learning process. Today, a number of VLEs have been developed with very advanced graphical user interfaces, but the role of human-computer interaction is often left out of consideration. This gives rise to a collision among the expected learning goals and outcomes, the virtual learning environment and the learners.
To overcome these problems, a need emerges for research into improving human-computer interactions, as Jones and O'Shea (1982) claim "that the perceived educational benefits of a computer system have little to do with the amount of use it gets. Instead, it seems that the quality and ease of the interaction are the most important factors. It is therefore argued that if the human-computer interface can be improved, one further barrier to CAI use will be removed" (Jones & O'Shea, 1982). In this paper, we investigate how to design good human-computer interactions: in Section 2 the literature on human-computer interaction is reviewed; in Section 3 the interaction styles and interfaces are examined and the advantages and disadvantages of each interaction style are identified; in Section 4 a review of HCI design principles is presented; and in Section 5 conclusions are drawn and recommendations are proposed. The main contribution of this paper is the investigation of the advantages and disadvantages of the interaction styles and the recommendations for designing a good human-computer interaction.
There is confusion about what HCI is: a science, a design science, or an engineering discipline. Defined as a science, "HCI is tempered by approximation, providing engineering-style theories and tools for designers" (Newell & Card, 1985). As a design science, HCI is defined as "developing a craft-based approach and new research methods to evaluate existing systems in their intended and tasks context, using the results to inform designers for the next generation of systems" (Carroll & Campbell, 1989). As an engineering discipline, Long and Dowell (1989) define HCI as "...the design of humans and computers interacting to perform work effectively", while they decompose the discipline into the design of humans interacting with computers and the design of computers interacting with humans.
Human-computer interaction (HCI) studies how people interact with computing technology and how computer systems can be designed to be easier, more practical and more intuitive to use. These interactions place specific emphasis on the 'interaction at the interface' with the technology in a broader sense. Today, HCI has attracted considerable attention from researchers and "it is one of the most critical challenges facing computer science and engineering" (IEEE).
When designing the user interfaces of these systems, the cognitive processes whereby users interact with computers must be taken into account, because users' attributes usually do not match computer attributes. We should also take into account that computer systems can have non-cognitive effects on the user, for example the user's response to virtual worlds. Reeves and Nass (1996) showed that "humans have a strong tendency to respond to computers in similar ways as they do to other humans".
HCI is an interdisciplinary field that interrelates with many disciplines, such as psychology, computer science, cognitive psychology, engineering, artificial intelligence and ergonomics, and recently further disciplines have been included, such as sociology, anthropology, the art sciences, etc. It therefore incorporates the social as well as the cognitive aspects of computing. A crucial factor in HCI design is the interrelation between psychology and computer science, as Carroll and Thomas (1982) state:
"Psychological theory and methods ... can provide a foundation for better interface design; but reciprocally, interface design provides a rich and detailed practical domain in which to assess and refine psychological theories of complex learning behavior. Perhaps both disciplines are now mature enough to contemplate a serious relationship."
Due to the rapid development of hardware and software technologies, their decreasing costs, and the development of new techniques such as speech and audio processing and computer vision, people will use computers more and more in their everyday lives, even people from other fields who are not very familiar with computers. Also, "due to one reason or another some users cannot be able to interact with machines using a mouse and keyboard" (Rudnicky, Lee & Hauptmann, 1992).
This will lead to the design of new multimodal human-computer interactions that involve different input techniques such as speech or voice, paper-like writing or pen, computer vision (giving the computer the ability to see its surroundings and to interpret them), eye-input technology and gesture. A multimodal HCI application responds to input in more than one mode of communication, in the sense of sight, touch, hearing or smell, captured through the respective input devices. Until now, desktop applications have used mechanical input techniques via keyboard, mouse and visual display, with the familiar conventional WIMP interfaces. At the beginning, traditional HCI applications involved a single user interacting with a single computer. Now, we have multi-user multimodal interaction with the computer utilizing new hardware technologies (cameras, haptic sensors, olfactory sensors, microphones and others), which give "the promise for effecting a natural and intuitive communication between human and machine" (Jason J. Corso, 2005); in the new generation of interfaces that include computer vision, he calls the human-computer interaction a "communication between human and a machine". Also, (Preece, 1994) agrees, stating: "Virtual environments and virtual realities typically offer a sense of direct physical presence, sensory cues in three dimensions, and a natural form of interaction (for example via natural gestures)".
This implies a new quality of interfaces for these systems, as (Faconti, 1996) says: "User interfaces of many application systems have begun to include multiple devices which can be used together to input single expressions. Such interfaces are commonly labeled multimodal because they use different types of communication channels to acquire information". As the number of interactive computer-based systems grows, human activities are rapidly becoming mediated by computers. HCI is concerned "with the design, implementation and evaluation of those interactive computer-based systems, as well as with the multi-disciplinary study of various issues affecting this interaction" (Stephanidis, 2001), while the main concern is to ensure 'ease-of-use', operability, discoverability, simplicity and learnability, as well as safety, utility, effectiveness, efficiency, accessibility and usability (Stephanidis, 2001) and flexibility (which refers to variations in the task-completion strategies supported by the system).
3.1 Interaction styles
Interaction styles refer to the different ways of communication between a human and a computer based on a technological platform, through interaction techniques, which are a "way of using a physical input/output device to perform a generic task in a human-computer dialogue" (Foley et al., 1990). An interaction style is explained "through prototypical elements of the interface and how they behave, for instance command line, pull down menu, form fill in, or direct manipulation" (Shneiderman, 1992).
• Command line
This popular category covers interaction between humans and computers using language: the user types commands to the computer, which displays a prompt when it is ready to accept input. It provides a means of expressing instructions to the computer directly, using function keys, single characters, abbreviations, or whole-word commands. Command-line interfaces are powerful in that they offer direct access to the system functionality, and commands can be combined to apply a number of tools to the same data. Their disadvantage is that the commands are usually difficult to learn and use: they are cryptic keywords with a strict associated syntax which the user has to know before using the system, and this typically increases the error rate. The commands must be remembered; mnemonics can only serve as cues. They are therefore better suited to expert users than to novices.
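As an illustration, the following minimal Java sketch (with a hypothetical two-command vocabulary of "echo" and "quit") shows the typical command-line pattern: the system prompts, the user must type an exact keyword with the correct syntax, and anything else produces an error.

```java
import java.util.Scanner;

// Minimal sketch of the command-line interaction style (hypothetical commands).
public class CommandLineDemo {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.print("> ");                              // prompt: system is ready for input
        while (in.hasNextLine()) {
            String line = in.nextLine().trim();
            if (line.equals("quit")) {
                break;                                        // strict keyword: no synonyms accepted
            } else if (line.startsWith("echo ")) {
                System.out.println(line.substring(5));        // direct access to a system function
            } else if (!line.isEmpty()) {
                System.out.println("Unknown command: " + line); // typical source of user errors
            }
            System.out.print("> ");
        }
    }
}
```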
• Menus
Menus are defined as a set of options on the screen for choosing an action or for choosing among options for data entry. There are three types of menus (Shneiderman, 1992):
• Pull-down menus
• Pop-up menus
• Hierarchical menus
Preece (1994) defines a menu as "a set of options displayed on the screen where the selection and execution of one (or more) of the options results in a change in the state of the interface. Unlike command-driven systems, menus have the advantage that users do not have to remember the item they want, they only need to recognize it". The advantage of using menus is thus that the user needs to recognize rather than recall objects. Menu options need to be grouped logically and meaningfully, so that the user can easily recognize the option needed. Although traditionally the user selects an item by clicking it with a mouse or by using the keyboard, with newly developed hardware technologies the user can also respond via voice command. There is evidence that the number of errors decreases and the time to perform a task is shortened, except for complex tasks that require more operations, where navigating through the menus to find the necessary option takes more time.
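To illustrate how little the user has to recall with this style, the following minimal Java Swing sketch builds a hypothetical pull-down "File" menu whose options only need to be recognized and selected:

```java
import javax.swing.*;

// Minimal sketch of a pull-down menu (hypothetical "File" menu and actions).
public class MenuDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Menu sketch");
            JMenuBar bar = new JMenuBar();
            JMenu file = new JMenu("File");                  // pull-down menu
            JMenuItem open = new JMenuItem("Open...");
            JMenuItem exit = new JMenuItem("Exit");
            open.addActionListener(e -> System.out.println("Open selected"));
            exit.addActionListener(e -> frame.dispose());
            file.add(open);
            file.addSeparator();
            file.add(exit);
            bar.add(file);
            frame.setJMenuBar(bar);                          // user recognizes options, no recall needed
            frame.setSize(300, 200);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}
```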
• Direct manipulation
Direct manipulation interfaces are very popular and successful, especially with new users, because they embed manipulations that are analogous to human skills (pointing, grabbing, moving objects in space) rather than trained behaviors, and "users have great control over the display and as they select items, the details appear in windows on the slides" (Shneiderman & Maes, 1997).
Direct manipulation interfaces "present a set of objects on a screen and provide the user a repertoire of manipulations that can be performed on any of them" (Shneiderman, 1983). Each operation on the interface is carried out directly and graphically. From a programming perspective, writing a program is done by moving icons onto the screen and connecting them together. The "editing-compiling-running" cycle is realized simply by clicking icons on the screen directly, instead of typing commands or operations with a strict syntax. There is no need to remember command names and syntax. This decreases syntax errors (for example, you cannot compile non-existent code, since it is not on the screen when you click the compile icon) and leads to faster task performance.
Further advantages of direct manipulation include:
• Experts can work extremely rapidly to carry out a wide range of tasks.
• Users can see immediately whether their actions are furthering their goals and, if not, they can simply change the direction of their activity.
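A minimal Java Swing sketch of direct manipulation is given below: the user points at an on-screen object, grabs it and drags it, with immediate visual feedback (the object and window are purely illustrative):

```java
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;

// Minimal sketch of direct manipulation: grab and drag an on-screen object.
public class DragDemo extends JPanel {
    private final Rectangle box = new Rectangle(40, 40, 60, 60);
    private Point grabOffset;

    DragDemo() {
        MouseAdapter handler = new MouseAdapter() {
            @Override public void mousePressed(MouseEvent e) {
                if (box.contains(e.getPoint())) {            // "grab" the object with the pointer
                    grabOffset = new Point(e.getX() - box.x, e.getY() - box.y);
                }
            }
            @Override public void mouseDragged(MouseEvent e) {
                if (grabOffset != null) {                     // "move" it in space
                    box.setLocation(e.getX() - grabOffset.x, e.getY() - grabOffset.y);
                    repaint();                                // immediate visual feedback
                }
            }
            @Override public void mouseReleased(MouseEvent e) { grabOffset = null; }
        };
        addMouseListener(handler);
        addMouseMotionListener(handler);
    }

    @Override protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.setColor(Color.ORANGE);
        g.fillRect(box.x, box.y, box.width, box.height);
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Direct manipulation sketch");
            frame.add(new DragDemo());
            frame.setSize(320, 240);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}
```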
• Form fill-in
Form fill-in is "the simplest style of interaction that consists of the user being required to answer questions or fill in numbers in a fixed format rather like filling out a form" (Shneiderman, 1992). In this style, the only kind of user interaction is the provision of information, which makes it useful for data entry into applications. Spreadsheets can also be considered a sophisticated variation of form filling.
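The following minimal Java Swing sketch illustrates the form fill-in style with two hypothetical labelled fields and a submit action:

```java
import javax.swing.*;
import java.awt.*;

// Minimal sketch of the form fill-in style: fixed, labelled fields for data entry.
public class FormDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JTextField name = new JTextField(15);
            JTextField age = new JTextField(5);
            JButton submit = new JButton("Submit");
            submit.addActionListener(e ->
                System.out.println("Entered: " + name.getText() + ", " + age.getText()));

            JPanel form = new JPanel(new GridLayout(3, 2, 4, 4));
            form.add(new JLabel("Name:"));  form.add(name);   // hypothetical fields
            form.add(new JLabel("Age:"));   form.add(age);
            form.add(new JLabel());         form.add(submit);

            JFrame frame = new JFrame("Form fill-in sketch");
            frame.add(form);
            frame.pack();
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}
```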
• Natural Language
Researchers and practitioners are increasingly interested in systems that use natural-language processing as a style of human-computer communication, for both speech and written input. In the case of speech input, the user must learn which phrases the computer understands, since the computer requires strict instructions, and users may become frustrated if too much is expected of the system. The advantage of this interaction style is for users who do not have access to a keyboard or have limited experience, while the ambiguities of natural language may cause unexpected effects and make it very difficult for a computer to understand.
A good perspective is that natural-language systems should be extended to include non-verbal dialogues, since Buxton argues that "'Natural' language includes gestures. Gestures can be used to form clear fluid phrases, and multi-threaded gestures can capitalize on the capabilities of human performance to enable important concepts to be expressed in a clear, appropriate, and 'natural' manner" (Buxton, 1990). In this perspective on non-verbal dialogues, such interactions are "in many ways, more natural than those based on words" (Buxton, 1990).
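The limits of this style can be illustrated with a minimal Java sketch of restricted natural-language input: the system only "understands" a few hypothetical phrases, so the user has to learn which expressions are accepted.

```java
import java.util.Scanner;

// Minimal sketch of restricted natural-language input via keyword matching.
public class NaturalLanguageDemo {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.println("Ask me something (e.g. 'what time is it'):");
        while (in.hasNextLine()) {
            String utterance = in.nextLine().toLowerCase();
            if (utterance.contains("time")) {
                System.out.println("It is " + java.time.LocalTime.now());
            } else if (utterance.contains("hello") || utterance.contains("hi")) {
                System.out.println("Hello!");
            } else {
                // Ambiguity and unknown phrasing: the system cannot interpret it.
                System.out.println("Sorry, I did not understand that.");
            }
        }
    }
}
```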
• Question/answer and query dialogue
Question/answer dialogue is a simple mechanism for providing input to an application in a specific domain. The user is asked a series of questions (mainly with yes/no responses, multiple choice or codes) and is thus led through the interaction step by step. These interfaces are easy to learn and use, but are limited in functionality and power.
Query languages, on the other hand, are used to construct queries to retrieve information from a database.
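A minimal Java sketch of a question/answer dialogue (with a hypothetical report-creation flow and restricted, coded answers) is given below:

```java
import java.util.Scanner;

// Minimal sketch of the question/answer style: fixed questions, restricted answers.
public class QuestionAnswerDemo {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.print("Do you want to create a new report? (yes/no): ");
        if (!in.nextLine().trim().equalsIgnoreCase("yes")) {
            System.out.println("Nothing to do.");
            return;
        }
        System.out.print("Choose a format: 1) PDF  2) HTML : ");
        String choice = in.nextLine().trim();
        String format = choice.equals("2") ? "HTML" : "PDF";   // coded answer leads the user step by step
        System.out.println("Creating a " + format + " report...");
    }
}
```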
• WIMP interface
WIMP stands for windows, icons, menus, and pointers (sometimes windows, icons, mice, and pull-down menus). These interfaces are probably the most popular and influential for interactive environments. Windows are areas of the screen that behave as if they were independent terminals in their own right. An icon is a small picture used to represent a closed window, file, or any other object. The pointer is an important component of a WIMP interface, since it enables the pointing, clicking, pressing, dragging and selection of objects on the screen, which can be moved, edited, explored and executed as best fits the user's vision. Other tools of computer interface design are menus, dialog boxes, check boxes, radio buttons and so on. These make use of visualization methods and computer graphics to provide a more accessible interface than command-line-based displays. The fundamental goal of WIMP designs is to give the user a meaningful working metaphor, for example an office or 'desktop' representation, as opposed to the command-line interfaces. Its advantages are general applicability, making functions explicit and providing immediate feedback.
Humans are highly attuned to images and visual information, which in turn can communicate some kinds of information much more rapidly and effectively than any other method; as the saying goes, "a picture is worth a thousand words".
• Virtual Reality
“ Virtual environments and virtual realities typically offer a sense of direct physical presence,
sensory cues in three dimensions, and a natural form of interaction (for example via natural gestures)”
(Preece, J. 1994).
Besides these styles, new interaction styles have emerged: “speech input/output,
computer vision based input (e.g., gestures), audio interfaces (e.g., non-speech audio), tactile and force
feedback, biophysical signals (e.g., retina scanner)” (Rauterberg, 2003) which bring us the new
generation of interfaces that are non-command-based with interactions like eye tracking interfaces,
artificial realities, play-along music accompaniment, and agents.
3.2 Input/Output
The conventional input devices are the keyboard, the mouse and the visual display, which are used in command-based interactions. With the emergence of new hardware technologies, new input devices such as cameras, haptic sensors, olfactory sensors and microphones are being used. The new input technologies include speech recognition, gesture recognition, eye-tracking technology as a non-command-based interaction, and techniques for the communication and manipulation of multidimensional data.
Output devices range from the conventional computer desktop display to head-mounted displays, autostereoscopic displays, touchable three-dimensional displays, non-speech audio output for 'visualizing' data, etc.
Users form mental or conceptual models of tasks and systems, which they use to guide their behavior at the interface. When people encounter new machines, devices or computers, they begin to construct mental models to represent their behavior and operation. These internal models provide a means by which people can understand and predict the world around them. However, these models are individual and very subjective: every user forms a mental model that depends on a number of psychological, cognitive, cultural, educational and other human factors. This means that users may form different models of one system, and these cannot be predicted when designing the system. Even though the research literature has shown that, after experiencing a system, a user applies their own knowledge to form a more precise and representative model of the system they are working with, we construct these models as we go along, and as a consequence our models tend to be incomplete, unstable, without firm boundaries, and unscientific.
Some may argue that HCI does not need theory; however, any discipline that fails to give a principled explanation to justify its practice is building on sand. HCI's problem is that its theories are shared with, and in many cases borrowed from, cognitive science. Cognitive science theories are complex, "big science" endeavors that can only be carried forward by communities of researchers, notably ACT-R (Anderson & Lebiere, 1998) and SOAR (Newell, 1990). Both of these theories have been applied to HCI problems, but the range of phenomena that they can account for is narrow. According to Sutcliffe (2000), cognitive theories, implemented as computational cognitive models, have a problem of scale.
However, this is far from predicting similar user behavior in a complex multimedia system. The EPIC model (Kieras & Meyer, 1997) provides an architecture of perceptual and cognitive processors with rules that predict the user's attention, recognition, and understanding of user-interface features. While EPIC can accurately predict user performance and behavior with simple user interfaces (e.g., searching menu displays), it suffers from an increasing burden of configuration as the complexity of the external artifact increases.
"Researchers have shown that redesign of the human-computer interface can make a substantial
difference in learning time, performance speed, error rates and user satisfaction” (Shniderman,
1986).
Shneiderman (1992) recommends design principles such as:
• Be consistent.
• Provide online documentation to help the user understand how to operate the application.
• Follow the principles of good graphics design in the layout of information on the screen.
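As an illustration only, the following minimal Java Swing sketch applies two of these principles: consistently labelled actions and built-in help text (tooltips standing in for online documentation):

```java
import javax.swing.*;
import java.awt.*;

// Illustrative sketch: consistent action labels plus tooltip help text.
public class PrinciplesDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JButton save = new JButton("Save");              // consistent verb-style labels
            JButton print = new JButton("Print");
            save.setToolTipText("Save the current document to disk");   // built-in help
            print.setToolTipText("Print the current document");

            JPanel toolbar = new JPanel(new FlowLayout(FlowLayout.LEFT));
            toolbar.add(save);
            toolbar.add(print);

            JFrame frame = new JFrame("Design principles sketch");
            frame.add(toolbar, BorderLayout.NORTH);
            frame.setSize(320, 120);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}
```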
5 CONCLUSION AND RECOMMENDATIONS
We can conclude that, in order to design a good human-computer interaction, we have to choose the type of interface and the interaction style appropriately, so that they fit the class of users the system is designed for, while taking human factors into consideration (Fetaji et al., 2007). Thereby, we recommend the following:
• to investigate the advantages and disadvantages of the interaction styles and interface types that best support the activities and learning styles of the users the system is aimed at;
• to choose the type of interface and the interaction styles that best support the system goals;
• to choose the interaction styles that are compatible with user attributes and that support the users' needs, which means choosing the styles that are most advantageous for the intended users (for example, in a system for learning and practicing programming, the direct manipulation style is more advantageous, as stressed in more detail in Section 3.1);
• to define the user class (experts, intermediates or novices) that the system is designed for, where human factors must be taken into consideration.
By incorporating HCI design principles, we can ensure better design guidance for screen layout, menu organization, or color usage according to user attributes.
REFERENCES
1. Stephanidis, C. (2001). Interfaces for All - Concepts, Methods, and Tools (pp. 3-17). Mahwah,
NJ: Lawrence Erlbaum Associates (ISBN 0-8058-2967-9, 760 pages).
2. Eisenhauer M., Hoffman B., Kretschmer D., (2002) “State of the Art Human-Computer Interaction”-
GigaMobile/D2.7.1, September 16, 2002
3. Newell, A. & Card, S. K. (1985), “The Prospects for Psychological Science in Human-Computer
Interaction”. Human-Computer Interaction, 1, 209-242.
4. Carroll, J. M. & Campbell, R. L. (1989), “Artifacts as Psychological Theories: the Case of Human-
Computer Interaction”. Behaviour and Information Technology, 8, 247-256.
5. Long, J. & Dowell, J. (1989), “Conceptions of the Discipline of HCI: Craft, Applied Science, and
Engineering”. In Sutcliffe, A. & Macaulay, L. [Eds.]: People and Computer
V. Proceedings of the Fifth Conference of the British Computer Society Human-Computer Interaction
Specialist Group, Univ. of Nottingham, 5-8 Sept. 1989. Cambridge: CUP.
6. Reeves, B., Nass, C. (1996) The Media Equation: how people treat computers, televisions and new
media like real people and places, Cambridge: Cambridge University Press
8. Dix, A.J., Finlay,J., Abowd, G., Beale, R. (1998). Human-Computer Interaction, 2ndedition,
Prentice Hall, Englewood Cliffs, NJ,USA.
9. ACM SIGCHI (1996). Curricula for Human-computer Interaction. ACM Special interest group on
computer-human interaction curriculum development group [On-line]. Available:
http://www.acm.org/sigchi/cdg/cdg2.html.
10. Jones, A. & O'Shea, T. (1982) Barriers to the use of computer assisted learning. British Journal of
Educational Technology, 1982,3(13), 207-217
11. Rudnicky, A.I., Lee, K.F., and Hauptmann, A.G. (1992) Survey of current speech technology.
Communications of the ACM,37(3):52-57.
13. Faconti, G.P. (1996). Reasoning on Gestural Interfaces through Syndetic Modelling. ACM SIGCHI Bulletin, 28(3), July 1996.
14. Carroll, J. M. & Thomas, J. C. (1982) Metaphor and the cognitive representation of computing
systems. IEEE Transactions on Systems, Man, and Cybernetics, 1982, 12(2), 107-115.
15. Tufte E.R., (1989) ‘‘Visual Design of the User Interface,’’ IBM Corporation, Armonk, N.Y., 1989.
16. Shneiderman, B. (1983). "Direct Manipulation: A Step Beyond Programming Languages". IEEE Computer, vol. 16, no. 8, pp. 57-69, 1983.
17. Foley, J.D., van Dam, A., Feiner, S.K., and Hughes, J.F. (1990). Computer Graphics: Principles and Practice. Addison-Wesley, Reading, Mass.
http://lipas.uwasa.fi/~mj/hci/hci11.html
19. Shneiderman, B. (1982). "The future of interactive systems and the emergence of direct manipulation". Behaviour & Information Technology, 1(3), 237-256.
20. Shneiderman, B., Maes, P. (1997). "Direct manipulation vs. interface agents". interactions, Vol. 4, No. 6, pp. 42-61.
21. Buxton, W. (1990). The Natural Language of Interaction: A Perspective on Non-Verbal Dialogues. In Laurel, B. (Ed.), The Art of Human-Computer Interface Design. Reading, MA: Addison-Wesley.
22. Rauterberg M., (1990). “ Interaction styles” Vol. 4, No. 6. (1990), pp. 72-82.
23. Fetaji, M., Fetaji, B., Ebibi, M. (2007). "Designing quality e-learning virtual environment for learning Java". To be published in the proceedings of the CIIT conference, Bitola, Macedonia, 21 January 2007.
24. Dumas, J. S., & Redish J. C. (1999) “A practical guide to Usability Testing” revised edition,
Pearson Education Limited, pp.55-62
25. Baeza-Yates, R., Ribeiro-Neto, B. (2001). "Modern Information Retrieval", Chapter 10, ACM Press: Addison-Wesley.
26. Anderson, J. R. and Lebiere, C. (1998). The Atomic Components of Thought. Lawrence Erlbaum
Associates, Inc., Mahwah, NJ.
27. Newell, A. (1990). Unified Theories of Cognition. Harvard University Press, Cambridge, MA.
28. Kieras, D. E. and Meyer, D. E. (1997). An overview of the EPIC architecture for cognition and
performance with application to human-computer interaction. Human-Comput. Interact. 12, 391–
438
29. Sutcliffe, A., (2000); On the effective use and reuse of HCI knowledge; ACM Transactions on
Computer-Human Interaction (TOCHI), Volume 7 Issue 2, Publisher: ACM Press
30. Shneiderman, B., (1986). Designing the User Interface: Strategies for effective human-computer
interaction. Addison Wesley, Reading, Massachusetts.